Thread: KOMODO....

  1. #1191  
    Moderator Phil Holland's Avatar
    Join Date
    Apr 2007
    Location
    Los Angeles
    Posts
    10,999
    Quote Originally Posted by Mark L. Pederson View Post
    The human eye has approx. 20 stops of dynamic range. Also, dynamic range and bit depth are not the same thing in sensor land. You can think of dynamic range as the height of a staircase and bit depth as the number of steps in the staircase.

    With respect to resolution and visual acuity - that is a deeper rabbit hole of arc minutes, the way the brain works, field of view, hyperacuity, etc. - but it is CERTAINLY more than 100MP. There are plenty of papers you can read (Blackwell, etc.)
    We did an amazing study on Human Perception of DR last year! It was fun. And predictably, as with acuity, a moderate range was discovered. Variances in age certainly came into play, as did observational attentiveness, peripheral vision and light sensitivity; I would suspect even diet. I must be eating a lot of carrots. 20-26 stops is mostly the general range, but taking all that in is pretty annoying, particularly on smaller displays.
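A minimal sketch of the staircase analogy from the quote above, with illustrative numbers of my own rather than figures from the study: dynamic range sets how tall the staircase is as a linear contrast ratio, while bit depth sets how many steps you get to encode it.

```python
def contrast_ratio(stops: float) -> float:
    """Linear scene contrast covered by a given number of stops (2^stops : 1)."""
    return 2.0 ** stops

def code_values(bit_depth: int) -> int:
    """Number of distinct steps available at a given integer bit depth."""
    return 2 ** bit_depth

# Camera-ish versus eye-ish ranges from the discussion above
for stops in (14, 16.5, 20, 26):
    print(f"{stops:>4} stops -> {contrast_ratio(stops):>12,.0f}:1 contrast")

# The same staircase can be cut into coarser or finer steps
for bits in (10, 12, 16):
    print(f"{bits}-bit -> {code_values(bits):,} steps")
```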

    I've done my best to educate on Visual Acuity and even come up with wacky ways of describing the experience. I've even got a fancy tool for it that's now being used worldwide.
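For the "more than 100MP" point in the quote, here is a back-of-envelope sketch only, with assumed field-of-view and acuity numbers (not Phil's tool or any particular paper): treat the eye as if it resolved a fixed angular detail across the whole field of view and count the resulting "pixels".

```python
def eye_megapixels(h_fov_deg: float, v_fov_deg: float, arcmin_per_px: float) -> float:
    """Very crude pixel count for a uniform angular resolution over the field of view."""
    px_w = h_fov_deg * 60.0 / arcmin_per_px
    px_h = v_fov_deg * 60.0 / arcmin_per_px
    return px_w * px_h / 1e6

print(eye_megapixels(120, 120, 1.0))   # ~52 MP at the classic 1-arcminute acuity
print(eye_megapixels(120, 120, 0.5))   # ~207 MP if you credit hyperacuity
```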

    And as for what's available now, well, Sony has a 16K television/theatrical screen as of this week. This screen is initially a 63x17 foot display (3.71:1), but it's a tile-based system that can also be deployed in DCI and UHD aspect ratios (UHD mathematically works with the tile sizes the best). We're miles away from that being anywhere and/or everywhere, but that journey has begun, and eventually, once these are installed, they will need content. Their CLED 8K and 4K are already available. There are numerous screens around the world up to 32K in resolution. I'm not of the mindset that we need a single-camera solution for 32K anytime soon. Extremely decent 8K televisions are now sub-$5K. Decent 4K displays range from very inexpensive to a bit up there. As I mentioned, the focus on 4K is to make it the mass market at this point.
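As a rough illustration of what a screen like that implies, here is a quick pixel-pitch calculation. The 15,360-pixel horizontal count for "16K" and the 63-foot width are assumptions for the sake of the math, not a published spec:

```python
import math

WIDTH_FT = 63
H_PIXELS = 15_360              # assumed horizontal resolution for "16K"
FT_TO_M = 0.3048

pitch_m = WIDTH_FT * FT_TO_M / H_PIXELS
one_arcmin = math.radians(1 / 60)                # classic acuity limit

# Distance at which a single pixel subtends one arcminute
distance_m = pitch_m / math.tan(one_arcmin)

print(f"pixel pitch  ~{pitch_m * 1000:.2f} mm")   # ~1.25 mm
print(f"1 px = 1 arcmin at ~{distance_m:.1f} m")  # ~4.3 m; any closer and individual pixels resolve
```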
    Phil Holland - Cinematographer - Los Angeles
    ________________________________
    phfx.com IMDB
    PHFX | tools

    2X RED Weapon 8K VV Monstro Bodies and a lot of things to use with them.

    Data Sheets and Notes:
    Red Weapon/DSMC2
    Red Dragon
     

  2. #1192  
    Quote Originally Posted by Christoffer Glans View Post
    Nothing. Or rather, yeah, we would have more possibilities in post,

    1) What motivates this desire?


    2) Which possibilities in post are you missing today that prevent you from expressing yourself and making a work of value that will move and inspire others for years to come?


    Analog > Camera feel optimization http://omeneo.com
    Digital > Camera performance optimization http://omeneo.com/primers

    imdb


    "Como delfines en el fondo del oceano
    volamos por el universo incentivados por la esperanza"

    "L'esperanza", Sven Vńth
    "It's a poor sort of memory that only works backwards"
    Jung/ Carol
     

  3. #1193  
    Quote Originally Posted by Phil Holland View Post
    We did an amazing study on Human Perception of DR last year! It was fun. And predictably, as with acuity, a moderate range was discovered. Variances in age certainly came into play, as did observational attentiveness, peripheral vision and light sensitivity; I would suspect even diet. I must be eating a lot of carrots. 20-26 stops is mostly the general range,

    Evolved over millions of years based on reflected, diffused and dispersed light.
    Seen within the matching environment and at variable distances, knowingly, gradually.


    NOT EMITTED light, coming from a WALL, in EDIT CUTS.


    Analog > Camera feel optimization http://omeneo.com
    Digital > Camera performance optimization http://omeneo.com/primers

    imdb


    "Como delfines en el fondo del oceano
    volamos por el universo incentivados por la esperanza"

    "L'esperanza", Sven Vńth
    "It's a poor sort of memory that only works backwards"
    Jung/ Carol
     

  4. #1194  
    Moderator Phil Holland's Avatar
    Join Date
    Apr 2007
    Location
    Los Angeles
    Posts
    10,999
    Quote Originally Posted by Hrvoje Simic View Post
    Evolved
    Not exactly. Consider that things like the sun emit light, and for much of that evolution it has been more or less the primary source. We did not test using display technology. Different tonal ranges represented in different volumes in a given viewing environment, in this case pitch black, revealed some interesting stuff.

    The brain does a lot of work, as do the eyes. Scaling bright and dark volumes in different shapes slowly allows a lot of key observations. In the case of measuring what people could see, having tonal variation across a range on the targets was interesting. Higher-contrast areas lowered the ability to see subtle shades. I'd say it's a growing study and will eventually be applied to display technology. We're already using the subtle-shade method on some of the HDR targets, which has provided a few interesting conversations on how best to use very bright screens.
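A hedged illustration of what "how many stops a bright screen spans" looks like as arithmetic; the peak and black luminance values below are made-up examples, not measurements from the study:

```python
import math

def display_stops(peak_nits: float, black_nits: float) -> float:
    """Contrast of a display expressed in stops (log2 of the luminance ratio)."""
    return math.log2(peak_nits / black_nits)

print(display_stops(100, 0.05))      # ~11 stops: SDR-grade monitor
print(display_stops(1000, 0.005))    # ~17.6 stops: HDR-grade panel
print(display_stops(4000, 0.0005))   # ~23 stops: pushing toward the 20-26 stop range above
```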

    Much of this was like visiting an optometrist's office. Can you see the letter? Can you see the line? How many lines can you count? And so on. Keeping a target static while hunting around for the various points was interesting, sort of like Where's Waldo, except sometimes Waldo is there but hiding in the shadows or the light.
    Phil Holland - Cinematographer - Los Angeles
    ________________________________
    phfx.com IMDB
    PHFX | tools

    2X RED Weapon 8K VV Monstro Bodies and a lot of things to use with them.

    Data Sheets and Notes:
    Red Weapon/DSMC2
    Red Dragon
     

  5. #1195  
    Quote Originally Posted by Phil Holland View Post
    Not exactly.
    No, it evolved exactly based on sunlight reflected off the surroundings. In space.

    So naturally you focus on one spot at one distance while another, much more luminous one is at a different distance.
    The viewer is inside that environment. So the light angle, type, focus distance and environment/eye accommodation properties - which determine the total light-energy effect (intensity and modulation) on the eye and visual apparatus - differ.

    So...NO, just because the eye can handle XX stops in a natural environment and natural conditions does NOT mean the same or a similar figure is applicable in a completely different context.
    Just as it does NOT mean you should be exposed to 120 dB of sound dynamics just because the sound system can handle it, or because some director/producer had a wet dream of compensating for the lack of content quality and real value with a sensory-overload circus.

    Because there is a sensation-OPTIMAL dynamic range and a MAXIMUM, and exposure to the extremes is damaging.


    Quote Originally Posted by Phil Holland View Post
    We did not test using display technology.
    You don't have to do tests using display technology to know that.


    Quote Originally Posted by Phil Holland View Post
    The brain does a lot of work, as do the eyes. Scaling bright and dark volumes in different shapes slowly allows a lot of key observations. In the case of measuring what people could see, having tonal variation across a range on the targets was interesting. Higher-contrast areas lowered the ability to see subtle shades. I'd say it's a growing study and will eventually be applied to display technology. We're already using the subtle-shade method on some of the HDR targets, which has provided a few interesting conversations on how best to use very bright screens.

    Okay...if my point was not clear enough, let me stress again that reflected light and emitted light are not the same, nor are an environment and a flat-surface light source, nor are self-guided viewing and served, edited content...

    ...and ignoring this with display technology, through insatiable technophiliac fantasias, fixations on endlessly increasing figures and oversimplifications of human vision by wishful thinking, is not only irrational behaviour but also a route which carries a penalty. On a large scale.

    As it has many times before in human history, when humans got too playful and irresponsible with technology, enchanted by the possibilities.


    The penalty...is very pricey.


    Analog > Camera feel optimization http://omeneo.com
    Digital > Camera performance optimization http://omeneo.com/primers

    imdb


    "Como delfines en el fondo del oceano
    volamos por el universo incentivados por la esperanza"

    "L'esperanza", Sven Vńth
    "It's a poor sort of memory that only works backwards"
    Jung/ Carol
     

  6. #1196  
    Quote Originally Posted by Christoffer Glans View Post
    Nothing. Or rather, yeah, we would have more possibilities in post, but we would probably have enough over our current specs with just 20 stops, 50-60MP and 16-bit. It's just that the end result of something shot now in 8K, 16-bit, 16.5 stops for a 4K delivery, compared to something with those specs for an 8K delivery, wouldn't give you much in perceivable delivery quality. Couple that with streaming bandwidth and services that need to provide a clean and perceivably lossless 8K stream compared to the same in 4K.

    On a 4K, 75-inch screen, which is among the most common large screens for the small number of people who get large screens, we don't gain much of anything by going higher than we are today. If we were making cameras for idiots who can't expose properly, then something way beyond what we have now would be appropriate, to give them the option to fix everything in post. But I assume that idiots aren't getting the gigs that require gear like the Monstro 8K, so for normally competent cinematographers there's not much more you can do beyond the specs of today's most high-end cameras.

    I would say that the very end of reasonable specs, in order to give plenty of options in post, would be around 50-60MP, 20 stops and 16-18 bit. Beyond that I fail to see, or rather, I've yet to be convinced by anyone, that there's any point in going higher. Especially when we take into account how to handle that data, and how to manage large-scale productions, post-production, streaming and delivery.

    I guess that for special applications, such as wall-OLEDs, we would have 100MP cameras with special lenses that can cover that resolution. But these would be special cameras for very specific applications and material. No one rational would ever use that for cinema or TV shoots.

    For 99.99% of all the world's productions, I would say we've started to enter diminishing returns. And if you have any experience as a photographer or cinematographer, an 8K, 16-bit, 16.5-stop camera gives you everything you need. Beyond that, there have to be some truly rational arguments to convince anyone that more megapixels are needed.

    It becomes truly irrational when we start comparing spec sheets for features that our human eyes would never see any difference between and which don't give us many more options in post anyway. It's like telling someone that the air you breathe in one room has 1% more air than the other room and will, therefore, give you a sense of a cleaner breathing atmosphere. All while you stand there and can't tell the difference between the two rooms, but the guy pushing the argument keeps saying you're a moron for not sensing the difference.


    Think about it, Christoffer.

    Here in Stockholm the cost to rent an LF or Monstro camera body is about equal to what any film worker charges per day. So the hair & makeup artist likely charges more than you pay to use an LF body...

    Then a simple calculation will tell you that any production that rents gear and has more than a very few people involved has very little to gain from not using the most capable cameras available. That's something that will not change any time soon.
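The "simple calculation" sketched out, with entirely made-up day rates purely for illustration; plug in your own local numbers:

```python
camera_body_per_day = 5_000          # hypothetical rental for an LF/Monstro body
crew_day_rates = {                   # hypothetical crew day rates
    "DP": 8_000,
    "gaffer": 6_000,
    "hair_and_makeup": 6_000,
    "grip": 5_500,
    "AC": 5_000,
}

day_total = sum(crew_day_rates.values()) + camera_body_per_day
share = camera_body_per_day / day_total
print(f"camera body is ~{share:.0%} of even this tiny crew's daily cost")   # ~14%
```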

    And it's not human perception that is the limit. That's wrong; basically, you want to be able to shoot things that the human eye cannot even see, bring them into the spectrum that we can see, and still have a picture that is pleasing to look at. So yes, people will not be pleased when the camera is as good as the eye; they will still ask for more. I think it's better to look at it as an endless game.
    Björn Benckert
    Creative Lead & Founder Syndicate Entertainment AB
    +46855524900 www.syndicate.se
    Flame / VFX / Motion capture / Monstro
     

  7. #1197  
    Senior Member Christoffer Glans's Avatar
    Join Date
    Jun 2007
    Location
    Stockholm, Sweden
    Posts
    3,965
    Quote Originally Posted by Phil Holland View Post
    The concept of perception over desire and intent is an interesting one, leading to a conversation of "that's probably good enough". I am not cut from that cloth and do desire a bit more from the content I personally create. Perception in practicalities is one thing, but as the world has pretty much discovered at this point, you can indeed see the differences between 2K, 4K, and 8K. It's been easier to have these discussions in recent years, as critical eyes have been able to digest content side by side, often via the same display manufacturers' offerings in the market. I'll reference Sony, Samsung, LG, and Sharp with unique opportunities to see the same content on different screens. For some, it's very appealing and noticeable. In actual study, the results are also there. But for others, well, good enough is good enough.
    It's not really that black and white, where because I call something into question I'm against progress. What I am trying to disassemble is the broader perspective on these things. All improvements start to reach diminishing returns as fewer and fewer people see differences between the end results of two different acquisition formats; that is a fact. There are fewer people who spot differences between 4K and 8K at normal viewing distances than there are who see differences between 2K and 4K.

    Maybe the real case actually is that there aren't any perceptible differences at all. I'm fairly knowledgeable about how academic studies work, with control groups, falsification, and verification procedures in order to actually get scientifically significant results and statistics. If there are studies that cleared falsification procedures with other researchers and were done the way proper psychological/biological studies are performed, then that publication would be handy to have, but so far I've yet to see actual scientific studies in this area. What I've mostly seen are tests made by industry people, in which the tests are greatly open to contamination, interpretation, and biases, where the result is either concluded from viewing situations that never occur, blind tests without control groups, pre-study biases affecting the test groups, and so on.

    The human psyche is so easily influenced that the biases we face when discussing these things are too influential to be ignored. And so far I've yet to see verified, published studies on this. What we instead mostly have is an inductive argument about the most rational conclusion.

    The argument has many sides beyond just the technical, which is important to remember.

    It is a fact that all the images we see with our human eyes have a point of diminishing returns in their quality, where we cannot perceive any differences anymore. If we reach these diminishing returns in our final output images, there's no point in pursuing further improvements in that quality, since they cannot be perceived. This one is pretty obvious and logical, and anyone opposing it needs to stop and think about it once again. The limits of our perception in this regard should be based on scientific studies and results, but it is just as important to remember that even if the scientific studies show that a number of people can discern differences, that might be less than 1% of the entire study group. Meaning: if the study points out the breaking point where we cannot see differences anymore, that is not the same as the point where diminishing returns start and our biases take over to decide our opinions about specific qualities. The real-world application is therefore not at the actual breaking point, but where a majority of the study group start to be unable to tell differences anymore.

    This is the diminishing returns I've been speaking about, which we are now entering for cinema and television work.

    Then there are the post-production benefits, which is where I diverge from Steve Yedlin a bit, as he doesn't see a point in going beyond 4K or 6K. For this, it's not really about the limitations of our human eyes, but a matter of what we can do with the material in post in order to get to that final image. This one is a bit trickier, because while we can point out that limitless acquisition, which enables you to do whatever you want with an image in post, is best for those who like those options, it's not really a realistic scenario for real-world situations. Going by this, the only real way forward is light-field technology, which enables total manipulation of the final image. This might just be how everyone makes movies down the line, but the real problem with it is storage limitations and data handling. It's not viable to do a real-world production with all that light-field data, and it won't be for a long time. So then we go back to traditionally captured images and look at their real-world applications. But what are enough specs before it becomes irrational?
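To put the storage and data-handling concern into rough numbers, here is an uncompressed back-of-envelope; the frame sizes, bit depth and the 5x5 light-field multiplier are assumptions for illustration, not any camera's actual recording rates:

```python
def raw_gb_per_hour(width: int, height: int, bits: int, fps: float, samples_per_px: int = 1) -> float:
    """Uncompressed data volume in GB for one hour of footage."""
    bytes_per_frame = width * height * samples_per_px * bits / 8
    return bytes_per_frame * fps * 3600 / 1e9

print(raw_gb_per_hour(8192, 4320, 16, 24))       # ~6,100 GB/h (about 6.1 TB) for 8K raw
print(raw_gb_per_hour(4096, 2160, 16, 24))       # ~1,500 GB/h (about 1.5 TB) for 4K raw

# A dense light field (say, 5x5 views of the same scene) multiplies that again
print(raw_gb_per_hour(4096, 2160, 16, 24) * 25)  # ~38,000 GB/h (about 38 TB)
```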

    That all depends on the specific project, of course, but if we're speaking about cinema and television, we are talking about lots of footage and lots of manipulation of that footage in post. It needs to be debayered properly down to true RGB pixels, it needs to have little to no noise for secondary color-grading accuracy and VFX work, and it needs to have enough stops to replicate true human-eye perception. So where are we today with this? In the most high-end systems? Are we able to produce images that go even beyond what the human eye can perceive?
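One way to read "debayered properly down to true RGB pixels" is the simple superpixel case: collapse each 2x2 RGGB quad of an 8K Bayer mosaic into a single full-RGB 4K pixel. This is a toy sketch of that idea, not RED's or any vendor's actual debayer pipeline:

```python
import numpy as np

def superpixel_debayer(raw: np.ndarray) -> np.ndarray:
    """raw: (H, W) RGGB Bayer mosaic with even H and W -> (H/2, W/2, 3) RGB."""
    r  = raw[0::2, 0::2].astype(np.float32)
    g1 = raw[0::2, 1::2].astype(np.float32)
    g2 = raw[1::2, 0::2].astype(np.float32)
    b  = raw[1::2, 1::2].astype(np.float32)
    return np.stack([r, (g1 + g2) / 2, b], axis=-1)

mosaic = np.random.randint(0, 65535, size=(4320, 8192), dtype=np.uint16)
rgb_4k = superpixel_debayer(mosaic)
print(rgb_4k.shape)   # (2160, 4096, 3): every 4K pixel gets its own measured R, G and B
```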

    We can go further: we can produce 10-16K cameras, we can increase stops to 20, we can produce close to noiseless images at higher sensor sensitivities, and it will all give us better options in post. But when is it enough, so that we actually won't gain much in terms of what we can actually do? These cameras aren't for idiots; they're not for those who just point at something and then fix everything in post. They're made for professionals who know how to expose an image and how to handle the footage they capture. How much more do we need?

    There's a balance between what is rational as a workflow and what is actually needed to create the images you imagine. We can technically push things further and further, but at what cost, and to what gain in value?

    We've become like wine tasters who talk about nuances between different wines, and when someone runs a spoof test on us with cheap wine, we fool ourselves through our biases into thinking it's a finer wine than it actually is. Because wine is really more a matter of taste, not a matter of a taster's pinpointed values that define price points. On numerous occasions I've witnessed high-profile professionals watch something under the best conditions and still be unable to pinpoint which cameras were used for acquisition, even when the acquisition was made with cheap-ass cameras. We've entered an era in digital cinema where we can no longer be sure what acquisition format was used, because diminishing returns make us unable to perceivably pinpoint the differences in what we're seeing.

    And all this actually has more to do with Komodo than it first seems. My argument is that because we've hit the point of entering these diminishing returns, there's less value in pursuing higher specs for image quality, and it becomes more important to look into the practical and physical attributes of a camera. If we keep the high-end specs that previously demanded much greater processing power and cooling, and instead work towards putting those specs into more versatile camera bodies with faster boot times, faster rig times, easier transportation, new camera angles, and so on, that is what actually opens up possibilities for cinematographers and filmmakers, NOT the number of stops or K's over what's possible today. I've done projects with big old Alexas, where the end result might have great color science, but the limitations of the camera body created very dull camera angles and restricted, locked-down imagery. And even if our current Red cameras are small, they are still heavy and can't fit on the lighter gimbals that would enable new, interesting possibilities.

    This is the point of Soderbergh's mentality of using smaller camera bodies and being fast during production. The industry discussion on this has ended up stuck in a technical loop that isn't really of any benefit to the actual working process of cinematography, a process that should now be focused more on how you can practically handle the camera than on what the acquisition specs are.

    If Red released news tomorrow of a new 12K, 20-stop, low-noise high-end camera, it wouldn't be as much of a "splash" in digital filmmaking possibilities as previous improvements in digital cinema have been. But if Komodo can do stuff close to what a Dragon 6K can do, in that small camera body, it actually opens up new ways of shooting, new actual possibilities, rather than possibilities that are esoterically insignificant due to diminishing returns in final output images.

    There's a technical side to progress, a creative side and a perceptive side. All of them need to happen at once; if you just push technical progress while the perception of that progress is insignificant and the creative progress gains little to nothing, there's nothing really gained by pushing the technical side. For me, if Komodo delivers well on specs, it will be a much more important step for Red than a new 12K DSMC3. It's much more significant to the camera market than I think Red themselves realize.

    This is why I passionately debate these things. Because I really don't want Komodo to be lacking in specs for that camera body size and price, as its potential is higher than a DSMC3 could really achieve, as long as people look past spec sheets and at the reality of cinematography.
    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600
     

  8. #1198  
    Senior Member Christoffer Glans's Avatar
    Join Date
    Jun 2007
    Location
    Stockholm, Sweden
    Posts
    3,965
    Quote Originally Posted by David Rasberry View Post
    The eye may sense a range of 20 stops, but it is the brain that provides us with an image derived from that data. What the brain presents us with is more typically a seven to ten stop DR range at any given moment, biased toward values that are most critical for pattern recognition, reaction time and survival.

    Suzy and I shot an interview with Dr. Semir Zeki, a neurological researcher specializing in the visual cortex. What he discovered while presenting images of art, people, and nature to research subjects wired with neural sensors embedded in the visual cortex is quite amazing. He found that visual responses to beauty, ugliness, and the sublime, as defined by Kant, are universal to all humans regardless of culture, class status, or education. Each is hardwired by evolution into a specific region of the visual cortex. The emotional responses appropriate to each concept follow from stimulation of the associated region of the visual cortex. So there is a strong, universal biological response associated with specific philosophical aesthetic concepts.
    Exactly. Technical studies of the human eye often ignore the psychological studies of perception. It rather becomes an appeal-to-authority fallacy used to drive an argument for the technical, rather than a viable conclusion about how we actually perceive reality.



    Quote Originally Posted by Hrvoje Simic View Post
    1) What motivates this desire?


    2) Which possibilities in post are you missing today that prevent you from expressing yourself and making a work of value that will move and inspire others for years to come?
    I'm maybe missing some more stops that are noise-free, in order to push the perceptive DR in an image, especially in HDR deliveries, but it's a rare need and I can probably do enough with the acquisition footage from most high-end systems available today. The desire is to match the perception of an image more closely to what our eyes see, but as mentioned, we're close to diminishing returns, and it's hard to justify pushing for more when most of the time the images are compressed in their contrast for a more beautifully contrasted image. Many times I think people confuse the technical with the creative, and getting more, more, more possibilities gets confused with the possibilities actually needed for creating good images. I rarely hear the world's greatest photographers, cinematographers or artists actually pushing for more technical specs; they want more support for the actual images they create, and more often than not the technical cutting edge isn't really the thing they desire, but other aspects, often ignored by the technicians responsible for innovation. I'm happy with what the Dragon 6K, the Gemini 5K and the Monstro 8K can do; I'd rather have those specs in smaller bodies with faster operational attributes, so that I can actually capture rare moments and interesting angles and camera movements without having to pay a fortune. This is why I see potential in Komodo, far more than others do, it seems.



    Quote Originally Posted by Björn Benckert View Post
    Think about it, Christoffer.

    Here in Stockholm the cost to rent an LF or Monstro camera body is about equal to what any film worker charges per day. So the hair & makeup artist likely charges more than you pay to use an LF body...

    Then a simple calculation will tell you that any production that rents gear and has more than a very few people involved has very little to gain from not using the most capable cameras available. That's something that will not change any time soon.

    And it's not human perception that is the limit. That's wrong; basically, you want to be able to shoot things that the human eye cannot even see, bring them into the spectrum that we can see, and still have a picture that is pleasing to look at. So yes, people will not be pleased when the camera is as good as the eye; they will still ask for more. I think it's better to look at it as an endless game.
    Yes, of course we want the very best, but I think you're confusing what I say about final images with the acquisition specs of a system; this is common. The final images produced today have reached diminishing returns, where people, even professionals, really can't tell the difference between cameras of varying cost and specs. But for acquisition we want, as you say, more specs to be able to work with the material in post toward that final image. What I'm arguing is that we shouldn't stare blindly at pushing these specs past what is rationally valuable for post-production. Here we have a cost-versus-quality balance in handling all that data, but we also have a balance between what is needed and what is not. The high-end systems today all give plenty of possibilities for the end result desired. We can improve on them, of course, but if we blindly stare at acquisition specs without regard to what is enough for what we actually need, we blindly push technology forward instead of looking at other valuable areas of camera attributes, for example camera body size and weight. Komodo is, in my opinion, much more of an innovation than pushing current high-end systems further in acquisition specs.

    In what way would a 12K, 20-stop camera change your current final output image compared to using a Monstro 8K? Or would it look similar, maybe even perceptually the same, when people experience the final result?

    I would say that the specs which would actually change how we use cameras would be dual-ISO, or maybe even triple-ISO, features together with noise-free sensitivity up to ISO 25,000. That actually opens up possibilities for cinematography. Beyond that, there's very little now in high-end systems that would create such differences in the final image that it opens up possibilities. What is possible is not the same as what is needed, and what you need is more important than what is possible.
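Purely as a conceptual sketch of what a dual-gain/dual-ISO readout could buy (an assumption about the general idea, not how any real sensor implements it): a high-gain read resolves the shadows cleanly, a low-gain read holds the highlights, and the two are merged where each is valid.

```python
import numpy as np

GAIN_RATIO = 16.0      # assumed gain difference between the two reads
CLIP = 1.0             # normalized clip level

def merge_dual_gain(low_gain: np.ndarray, high_gain: np.ndarray) -> np.ndarray:
    """Both reads normalized to [0, 1]; returns a scene-linear estimate on the low-gain scale."""
    high_as_low = high_gain / GAIN_RATIO          # put the clean read on the same scale
    use_high = high_gain < 0.95 * CLIP            # trust it wherever it has not clipped
    return np.where(use_high, high_as_low, low_gain)

scene = np.array([0.001, 0.02, 0.5, 0.9])         # toy scene-linear values
low   = np.clip(scene, 0, CLIP)                   # low-gain read keeps highlights
high  = np.clip(scene * GAIN_RATIO, 0, CLIP)      # high-gain read lifts shadows
print(merge_dual_gain(low, high))                 # ~[0.001, 0.02, 0.5, 0.9]
```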
    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600
     

  9. #1199  
    Senior Member Jaime Vallés's Avatar
    Join Date
    Dec 2006
    Location
    New York City
    Posts
    1,762
    Soooo... Can someone summarize what we do know about Komodo? Pretty please? I don't think I have the fortitude to read through almost 1200 posts to get an idea of what's what.
    Jaime Vallés

    AJV Media
    Video, photography & graphic design
    www.ajvmedia.com
     

  10. #1200  
    Senior Member Christoffer Glans's Avatar
    Join Date
    Jun 2007
    Location
    Stockholm, Sweden
    Posts
    3,965
    Quote Originally Posted by Jaime Vallés View Post
    Soooo... Can someone summarize what we do know about Komodo? Pretty please? I don't think I have the fortitude to read through almost 1200 posts to get an idea of what's what.
    - 6K with REDCODE RAW (Compression unknown)
    - Small-sized body that seems close to or smaller than a Canon R (but cube-sized)
    - 4K over SDI (So, recordable to Atomos and similar)
    - RF mount that enables an adapter with built-in ND filters
    - Using Canon camera batteries in a hot-swap fashion
    - Has some connection/connectivity to the Hydrogen (based on the price point being phrased in conjunction with the phone; it may also mean that those who bought the Hydrogen get a discount, so not confirmed)
    - May be used with other Red gear, hinted at by a picture from Soderbergh's "Let Them All Talk" shoot using a 7" display with the same interface as on regular Red cameras (could also be that he used normal Red cameras as well, so not confirmed)

    Other than that, only what Jarred wrote in the first post and what has been shown through his images on Instagram. There's no big news really, and Jarred mentioned that no news was given at IBC about the Komodo. Several of them are in field testing, so whenever Jarred spills the beans we may see footage from different people at the same time (I'm hoping for a "Let Them All Talk" trailer, but I think that production is too early in post for that).

    If I missed something, someone will fill it in.
    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600
     
