Thread: Using in-camera trackers for virtual camera

  1. #1 Using in-camera trackers for virtual camera 
    Senior Member Christoffer Glans's Avatar
    Join Date
    Jun 2007
    Location
    Stockholm, Sweden
    Posts
    4,254
    I've been advocating for opening up access to the in-camera tracking data (gyro etc.) so that footage can be post-stabilized the same way SteadXP does it. But so far it's only been met with "well, some VFX people on big productions are already doing this", which doesn't really advance anything for the common Red user.

    But there's another application I'm much more intrigued by: using that same tracking data to get a virtual camera without any post camera-tracking software. I've been in talks with SteadXP about this, but I think they're too small a team, and their user base is primarily GoPro users and amateurs rather than producers of larger productions. Their tracking essentially gives you a virtual camera in post, but it's locked into their own software, and the synced files can't be transferred to other applications.

    What would it take to make a script that pulls the in-camera tracking of a Red into a virtual camera in After Effects or Fusion? The things that need to be input by the user, or drawn from the metadata, are the sensor size (and sensor crop) and the lens used, plus the ability to scale the virtual camera.
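    For illustration, here's roughly what such a script could look like: a minimal Python sketch, assuming the gyro rotations have already been exported to a per-frame CSV (a hypothetical format; Red doesn't expose this today). It computes the vertical FOV from sensor height and focal length and writes a .chan file, which Nuke's Camera node can read and which is trivial to convert for AE or Fusion:

        import csv
        import math

        SENSOR_H_MM = 15.77   # e.g. Helium 8K S35 sensor height; adjust for crop
        FOCAL_MM = 35.0       # from lens metadata or user input

        # Vertical FOV from the pinhole model: 2 * atan(sensor_h / (2 * f)).
        vfov = math.degrees(2.0 * math.atan(SENSOR_H_MM / (2.0 * FOCAL_MM)))

        # Hypothetical input: one row per frame -> frame, rx, ry, rz (degrees).
        # Gyro data is rotation-only, so translation is written as zero.
        with open("gyro.csv") as src, open("camera.chan", "w") as dst:
            for frame, rx, ry, rz in csv.reader(src):
                dst.write(f"{frame} 0 0 0 {rx} {ry} {rz} {vfov:.4f}\n")

    The plumbing really is that thin; as the replies below make clear, the hard part is getting rotation data accurate enough to be worth importing.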

    I find it rather weird that we're still doing camera tracking the old way with SynthEyes, the AE and Fusion camera trackers etc., when we should be able to pull a virtual camera directly from the camera's metadata. Why is this not available as a script or plug-in? If I had the coding knowledge I would have done this years ago. Can someone explain why this hasn't come to be yet?
    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600

  2. #2  
    REDuser Sponsor Gunleik Groven's Avatar
    Join Date
    Dec 2006
    Location
    Norway
    Posts
    13,399
    Precision is needed, simply.

    Calibration, and many more reasons.
    It is doable, but as of yet it's not really low-hanging fruit.

    You need to adjust for gyro drift, which is a huge headache if you want to rely on the gyro only. And it's not really suitable for... a lot, without additional data placing the camera in 3D space.
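    For a sense of what correcting gyro drift involves: the classic first step is a complementary filter, blending the integrated gyro rate (smooth short-term, drifts long-term) with the accelerometer's gravity vector (noisy short-term, drift-free long-term). A rough Python sketch, assuming synchronized gyro and accelerometer samples in a hypothetical layout:

        import math

        ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

        def drift_corrected_pitch(gyro_rates_deg_s, accel_xyz, dt, pitch=0.0):
            """Yield one drift-corrected pitch angle per sample pair."""
            for rate, (ax, ay, az) in zip(gyro_rates_deg_s, accel_xyz):
                # Integrate the gyro rate -- smooth, but drifts over time.
                gyro_pitch = pitch + rate * dt
                # Gravity-based estimate -- noisy, but anchored to the world.
                accel_pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
                pitch = ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch
                yield pitch

    Even then, this only pins down pitch and roll; yaw has no gravity reference, and position needs another sensor entirely, which is exactly the "placing the camera in 3D space" problem.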


    This little thing has caught my attention though.
    https://www.intelrealsense.com/lidar-camera-l515/

    Things will come. Eventually.
    Last edited by Gunleik Groven; 01-26-2020 at 01:28 PM.

  3. #3  
    What Gunleik says. And also Christoffer. Gyro data knows extremely little about position; all you get is rotations, and with the current gyros the rotation data is of poor quality. I've been deep into this; at the link below you can download and test it for free. It works on most PCs, and on Macs up to High Sierra.

    We made it with the latest SDK, so it should also work with Komodo, which hopefully has a better gyro. The exported FBX can be brought into Fusion, Nuke, Flame etc., and has the right FOV according to the sensor size and lens used, or lens data input by the user.

    www.syndicate.se/axis
    Björn Benckert
    Creative Lead & Founder Syndicate Entertainment AB
    +46855524900 www.syndicate.se/axis
    VFX / Flame / Motion capture / Monstro

  4. #4  
    I've been petitioning for apps like Nuke to read the per-frame gyro data to help guide 3D tracking. As mentioned above, there's no positional data and what you do get is not precise, but it could be really useful to help inform the tracking process. The 3D tracking algorithms are remarkably good these days, but the more data the better...

    cheers
    Paul
    -------------------
    insta: @paul.inventome

  5. #5  
    Quote Originally Posted by paulcurtis View Post
    I've been petitioning for apps like Nuke to read the per-frame gyro data to help guide 3D tracking. As mentioned above, there's no positional data and what you do get is not precise, but it could be really useful to help inform the tracking process. The 3D tracking algorithms are remarkably good these days, but the more data the better...

    cheers
    Paul
    It's a bit complex; we've been looking into it a lot, and yes, it would be easy for SynthEyes to read R3Ds and use the pixel pitch, resolution and focal length / zoom values to assist their pixel track. But the zoom data is jittery as F#€#% if you're not on a fixed lens, and the rotational data drifts so much and is so inaccurate that it's only good as an indicator of which direction the camera spins, and only over a few frames or so.

    So the question is whether that data would do much good: if you introduce more noise than you remove, it doesn't really improve things.

    Axis gives you as good an FBX camera of the camera move as you can get; it can be filtered in Nuke, Fusion or whatever 3D camera app and then used, but the drift and noise are so big that it's not really any good for either high-frequency or low-frequency rotations, which leaves it kind of useless.
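    For what "filtered in Nuke, Fusion or whatever" amounts to numerically, here's a minimal sketch (assuming a rotation channel as a plain per-frame array) of the kind of smoothing those apps apply to a curve:

        import numpy as np

        def smooth_channel(values, window=15):
            """Moving-average smoothing of one camera channel.
            window is in frames: larger removes more jitter, but also
            softens intentional camera moves."""
            pad = window // 2
            padded = np.pad(np.asarray(values, float), pad, mode="edge")
            kernel = np.ones(window) / window
            return np.convolve(padded, kernel, mode="valid")[:len(values)]

    That's the problem in a nutshell: a filter can hide jitter, but once drift and noise have swamped the real motion, there's nothing left to recover.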

    Here is a film where you can clearly see the issues: when stabilizing using the data, the image drifts and shakes more than it did as shot.



    Here is a rough demo if you want to try it yourself, but again it's a bit of a ghost chase, as there isn't really anything to find until we have SteadXP-type gyros with a high sample rate. As I understand it, for 25p you need something like 120 Hz, and you also need to use the readout time of the sensor to compensate for rolling shutter.
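    To illustrate why the gyro sample rate matters for rolling shutter: the correction needs a rotation estimate per scanline, not per frame, so the gyro stream has to be interpolated across the sensor's readout window. A sketch with assumed (hypothetical) numbers:

        import numpy as np

        GYRO_HZ = 120.0     # gyro sample rate
        READOUT_S = 0.016   # sensor readout time, top row to bottom row
        ROWS = 4320         # scanlines in the frame

        def per_row_angles(gyro_angles, frame_start_s):
            """Interpolate gyro angles (one per sample, starting at t=0)
            to one angle per scanline of a frame whose readout begins
            at frame_start_s."""
            sample_t = np.arange(len(gyro_angles)) / GYRO_HZ
            row_t = frame_start_s + np.linspace(0.0, READOUT_S, ROWS)
            return np.interp(row_t, sample_t, gyro_angles)

    At 25p a frame lasts 40 ms, so even 120 Hz only gives four or five samples per frame to interpolate between, which is why higher-rate SteadXP-style gyros make such a difference.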

    SteadXP does all this quite well; Red should cooperate with them and offer a SteadXP module that uses the internal data together with the SteadXP gyro. I would easily sacrifice a little bit of compression to get a SteadXP file in each R3D container when shooting. Such a feature would, as I see it, really push things forward for Red in terms of VFX / Steadicam / gimbal work. With the resolution they offer, the SteadXP approach to achieving smooth motion is really valid. Getting something like that together I see as much more important than kicking out new / different camera / sensor models. But possibly that's just me.

    https://vimeo.com/377408405





    A good trick I find to improve tracking is witness cameras. Put a GoPro on the camera rig, preferably at a known distance from the nodal point, and give both to SynthEyes. The main camera can then be shot with shallow focus etc., while the reference camera keeps a sharp small-sensor / closed-iris image for tracking. Here I did it with our B&Ls, as I was a bit worried about the 3D track; it's a bit too close to the main camera's angle, but still a huge help as it has a much wider FOV.
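    The "known distance from the nodal point" is what makes the witness solve transferable: once the witness camera is tracked, the main camera's pose is just the solved pose composed with the fixed rig offset. A minimal sketch with 4x4 matrices (numpy; the offset values are purely illustrative):

        import numpy as np

        # Rig offset: main camera expressed in the witness camera's frame,
        # measured once on set (here 12 cm along x, hypothetical).
        offset = np.eye(4)
        offset[0, 3] = 0.12  # metres

        def main_camera_pose(witness_pose):
            """witness_pose: 4x4 camera-to-world matrix from the 3D solve.
            Returns the main camera's camera-to-world matrix."""
            return witness_pose @ offset

    Any rotation between the two mounts belongs in the offset matrix too, which is why a rigid mount and a measured rig pay off.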

    Gopro mic by Björn Benckert, on Flickr
    Björn Benckert
    Creative Lead & Founder Syndicate Entertainment AB
    +46855524900 www.syndicate.se/axis
    VFX / Flame / Motion capture / Monstro

  6. #6  
    REDuser Sponsor Gunleik Groven's Avatar
    Join Date
    Dec 2006
    Location
    Norway
    Posts
    13,399
    Witness cameras are good.

    And if you can have them shutter-sync'ed with your main camera you get even further.
    Two are better than one...

    But I do think a "live" LiDAR + precise gyro with some sort of drift-correction + an optical camera is probably where this should be going.
    All shutter synchronised.

    And with a bit of AI in the soup. :)
    Not least to ignore the parts of the image that are just "noise".

    But waddaIknow

  7. #7  
    Senior Member Christoffer Glans's Avatar
    Join Date
    Jun 2007
    Location
    Stockholm, Sweden
    Posts
    4,254
    Quote Originally Posted by Gunleik Groven View Post
    Witness cameras are good.

    And if you can have them shutter-sync'ed with your main camera you get even further.
    Two are better than one...

    But I do think a "live" LiDAR + precise gyro with some sort of drift-correction + an optical camera is probably where this should be going.
    All shutter synchronised.

    And with a bit of AI in the soup. :)
    Not least to ignore the parts of the image that are just "noise".

    But waddaIknow
    Yes, this is where I'm going with it. There's tech out there that can record position in 3D space, but nothing has appeared on the market for filmmaking. If someone were to develop some sort of "box" to put on the camera that syncs up with the timecode of the filmed footage, giving you a perfect virtual camera in post, that would be a huge game-changer and would cut all the time spent on tedious camera tracks in post.
    Just the fact that our phones do augmented reality, live-tracking the environment, should speak volumes about what a dedicated technology for film could be.
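    The timecode sync itself would be the easy part of such a box: if both devices carry SMPTE timecode, you convert it to an absolute frame number and index the tracker's pose log with it. A small sketch (pure Python; the pose-log layout is hypothetical):

        def tc_to_frame(tc, fps=25):
            """'HH:MM:SS:FF' -> absolute frame number (non-drop-frame)."""
            h, m, s, f = (int(x) for x in tc.split(":"))
            return ((h * 60 + m) * 60 + s) * fps + f

        # poses: {frame_number: (tx, ty, tz, rx, ry, rz)} logged by the box
        def pose_for(tc, poses, fps=25):
            return poses.get(tc_to_frame(tc, fps))

    The hard part, as the replies above make clear, is producing poses worth syncing in the first place.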

    I just wish I had the engineering skills to do something like this, but I do want to have that kind of tech. That, and 6K Z-depth mapping of shot material.
    "Using any digital cinema camera today is like sending your 35mm rolls to a standard lab. -Using a Red is like owning a dark room."
    Red Weapon 6K #00600

  8. #8  
    REDuser Sponsor Gunleik Groven's Avatar
    Join Date
    Dec 2006
    Location
    Norway
    Posts
    13,399
    I have been looking a little bit into this occasionally.

    So we do have "a box" collecting these data.

    But it is just not convenient enough yet to give people the touch-and-go user experience that you are in fact asking for.

    It sort of always explodes into "Intel's volumetric stage" size when you put all the bits together.

    That said, I am a techno optimist, and I think these things will be convenient in my lifetime... :)

    We started off with structuring the input data needed.

    We have come a bit that way.

    But we really also need to get to the point where you don't need a PhD in informatics to take out the benefit.

    That will come.

    In the meantime, solutions like nCam aren't bad, if you can accept the lack of precision and other limitations.

    For what it does, it is great.

    I have decided to play a bit of wait-and-see and take a "what can actually give great benefits today" approach to the problem.

    On a research level a lot of these things are close to "solved". Or at least "in the ballpark of being resolved".

    But then there is the game of waiting for available tech and convenience to catch up.

    As Björn (and I) mentioned... A gyro is not really the answer to this problem in itself. At least not today.
    But it is a small and significant part of it.

    Possibly.

    Alongside other tech.

    As long as it is only incrementally "better", I don't see how, for example, Quine can benefit from releasing what we have.
    Then I think it is better to try to get people used to generally better ways to produce and structure their data.
    THAT has a much bigger impact NOW.

    But as the tech gets more usable and generally available, I think we can build realtime or post camera positioning into a very robust system, if there really is a huge demand.

    But for now we have taken the position that:
    It can be done. We have a pretty good idea of how it can be done conveniently.
    In the meantime we just need to prepare productions with solutions that make it even thinkable to take advantage of such possibilities when they are ready at a prosumer level.

    If that makes any sense.

    You need to hit the friction/benefit ratio of a "product" to do this.

    It is not that it is impossible to do.
