✨Tutorial BorisFX Syntheyes 2024 en Español🔥3D Camera Motion tracking para CGI y VFX Brutal!🚀

Video Statistics and Information

Captions
Hello! Welcome to the channel. Today we have a video tutorial that you have been asking me for for a long time: everyone interested in camera tracking has been asking me, non-stop, to make a SynthEyes tutorial. For those of you who don't know it, SynthEyes is one of the most advanced camera tracking applications on the market right now; it is used both in production and by individual users, and it is outstanding. Today, finally, thanks to the people at Boris FX, who were kind enough to give me a license so I can show you how SynthEyes works, we are going to see what it is about and start exploring this super powerful software. You are going to be amazed by it. And stay until the end of the video, because I'm going to give you a code for a 15% discount on SynthEyes. Of course, I have to warn you that SynthEyes is a very complex program, full of menus, options and functions, so we will not be able to cover everything in a single tutorial. Today we will do the introductory part, and little by little I will release new installments so that we can see from start to finish how this amazing software works. Well, let's get to it.

This is the first thing you see when you open SynthEyes 2024: a fairly simple menu from which we choose what we want to do. We are in the first of the rooms; SynthEyes calls all these tabs at the top "rooms". The one that opens by default is called Summary, and from here we import our material and start tracking. I'm going to select this scene that I recorded, and here I get a dialog to define the characteristics of the footage I am importing into SynthEyes. I know my frame rate is 24 frames per second, so I leave it at 24, and I know it is a 16-bit PNG sequence, so I select 16 bits. You would also have the Half and Float options if you were working, for example, with EXR or DNG sequences. In my case, I leave it at 16 bits. Here I also have the preprocessing options: if you had shot in log, for example, you could adjust how the image looks or even apply LUTs. In this case, since I recorded a very simple scene with my mobile phone at my workplace, I don't have to do anything, so I just click OK.

You also have the More button, with a lot of extra parameters. For example, you can change the in and out frames, choose whether the frame rate corresponds to film or film with drop frames, whether it is PAL or NTSC, whether it is interlaced or not, and which field to use; in my case it's progressive, so I don't touch anything. There is also the bit depth of the processed image, which we saw before, and how we want it to be saved. We also have the anamorphic squeeze option: if we had shot with an anamorphic lens, we could enter the squeeze value here so that our image is adjusted and looks correct. I just click OK.

As you can see, my footage is now loaded in the first tab, the Summary tab. If I hold the middle mouse button and drag, I can pan the image, and with the mouse wheel I can zoom in and out. Here I have the timeline, and as you can see, scrubbing back and forth is super fast. And here I have the readout of the frame I am on.
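A quick aside on the anamorphic squeeze setting mentioned above. This is a minimal Python sketch of the idea only (not SynthEyes code; the file name and the use of OpenCV are my own assumptions): de-squeezing a frame means stretching it horizontally by the squeeze factor so the geometry is correct before tracking.

```python
# Illustrative sketch only (not SynthEyes code): what the anamorphic "squeeze"
# value means for imported footage. A 2x anamorphic lens records the image
# horizontally compressed; de-squeezing stretches the width back out so the
# pixels become square and the geometry is right for tracking.
import cv2  # assumption: OpenCV is available

def desqueeze(frame, squeeze=2.0):
    """Stretch the frame horizontally by the anamorphic squeeze factor."""
    h, w = frame.shape[:2]
    new_w = int(round(w * squeeze))          # e.g. 1920 -> 3840 for a 2x lens
    return cv2.resize(frame, (new_w, h), interpolation=cv2.INTER_LINEAR)

frame = cv2.imread("plate_0001.png")         # hypothetical file name
wide = desqueeze(frame, squeeze=2.0)
```

In SynthEyes you only type the squeeze value into the import dialog; the sketch is just meant to show what that number does geometrically.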
In this Summary room I also have the Auto option, a gigantic green button, which is the first one we are going to use to process our footage and extract the camera track. There are several options, such as indicating whether a zoom lens was used or whether the shot was recorded on a tripod. The option called Corners tells SynthEyes to look for areas of the image that have corners, which usually correspond to structures that humans have built. It prompts me to save; I tell it to save in the same folder where I have the original material and click Save. I have Corners enabled, and there is also the Fine Tuning option: with Fine Tuning it takes a little longer to process the image, but the result is more precise. Keep in mind that the first result is never final; once I obtain a solve, I will keep refining it to bring the pixel error threshold down until I have a solid track. In this case I enable Fine Tuning and press the Auto button.

SynthEyes has now tracked my scene. I have a lot of points here, some of them in red, and this number here indicates the error in horizontal pixels. As we have already seen in all the tracking tutorials on the channel, ideally our error should be below one pixel, and below 0.5 is even better. Also notice that when it finished, we ended up in the Solve room. Inside the Solver room I have many more options with which I can try to improve the result that SynthEyes produced with Auto.

Notice there are a lot of these so-called rooms; I can also go to the Features room. The features are all these lines you see here; let's zoom in a little so you can see it better. SynthEyes makes 2D traces of a huge number of points in the image, and from all that information it decides which ones are more precise or more stable over time; those are the ones it keeps and converts into the green trackers we saw. These are what SynthEyes calls blips, and there are big blips and small blips. They are not the tracker points themselves; they are simply an analysis of the image, from which it extracts all the information it needs to reconstruct the camera lens and the camera movement, which is incredible.

I also have the option, if I go to the Trackers room, to add manual trackers: points I am interested in tracking myself. If I want to add a manual point, imagine I zoom in a little more and I want a point in this area here; if I press the C key and click, it automatically creates a tracker for me. These are normal 2D trackers, but all that information will be used by SynthEyes to generate the camera movement. How does it do that? Through what is called parallax: it tracks points and, from the difference in movement between points in the foreground, the mid-ground and the background, it performs an analysis and calculations to determine the real position of my camera, the lens it has, and so on. So I now have this 2D point I just added; notice that this inner box is the size of the pattern, the sample, and this outer box is the area in which the search is going to be done, exactly the same way it works in After Effects or Blender.
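To make the sample/search-area idea concrete, here is a minimal sketch using OpenCV template matching. It is my own illustration of the concept, not how SynthEyes' tracker is actually implemented: a small pattern from one frame is searched for inside a larger window of the next frame.

```python
# Minimal sketch of the "sample vs. search area" idea behind a 2D tracker
# (illustrative only; SynthEyes' real tracker is far more sophisticated).
import cv2

def track_point(prev_frame, next_frame, x, y, sample=15, search=40):
    """Find where the pattern centered at (x, y) moved to in next_frame."""
    # Cut the small reference pattern (the "sample") from the previous frame.
    patt = prev_frame[y - sample:y + sample, x - sample:x + sample]
    # Cut the larger search window from the next frame.
    win = next_frame[y - search:y + search, x - search:x + search]
    # Slide the pattern over the window and keep the best match.
    res = cv2.matchTemplate(win, patt, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    dx, dy = top_left
    # Convert the match location back to full-frame coordinates.
    new_x = x - search + dx + sample
    new_y = y - search + dy + sample
    return new_x, new_y, score   # a low score means the point was "lost"
```

A small sample keeps the pattern distinctive, while a larger search area lets the tracker survive faster camera moves, at the cost of more computation.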
To track this point, all I have to do is press Play; I don't have to do anything else. Note that I have the option to track forward; if I wanted to track backwards, I would simply come here and flip the arrow, and when the arrow changes you can see the Play control below changes too. In my case I'm going to track forward, so I press Play. What happened? That point disappears from view quite quickly. With the S and D keys I can step backward and forward frame by frame, and at the moment my pattern disappears I deactivate the tracker. Notice it has actually already deactivated it automatically; it is this button here that enables and disables the tracker. And I am also going to lock it, because from that frame onwards I no longer want it to be taken into account.

If I want to add another point, ideally in an area that stays visible for longer (although honestly none of them stay visible for very long), I can use, for example, this point here. I also have this control here, where I can move and refine the position of the analysis zone, and I can do the same thing from here: if I click, a thumbnail appears showing exactly where I am placing the track area. I track this one, and notice that again, when it disappears, it deactivates automatically. I lock it as well, and these points will now be added to my tracking analysis. Back in the Solver room, since I have added two new points, I can change the mode from Automatic to Refine, and I can also enable Slow but Sure, which, as the name says, is slower but more reliable, that is, more precise. I enable it too and run a refine. My error right now is 1.5 pixels, and what I'm going to do is try to bring that number down a lot more.

First, I'm going to go to Features, where you can see a little more of how the blips we mentioned before work. It's also worth taking a look at the points it created, so I solve again to see it more clearly and check whether any of them behave strangely. Apparently they are all fine; just by scrubbing I get a pretty good idea of what is happening, and here I can also see my camera path and the nine points it has created that lay out my scene. But nine points is very few; I would like to have more. How do we get that? We go to Features and select Advanced. In this dialog I change the view from Normal to Small, and you see a black-and-white image with a blur applied: this is what SynthEyes is looking at when it searches for what it calls the small blips. Here I have Small Blip Size and Small Blip Density. If I change the size, the blur increases or decreases accordingly: the more blur, the larger the areas SynthEyes considers, and the less blur, the denser the set of candidate areas it can pick for tracking. I'm going to go down from 7 to 4, and we'll leave the Big Blip Size at 16. The density is usually adjusted according to the number of frames in the scene; in this case I have about 300 frames, so I set it to 300. And if I switch the view to Big, this is what SynthEyes is seeing at that scale.
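As a rough mental model of the blips, and only as an analogy I am adding here (SynthEyes' real detector is its own algorithm), you can think of blurring the image at a chosen scale and then picking strong, well-separated features: more blur gives fewer, larger candidates, less blur gives a denser set.

```python
# Rough analogy for the "blip size / density" controls: blur the image at a
# chosen scale, then pick strong corner-like candidates at that scale.
# Illustration only, not SynthEyes' actual feature detector.
import cv2

def find_blips(gray, blip_size=7, max_count=300):
    """gray: single-channel 8-bit frame. Returns up to max_count candidates."""
    # Bigger blip_size = stronger blur = fewer, larger candidate areas.
    blurred = cv2.GaussianBlur(gray, (0, 0), blip_size)
    # Pick strong, well-separated features in the blurred image.
    pts = cv2.goodFeaturesToTrack(blurred, max_count, 0.01, blip_size)
    return [] if pts is None else pts.reshape(-1, 2)
```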
I switch back to Normal, and for this to take effect I click Blip All Frames. Notice how the arrangement of points has changed, but my error has not, because I haven't solved again; I have simply re-analyzed the image, telling it to follow other patterns from now on when looking for points. If I want this to become part of the solve, I go to the Solver room, I'm in Refine, and I click Go. My error has changed: now I have 1.4, so I have improved a little, and the layout of my point mesh has also changed. I can lower this error even more. To do that, another important step is to calculate the lens distortion, so I come here and tell it to calculate the distortion.

When we are doing camera tracking, lens distortion plays a very important role. I can have perfect camera tracking, but I have to replicate that distortion in everything I add on top; otherwise, at certain points the objects will appear to move, to slip, even though the scene is perfectly tracked, and they will not look nailed down. What is normally done is to extract a distortion model, which can be used in two ways: to remove the distortion from the original plate, that is, from the video material I recorded, or to add distortion to the objects I am placing in the scene so that they match the original distortion of the lens. That way all the elements I add to my VFX shot carry the same distortion as the original lens, and that is handled by what we are going to look at now, the Lens Workflow.

Here I have the Lens Workflow, and notice that I have two options. The first is Undistort, a one-pass workflow: it removes the distortion from my original plate, so I can export that information (we will see how later) and add any object directly onto a plate from which the lens distortion has been removed. The second, the Redistort workflow, applies the calculated distortion to the elements we add to our shot and then removes it from the composite, returning the plate to how it was, with the added elements deformed according to the lens distortion we have calculated. We will look at this in depth when we do the exports; you will see that there are a lot of export options from SynthEyes to a lot of applications. For now, I'm going to use the first model, the one-pass Undistort, so SynthEyes will remove the distortion from my image. I click OK, and now I have to click Go again. Notice that the threshold has gone back down, to 1.08 pixels, which is already much better.

When calculating the distortion, if I open this option called More, I get a lot of extra settings. This is the classic algorithm SynthEyes uses to model the lens, but newer, much more modern models have been added that work much better. The newest one is this one here, called 4th Order Radial, and I'm going to activate it.
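For reference, a generic radial distortion model looks like the sketch below. Note that the C2/U2/V2/C4/U4/V4 coefficients of SynthEyes' 4th Order Radial model are its own parameterization; this sketch only shows what 2nd- and 4th-order radial terms do to an image point.

```python
# Generic sketch of a radial lens-distortion model (2nd- and 4th-order terms).
# SynthEyes' "4th Order Radial" model has its own parameters (C2, U2, V2, C4,
# U4, V4); this only illustrates the general idea of radial terms.
import numpy as np

def distort(x, y, k2=0.0, k4=0.0):
    """Map an undistorted, centered image point to its distorted position."""
    r2 = x * x + y * y                      # squared distance from the optical center
    factor = 1.0 + k2 * r2 + k4 * r2 * r2   # radial scaling of the point
    return x * factor, y * factor

# Points near the edge (large r) move much more than points near the center,
# which is why wide-angle phone footage "slips" at the borders if distortion
# is ignored when compositing 3D elements on top.
print(distort(0.9, 0.5, k2=-0.12, k4=0.02))
```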
Notice that we have a lot of parameters, and what we're going to do is enable them one by one, refining the track each time so you can see how the error changes. It is important to do it one by one, because each calculation builds on the previous one; if we enable everything at the same time, the result may be worse than expected. So I activate the first one, called C2, check it here and click Go. Notice that the error has gone up a little. Now I activate the next two, U2 and V2, and click Go again; the error is still high. Finally I activate the last three, C4, U4 and V4, and click Go. My error is now 1.126 pixels, slightly higher than what I had before.

If I want to see how this translates into the image, I come to this Lens dialog and look at the distortion grid SynthEyes is generating: here I can see how my lens is distorted. That is because I shot this with my phone's wide-angle lens, which has a lot of distortion, especially at the edges. So let's try to improve this error and bring it down, because right now it has risen to about 1.1. I go back to Features, to the Advanced controls, raise the Big Blip Density to 300 as well, set the Maximum Tracker Count to 300 again, and click Blip All Frames. Notice how the density has changed again, but now I have a lot of points tracking there, so I select this option here called Peel All, which removes all the trackers that no longer comply with the new parameters I've given it. I go to Solver and click Go again: 0.8, with a higher point density. I now have a much better point cloud, and this is a pretty good result.

Another of the really important things about SynthEyes is that it gives us total control over every aspect of our camera tracking. Notice here that this is the origin of our scene, and it has automatically detected that the ground is in this area here, but it seems to be a little tilted. As I said, I have practically total control over every aspect of the camera tracking, and when we work with professional software like this we immediately notice the difference in the number of options we have. It's true that at first it is a little intimidating, because there are so many options, but in the end what you get is a degree of control that more automated software will not give you, and the more control we have, the better. So, if I want to modify my scene, I can come to this tab called 3D, where I have several options: reposition, rotate and scale. I can rotate individual elements or the whole set; if I click Whole here, I am selecting all the objects, and now if I choose Rotate and click at a point, for example here, you can see I'm rotating my scene. I could straighten it.
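Conceptually, "rotate the whole scene" is just one rigid rotation applied about a pivot to every solved point (and to the camera path). Here is a small numpy sketch of that idea, as an illustration only, not SynthEyes internals.

```python
# Sketch of what rotating the whole scene does conceptually: one rigid rotation
# about a pivot applied to every solved point so the ground plane becomes level.
import numpy as np

def rotate_scene(points, pivot, axis, angle_deg):
    """Rotate Nx3 points around the unit vector `axis` through `pivot`."""
    a = np.radians(angle_deg)
    x, y, z = axis
    c, s, t = np.cos(a), np.sin(a), 1 - np.cos(a)
    # Rodrigues rotation matrix for an arbitrary axis.
    R = np.array([[t*x*x + c,   t*x*y - s*z, t*x*z + s*y],
                  [t*x*y + s*z, t*y*y + c,   t*y*z - s*x],
                  [t*x*z - s*y, t*y*z + s*x, t*z*z + c  ]])
    return (np.asarray(points, dtype=float) - pivot) @ R.T + pivot

# Example: tilt the whole point cloud 5 degrees around the X axis at the origin.
cloud = np.array([[1.0, 0.0, 2.0], [0.0, 1.5, 1.0]])
print(rotate_scene(cloud, pivot=np.zeros(3), axis=(1, 0, 0), angle_deg=5))
```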
Let's go now to this view here and straighten it a little more from here as well. What I am doing now is adjusting my coordinate axes, placing the ground of the scene where I think it should be, and this point here, this area here, would be the zero, the origin, of my scene. Actually, sorry, I would prefer the origin of my scene to be more or less at the position of the camera, so instead, with Whole active and the shift (move) option selected, I move my entire scene so that zero is there. If I switch to another view you can see the same thing: now zero is at the point where my camera is located.

If I come here to Coordinates, notice that SynthEyes has generated a lot of points and coordinate constraints. I could also tell SynthEyes that I want to define the origin myself, along with the coordinate axes that will be generated from that point. I can take all this information, delete it, and instead add just three points: one that will be my origin, one that defines a direction that can be the X axis, and another point lying in the same plane as the others, and with those three points define the origin of my scene and its rotation and orientation.

Before that, imagine, for example, that I go to the Trackers room because I know I am going to add a 3D object and I want the center of my scene to be at a point that, for whatever reason, SynthEyes has not selected. Imagine I want my main object to be here, in this area here. I am going to add a new manual tracker. Within the manual trackers, something I haven't mentioned yet: here we choose how we want the search to behave. We don't only have this default option; there are many more, and we will look at all of this in much more detail in a future SynthEyes installment, because as I said there are a lot of options and for now I just want you to have a general idea of how SynthEyes works. I add my tracker here, and here I could tell it, for example, that instead of looking at all the RGB channels it should only look at the red one; if we choose red, green or blue, that is the channel SynthEyes will use to extract the information. For now I leave it at the default and start tracking forward. It lost the point right away, and on top of that the point moves a lot, so I enlarge the search area, and instead of pressing Play I can also step frame by frame. I have to make the search area larger because the camera moves quite quickly here. Now it's going much better. Here the point is lost, but that's fine: I change the direction and track again, and here something happened, it went crazy, so from this frame on I disable it and lock it. I want this to be my origin even if it is not visible in all the frames, and that's not a problem, because I can go to Solver, click Go, and now that point exists for the entire length of my shot.

So here I have the point we have chosen. Now, if I want to delete all the automatic coordinate information, I simply come here to this asterisk button and remove all these axes, and we choose the points by hand: the first one is going to be the origin, I choose another one that lies on the same axis, and another one that lies on the same plane, for example this one here. It asks me whether to apply that coordinate system, and I say yes.
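The three-point setup (an origin, a point along the desired X axis, and a point in the ground plane) boils down to building an orthonormal basis and re-expressing the solved scene in it. Here is a small numpy sketch of that math; it is my own illustration, not SynthEyes' code.

```python
# Sketch of defining a scene coordinate system from three tracked 3D points:
# an origin, a point along the desired X axis, and a third point lying in the
# ground plane. Illustrative math only, not SynthEyes' implementation.
import numpy as np

def frame_from_points(origin, on_x, in_plane):
    """Return (origin, R) where the rows of R are the new X, Y, Z axes."""
    origin, on_x, in_plane = (np.asarray(p, dtype=float)
                              for p in (origin, on_x, in_plane))
    x = on_x - origin
    x /= np.linalg.norm(x)                 # X axis points from origin toward on_x
    z = np.cross(x, in_plane - origin)
    z /= np.linalg.norm(z)                 # Z (up) is the normal of the ground plane
    y = np.cross(z, x)                     # Y completes a right-handed basis
    return origin, np.vstack([x, y, z])

def to_scene_coords(points, origin, R):
    """Re-express solved 3D points in the new origin and orientation."""
    return (np.asarray(points, dtype=float) - origin) @ R.T
```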
Now my scene has changed based on those points I defined: my origin is the tracker we added manually, and my scene sits in the position I consider correct. Let's look at another of the really important things that should always be done in SynthEyes, which is telling it what erroneous information we want to remove so that the error margin is as low as possible. We didn't do it in the first pass, so we are going to do it now. If we go to the Track menu and press Clean Up Trackers, or the shortcut capital C, notice that I have a lot of options. Here I can select, for example, Bad Frames and choose what I want it to do: delete the points that are wrong in incorrect frames, or disable them. For now I choose Disable and click Fix to remove, or try to fix, all these problems. It tells me there are some short-life trackers, meaning track points that only live for a very short time, 17 trackers with errors above this threshold here, and 20 unsolved ones that it was not able to solve. I click Fix, then we refine again, and look: our error threshold has now dropped to 0.6, which is a big improvement; that is a much more solid result.

If I want to test it by adding something, I'm going to add, for example, I don't know, a pyramid. I add a pyramid here and we can see how it behaves perfectly: it follows the movement of my camera exactly, which tells me that any 3D element I put there is going to look great. If I want to move this object, I simply select it and reposition it, and notice how it stays perfectly anchored to the camera analysis that SynthEyes has done. And what is going to blow your mind, as it has blown mine, is how fast it is: the speed with which it does the calculations and the speed with which I can move through the scene and through its timeline is incredible. Also, you can see that the scene is perfectly oriented, the ground is in the correct position and the scale is right. Another great thing about SynthEyes is that the points are not tied to a scale in meters or centimeters; we are the ones who decide what the scale of our scene is, which is also super important. Another of the options it has is roto masking: if I have an element in my scene that I do not want to be included in the tracking, I can create a mask to tell SynthEyes to exclude that part so it is not taken into account when tracking. But we will see all of this in more depth in the following installments.
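As a tiny sketch of the roto-masking idea just described (an illustration only, not SynthEyes' masking system): any candidate point that falls inside the excluded region is simply dropped before it can influence the camera solve.

```python
# Sketch of roto masking: discard tracked/candidate points that fall inside a
# mask marking a region we don't want in the solve (e.g. a moving person).

def filter_points(points, mask):
    """Keep points whose pixel lies outside the mask (mask[y][x] != 0 = excluded)."""
    kept = []
    for x, y in points:
        if mask[int(round(y))][int(round(x))] == 0:
            kept.append((x, y))
    return kept

# Example: a 4x4 frame where the top-left 2x2 block is masked out.
mask = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(filter_points([(0, 0), (3, 3)], mask))   # -> [(3, 3)]
```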
A question I get asked a lot is: which is better, Mocha Pro or SynthEyes? And I have to tell you that it's not that one is better or worse; they have little to do with each other. Mocha Pro is a planar tracking system, also from Boris FX, that is outstanding, and its new version, Mocha Pro 2024, now integrates the SynthEyes engine so you can do camera tracks directly without leaving Mocha Pro. That lets me do camera tracks very quickly, with much higher precision than, for example, the native After Effects camera tracker. It uses the SynthEyes engine, but in a very reduced form, with very few parameters I can touch; it is designed for a fairly automated track, to extract information from scenes that are not too complex. If I have a very complex scene, what I need is software like SynthEyes which, in addition to camera tracking, lets me calculate lens distortion and do much more, because it is a complete, production-quality tool with a huge number of options and a lot of aspects I can customize. And so that you can try it yourselves, stay until the end of the video, because I'm going to leave you a discount code for 15% off the SynthEyes subscription, which is amazing; thank you to the people at Boris FX.

We now have our scene with a very acceptable error threshold of 0.6. We could lower it further, but we will look at more advanced functions in future installments; for now I'm going to say that this is an error margin that works for me. What I'm going to do next is export my scene. I have the Export option with a huge number of targets; we will see how to take scenes from SynthEyes into 3D software in the next tutorial, but for now we are going to take it into After Effects, and you can export for After Effects directly from here. Notice that I have several options: copy the data to the clipboard and then paste it, take the 2D information of the planar trackers (which we haven't covered yet, but will), copy the raw tracker information, and the one I am going to use, the JavaScript option. This option lets me choose more settings and generates a script file that I can open directly from After Effects; it will be saved to this default path. Notice the dialog that opens, which asks me a lot of things: the After Effects version, the frame I want to start from, whether I want to add scaling to the scene, the shutter angle so it can define what the motion blur will look like, and a very interesting option, the maximum number of exported trackers. As you all know, or will soon realize, software like After Effects has a problem when we add many nulls: if I bring in all the track points as nulls, After Effects will simply hang, because it cannot manage that volume of null objects. So I have the option of limiting it, in this case to 20, which is great. I can also give those nulls a relative size so that they are not gigantic in the scene.
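The "maximum exported trackers" cap is easy to picture: keep only the N most reliable points. A tiny sketch, using a hypothetical list of (name, error) pairs rather than any real SynthEyes or After Effects API:

```python
# Sketch of why you cap "maximum exported trackers": keep only the N trackers
# with the lowest solve error so After Effects isn't flooded with null layers.
# `trackers` is a hypothetical list of (name, error_in_pixels) pairs.

def best_trackers(trackers, n=20):
    return [name for name, error in sorted(trackers, key=lambda t: t[1])[:n]]

# Example: export only the two most reliable points as nulls.
print(best_trackers([("Tracker1", 0.4), ("Tracker2", 1.7), ("Tracker3", 0.2)], n=2))
```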
There are a lot more options, but for now I leave everything at the default and click OK. It reminds me about the lens distortion workflow and then opens After Effects directly. In After Effects I have my scene, and if I click on the image sequence you can see that a SynthEyes lens distortion effect has been added to it. What it does is remove the lens distortion from the plate, so right now my image has the distortion removed, and any element I add will follow the movement perfectly and will be composited without distortion. In the next module, in the next SynthEyes video, we will look at the other workflow, which applies that distortion to the added elements and then removes it from the composite, so the image looks the way it did originally but with the objects deformed according to the lens that SynthEyes has calculated.

So what I have here is my scene with all the points we brought over, in this case only 20, and with the orientation we defined, and they follow the movement perfectly. Here I could add any object. I could add a text layer, convert it to 3D, and notice how it tracks the movement of my scene perfectly; let's also enable motion blur. If I want it positioned right at one of those points, I copy that null's position. One more thing: the anchor point of the text is far away, so I place it in the lower corner and paste the position again using a small plugin called Move Anchor Point; it is much faster than moving anchor points by hand. And now you can see my text is there, perfectly following the movement of the plate and taking the camera's lens distortion into account.

You can see that SynthEyes is a joy to use: it is super fast, super professional, and it is in fact used in production, both in film and in television, to do camera tracks. Once you know a little about how it works, it is not as difficult as it may seem at first. The truth is that the first time you open it, the number of parameters and the amount of things really strike you and it is a little intimidating, but in the end the concepts are the same ones you will already know if you have seen my other motion tracking videos. I have a lot of them on the channel, and I especially recommend the one on the fundamentals of motion tracking; I'll leave you the card here, because there you will understand a lot of concepts related to motion tracking. I use tracking a lot in my work, and I have to tell you that I am in love with how well SynthEyes works; it is one of my favorites.

So, what did you think of SynthEyes? It's amazing. If you liked the video, please leave me a like, and if you are not subscribed to the channel, subscribe so you find out about everything I upload, especially when I upload the second part of SynthEyes, because we are not going to keep taking our scene into After Effects; we need to use 3D software and add something epic there. We will see all of that in the second part of SynthEyes. And if you don't follow me on Instagram, what are you waiting for?
There you have the link; I upload a lot of content there that you will not find on YouTube, and my handle is @kspmn. And, as promised, here is the QR code with the link so you can get a 15% discount on the SynthEyes subscription from the Boris FX page and start using it in production. Before I forget: if you like the content I upload, these tutorials on VFX, CGI and motion tracking, and you want to thank me, there you have the little button to make a small contribution and help keep this channel going. If you want to leave a euro or two and buy me a coffee, the button is right there. See you!
Info
Channel: KSPMN | Tutoriales VFX y CGI!
Views: 2,958
Keywords: match moving, tutorial, tracking, boris fx, boris fx mocha pro, adobe after effects, planar tracking, tracking de objetos, keentools facebuilder blender, matchmove, 3d tracking after effects, matchmove tutorial, vfx, como usar after effects, syntheyes, syntheyes tutorial, syntheyes tutorial español, 3d matchmove, camera tracking, tracking profesional, cgi, syntheyes to blender, syntheyes after effects, tracker, tracking de camara after effects, trackeo de camara after effects
Id: lkfvjHmpZMQ
Length: 33min 47sec (2027 seconds)
Published: Fri May 10 2024