Vision Programming with AprilTags | Jumpstart 2023

Video Statistics and Information

Captions
Welcome to the vision programming with AprilTags session. I'm Andy Gusser, and this is Dylan. I'm a Java programmer by trade and have been with 7028, the Binary Battalion, since 2018, so we're going into our sixth year, and I've been mentoring programming and control systems. I'm Dylan, I'm a junior. I did FTC for two years, and this is going to be my third year in FRC; last year I was the lead programmer for the team.

AprilTags are basically like QR codes that your camera can use to detect roughly where it is on the field. They can be printed by any normal printer on printer paper, and you can get a 3D position of the tag relative to your camera. There are 30 different ones, and they're all eight by six inches.

Streaming for this event is brought to you by FIRST Updates Now. We'd like to thank the following FUN sponsors. SolidWorks is free for FIRST teams; over 80 percent of U.S. engineering schools and 370,000-plus companies use SolidWorks to design great products. SolidWorks can help you design a great robot, on desktop or in the cloud; go to solidworks.com/FIRST to register your team. At Kettering University, over 30 percent of the student population was in high school robotics, and these same students have received a portion of over 7 million dollars in scholarships. Scholarship applications for FIRST students are now available; get more information at kettering.edu/first.

These are just some pictures off of the field at one of the preseason events. We ran them through after we got back, and this is an example of what the detection looks like.

Some reasons why AprilTags have benefits over reflective tape: you don't need the LEDs of a Limelight, so you're not blinding yourself while you're programming. They're less susceptible to interference from things like banners on the field or screens behind it, so you don't get false positives. You can get a 3D position and orientation from the robot to the tag, and each tag is unique, so you can actually calculate where you are on the field if you know where your camera is on the robot and where the tag is on the field. You also don't need to calibrate your pipeline at the competition as much.

PhotonVision works kind of like the Limelight software, where you can go in and edit your pipeline. It runs on the Raspberry Pi, Limelight, and Gloworm; it can identify AprilTags, reflective tape, and colored shapes; and the PhotonLib library makes it easy to integrate with robot code compared to other solutions. For hardware, you can use a Pi camera or USB webcams like LifeCams or a 720p ELP camera. For co-processors you can use a Raspberry Pi 3 or 4, a Limelight, or a Gloworm; if you go with a Raspberry Pi, we would recommend a Raspberry Pi 4.
For AprilTags or colored-shape tracking, other co-processors that run Java and CSCore may also work, like a Jetson or maybe a mini PC if you have one. For a driver cam you can use the same cameras, plus fisheye cameras.

Getting started with PhotonVision: you flash the PhotonVision image onto your device. For a Limelight or Gloworm you do it over USB and import the hardware config; for a Raspberry Pi you just flash it to a micro SD card. Then you update the JAR to the latest version and configure your pipeline. It's nice that you can update it without having to reflash: if you've used a Limelight before, you have to connect, flash it, wait a couple of minutes, and then re-import your configuration, but PhotonVision has an offline update capability where you just upload the JAR and it updates itself without losing your configuration. I thought that was a cool feature.

For configuring your camera, start with exposure: you want that as low as you can, the lower the better. You get less motion blur and less latency, and if you get it low enough it will blot out unneeded details, so you can get a little higher FPS from your camera. The animation at the bottom shows how a global shutter and a rolling shutter work. For brightness, with a low exposure you want to increase the brightness so the image can be processed; higher brightness seems to work a little better with AprilTags. Resolution is the usual imaging trade-off: higher resolution means lower FPS but you can see farther and get more detail, while lower resolution processes a little faster and should still work fine, especially since the AprilTags are now lower resolution than they were originally going to be.

For tuning your pipeline: decimate reduces the sampling frequency; increasing it leads to an increased detection rate while decreasing the detection distance. Blur just blurs the image, and you really only want to use it if the image has a lot of noise. Refine edges, used along with decimate, can increase the quality of the initial estimate. Threads just runs the detector on more threads of your CPU. Max error bits is the number of bits on a tag that can be corrected when identifying it; a higher value means more tags will be detected, while a lower value reduces false positives. I use zero for tag16h5, which is the family FRC is using now; a higher number makes more sense with a higher-resolution camera, if you're actually running the camera at that resolution. Pose estimation iterations is the number of iterations the AprilTag algorithm runs to converge on where it thinks the tag is, and the decision margin cutoff is how much margin the detector needs before it rejects a tag.

For calibrating the camera, you just need to print the checkerboard. Use something like a ruler, dial caliper, or measuring tape to verify what size the squares actually are, and I'd recommend writing the actual size in a corner, off the grid, because you do need to enter it into the software so it knows the square size. Then you capture pictures of the board in the software to calibrate the camera; you want to capture it at different angles and different distances from the camera, and that calibration is what lets it measure the 3D position more accurately.
All right, so I have PhotonVision running here on my laptop, and I have a webcam that works with it. A lot of webcams don't, but luckily this one does, so I don't have to plug in a separate camera. After you get it installed and running, you can access it at port 5800. The first thing it will do is pop up and say you didn't set your team number, with a link to the settings page; you go to the settings page and set your team number so that it can connect to NetworkTables and talk to your robot.

Back on the dashboard you can see the different pipeline settings and different cameras. I only have one camera, so I only see one, but you can have multiple pipelines. A pipeline is a configuration of how you're going to detect things, and there are different types: reflective tape, the old reflective tape you'd use with a Limelight; colored shapes, which we messed around with a little last year for cargo detection because the cargo was a colored circle (we ended up not using it, but it was something we tried); and now AprilTag detection. So you choose AprilTag, and by default it will be in 2D processing mode.

Let me make this full screen so you can see a little more. There we go. And that's not my tag it's detecting, it's this one. The tags need to be pretty flat, so I like to put them on a clipboard; if a tag gets bent a little, like that, it starts to not detect it, so I put it on the clipboard so it detects. It sees it here and labels it: this is tag 1. If you remember, on that first slide there were 30 different tags, so this tells us this one is tag 1. I think I have a zero here; yep, this one is tag 0, and it will track them both at the same time.

Then we can do the cool thing and switch to 3D processing, and now you can see the orientation the tag is turned, both this direction and like this. The settings we talked about are down here; I just have the defaults, with threads set to three. At the top, once I dismiss this, it tells me I'm running at 12 frames per second. I think the camera in this laptop is only 30 frames per second, so it's not going to be possible to do better than that, but sometimes if you reduce the resolution or the exposure you can get faster processing. See, now I'm getting up to 15.
You can definitely reduce the resolution; this one is fairly high. If I reduce it, now I'm getting 30, 31, so that's about the max I'll get with this camera. Higher isn't necessarily better, though. The most important number is the latency, because that's how long it took to process the image and give you the data. It's nice to have lots of frames, 30 or 50 per second, but the robot runs at 50 iterations per second, so beyond that you're not getting a ton of value; most important is getting that latency low. The other thing it shows on this output (it looks a little different because of the screen resolution) is the angle and the distance the tag is from the camera. After we've calibrated, we want to measure and check whether that's accurate; I used a tape measure to do that at home.

Here is where we do the calibration. On the cameras page you select the camera you're using and enter your field of view; I don't know what this one is, 70 was the default and I just left it. Then you choose a resolution (I'll choose a lower one) and say that you're going to use the checkerboard. Down here is where you can download the checkerboard to print; I downloaded it and printed it at a hundred percent, no scaling, and then measured along the bottom that the six-inch mark is actually six inches. If everything prints properly you can leave these at the defaults: it's an eight-by-eight checkerboard with one-inch squares. If it didn't print properly, or you use a different checkerboard, you can change those values. Then you choose start calibration, and it sees the checkerboard; you want to make sure those colored markers appear. What it's doing is learning how your camera distorts the picture as the board is turned and at the edges, and how your lens causes distortion. So you get it to see the board, take a snapshot, move it to a different position, take another snapshot, and do that 12 or more times, and then it will let you save. There we go; I hit finish calibration and it says it was successful for that resolution, and over here it shows all of the resolutions you've calibrated. Oh, it opened the stream in another tab, I didn't know I did that; I guess you can click on the picture to get a full-screen view.

All right, any questions so far, or any questions about PhotonVision? Yeah, go ahead: do you need to do anything special for fisheye lenses? They say don't use fisheye lenses for AprilTag detection; I guess there's a different calibration mechanism needed for that, and they don't support it as of now. You can run multiple cameras in here, though, so you could still use the fisheye for your other pipelines or as a driver cam; that would be a great use for a fisheye. Do they allow you to input your own calibration if you calibrated with a different tool? I don't think so. Possibly, because you can export the settings as a file and maybe you could edit that and upload it, but I'm not sure. They have a Discord, which I have a link to at the end, and Chief Delphi, and it's open-source software, so there's a community that supports it, and they're really excited to talk about it, so if you have questions you can always ask them there.
Yeah, it will for this year; probably not next year, but maybe, so I wouldn't depend on that. We're kind of thinking, and we'll talk a little later about how you can use the AprilTags to estimate where you are on the field, that at this point we'll probably use the AprilTags to figure out where we are on the field but still use the reflective tape to look at the target. We're not sure yet; it depends on what the game is. In case you couldn't hear that on the stream, Corey was pointing out that the tape is changing, so make sure you pay attention to that. Yes, we can share the slides with the Jumpstart group and get them out there, and I'm on Chief Delphi all the time, so you can ask questions there too.

Have you tried it on anything but the Limelight? As far as FPS goes it's been relatively fine. We haven't run really high settings; we've mostly been in the medium range of FPS, and you can still get the distance. For resolution we've usually been going at about 480 by 320, I think. We had it on a Limelight and it worked pretty well. I've tried it on a Raspberry Pi 4 and a Jetson; the Jetson doesn't use its GPU, so it's basically just like a Raspberry Pi, and we can get, I want to say, 50-ish frames per second if you turn the resolution way down, but then the detection won't reach as far.

The question was, can you use a Limelight to do AprilTags? You can run PhotonVision on the Limelight. Limelight hasn't made any official announcement yet; there's some talk Brandon has shared on Chief Delphi that says they are adding something, but they haven't really said what. So you can run PhotonVision on it, and that would definitely be an option; we have it running on this one and it works pretty well, and I expect Limelight will probably release something. Any other questions at this point? Then we'll go on.

All right, so what do you do programming-wise? You have the tags and you can get some information about them, but what does the program look like? We've got just a little bit of programming here. First you add the vendor library; if you don't know how to do that, it's built into VS Code, you add the link they give you, and now you can see all of the Photon classes that are available. Then you have to go into the UI and get your camera name (you can also set the camera name there, but you have to know what it is). On that first line we've got camera = new PhotonCamera with our camera's name, because you can have multiple cameras, so you can have multiple instances of PhotonCamera. The next step is to ask the camera for the latest result; you have to do that to get the latest data. Then with that result object you ask whether it has any targets. You have to do that step: if you skip it and ask for targets, you'll get an exception, and you really don't want that because it will crash the robot code, so don't forget to check if it has targets. If it does, here we're just saying give me the best target. PhotonLib has its own algorithm to figure out what it thinks the best target is; you can have multiple tags in view, and it's going to figure out which one it thinks is the least ambiguous, the best option.
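To make those steps concrete, here is a minimal sketch assuming the 2022/2023-era PhotonLib API; the camera name "photonvision" is just a placeholder for whatever name you set in the UI:

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class VisionExample {
  // The name here must match the camera name shown in the PhotonVision UI.
  private final PhotonCamera camera = new PhotonCamera("photonvision");

  public void checkForTargets() {
    // Ask the camera for the most recent pipeline result.
    PhotonPipelineResult result = camera.getLatestResult();

    // Check hasTargets() first; per the talk, asking an empty result for
    // targets throws an exception, which would crash the robot code.
    if (result.hasTargets()) {
      // PhotonLib picks the "best" (least ambiguous) target for us.
      PhotonTrackedTarget target = result.getBestTarget();
      System.out.println("Saw AprilTag " + target.getFiducialId());
    }
  }
}
```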
You can also get a full list of all the targets if for some reason you want to iterate through them and look for a specific tag. Once you have your target, there are options to get information about it: the yaw, pitch, and roll (the next slide has a picture of what those are, but it's the orientation of the tag, like this, like this, like this), as well as getBestCameraToTarget, which gives you a 3D translation. So for that phone over there, I can say it's nine meters away, turned 30 degrees, 90 degrees upright; it gives me all of that information about how to get from the camera to the tag.

So the question is, what do I do with that data? The basic strategy, which is kind of what we do with reflective tape, is to use the yaw, which is left and right in the camera image. Put your crosshairs in the middle: if the tag is off to the right, I need to turn to the right to get the target centered, so left to right is pretty simple. Then for the distance to the target you can use the pitch, how high or low the target appears: when you're farther away it appears lower, and when you're closer it appears higher. Because you know how high off the ground your camera is and the angle it's mounted at, you can use basic trigonometry to get an estimate of how far away the target is. So yaw is side to side, pitch is up and down, and roll is rotation.

That's what we do with reflective tape. I don't expect this will be all that useful with AprilTags, at least this year, because you're processing at 30 frames per second; by the time you get to the next iteration of your robot code you still have your old target data but you're already moving. It's still an option, though. Any questions so far about the basic targeting and the basic code? Then we'll go on to the advanced strategy.

All right, to get into the advanced strategy we need a quick background on some of the classes WPILib provides. WPILib is the framework we use for our robots, and it has a bunch of geometry and math built in. A translation is a point: in two-dimensional space that's an X and a Y, and in three-dimensional space it's X this way, Y this way, and a height. A rotation in two dimensions is simple, it's just a yaw; in three dimensions it's yaw, pitch, and roll. A pose is the combination of those two things: your robot is on the field, it's two meters toward the far alliance, one meter this way, turned 30 degrees, and standing upright on the floor, so its pitch and roll are usually just going to be zero. A tag may not be, though; we don't know how the tags will be mounted, and on the 2022 field they were not all mounted flat. A transform is what I talked about before, like the distance to that phone and how its orientation differs from mine; if I'm facing it and it's turned 30 degrees, the transform is kind of like the map to get to that destination. There are two-dimensional versions of those, and the three-dimensional versions are new for 2023.
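As a rough illustration of that basic yaw-and-pitch strategy, here is a sketch using PhotonLib's PhotonUtils helper; the camera height, target height, and camera pitch below are made-up example numbers you would replace with measurements from your own robot:

```java
import edu.wpi.first.math.util.Units;
import org.photonvision.PhotonUtils;
import org.photonvision.targeting.PhotonTrackedTarget;

public class BasicTargeting {
  // Example mounting numbers -- measure these on your own robot.
  private static final double CAMERA_HEIGHT_METERS = 0.5;
  private static final double TARGET_HEIGHT_METERS = 1.0;
  private static final double CAMERA_PITCH_RADIANS = Units.degreesToRadians(15.0);

  /** Yaw tells us how far left/right of the crosshair the tag is. */
  public double aimError(PhotonTrackedTarget target) {
    return target.getYaw();
  }

  /** Pitch plus the known camera geometry gives a distance estimate. */
  public double distanceToTarget(PhotonTrackedTarget target) {
    return PhotonUtils.calculateDistanceToTargetMeters(
        CAMERA_HEIGHT_METERS,
        TARGET_HEIGHT_METERS,
        CAMERA_PITCH_RADIANS,
        Units.degreesToRadians(target.getPitch()));
  }
}
```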
The nice thing about WPILib is that it has these built in. I can say I have a pose, I know my robot is here, I have a transform, and I can apply the transform to the pose. So if I know my position on the field and I know the distance and orientation to that phone, I can figure out where that phone is on the field.

So here's our basic pose estimation strategy; let me find my slide, there we go. This is what the field looks like: our alliance stations are here along the Y axis, and X goes downfield. We know the pose of the tag on the field; we expect FRC is going to tell us that. We don't know if we're going to be chasing tags; I doubt, at least this year, there would be any kind of tag on a robot, because the tags can't be bent. In 2022 they were all over the field, if you remember that picture where you could see the orientations, so we expect they're going to tell us where the tags are and how they're oriented. And we know where the camera is on our robot: this is my little robot, here's the center of the robot, the camera is here, and this is the camera lens, so I know it's, say, one meter forward, half a meter off the floor, and turned at an angle. That's a transform between the camera and the robot, which is what the red line indicates.

Then PhotonVision, when it sees a tag, gives us a transform from the camera to the target; it tells us where the tag is in relation to the camera. What we're trying to find out is where our robot is on the field. So we can take that camera-to-target transform and invert it (that's built into WPILib), and now I know, from the tag's pose, how to get to where my camera is. Because I know where the AprilTag is, I now know where my camera is, and because I know where my camera is on my robot, I can figure out where my robot is.

This is where AprilTags give us more than vision tape. Maybe some of our targets over the years have been ones where we knew the orientation and could tell from their size where they were, but in 2022 the target was a ring, and there was no way to know how you were looking at that ring in your camera. So the strategy is: figure out where you are on the field and where you need to be. For 2022, with the big ring target in the middle, if I know where I am on the field, I know exactly how I need to turn toward that target to score a cargo. WPILib hasn't released it yet, but they're planning to provide a capability to easily figure out, if I'm looking at this tag, where it is on the field; they're going to have a built-in utility class for that, and maybe it will even include where the tags are so you won't have to look through the manual and enter that data yourself. WPILib does all the math for us, so it makes it really easy. Any questions on that pose estimation strategy and why we would do it?

All right, WPILib also includes a pose estimator. We're going to see these tags, and as you saw when I had it up, it's not 100 percent reliable that it will see them accurately every time; I had one of those field pictures where it thought one of the tags was turned a different direction than it actually was. So the pose estimator can take those vision measurements as well as your odometry. Odometry is what we used before, where we use our encoders and our gyro to get a pretty good estimate of where we are on the field.
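In code, that invert-and-chain idea might look roughly like this with the WPILib geometry classes; the tag pose and camera mounting transform below are illustrative placeholders, not real field or robot values:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;

public class FieldPoseMath {
  // Hypothetical: the tag's published pose on the field (from the game manual
  // or, eventually, a WPILib-provided field layout).
  private static final Pose3d FIELD_TO_TAG =
      new Pose3d(new Translation3d(15.0, 2.5, 1.0), new Rotation3d(0, 0, Math.PI));

  // Hypothetical: where the camera sits relative to the robot center.
  private static final Transform3d CAMERA_TO_ROBOT =
      new Transform3d(new Translation3d(-0.3, 0.0, -0.5), new Rotation3d());

  /** cameraToTarget comes from target.getBestCameraToTarget(). */
  public Pose2d estimateRobotPose(Transform3d cameraToTarget) {
    // Invert the measurement to go from the tag back to the camera...
    Pose3d cameraPose = FIELD_TO_TAG.transformBy(cameraToTarget.inverse());
    // ...then step from the camera to the robot's center.
    Pose3d robotPose = cameraPose.transformBy(CAMERA_TO_ROBOT);
    return robotPose.toPose2d();
  }
}
```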
That works great during autonomous, as long as nobody hits you from the side and moves you, but throughout a match your gyro drifts a little, your wheels scrub, another robot pushes you, or you're against the wall spinning your wheels, and then your odometry gets off and you don't know where you are on the field anymore. So you can use the pose estimator to fuse that odometry with your AprilTag data to get a better estimate of where you are, and WPILib has that built in. You can configure how much you trust the different measurements; you'll have to look up the documentation on how to do that, it's a little complex, but you can tune it to say I want to trust my odometry more or less, or maybe I trust my gyro for orientation but my AprilTags for where I am on the field. There are different variations depending on the kind of drivetrain you have: differential drive, mecanum, and swerve, because the way the wheels move the robot is different for those drivetrains.

So let's take a look at some code. Is that way too small up there? Readable-ish? All right, this is a pose estimation subsystem; a subsystem automatically runs in the background. I've got just two targets here that I set up in my garage, and where they were on the floor. This is where I think WPILib is going to make things simpler, and I only did two of them, but I have to set up the 3D pose for where those tags are: their X, Y, and Z (this way, this way, and up and down) and then their rotation, yaw, pitch, and roll. I had them hanging on the wall, so their roll and pitch were zero, and they were turned to face me: when I was at (0, 0) looking downfield, they were looking toward me, so they're turned 180 degrees.

Then I've got my standard deviations, and this is how you configure how much you trust the different components of the pose estimation. We'll mostly skip over that, but I've got it set to trust the local measurements less, because I wanted to test out the vision tags; that way I could do something like push the robot sideways and see it move to where the vision thought it was, even though I might not actually want to trust the vision more in a real scenario.

So here's the constructor for the swerve drive pose estimator. I had to look up the example to figure out how to do this, so I took the example and copied and pasted it here for the most part, and I already had my swerve code written. You have to give it your current orientation, the current positions of your wheels, and where you are on the field; I just said I'm starting at (0, 0), which would not be accurate most of the time, because you're going to start in some autonomous position on the field. Then you feed in your kinematics, which is what odometry uses, like where your wheels are on the robot and how they drive, and then those standard deviations.
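A sketch of what that constructor can look like with the 2023 WPILib API; the module locations, starting pose, and standard deviations below are placeholder values, not the team's actual numbers:

```java
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.kinematics.SwerveDriveKinematics;
import edu.wpi.first.math.kinematics.SwerveModulePosition;
import edu.wpi.first.math.util.Units;

public class PoseEstimatorSketch {
  // Placeholder module locations relative to robot center (meters).
  private final SwerveDriveKinematics kinematics = new SwerveDriveKinematics(
      new Translation2d(0.3, 0.3), new Translation2d(0.3, -0.3),
      new Translation2d(-0.3, 0.3), new Translation2d(-0.3, -0.3));

  // Trust the wheel/gyro ("state") measurements more or less, and the vision
  // more or less, by tuning these standard deviations (x m, y m, heading rad).
  private final SwerveDrivePoseEstimator poseEstimator = new SwerveDrivePoseEstimator(
      kinematics,
      new Rotation2d(),                 // current gyro angle
      new SwerveModulePosition[] {      // current wheel positions
          new SwerveModulePosition(), new SwerveModulePosition(),
          new SwerveModulePosition(), new SwerveModulePosition()},
      new Pose2d(),                     // starting at (0, 0) for testing
      VecBuilder.fill(0.1, 0.1, Units.degreesToRadians(5)),   // state std devs
      VecBuilder.fill(0.9, 0.9, Units.degreesToRadians(30))); // vision std devs
}
```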
Then here's where the fun happens. periodic automatically gets called every 20 milliseconds by my robot, so 50 times a second we do what we did on that earlier slide: get the latest result. I can also get a timestamp of when PhotonVision put the data onto NetworkTables, which is when it thinks it saw the target. That's nice, a new feature of the new NetworkTables, so I don't have to do math with latency and make a guess; PhotonVision just tells me this is when I saw it. Then I check whether this is a new result by checking if it's a new timestamp, because on my laptop here I was getting 30 frames per second but my robot runs 50 times per second, so every once in a while I'll get the same data for two iterations and I don't need to process it twice. Then I check if it has any targets; if it doesn't, I'm done, I get out, and I just hang onto that previous timestamp for later.

Then I get the best target, like we talked about, and I can check which target I'm looking at: the ID tells me whether it's AprilTag 0, 1, all the way up to 29, because there are 30 of them. I check the ambiguity, making sure that looks good and that I have a pose, and then I look up in that map we saw at the top where the target physically is on the field; that's just a constant, so now I know where the target should be. Then I get camera-to-target, which tells me the transform from the camera to the tag, and my camera pose is that line we saw: we know where the camera is on the robot, and we invert the camera-to-target transform so we can go in the other direction, like flipping that arrow around. Now I know where my camera is, and because I know where my camera is on the robot, the vision measurement is an estimate of where my robot is on the field. That's what the code looks like for the robot we had pictured; we don't really have to do any math ourselves, WPILib takes care of all of it.

Then we feed that information to the pose estimator. It doesn't need to know how high we are off the floor, it just needs to know where we are on an XY plane and our yaw, so it uses a 2D position, where up to this point we were using three dimensions, because the tags are probably going to be up off the floor and can be turned in any direction. Now that we're just on the XY plane we go to 2D, and you tell it what time the data came from, because it's always going to be in the past: you saw the tag, you got the data over the network, and now you're processing it. The pose estimator can automatically unwind time, apply that measurement, and redo the calculation, which is really slick. Then, just like we did with odometry before, you feed in your gyroscope orientation and your modules' positions, and now you can get your current pose from the pose estimator by calling getEstimatedPosition. If you don't feed it any vision, it works just like odometry did before, using rotation and encoders to figure out where you are.
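Pulling that periodic logic together, here is a rough sketch in the same spirit, again assuming the 2023 WPILib and PhotonLib APIs; the tag map, camera-to-robot transform, and ambiguity threshold are placeholders:

```java
import java.util.Map;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class VisionPeriodic {
  // Placeholder poses for the two garage tags; real code would use the
  // official field layout once it is published.
  private static final Map<Integer, Pose3d> TAG_POSES = Map.of(
      0, new Pose3d(new Translation3d(3.0, 1.0, 1.0), new Rotation3d(0, 0, Math.PI)),
      1, new Pose3d(new Translation3d(3.0, 2.0, 1.0), new Rotation3d(0, 0, Math.PI)));

  // Placeholder camera mounting transform (camera -> robot center).
  private static final Transform3d CAMERA_TO_ROBOT =
      new Transform3d(new Translation3d(-0.3, 0.0, -0.5), new Rotation3d());

  private final PhotonCamera camera;
  private final SwerveDrivePoseEstimator poseEstimator;
  private double previousTimestamp = 0;

  public VisionPeriodic(PhotonCamera camera, SwerveDrivePoseEstimator poseEstimator) {
    this.camera = camera;
    this.poseEstimator = poseEstimator;
  }

  /** Called every 20 ms; folds any new vision result into the estimator. */
  public void periodic() {
    PhotonPipelineResult result = camera.getLatestResult();
    double timestamp = result.getTimestampSeconds();
    if (timestamp != previousTimestamp && result.hasTargets()) {
      previousTimestamp = timestamp;
      PhotonTrackedTarget target = result.getBestTarget();
      Pose3d tagPose = TAG_POSES.get(target.getFiducialId());
      // Skip unknown tags and ambiguous measurements.
      if (tagPose != null && target.getPoseAmbiguity() <= 0.2) {
        Transform3d cameraToTarget = target.getBestCameraToTarget();
        Pose3d cameraPose = tagPose.transformBy(cameraToTarget.inverse());
        Pose3d visionMeasurement = cameraPose.transformBy(CAMERA_TO_ROBOT);
        // Feed the 2D pose plus the capture timestamp; the estimator
        // "rewinds" to that time and replays odometry on top of it.
        poseEstimator.addVisionMeasurement(visionMeasurement.toPose2d(), timestamp);
      }
    }
    // The odometry update (gyro angle + module positions) would also happen
    // here, e.g. poseEstimator.update(gyroRotation, modulePositions).
  }
}
```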
All right, any questions on that? Hopefully I didn't lose you. On the stream, the question was: could you use the AprilTag data and your position to move your robot autonomously around the field? Yeah, that's why this is really great. If you watched the 2022 game at all, the first 15 seconds are autonomous, and we ran a five-cargo auto, so it was very important that we drove where we expected to be: we had to pick cargo up off the floor and drive all the way over to the driver station accurately enough for them to roll the ball in, and there was a little ramp there, and if we got too close the ball wouldn't go in. So we used the built-in support for running trajectories, as they're called. You can use plain odometry data today to drive your robot that way, and you would instead use this pose estimator, plug that data in, and use those trajectories to be more accurate about where you are on the field. You can also just use the pose directly; actually, that's what we'll do in the next example, where I'm going to chase a tag so the robot can follow it. That's a little different: it's not using a trajectory, it's just asking where am I and where do I need to be. Any other questions at this point?

All right, so chasing a tag. I was going to actually drive my robot in here, but I'm a little worried about tearing up the carpet, so instead I brought a video. Here I'm holding the tag on this clipboard in front of the robot, and it's following the tag around; as I move it back and forward or rotate it, the robot tries to stay right in front of the tag, turned toward it. If I rotate it like this, it tries to drive around like that, and if I push it, it goes forward and backward. You could do this kind of like that basic strategy we talked about, where you can see whether the tag is left or right, forward or backward, but we're going to do it with pose estimation.

Sure, so the question was: what happens if it doesn't see the tag? We'll look at the code in a bit, but if it sees no tag at all to begin with, it doesn't start moving. If it sees the tag and then loses it, it continues to drive to where it saw the tag last. I actually did some tests like that, where I held it up and then hid it behind myself or covered it up real quick, and the robot would drive to where it expected it needed to be. I did that because you won't necessarily see the tag every iteration; like you said, you might turn it, or sometimes I would accidentally get my finger over it just a little and that was enough to make it miss it for an iteration, or it was in my garage with inconsistent lighting, or, as you can see even in this picture, the tag is rippled a little, and if it's not flat it doesn't see it. So the code just says: where was the tag the last time I saw it, keep going there, and only update if I see it again.

There we go. All right, so much like before, we've got our robot on the field, we can take our pose estimate, and because we know where our camera is on the robot we can estimate our camera pose. PhotonVision gives us the camera-to-target transform, and this time the target is the tag we're trying to chase. The picture is the same as the one before, but this time we don't know where the tag is on the field, because we're going to chase it. PhotonVision tells us where the tag is relative to the camera, and from that we can estimate where the tag is on the field. In the other case we said we know where the tag is, so let's flip it around to find where our robot is; in this case we know where our robot is, because we have our pose estimate, and we flip it around to figure out where the tag is. If it's on the wall over there, now I know it's over there. Then we can take where the AprilTag is and apply a fixed transform to say I want to be, say, three meters in front of that tag.
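That chain of transforms might look something like the sketch below; the robot-to-camera and tag-to-goal transforms are example values you would measure or choose yourself:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;

public class ChaseTagMath {
  // Hypothetical mounting transform: robot center -> camera.
  private static final Transform3d ROBOT_TO_CAMERA =
      new Transform3d(new Translation3d(0.3, 0.0, 0.5), new Rotation3d());

  // Goal: sit 1.5 m in front of the tag, facing it (rotated 180 degrees).
  private static final Transform3d TAG_TO_GOAL =
      new Transform3d(new Translation3d(1.5, 0.0, 0.0), new Rotation3d(0, 0, Math.PI));

  /**
   * robotPose comes from the pose estimator (lifted to 3D with zero pitch/roll);
   * cameraToTarget comes from PhotonVision's best target.
   */
  public Pose2d goalPose(Pose3d robotPose, Transform3d cameraToTarget) {
    Pose3d cameraPose = robotPose.transformBy(ROBOT_TO_CAMERA); // camera on the field
    Pose3d targetPose = cameraPose.transformBy(cameraToTarget); // tag on the field
    return targetPose.transformBy(TAG_TO_GOAL).toPose2d();      // where we want to be
  }
}
```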
So we create this tag-to-goal transform, and it's a constant: I want to see the tag at 180 degrees, because I'm going to look at the tag and be straight in front of it, and I want to be about a meter away. Now we know, on the XY plane, where we are from our pose estimate, and we know where we want to be. We could calculate a trajectory and drive there, but I went a simpler route and used a profiled PID controller with a trapezoidal profile. What that means: a PID controller is something you can use to try to get to a target. We won't go into how to tune it, but the idea is that I know where I want to be and it can use proportional, integral, and derivative terms to get there. The trapezoidal profile gives us a motion profile that accelerates, cruises, and then decelerates; without it, the controller would just race there, starting from standstill, going at maximum speed, and slowing only a little as it approached, whereas I want a smooth motion. The constraints on the profile are configurable, so I can say how fast it accelerates and how fast it cruises, and have it operate safely in my garage without worrying about breaking anything. There are separate controllers for X, Y, and yaw, and we're operating on a two-dimensional plane because we don't care how high we are off the floor, and our robot isn't going to tip forward and backward; we don't have any control over that anyway.

So let's look at the code. Here are our trapezoidal profile constraints: a max velocity and a max acceleration, where the velocity is in meters per second and the acceleration is in meters per second per second, and then omega, how fast we can rotate, which I believe is in radians per second and radians per second per second. Then I have which tag I want to chase, which was ID 2: I had tags 0 and 1 hanging on my wall to help me know where I was, and tag 2 was the one I wanted to chase. Then there's that fixed transform that says where we want to be in relation to the tag once we see it: one and a half meters in front of it, nothing side to side or up and down, and rotated pi radians, which is 180 degrees, from the target, because I want to be in front of the target, not behind it. I've got my PhotonCamera, my drivetrain, and my pose estimator giving me a pose supplier just to get where my robot is, and then we have our profiled PID controllers, which get our constraints and some rudimentary gains to make the robot drive.

On the controllers I can set a tolerance, which says how close to the goal I need to be before I say I'm at the goal; I just did 0.2 meters to keep it simple, and three degrees, so once it's within that it's supposed to stop. And because rotation wraps all the way around and I can keep rotating, I have to set my rotation controller to be continuous, going from negative 180 to 180 and then wrapping around. So that's just setting it up.
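Here is a sketch of that controller setup; the constraint values and PID gains are placeholders that would need tuning on a real drivetrain:

```java
import edu.wpi.first.math.controller.ProfiledPIDController;
import edu.wpi.first.math.trajectory.TrapezoidProfile;
import edu.wpi.first.math.util.Units;

public class ChaseTagControllers {
  // Velocity in m/s, acceleration in m/s^2; omega in rad/s and rad/s^2.
  private static final TrapezoidProfile.Constraints XY_CONSTRAINTS =
      new TrapezoidProfile.Constraints(2.0, 2.0);
  private static final TrapezoidProfile.Constraints OMEGA_CONSTRAINTS =
      new TrapezoidProfile.Constraints(4.0, 4.0);

  // Placeholder gains -- every drivetrain needs its own tuning.
  private final ProfiledPIDController xController =
      new ProfiledPIDController(1.0, 0, 0, XY_CONSTRAINTS);
  private final ProfiledPIDController yController =
      new ProfiledPIDController(1.0, 0, 0, XY_CONSTRAINTS);
  private final ProfiledPIDController omegaController =
      new ProfiledPIDController(1.5, 0, 0, OMEGA_CONSTRAINTS);

  public ChaseTagControllers() {
    // "Close enough" tolerances: 0.2 m in x/y, 3 degrees in heading.
    xController.setTolerance(0.2);
    yController.setTolerance(0.2);
    omegaController.setTolerance(Units.degreesToRadians(3));
    // Heading wraps from -pi to pi, so make the rotation controller continuous.
    omegaController.enableContinuousInput(-Math.PI, Math.PI);
  }
}
```

In the execute logic described next, each controller's goal would be set from the goal pose, and the current pose would be fed into calculate() to get the speeds sent to the drivetrain.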
Then in execute, this is where we see code kind of like what we had before. This is my pose estimator from before, and I get where it thinks my robot is on the field; we talked about how that works. Now we have an estimate of where we are on the field in two-dimensional space, and I transform that into three-dimensional space: I know I'm sitting on the floor and I'm going to assume my robot isn't tipping over, so my pitch and roll are zero. That just gets us into three-dimensional space, because our tag is in three-dimensional space. I get the latest result again and check if I have a target, and this is where your question comes in: if we don't have a target, it just keeps the last target and tries to chase that. This part filters it down to make sure we're looking at the tag we want to chase and that it's not ambiguous, and if it passes, we now have our target, and we hang onto it as the last target in case we don't get a new one. Now we take where our robot is on the field and apply the transform to figure out where our camera is on the field, because we know where the camera is on the robot, so now we have an estimate of where the camera is on the field. We get the transform from our camera to the target and apply that to the camera pose, so now we have an estimate of where the target is on the field, where I'm holding it. The last step is to say I need to be one and a half meters in front of it, rotated 180 degrees, so now my goal pose is one and a half meters in front of that target.

Let's go back to my picture. Now I have this goal position and my robot was over here, so how do I get from here to here? That's where we use the profiled PID controllers: we set the X controller's goal to the goal pose's X, the Y controller's goal to the goal pose's Y, and omega to the heading we need. Once we have our goals, we feed in where we are now, and each controller tells us how to react: I need to be at this X position, I'm currently at this X position, how fast should I drive along X to get there? So I have separate controllers for X, Y, and omega. Did you have a question? Oh, okay, yep, we're done. The last thing is to take those values, feed them into the drive, and the robot will move. And that was actually the last thing.

Streaming for this event is brought to you by FIRST Updates Now. We'd like to thank the following FUN sponsors. At Kettering University, over 30 percent of the student population was in high school robotics, and these same students have received a portion of over 7 million dollars in scholarships; scholarship applications for FIRST students are now available, get more information at kettering.edu/first. SolidWorks is free for FIRST teams; over 80 percent of U.S. engineering schools and 370,000-plus companies use SolidWorks to design great products, and SolidWorks can help you design a great robot, on desktop or in the cloud; go to solidworks.com/FIRST to register your team.

Don't forget to like, subscribe, and ring the bell to stay up to date on our new videos. Keep the conversation going and provide your input on our content. Watch our live shows at twitch.tv/firstupdatesnow, join our Discord at discord.gg/firstupdatesnow, and check out FUN FTC on Facebook, Instagram, and Twitter, and FIRST Updates Now on Facebook, Instagram, TikTok, and Twitter.
Info
Channel: First Updates Now
Views: 21,391
Keywords: robotics, FUN, chief delphi, first updates now, FIRST robotics, FRC, Robots, Robotics, FTC, FIRST Tech Challenge, Behind the Bumpers, FIRST Robotics Competition, Tyler Olds, FTC Top 25, FUN FTC, Behind the Bot, first forward, rapid react, rapid react robot, frc rapid react, ftc robot, powerplay ftc, powerplay robot, frc 7028, apriltags frc, vision programming frc, jumpstart frc
Id: TG9KAa2EGzQ
Length: 50min 32sec (3032 seconds)
Published: Mon Dec 05 2022