Unity line of sight checking using sensors [AI #08]

Video Statistics and Information

Captions
Kia ora, I'm the Kiwi Coder, and this is episode 8 of AI. In this episode we'll be attaching a sensor to the agent. The agent will receive information about objects that enter the sensor's field of view and can use that information to make decisions. For example, when the player enters the sensor's range here, the AI decides to start attacking the player. Obstructions are also respected by the sensor, to prevent the AI from being able to see through walls. Cool, let's get into it.

I'd like to say a massive thank you to all the patrons supporting this channel. This channel would definitely not be where it is today without you guys, so thank you very much. If you're interested in the project files used to create these videos, then please head over to Patreon from the link in the description below.

Just before we get started, I want to go over the shape that we're going to be creating for the sensor. It's basically a wedge where we can control the distance, the angle, and also the height. The wedge itself has six points of interest: the bottom centre, the top centre, the bottom left and top left relative to the agent, and the bottom right and top right relative to the agent. The mesh itself is made of five sides: the top side and the bottom side, which both have one triangle; the left and right sides, which both have two triangles; and the far side, which for now you can think of as a flat side, which also has two triangles. Later on we will subdivide it to give it this curved shape here.

Start by creating a new script called AISensor and attach it to the agent, then create four new public properties: one each for the distance, the angle, the height, and a colour to draw the sensor. Now we need to create a mesh to represent our sensor, so create a private mesh property and a new function called CreateWedgeMesh, which is going to do the bulk of the work here. We instance a new mesh and return it. The number of triangles to start with is just going to be
eight. Each triangle has three vertices, so we can calculate the number of vertices by multiplying the number of triangles by three. Now we allocate an array for our vertices using the number of vertices we just calculated, and similarly an array for the triangle indices, which is equal in length to the number of vertices because I'm ignoring index sharing here.

To build our triangles it's easiest if we define those six points I mentioned earlier. The bottom centre is the origin of the wedge, which is (0, 0, 0). The bottom left and bottom right we calculate by taking the forward axis of the agent, multiplying it by the sensor's distance, and then rotating it to the left and to the right around the Y axis using our angle parameter. The top centre, top left, and top right are the same positions as the bottom centre, bottom left, and bottom right, except we shift them up by multiplying the up axis by the height of the sensor and adding that on to the corresponding bottom point.

We need to keep track of whereabouts we are in the vertices array, so we create an integer here. The left side and the right side of the wedge both have two triangles; the far side also has two triangles; the top and the bottom sides only have one each. So it's about to get a little bit hairy. For the left side we define two triangles. For the first triangle, we start at the bottom centre, then move out to the bottom left, and then up to the top left. For the second triangle we continue: starting at the top left, moving back towards the top centre, and finally, to close the loop, back down to the bottom centre. Similarly for the right-hand side we do the same, except going in the opposite direction: this time we start at the bottom centre, go to the top centre, and out to the top right; for the second triangle, continuing from the top right down to the bottom right and then back towards the bottom centre. Similarly for the far side, except this time we start from the bottom left and go to
the bottom right, then up to the top right for the first triangle; for the second triangle we go from the top right over to the top left and down to the bottom left. The top and the bottom are a bit simpler because there's only one triangle each: we use the top vertices for the top triangle, and for the bottom we use the bottom vertices but in the opposite order. This is just to ensure the normals always point outwards from the centre of the mesh. The triangles array that we declared earlier can be initialised really simply, because there's no vertex sharing: we just loop over the number of vertices we have, which is also the number of indices for the triangles, and initialise it like that. Finally, we assign the vertices and triangles arrays to the mesh and call RecalculateNormals.

Inside OnValidate I'm going to recreate the mesh each time, so that if we change the angle or the height or any of those parameters in the inspector, this function gets called. Inside OnDrawGizmos we set the colour of the gizmos to the mesh colour, and finally we can draw the mesh at the agent's transform position and rotation. And voilà, we have a mesh. I should have made this yellow, because it's like a wedge, like a piece of cheese. You can also adjust the transparency of the mesh and the angle, and if you adjust the angle too much it looks completely wrong, so let's fix that.

We need to split the mesh into segments, where each segment is kind of like a pizza slice. Each segment has four triangles: two for the far side and then one each for the top and bottom; the remaining two and two are just for the left and the right sides of the wedge. To calculate the sides of each segment we need two angles: the current angle, which is like the left side of the segment, and the delta angle, which is the angle of the segment itself, calculated from the total angle of the wedge divided by the number of segments we have. We now loop over each of the segments,
incrementing the current angle as we go. For each segment we redefine those bottom-left and bottom-right points we were using before; the bottom centre and the top centre don't change per segment, so we can leave them alone. The left is calculated from the current angle, and the right is calculated from the current angle plus the delta angle. Finally, we move our far-side, top-side, and bottom-side definitions inside that loop. This has effectively subdivided our single wedge into multiple wedges, giving us a nice curved edge that looks good at any angle. Awesome.

So now it's time to do the bounds checking for the sensor, and I'm going to test this in edit mode, so I'm going to add ExecuteInEditMode here. Instead of updating the sensor each frame, I'm going to add a scan frequency to control how frequently the sensor scans its environment. The layer mask describes which layers the sensor is interested in. The array of colliders is a buffer to store the results from our physics operation, along with the count. The scan interval and scan timer variables control when we next scan. The scan interval can be calculated by taking 1 over the scan frequency, which we also do inside OnValidate, just in case we change the frequency in the inspector. We subtract Time.deltaTime from the scan timer each frame, and if the scan timer is less than zero, we increment the scan timer by our scan interval and call a function called Scan, which we'll define right now. To start with, I'm just going to get all of the objects around the agent using Physics.OverlapSphereNonAlloc, passing in the agent's transform position, the sensor's distance as the radius, and also the colliders array, because this is the non-allocating version of the function. Adding some debug draw here, we can visualise the overlap sphere check and the objects it found. Here I'm drawing a wire sphere for the entire overlap check, using the sensor's distance as the radius
again. The non-allocating version returns the count, which is the number of objects it actually found and wrote into the colliders array, so we need to use that count when we render each collider's position. Now, when I drag a pickup into the sphere and set the layer mask on the sensor to Pickup, we can see a sphere drawn at the pickup's position.

So now it's time to actually store a list of objects that are within the sensor. The colliders buffer we have here stores the objects that are within the radius of the agent, but we need to filter that down to just the objects that are within the sensor itself. Create a new list of GameObjects; this is going to store the objects that are actually within the sensor's bounds, rather than just within that radius. Now create a public function called IsInSight, which takes a GameObject as a parameter, and for now just returns true. The count returned from the overlap sphere function tells us how many objects were written into the colliders array, so we know how many of those objects are valid. We loop over the colliders array using that count and check if each of the game objects in the array is in sight; if it is, we add it to our objects list. Just to note, I'm clearing the objects list before doing this loop each time. Now I'm going to add a little bit more debugging information: this time we draw a sphere at the position of each object in the objects list, rather than the colliders array, and I'm going to set that to green, because that will represent the objects that are within the bounds of the sensor. Cool, and you can see the pickup is now turning green, but that's mainly just because our IsInSight function is returning true every time.

Back in the AISensor script, we can fill in the IsInSight function to do some proper bounds checking. The first thing I'm going to check is whether the object is within the height of the sensor, and we can do this by checking whether the vertical component of the direction from the sensor's origin to the object is greater than the height of the sensor, or less than zero. Switching to side view here, we can verify that the pickup turns green when it's inside the bounds of the sensor. The next thing to fix is checking whether it is actually within the sensor's angle. First, zero out the Y value of the direction; this ensures that the angle we're calculating is completely horizontal and no vertical element is taken into account. Then we calculate the angle between the forward axis of the agent and the direction to the pickup, in this case, and check if that angle is greater than the angle of our sensor. We can already see it goes red when it's outside the sensor and green when it's within the sensor. Awesome.

The final thing to do is check whether there is an object in the way between the agent and the object in the sensor; in that case we want to disregard the object. We need to do a raycast, so I'm going to create a new layer mask called occlusion layers, which specifies which layers occlude the object, or act as blockers, basically. I'm going to shift the origin and the destination points up by half the height, just so the ray is cast from the centre of the wedge, and I'm going to use Physics.Linecast, passing in the occlusion layers; if it found something, then we just return false. Testing this out, it's not currently working, just because I need to set the occlusion layers to Default; and now, when I move the object in and out, the pickup correctly turns green and red. Cool.

So now it's time to actually integrate this sensor into our agent. I'm going to add the AISensor to the AIAgent script, so our states can get access to the sensor. The find pickup state needs to be completely rewritten,
because previously it was just using FindObjectsOfType, so it effectively already knew where the pickups were in the world. This time I'm going to create a variable called pickup and initialise it to null when we enter the state, then check: if the agent doesn't have a pickup, it calls a function called FindPickup. This function looks at the sensor on the agent and checks if there are any objects there; if there is an object, it returns the first one. Finally, we call that function inside the update loop. Once the agent has found a pickup, we need to collect it, so I'm going to make a new function called CollectPickup, passing in the pickup to collect, and here we just set the agent's NavMesh destination to the pickup's transform position. So if FindPickup returned a pickup, we just need to call that CollectPickup function. Now I'm going to move the pickup outside the range of the sensor and, in play mode, move it back inside the sensor. Okay, cool, so it worked, but there is a crash. The reason for the crash is that the pickup has been deleted, but we're still accessing it inside the debug draw, so I'm going to get rid of that for now, because we don't need it any more. Another thing I'm going to do is convert this objects list into a private variable, turn the main Objects accessor into a public property, and return the private list from it; but first I'm going to remove all of the null objects, which you can do using RemoveAll with that predicate there, and then switch the Scan function over to use the private version of the objects list. Testing this out: moving the pickup into the sensor, the agent collects it, and there's no more crash.

If the agent can't see a pickup, it doesn't really know where to go, so I'm going to create a script that
generates random points within a world bounds and have the agent wander around until it finds a pickup to collect. To do this, I'm going to create a WorldBounds script with min and max transform properties, which I'm going to set up as sub-objects of the arena and assign to the WorldBounds script. This is a really simple way to generate random positions within the size of your game world. I think something more sophisticated will be needed down the road, because not all worlds are square, but for now this is okay. Inside the update function, just after we call FindPickup, I'm going to create a block called Wander, and here we check whether the NavMeshAgent doesn't have a path, i.e. it's not walking anywhere; in that case I want it to walk somewhere random. Here I get the world bounds using GameObject.FindObjectOfType and generate a random point between the min and max values of the world bounds. We get the min and max properties from the world bounds and then generate a random position using Random.Range for each of the components of the vectors, the X, the Y, and the Z, and finally set the NavMeshAgent's destination to that random position. Testing this out, I'm going to enable the path display for the debug NavMeshAgent so I can see where the agent is trying to go, and it looks like it's now correctly wandering around in the world, which is pretty cool. I want it to find a pickup, but it has to get somewhere near a pickup first; once the pickup comes within the range of the sensor, it should turn green and the agent should collect it. Yay, okay, so it actually found two at once, and then it manages to kill the player. Amazing.

The attack player state is not using the sensor just yet; it's only the find weapon state that is currently using the sensor. So there is still a bug at the moment where, if there's an object between the player and the agent, the agent will try to attack the player anyway, like you can see here. We can fix this pretty easily by checking if the player is in sight. I'm going to get rid of these SetFiring calls inside the attack player state and create a function called UpdateFiring, and in this function all we do is call the sensor's IsInSight function, passing in the player transform. If the player is in sight, we set the weapon's firing to true; if not, we set it to false. Pretty simple. The agent still knows exactly where the player is and will walk straight to it, and I'll address that in a future video, but for now at least the agent doesn't try to shoot the player through walls, which we can see working here.

The final function to implement is a filtering function for the sensor. If I have multiple layers selected here, like Pickup and Character, the agent doesn't really know which type of object it's looking at, whether it's a pickup or a character, for example. Looking at our FindPickup function again, we can see that it's just returning the first object in the objects list, and that could be either a character or a pickup, and in this case we're only interested in pickups. In the AISensor script I'm going to create a new function called Filter, which returns the number of objects on a given layer inside the objects list, writing them into a buffer. First we convert the layer name into an integer using LayerMask.NameToLayer, and we keep track of the number of objects found while we loop over the objects list. Here we check if the object's layer matches the layer we calculated; if so, we write the object into the buffer, incrementing the count as we go. If the buffer length is equal to the count, we know that we've filled the
buffer and have to return early. Finally, we return the number of objects we found. Back in the find weapon state, we can now use this function to find the pickups that are within the sensor. Here I'm allocating a buffer of size one, just because I was taking the first object; if you were interested in multiple pickups and wanted to take the most central one, for example, then that buffer would need to be larger. I'm using the Filter function, passing in "Pickup" as the layer name, checking the count, and returning the first object in the pickups buffer we allocated. Testing this out is a little bit convoluted: this sensor has got both the Character and the Pickup layers set, so the sensor sees all objects on both the Pickup and the Character layers; but because we're filtering for only pickup objects inside the find weapon state, if I move this pickup onto the Character layer, the agent will no longer see it, even though the sensor will. And if I move the pickup back onto the Pickup layer, the agent correctly sees the pickup again. Cool.

And that's it for this video. If you made it to the end, then amazing, thank you. Here's your reward: psychedelic pizza slices. We'll see you in the next one.
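The flat wedge mesh described in the transcript can be sketched roughly as follows. This is a reconstruction from the narration, not the video's project files: field names follow what is said on screen (distance, angle, height, a mesh colour), but the exact winding order and default values are assumptions.

```csharp
using UnityEngine;

public class AISensor : MonoBehaviour
{
    public float distance = 10f;   // how far the wedge reaches
    public float angle = 30f;      // half-angle to each side, in degrees
    public float height = 1f;      // vertical extent of the wedge
    public Color meshColor = Color.red;

    Mesh mesh;

    Mesh CreateWedgeMesh()
    {
        Mesh mesh = new Mesh();

        int numTriangles = 8;                // 2 left + 2 right + 2 far + 1 top + 1 bottom
        int numVertices = numTriangles * 3;  // no vertex sharing

        Vector3[] vertices = new Vector3[numVertices];
        int[] triangles = new int[numVertices];

        // The six points of interest, in local space.
        Vector3 bottomCenter = Vector3.zero;
        Vector3 bottomLeft = Quaternion.Euler(0, -angle, 0) * Vector3.forward * distance;
        Vector3 bottomRight = Quaternion.Euler(0, angle, 0) * Vector3.forward * distance;

        Vector3 topCenter = bottomCenter + Vector3.up * height;
        Vector3 topLeft = bottomLeft + Vector3.up * height;
        Vector3 topRight = bottomRight + Vector3.up * height;

        int vert = 0;

        // Left side: two triangles looping centre -> left -> top.
        vertices[vert++] = bottomCenter;
        vertices[vert++] = bottomLeft;
        vertices[vert++] = topLeft;
        vertices[vert++] = topLeft;
        vertices[vert++] = topCenter;
        vertices[vert++] = bottomCenter;

        // Right side: same loop in the opposite direction.
        vertices[vert++] = bottomCenter;
        vertices[vert++] = topCenter;
        vertices[vert++] = topRight;
        vertices[vert++] = topRight;
        vertices[vert++] = bottomRight;
        vertices[vert++] = bottomCenter;

        // Far side.
        vertices[vert++] = bottomLeft;
        vertices[vert++] = bottomRight;
        vertices[vert++] = topRight;
        vertices[vert++] = topRight;
        vertices[vert++] = topLeft;
        vertices[vert++] = bottomLeft;

        // Top, then bottom in reverse order so its normal points down.
        vertices[vert++] = topCenter;
        vertices[vert++] = topLeft;
        vertices[vert++] = topRight;
        vertices[vert++] = bottomCenter;
        vertices[vert++] = bottomRight;
        vertices[vert++] = bottomLeft;

        // No vertex sharing, so the index buffer is just 0..numVertices-1.
        for (int i = 0; i < numVertices; i++)
            triangles[i] = i;

        mesh.vertices = vertices;
        mesh.triangles = triangles;
        mesh.RecalculateNormals();
        return mesh;
    }

    void OnValidate()
    {
        mesh = CreateWedgeMesh();   // rebuild whenever inspector values change
    }

    void OnDrawGizmos()
    {
        if (mesh)
        {
            Gizmos.color = meshColor;
            Gizmos.DrawMesh(mesh, transform.position, transform.rotation);
        }
    }
}
```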
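The segmented ("pizza slice") version of the mesh can be sketched like this, again reconstructed from the narration: `segments` is the new public field, the left and right sides use the outermost edges of the whole wedge, and the far, top, and bottom faces are rebuilt once per segment so the outer edge curves.

```csharp
// Replacement for CreateWedgeMesh in the AISensor sketch above; the
// segment count and windings are assumptions based on the narration.
public int segments = 10;

Mesh CreateWedgeMesh()
{
    Mesh mesh = new Mesh();

    // 4 triangles per segment (2 far + 1 top + 1 bottom), plus 2 + 2
    // for the left and right sides of the whole wedge.
    int numTriangles = (segments * 4) + 2 + 2;
    int numVertices = numTriangles * 3;

    Vector3[] vertices = new Vector3[numVertices];
    int[] triangles = new int[numVertices];

    Vector3 bottomCenter = Vector3.zero;
    Vector3 topCenter = bottomCenter + Vector3.up * height;

    // Outermost edges, used only by the left and right sides.
    Vector3 farLeft = Quaternion.Euler(0, -angle, 0) * Vector3.forward * distance;
    Vector3 farRight = Quaternion.Euler(0, angle, 0) * Vector3.forward * distance;
    Vector3 farLeftTop = farLeft + Vector3.up * height;
    Vector3 farRightTop = farRight + Vector3.up * height;

    int vert = 0;

    // Left side.
    vertices[vert++] = bottomCenter; vertices[vert++] = farLeft;  vertices[vert++] = farLeftTop;
    vertices[vert++] = farLeftTop;   vertices[vert++] = topCenter; vertices[vert++] = bottomCenter;

    // Right side.
    vertices[vert++] = bottomCenter; vertices[vert++] = topCenter; vertices[vert++] = farRightTop;
    vertices[vert++] = farRightTop;  vertices[vert++] = farRight;  vertices[vert++] = bottomCenter;

    float currentAngle = -angle;
    float deltaAngle = (angle * 2) / segments;
    for (int i = 0; i < segments; ++i)
    {
        Vector3 bottomLeft = Quaternion.Euler(0, currentAngle, 0) * Vector3.forward * distance;
        Vector3 bottomRight = Quaternion.Euler(0, currentAngle + deltaAngle, 0) * Vector3.forward * distance;
        Vector3 topLeft = bottomLeft + Vector3.up * height;
        Vector3 topRight = bottomRight + Vector3.up * height;

        // Far side of this segment.
        vertices[vert++] = bottomLeft; vertices[vert++] = bottomRight; vertices[vert++] = topRight;
        vertices[vert++] = topRight;   vertices[vert++] = topLeft;     vertices[vert++] = bottomLeft;

        // Top and bottom slices.
        vertices[vert++] = topCenter;    vertices[vert++] = topLeft;     vertices[vert++] = topRight;
        vertices[vert++] = bottomCenter; vertices[vert++] = bottomRight; vertices[vert++] = bottomLeft;

        currentAngle += deltaAngle;
    }

    for (int i = 0; i < numVertices; i++)
        triangles[i] = i;

    mesh.vertices = vertices;
    mesh.triangles = triangles;
    mesh.RecalculateNormals();
    return mesh;
}
```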
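The timed scan using the non-allocating overlap sphere could look roughly like this. The buffer size of 50 is an assumption; the field and method names follow the narration.

```csharp
// Fields and methods that would live in the AISensor class; a sketch
// of the scanning logic, not the video's exact code.
public float scanFrequency = 30f;  // scans per second
public LayerMask layers;           // layers the sensor cares about

Collider[] colliders = new Collider[50];  // result buffer (size assumed)
int count;
float scanInterval;
float scanTimer;

void OnValidate()
{
    scanInterval = 1.0f / scanFrequency;  // recompute if changed in the inspector
}

void Update()
{
    scanTimer -= Time.deltaTime;
    if (scanTimer < 0)
    {
        scanTimer += scanInterval;
        Scan();
    }
}

void Scan()
{
    // Non-allocating overlap test: writes hits into `colliders` and
    // returns how many entries are valid.
    count = Physics.OverlapSphereNonAlloc(transform.position, distance,
                                          colliders, layers,
                                          QueryTriggerInteraction.Collide);
}
```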
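The three checks in IsInSight (height, angle, occlusion) can be sketched as below, assuming the same `distance`, `angle`, `height` fields as above; `occlusionLayers` is the extra layer mask the video adds for blockers.

```csharp
// A sketch of the IsInSight check described in the transcript.
public LayerMask occlusionLayers;

public bool IsInSight(GameObject obj)
{
    Vector3 origin = transform.position;
    Vector3 dest = obj.transform.position;
    Vector3 direction = dest - origin;

    // 1. Height check: reject objects above or below the wedge.
    if (direction.y < 0 || direction.y > height)
        return false;

    // 2. Angle check: flatten the direction so only the horizontal
    //    angle counts, then compare against the sensor's half-angle.
    direction.y = 0;
    float deltaAngle = Vector3.Angle(direction, transform.forward);
    if (deltaAngle > angle)
        return false;

    // 3. Occlusion check: cast from the middle of the wedge so the ray
    //    doesn't skim along the ground.
    origin.y += height / 2;
    dest.y = origin.y;
    if (Physics.Linecast(origin, dest, occlusionLayers))
        return false;

    return true;
}
```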
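The wander step could be sketched like this: a tiny WorldBounds marker component, plus a Wander method that would live in the find pickup state. The method placement and names are assumptions based on the narration.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Min/max markers placed as sub-objects of the arena.
public class WorldBounds : MonoBehaviour
{
    public Transform min;
    public Transform max;
}

// Inside the state, called from Update after FindPickup:
// void Wander(NavMeshAgent navMeshAgent)
// {
//     if (!navMeshAgent.hasPath)  // not walking anywhere
//     {
//         WorldBounds bounds = GameObject.FindObjectOfType<WorldBounds>();
//         Vector3 min = bounds.min.position;
//         Vector3 max = bounds.max.position;
//         Vector3 randomPosition = new Vector3(
//             Random.Range(min.x, max.x),
//             Random.Range(min.y, max.y),
//             Random.Range(min.z, max.z));
//         navMeshAgent.destination = randomPosition;
//     }
// }
```

As the video notes, FindObjectOfType plus an axis-aligned box is a quick hack that only suits roughly rectangular worlds.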
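Finally, the layer filter and its use in the find weapon state could look like this, assuming the sensor exposes its in-sight objects through a public `Objects` list as described above.

```csharp
// Sketch of AISensor.Filter: copies in-sight objects on the named layer
// into a caller-supplied buffer and returns how many were written.
public int Filter(GameObject[] buffer, string layerName)
{
    int layer = LayerMask.NameToLayer(layerName);
    int count = 0;
    foreach (var obj in Objects)   // the public list of in-sight objects
    {
        if (obj.layer == layer)
            buffer[count++] = obj;

        if (buffer.Length == count)
            break;   // buffer full, return early
    }
    return count;
}

// Usage inside the find weapon state: a buffer of size one, because
// only the first pickup is taken.
// GameObject[] buffer = new GameObject[1];
// int count = agent.sensor.Filter(buffer, "Pickup");
// GameObject pickup = count > 0 ? buffer[0] : null;
```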
Info
Channel: TheKiwiCoder
Views: 10,759
Keywords: unity line of sight, unity cone of sight, unity field of view, unity line of sight raycast, enemy line of sight unity, unity ai line of sight, enemy line of sight, unity line of sight cone, unity field of view raycast, ai line of sight, unity enemy line of sight, unity tutorial, unity line of sight 3d, unity line of sight fog of war, unity line of sight shader, unity3d enemy line of sight, unity
Id: znZXmmyBF-o
Length: 19min 24sec (1164 seconds)
Published: Sun Feb 21 2021