>> Hi, everyone.
My name is Paulo Souza, and today I will give you
an overview of some of the AI features that
come with the Unreal Engine. To demonstrate that, we have got
this very simple stealth game, and we’re going to use that
as a basis for our enemy AI. We have built some visual
features in this actor, like the laser light effects
you are seeing. Those will serve
as a visual cue to the player, representing the enemy's visual
cone and the current AI state. We have also built an actor
that can be thrown, like a rock. We’re going to use this
to demonstrate how to make the AI
aware of noises. With all that said, let’s start. Creating AI in Unreal is facilitated
by the gameplay framework, a built-in set of features that helps organize the logic
and interaction between entities
in your application. For us to understand
the AI framework in Unreal, we need to understand
the relationship between pawns and controllers.
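As a preview of the analogy developed below, the possession relationship can be sketched in plain Python. This is a conceptual model only; none of these classes are Unreal’s actual API:

```python
# Conceptual model of Unreal's pawn/controller split (plain Python,
# not Unreal's classes): the pawn is the agent's body in the world,
# and the controller possesses it and decides what it does.

class Pawn:
    """An agent's physical representation in the world."""
    def __init__(self, name):
        self.name = name
        self.location = (0.0, 0.0)

    def move_to(self, location):
        self.location = location


class AIController:
    """Holds the behavior rules; drives whichever pawn it possesses."""
    def __init__(self):
        self.pawn = None

    def possess(self, pawn):
        self.pawn = pawn

    def tick(self):
        # In Unreal, a behavior tree running here would make this decision.
        if self.pawn is not None:
            self.pawn.move_to((100.0, 50.0))


controller = AIController()
controller.possess(Pawn("Enemy"))
controller.tick()
print(controller.pawn.location)  # (100.0, 50.0)
```

The point of the split is that the same controller logic can drive any pawn it possesses, which is exactly how one AI controller Blueprint is reused for every enemy.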
A pawn is a type of actor that can be an agent
in the world, and every pawn can be possessed
by a controller. The pawn is the body
in our analogy. And if the pawn is
the physical representation of an agent in the world, then the controller class
will be the soul, where we can set rules
for the pawn's behavior. Continuing that analogy,
and the behavior tree, which usually runs
inside a controller, would be the brain, the object that actually
makes the decisions, and commands the controller
to do things. And finally,
the blackboard exists to support the behavior tree
with data it can use
to make decisions. The blackboard serves as the
brain's memory in our analogy. Let’s continue
to set up our pawn to have an AI controller and create
our first behavior tree. Before we start,
we will make sure our enemy pawn has a pre-configured
AI controller class. We’re going to use a clean AI
controller Blueprint for this. Then, we’re going to create
our first behavior tree, which we’re calling EnemyBT. After that, we’re ready
to configure our AI controller to run that behavior tree
we have just created. Double-click our behavior tree,
and we’re ready to start. But what is really
a behavior tree? A behavior tree is a model
that describes switching
between a finite set of tasks, allowing developers to create
very complex logic composed of these simple tasks. It’s built as a hierarchical
set of nodes that control the flow of
decision-making of an AI entity. It’s very easy to expand without worrying about how
the individual tasks are implemented. It’s also easier
to understand the code. It’s very easy to debug. And it’s widely used
across the industry, and that is why we’re going
to talk about this here. You usually find
four types of elements in a behavior tree: composites,
decorators, services, and tasks. Composites define
the root of the branch, and the base rules
for how that branch is executed. Decorators are also known
as conditionals. They define whether or not
a branch in a tree, or even a single node,
can be executed. Services will execute
at their defined frequency, as long as their branch
is being executed. And tasks, which are the nodes
that actually do things, can have decorators or services
attached to them too. It’s very important
for you to understand how the execution
flow of BTs works. BT branches are executed
in order, from the top to the bottom
at the higher level, and from the left to the right
at the branch level, which means
that branches to the left have execution priority
over the branches to the right. That is important
to understand before we can start organizing
our decision-making tree. It’s also very important for you
to understand the difference between the two composite types,
sequence and selector. A sequence composite executes
every child or branch in order, until one of them fails. A selector composite finds
and executes the first child that does not fail. This makes
sequence composites very useful when you want to execute a list
of tasks one after the other, but need the entire
branch to be canceled if one of the tasks fails. The selector, meanwhile,
is more useful when you need to make a decision
and select which task or branch you want to execute based on,
let’s say, a set of parameters. We’re going to make it
much easier to understand once we start demoing this.
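Before the demo, the two composite rules can be sketched in plain Python. This is a conceptual model only, not Unreal’s API; the names `sequence`, `selector`, and `task` are ours:

```python
# Minimal model of behavior tree composites (plain Python, not Unreal's
# API). Tasks return True (success) or False (failure); children are
# listed left to right, i.e. in priority order.

def sequence(children):
    """Run every child in order; fail as soon as one fails."""
    def run():
        return all(child() for child in children)  # short-circuits on failure
    return run

def selector(children):
    """Run children in order until one succeeds; fail only if all fail."""
    def run():
        return any(child() for child in children)  # short-circuits on success
    return run

log = []
def task(name, succeeds=True):
    def run():
        log.append(name)
        return succeeds
    return run

# Sequence: the failing middle task cancels the rest of the branch.
log.clear()
sequence([task("patrol"), task("check", succeeds=False), task("wait")])()
print(log)  # ['patrol', 'check'] -- 'wait' never runs

# Selector: it skips failing children and picks the first one that works.
log.clear()
selector([task("chase", succeeds=False), task("patrol")])()
print(log)  # ['chase', 'patrol'] -- falls through to 'patrol'
```

Note how both composites short-circuit; that is exactly the left-to-right priority order described above.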
So, let’s do it. Let’s start by adding
a sequence node. From the root, only composite
nodes can be added. After that, we’re going
to add three tasks, copying and pasting them and connecting them to our sequence
composite node. As we said before, you can see
that we have three tasks that are executed in order,
one after the other. Let’s then select
one of the tasks and add a decorator
that forces it to fail. And what we’re seeing here
is that since the second task in that branch fails
during execution, the entire sequence node
is aborted, and none of the tasks
to the right are ever going to be executed. Okay. But what if we change
to a selector composite? Well, what we can see is that it’s only executing
the first task. As we mentioned before,
the selector composite finds and executes the first child in the branch
that does not fail. And what if we change
the order of the nodes? As we mentioned,
branches execute from the left to the right. And if we mouse over the circles
on the task nodes, you can see the execution
index of these tasks, or the order of the execution. And if we play the game,
you can see that since the first task
will always fail, the selector composite actually
tries to execute the next child. So, let’s start building our AI.
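Here is a sketch of the patrol loop we’re about to assemble, again in plain Python rather than Unreal’s API. The blackboard is modeled as a plain dictionary, and `get_random_location` and `move_to` are our stand-ins for the custom task and the built-in Move To task:

```python
import random

# Sketch of the patrol behavior (conceptual, not Unreal's API): a
# blackboard stores data for the tree, one task writes a random point
# near the pawn into it, and "move to" fails if the key is unset.

blackboard = {}  # key -> value store shared by the tree's nodes

def get_random_location(pawn_location, radius):
    """Like the custom BT task built below: pick a point near the pawn,
    store it under the 'PatrolLocation' key, then report success."""
    x, y = pawn_location
    blackboard["PatrolLocation"] = (
        x + random.uniform(-radius, radius),
        y + random.uniform(-radius, radius),
    )
    return True

def move_to(key):
    """Like the built-in Move To task: fails when the key is missing."""
    if key not in blackboard:
        return False
    return True  # a real task would path toward blackboard[key]

# Without the first task, Move To fails -- so its sequence is aborted.
print(move_to("PatrolLocation"))  # False

# With it, the patrol sequence can succeed.
get_random_location((0.0, 0.0), radius=500.0)
print(move_to("PatrolLocation"))  # True
```

This mirrors why the Move To task fails in the demo until the blackboard key is written by our own task first.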
We will create a sequence node and add a few actions,
like wait and move to tasks. If we play the game as it is now, our move to task
will always fail, because this task specifically needs a vector variable
to be set in order to work. For that, we’re going to need
to set up a blackboard and create that variable. The blackboard is a place
where we can store data to be used
for decision-making purposes. It can be used
by a single AI pawn, and some variables
can actually be shared by a squad of AI entities. It can also be used
with behavior trees, but behavior trees
do not necessarily need to use
blackboards to work. To create a blackboard,
right-click in the Content Browser, then find Artificial
Intelligence, Blackboard. Later on, set it
as your blackboard on your behavior tree. After that, we will create
a new key of type vector value. We’re going to call it
patrol location, and we’re going to use it to store location data
for this behavior. To write the value
to our blackboard, we’re going to create
our first behavior tree task. Create a new Blueprint object
based on BTTask Blueprint Base. To implement logic
when this task is executed, override the receive
execute AI event. Let’s start by adding
a Set Blackboard Value as Vector node, and let’s promote
the blackboard key in this function to a variable,
calling it blackboard key. Let’s add a random
location node. Let’s set the origin
to the controlled pawn's location, or the AI entity's location. And do not forget
to make the radius of that random node
a variable, too. Both the radius and the blackboard
key variables have to be marked
as public and visible, so we can change them later
in the behavior tree. After we compile and save,
we can see that our new task, get random location,
is already on the list. And that we can also
customize those variables. We’re going to set our
blackboard key as the patrol location. But if we play the game, you see
that the get random location task is actually never finishing
its execution. And the reason for that is
that every behavior tree task has to finish with a failure
or a success condition, so the behavior tree can control the flow of the execution of
the other tasks in that branch. So, now we’re
finally good to go. You can see that our enemy
is finally patrolling the area, picking a random
location around itself every time
the behavior tree cycles. We’re going to call
this our patrol behavior. Let’s now add
another behavior to our AI. We have mentioned
before that selector nodes behave differently,
and that you can use them when you need to pick
a single branch to execute. But if we play the game, our selector node
would just pick its first child. But what if we add
a higher priority child with a valid task? Well, it will only
select that branch. And I know I am risking
being repetitive here, but that is really
what selector nodes do. So, as we said before,
the selector node will try to pick the first
branch that does not fail. So, what we really need here
is for this first branch to fail given
a certain condition. For that, we’re going
to use a decorator node. And the decorator
node we’re using is the blackboard-based
condition. It checks if the
blackboard key is valid. For this behavior,
we’re going to create a new blackboard key
called target actor. We’re also going to
set it as an actor, so we can reuse its location in other parts
of our behavior tree. In our decorator node, we’re now going to select
our target actor. And since that target actor
was never set, that conditional
will always fail. So, the selector node
now picks the next child. Now, we will be able to switch
between these different branches by setting that blackboard key
using the perception system. The AI perception system
provides a way for pawns to receive data
from the environment, such as where noises
are coming from, or if the AI
was damaged by something, or even if the AI
sees something. This is accomplished with
the AI perception component. It acts as a generic
stimuli listener, and is able to gather
its stimuli from other actors. Let’s now open our enemy
AI controller, and then add
an AI perception component. Inside that, we’re going
to add a new sense. We’re going to use
the AI sight config. And then configure our sense,
sight radius, and the angle. We’re going to need
a workaround: we’re going to have to select
all these items, because the detection
by affiliation feature is not currently exposed
to Blueprints. And the last workaround is to
edit the config file DefaultGame.ini and add these two lines
to make sure that only the pawns we want will be detected
by the perception system. And the reason for that is that we want to select
those actors manually. For this, we’re going
to use this component, the AI Perception
Stimuli Source. And then we’re going
to add the senses that we want the perception
system to detect in this actor. So, let’s now get back
to our enemy AI controller, and then on our
AI perception component, we’re going to use the on-target
perception updated event. Then we’re going to get
the blackboard and set a value as an object. So, the blackboard key name we want to write to
is the target actor, and the object value
is going to be the actor we have just detected. Let’s head back to our behavior
tree and do a few things. Let’s change the wait time. Let’s add a move
to the target actor, and a really nice feature
is that you can actually rename those nodes to something that makes more sense
for later reference. Our enemy pawn is pre-configured
with a Blueprint interface that uses an enum to switch
between different states: neutral, investigating,
and alerted. This changes the lighting
effects' colors and serves as a visual cue to the player. To be able to change that
from the behavior tree, we’re also going to create
a new behavior tree task that calls that same function
using a Blueprint interface. And now, we just need to add
that new task to the branch and select the proper state
for that behavior. We have now covered
most of the features of Unreal's AI framework. So, let’s now talk
about debugging. Unreal comes with a built-in
visual debugger for AI features. So, let’s check that. To show
the gameplay debugger UI, press the tilde key during play. You may have to configure
another key in the project settings,
under Gameplay Debugger. Make sure you select a key
that works for your keyboard. The gameplay debugger presents
this UI overlay with information about your enemy pawns
and AI controllers, including information
from the perception system and the behavior trees. You can change
between different modes by pressing the num pad keys
on your keyboard. You can also press tab to
decouple from your player pawn and fly around
like a spectator pawn. Let’s try the perception
system debug mode. You can see that it shows the
enemy's sight radius and angle. And we’re able to see if the
player has been detected or not, represented
by this green sphere. If you pay attention, even though our player
is in the detection area, it takes a few seconds for
the behavior tree to be updated. And the reason for that
is that the patrol branch is still being executed,
even though the blackboard value was already set
by the perception system. We can fix this
by selecting the proper abort mode
in the decorator node. Some decorator nodes
can force the end of their own execution, abort lower-priority branches,
or even both. In this case, we will set it
to abort everything, and re-evaluate
the behavior tree from the root. And now, as soon as we enter
the sight detection area of our enemy, our behavior tree
is re-evaluated, and the right branch
is picked almost instantly, getting the AI
in the desired alert state. Let’s now add another behavior
to our enemy AI. We want our enemy to be alerted
of noises close by, and head to that location
to investigate it. First, let’s create a new vector
blackboard key called target location. Then, we’re going to copy and paste the blackboard
decorator node and change the blackboard key
to the target location key that we have just created. Let’s copy and paste
some of the other tasks. Let’s set the alert state
to investigate. We’re going to add a move
to node to our target location, add another wait task,
and we should be good to go. We’re now going to create
yet another behavior tree task to clear the blackboard
value when we leave. We’re going to use the clear
blackboard value node for that. Just finish it exactly
like we did before. Back to our behavior tree. Let’s add the task
we have just made, and clear the target location
blackboard value. We now just need
to make sure that the actor who is going to trigger
the hearing sense has the AI perception
stimuli source component, and that it has the proper
senses registered as a stimuli. Here, we’re going to detect
when this actor hits a surface for the first time,
and then trigger a noise event using the report noise
event method. Let’s now head back
to our enemy AI controller. We will need to make a change
to our target perception event. We will need to check
if the stimulus type of a given event
is of type AI sight, and treat it correctly.
For the sake of simplicity, let’s leave the target actor
blackboard key as the conditional
for our sight sense. And if it’s not
a sight detection, then we’re going to assume
it’s a hearing stimuli. And if it’s, then we’re going
to set the blackboard key target location to a vector, and that vector is going to be
the actual location the perception system detected,
the stimulus location. Now, we just need to make sure
that our perception system is able to detect noises, too. And we’re adding an AI
hearing config to it, changing some of its data,
and doing the same workarounds that we did before
with the AI sight config. And if we play our game, we can now distract
our enemy AI with noises, moving it out of our way,
at least temporarily. Our enemy is able to randomly
patrol areas around, but its movement
is not really that smart. Since it’s completely random,
it can get stuck in areas, or sometimes just be
facing a wall indefinitely. We’re going to improve this by making our AI aware
of the environment, and for that, we’re going to use
the environment query system. The environment query system
allows an AI entity to query the environment,
or the level, and get usable data from it. EQS queries can be used
to instruct AI characters to find the best
possible location that may provide a line
of sight to attack a player, or the closest health pickup. It’s very useful when you need
to make important decisions that need an understanding
of the environment around the AI entity. Before we start, let’s make sure
we have EQS enabled. Go to settings, plugins, and then look for
the environment query plugin. We’re now going to improve
our patrol routine by changing the way
we pick the patrol point. Let’s replace this task
with an EQS query. Let’s then create
a new environment query, select the proper folder,
and give it a name. Double-click to open it.
And dragging off the root, you can see we have different
kinds of generators we can use. We’re going to select
the cone generator, and then we’re going to set
its cone degrees to 200. Let’s play the game, and quickly
get into the gameplay debugger mode and press 3
to enable the EQS debug system. What we can see here
is that the EQS query is generating this cone
in front of the enemy, and picking a single position. That is going to be set
as the patrol location which is going to be
used by our behavior. Let’s make it
a little bit smarter by adding a distance test. We’re going to leave it
as filter and score, we’re going to test the distance
to the querier, or the actor, and we’re going to set
the filter type to minimum. This is going to give
a higher score to the points that are farther
from the querier. And now we can see
that the EQS query not only generates these points, but also is giving a score
to each of these, in this case, based on the distance
to the querier. The farther,
the higher the score. Let’s now add an overlap test. We’re going to set it
as a filter only test. We’re going to change
the overlap shape to a sphere, the radius is going to be
75 centimeters, and we’re going to add an offset
so it does not hit the ground. And what we can see is that our
overlap check is filtering out every point that
does not collide with a wall, because of the offset
that we added. But what we really want
is the inverse condition. So, we’re going to disable
the Bool Match option, and that is going to give us
the inverse condition. It’s going to filter out
all the points that are barely touching
the walls, giving us a much clearer path,
a path that seems more natural. Now, let’s add yet another test,
that is going to be a dot test that checks
the angle from the querier to the actual item
that is being evaluated. And what this test does
is score higher the points that are
in front of the querier. And now our EQS query is trying
to select the farthest point that is the most perpendicular
to the enemy direction. We can change that, adjusting
the scoring factor of the tests. We’re going to change
the rotation weight to 0.25, and the distance weight to 0.75. Basically, normalizing
the scores of these two tests. And now that we changed the
scoring factors of these tests, our EQS query will do its best
to select the farthest point in front of the enemy. But, because the distance test
has a bigger weight, it takes precedence,
and in cases like this, it will allow the enemy
to turn when it faces a wall. Using the gameplay debugger
is not always ideal for testing EQS queries. For that, Unreal has
the EQS testing pawn. It’s an actor type
that you can add to the world and select an EQS query,
and visualize it live. It’s very useful
to fine tune values, or test a query
in different locations. You can also select the data
you want to see, like the test result labels,
or filter the points that fail, and highlight the ones that the query would probably
pick during gameplay. Everything is live,
and it’s right in the actor. Let’s now try another example
by adding a subroutine to our investigate behavior. After the enemy moves
to the target location to investigate a noise,
we’re going to run an EQS query to find places
that the player may be hiding, and do it three times
using a loop decorator. Our EQS query consists
of a cone generator with an angle of 360 degrees. We do a dot test to check
the angle of the point, but here we use a negative
scoring factor to give it a higher chance to select
a point behind the enemy. And this is
the most important one. We’re doing a trace test
to check if the enemy has line of sight
to the point. In this case,
we’re going to filter the points that test positive
and have no line of sight, or the ones that are
actually hidden behind walls. To finish that, we’re going
to add a distance test to get the points
with the higher distance. Using our EQS testing pawn,
we can see what this query does. It’s actually trying to select
the points behind the enemy, but it only considers the ones
that are currently hidden, where the enemy
has no line of sight. And this kind of query
can be used in other situations, like when the enemy needs line
of sight to the player, or when it needs
to find the best point to take cover from it. As we can see, during
the investigation routine, the enemy now tries to look
for possible locations where the player
could be hidden, adding this interesting level
of intelligence to our AI behaviors.
And here we conclude this video. We have covered everything from the basics
of how behavior trees work to how to make your AI entities
react to events using the AI perception.
We have also learned how to organize different
behaviors in your game, and finally, the basics
and some advanced EQS queries to make your AI smarter
about the environment. I hope you have learned a lot about Unreal Engine's AI
framework today. Thanks for watching,
and goodbye.