>>Valentin Galea: I am Valentin. I have been working
in the video game industry for more than
ten years now, out of which four
at Splash Damage in London, United Kingdom. Now, Splash Damage
has a rich legacy. It started way back in 2001 with Enemy Territory, then Enemy Territory: Quake Wars, and Brink, which you might know. Then we switched to Unreal Engine-powered games. We helped a bit with
the Batman Arkham games. We had our own free-to-play
shooter called Dirty Bomb, and you might know us
recently from our collaboration with Microsoft on the Gears of War franchise. What are we going to talk
about today? This is a bit of the agenda.
We are going to talk about how we structured our teams and projects, how we went about
creating standards, and most importantly,
how we validate those standards. Then we are going to move
into compilation, continuing with the unit
testing automation, and good stuff like that.
Then at the end, we are going to talk about
the so-called "Splash Engine," and how do we match that
with the latest UE4? Just a bit of a disclaimer -- we are here to share
some of our learnings. Some of these are
from our AAA Projects, but most of them
are from prototypes, or various "secret sauce"
game Projects we had, all of our cumulative
UE4 experience. Feel free to find
some inspiration, agree or disagree
with some of the techniques that I am going
to show you today. We are about 300 employees, and we are split across multiple
ongoing Projects at various stages.
Some can be for AAA, some could be prototype,
and so on, so forth. Some of these Projects
use the Splash Engine, which at its core
is just vanilla UE4, plus a couple
of our own enhancements and fixes, and stuff like that. We are going to cover
this a bit later on. How do we start setting the foundation of our games, starting with the game modules,
the project layout? We have a master Perforce server that allows us to do
nice merges within the projects. That is the root of it. Then we have, let us say, Project A again, which follows a straightforward structure. There are a couple of documentation folders and the raw Assets. Then you have the traditional UE4 folder structure: the Engine, and the game project named the same as the master game project -- in this case, Project A -- and so on and so forth for other projects. Then we have the Splash Engine, like the master root
of Splash Engine. We strive to have multiple game modules, as opposed to one big monolithic catch-all module. Why is that? Because it helps with good architecture and encapsulation. It allows fast iteration; you kind of isolate your changes -- if you are working, let us say, on the weapon, you just touch the weapon module. That allows faster hot-reload linkage. It also promotes re-use of some of these game modules. Maybe they get so good that
you want to use them across projects. If we take a bit of an x-ray view into the structure, let us say you have these generic components, and you have a module called
"Runtime," which is the game-facing one, where all
the business logic resides. Then you might have testing
that allows you to validate this logic. Then you might have
Editor-only modules for some enhancements
on the UI side of it. Or you might have interface,
which is just header files, like glue between
multiple game projects. Going a bit into depth, this is kind of well-known: there is the Unreal Build Tool-specific file, with some dependencies and some defaults in there. Then you have the implementation part and the interface part, and so on and so forth.
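To make that concrete -- purely as an illustration, with hypothetical module and class names rather than actual Splash Damage code -- the C++ side of such a runtime game module can be as small as this:

```cpp
// Hypothetical example: ProjectAWeapons.cpp, the entry point of a "Runtime"
// game module. The matching ProjectAWeapons.Build.cs would list its
// dependencies (Core, CoreUObject, Engine, ...) and any defaults.
#include "Modules/ModuleManager.h"

class FProjectAWeaponsModule : public FDefaultGameModuleImpl
{
public:
    // Called when the module is loaded; a good place for one-off registration.
    virtual void StartupModule() override {}

    // Called on unload or hot-reload; undo whatever StartupModule did.
    virtual void ShutdownModule() override {}
};

IMPLEMENT_GAME_MODULE(FProjectAWeaponsModule, ProjectAWeapons);
```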
This is the traditional view of the folder structure. But it can still get a bit hairy
to do all of this, so we have a quick way
to automate this. It is just a glorified batch
file that allows you to say, do you want an Editor module?
Do you want a test module? It will create
the folder structure for you and put in some
sensible defaults. Just a batch file can get
a lot of quick wins, easy. All right, so you have
this nice folder structure. How do you go about
filling it with good code? Well, good code is supposed to follow good
coding standards, right? A Splash Damage value, one of our core guiding principles, is mastery, especially when it comes to C++, and we try to be careful about the way we architect our code. Now, we did not have
an established coding standard. There were some attempts at documenting these practices, but they were really scattered around in various documentation places. What is wrong
with this approach? They grow big,
they grow out of date. It is hard to maintain them. They are usually
in a separate location; you have to go out
of your way to go, how is this supposed to work?
From the code, you are supposed to go
in the documentation. Then only the gods of programming read through the coding standard,
and who are we kidding, right? Nobody has time to write
or read the documentation. You just code, code. At Splash Damage, I pioneered -- and the guys helped me to have -- a different approach, where the standards are actually source code files, so you can actually affect the build by messing with the coding standard. Because it is
like source code, it means it can participate
in code reviews, and I will talk about this. It has a system for easy
reference and searching. Going into
a bit more detail -- there are just two main files; a .h file which shows
the more architectural layout of your classes,
and a C++ file, a .cpp file, which deals
with more in-depth C++ rules. They sit on the game code side. Recently we open-sourced them, so you can check them for yourself on GitHub and see if you agree or disagree with some of the choices
we made there. This is a bird's eye view
of the standard; I hope everyone can read it. Going a bit into detail, you can see there is
a combination of these islands of heavily-commented areas, and islands
of actual source code that show you a bit of how you should architect stuff. Then it is either UE4 guidelines -- in this case, you see there are various tidbits about how you should structure your UCLASSes -- or it can be about how you should write the actual C++ code. In this case, it is something about good rules for your destructors. Going back, you see there is
this nice tag system. Everything inside
square brackets is used for quick reference
or searching, or referring to that
particular piece of code. In this case,
[class.virtual] refers to the rules about how you
should structure or combine together the overrides
for virtual methods. Or in this case, [ecs.gc] refers to the portion of the rules that guides you on how you should structure your objects -- like referencing them with pointers through UPROPERTYs so the garbage collector can track them -- or how, in our opinion, you should tie together the replicated variables and the replication functions, and so on and so forth.
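For a flavor of what such a compilable, tagged standard can look like, here is a made-up excerpt in that spirit -- the class and rules below are illustrative, not the actual open-sourced Splash Damage file:

```cpp
// Made-up excerpt in the spirit of a "coding standard as source code" file.
// The [bracketed] tags are what you search for or quote in a code review.
#include "GameFramework/Actor.h"
#include "Components/StaticMeshComponent.h"
#include "SDCodingStandardExample.generated.h"

UCLASS()
class ASDCodingStandardExampleActor : public AActor
{
    GENERATED_BODY()

public:
    // [ecs.gc] Reference other UObjects through UPROPERTY pointers so the
    // garbage collector can track them; raw, untracked pointers will dangle.
    UPROPERTY()
    UStaticMeshComponent* ExampleMesh = nullptr;

    // [class.virtual] Group overrides of a base class together and always
    // mark them `override`; do not repeat `virtual`.
    // AActor interface
    void BeginPlay() override;
    void Tick(float DeltaSeconds) override;
    // End of AActor interface
};
```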
Now, the best usage of this is through code reviews. This is a screenshot from Perforce's own code review tool, called Swarm, and this is me telling someone
you should follow the [class.member] rule. The good thing about this is, it takes a bit of pressure off the code reviews; it is not me dictating to
the guy because I know better. It is because we all agreed
to follow these common rules, right,
so it is easier, and everybody knows what we are
talking about when I am saying [func.arg.readability] or something. The standard itself
is continuously evolved like that through code reviews. Normally you would send
your piece of code to one or two colleagues;
a change in the standard, you would send
to the whole team, and if you get enough up votes,
that makes the new standard. Now, a funny unintended consequence: because the standard uses real classes -- it shows, for example, how we structure Character classes -- it turns out you can actually instantiate the coding standard Actor
he is preparing to fly away, maybe he saw some dodgy code,
or something. All right, so equally important
-- content standards. Why is that? Because poor practices
lead to poor results, right? They will compound over time,
you will lose productivity. You will have huge cook times,
deployment times. Also, the UE4 Editor
is relatively easy to modify to kind of add
productivity enhancements to it. The ground rule
we follow starts with naming. If you have good naming,
everything else follows nicely. Every Asset follows
kind of the same structure. There is a base name,
and then a prefix. This already leads to less
confusion, better searchability. The prefix part just uses initials from the class name. If it is a Static Mesh,
it will be SM. If it is Skeletal,
it will be SK, and so on, so forth. Here is a couple of
examples of how we encourage people
to name their Assets. You see the nice prefix,
like T for Texture, and a bunch of suffixes
to make it more clear. Now another type of Assets
are Blueprints, and these are
kind of content code, so they follow the same idea
as a coding standard. They live on the game
content side, and they are just basically
nicely laid-out Blueprints with nicely-formatted parts, and lots of good comments
and tool tips, showcasing the techniques
that we want to follow. In order to further drive this a bit, we have some validation, and the best outcome of this validation was that it forced people to actually comment their Blueprints. All right, so we have
this content stuff, but how do you go
about validating and making sure that
they are actually followed? This happens primarily in CI, on the build farm. You either get validated after you submit, or with more complex checks during nightly builds. Primarily there are three ways
we validate; the naming I showed you,
the Blueprints, and more general
Asset validation. The naming validation is just a small Editor commandlet that uses that initials rule -- if you have a Texture, it should start with T -- plus a couple of manually whitelisted exceptions. Also, more importantly, we disallow naming stuff like "Test" or "Error" or "Warning," because you do not want to go into the logs and be confused.
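As a sketch of what such a commandlet might look like -- the class, prefix table, and checks here are hypothetical, not the actual Splash Damage tool -- something along these lines can run on the build farm and fail the job with a non-zero exit code:

```cpp
// Hypothetical sketch of a naming-validation commandlet: checks the
// "initials" prefix rule and a couple of forbidden words.
#include "Commandlets/Commandlet.h"
#include "AssetRegistryModule.h"
#include "AssetNamingCommandlet.generated.h"

UCLASS()
class UAssetNamingCommandlet : public UCommandlet
{
    GENERATED_BODY()

public:
    virtual int32 Main(const FString& Params) override
    {
        // Expected prefix per asset class; a real tool would also keep a
        // whitelist of manual exceptions.
        TMap<FName, FString> Prefixes;
        Prefixes.Add("Texture2D", TEXT("T_"));
        Prefixes.Add("StaticMesh", TEXT("SM_"));
        Prefixes.Add("SkeletalMesh", TEXT("SK_"));

        FAssetRegistryModule& Registry =
            FModuleManager::LoadModuleChecked<FAssetRegistryModule>("AssetRegistry");
        Registry.Get().SearchAllAssets(/*bSynchronousSearch=*/true);

        TArray<FAssetData> Assets;
        Registry.Get().GetAssetsByPath(FName("/Game"), Assets, /*bRecursive=*/true);

        int32 Errors = 0;
        for (const FAssetData& Asset : Assets)
        {
            const FString Name = Asset.AssetName.ToString();

            // Initials rule: e.g. Textures must start with T_.
            const FString* Prefix = Prefixes.Find(Asset.AssetClass);
            if (Prefix && !Name.StartsWith(*Prefix))
            {
                UE_LOG(LogTemp, Error, TEXT("%s should start with %s"),
                    *Asset.ObjectPath.ToString(), **Prefix);
                ++Errors;
            }

            // Disallow confusing words that pollute the logs.
            if (Name.Contains(TEXT("Error")) || Name.Contains(TEXT("Warning")))
            {
                UE_LOG(LogTemp, Error, TEXT("%s uses a forbidden word"),
                    *Asset.ObjectPath.ToString());
                ++Errors;
            }
        }
        return Errors == 0 ? 0 : 1; // non-zero fails the CI step
    }
};
```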
An interesting thing that goes a long way toward helping with this is that we intercepted the way you create new Materials, new stuff in the Editor. Here I am creating
a cubemap Texture, and it already creates a nice initial prefix for this, right; I can just concentrate on the name, taking out the guesswork. This already leads
to good practices. Another interesting thing we did: we disallowed random imports of data. You should not import your stuff from the Desktop, where it later ends up in the recycle bin and people go, oh, where was that master source Asset? You should put it in the raw folder in Source Control, and only import from there. That is a more
important rule we follow. Blueprint validation -- again, a small Editor commandlet -- just checks if there are some tooltips, if you have at least one comment node, if you did not leave default names like "New Function" or "New Item," whatever. We did not go too much
in-depth with this. If you have the chance,
you should try Epic's own Blueprint compiler,
which is way more complex. Maybe you can leverage it
to check more complex rules. More general Asset validation, like everything
from your textures, models, physical Assets
and stuff like that. This is done under CI
on the build machines, and primarily checks
for missing or bad references, and arguably stuff that could
hurt you during cook times. Also importantly,
it disallows referencing stuff from developer
or test folders, because a lot of people kind
of do their small experiments in their own personal space,
and then they forget about this, and then they just link
it into production, and that leads
to a lot of problems. We disallowed this,
and we catch it early. We found an interesting kind of system to enforce this. We actually leveraged the cooking process. Literally, we cook everything with warnings elevated to errors, so anything that would have been swept under the carpet and disregarded is now immediately an error, and you get a chance to fix it right away. Obviously there
are some caveats here; this might not scale well for big ongoing projects, where a CookAll over all your Assets could take days. That is why it is important
to have this as early as possible
in your prototype stage. If you start doing this
in production, it is a bit too late.
But I still advocate for it, and if you adopt it later in the stages, you should look into other ways -- maybe leveraging the Asset Registry to kind of walk the dependency graph and see the problems there.
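If you go that route, a minimal sketch of such a dependency walk -- a hypothetical helper built on the UE4-era Asset Registry API, not the actual Splash Damage code -- could look like this:

```cpp
// Hypothetical helper that flags production assets referencing content in
// personal Developers folders or Test folders.
#include "AssetRegistryModule.h"

static int32 FindForbiddenReferences()
{
    FAssetRegistryModule& Registry =
        FModuleManager::LoadModuleChecked<FAssetRegistryModule>("AssetRegistry");

    TArray<FAssetData> Assets;
    Registry.Get().GetAssetsByPath(FName("/Game"), Assets, /*bRecursive=*/true);

    int32 Problems = 0;
    for (const FAssetData& Asset : Assets)
    {
        const FString PackageName = Asset.PackageName.ToString();

        // Assets that themselves live in a personal or test area are skipped.
        if (PackageName.Contains(TEXT("/Developers/")) || PackageName.Contains(TEXT("/Test/")))
        {
            continue;
        }

        // Walk this asset's package dependencies and flag any that point back
        // into a Developers or Test folder.
        TArray<FName> Dependencies;
        Registry.Get().GetDependencies(Asset.PackageName, Dependencies);
        for (const FName& Dep : Dependencies)
        {
            const FString DepPath = Dep.ToString();
            if (DepPath.Contains(TEXT("/Developers/")) || DepPath.Contains(TEXT("/Test/")))
            {
                UE_LOG(LogTemp, Error, TEXT("%s references %s"), *PackageName, *DepPath);
                ++Problems;
            }
        }
    }
    return Problems;
}
```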
But still, this CookAll got us very big benefits. I am going to show you
how it tied in with pre-commit. All right, so let us move
into actual compilation, and obviously hardware plays an important factor here, right? Either you cook or you compile or you build lighting. A rule of thumb we kind of discovered is: the more hardware threads you have, the better, obviously to a point. 8 to 16 is really nice, 16 to 32. But after that point it kind of plateaus; there is no longer a benefit to investing in faster hardware. We also experimented
with distributed compilation. You can either use off-the-shelf solutions like Incredibuild, which are very easy to use but relatively expensive, so you need to budget for them. We also tried FASTBuild, which is an open source alternative. You can find the integrations on GitHub, although the onus is on you to maintain it. I keep talking about the CI
system, the Build Farm. There are actually
several Build Farms, depending on the projects. If you are a prototype,
you only get one machine. If you are
a full production project, you might get 30 machines --
your own mini cloud. Interesting here, like the best
kind of configuration we had is the AMD Threadripper,
the first generation. We did not look
into the second yet. That is a good bang for your buck. It does a full rebuild of 4.21 in under 15 minutes. What do we use
to orchestrate all this? Primarily, we use TeamCity, but lately we started
using Jenkins as well. That is why it is important to use the so-called "infrastructure as code" approach, because we want to be independent of the orchestrator. Why? Because in the past, we were too ad-hoc with this, and it did not scale. For example, everything was too tightly coupled to TeamCity, so good luck moving it out. Too specific for a project --
if you have another project, you need to do everything
from scratch. Good luck if it breaks and you try to debug it locally. That is why now
we are leveraging Epic's own BuildGraph system.
What is BuildGraph? BuildGraph is just
an alternative to the traditional
batch file scripts that do your cooking
and deployment. It is just a set of XML-based infrastructure scripts. This allowed us to have
good re-use, like if you condense
all your major operations into a set of codified scripts, that promotes standardization
and good use. Also, another
interesting thing we did, we unified all
the calling paths, so everything uses BuildGraph. Like Epic itself uses it,
but they did not do this, and we went a step further. No matter if you have Visual
Studio on the command line -- on the Build Farm,
everything uses the same system, so if you modify a flag,
it will get modified once, not in 10 places. It is quite powerful
out of the box. Epic provides
a couple of good examples. They use it to distribute,
I think, the Engine itself, so if you want to get started, there is some
good information there. We modified it
quite a lot ourselves. The most we got out of it was like preparing
the Editor binaries to send out to the team. Again, a nice bird's eye view. If we go in closer,
you see it has documentation. It can include other scripts. It has a bunch of variables that
you can set up and configure. Pretty much the core idea is these agent nodes
that can run commands, and the commands can be
various compilation things. I am building some cook dependencies here, like the crash reporter, or it can be just various general commands like zip and unzip, move files, delete files, stuff like that. You unify everything
in this one recipe, rather than having a bunch
of batch files everywhere. Let us talk about pre-commit. In my opinion, I would not work
on another project that does not have this system. A bit of context before --
so generally, we follow
the "trunk-based development," which means there is one
main line for each project. We split off
only for major releases. That means that everyone
collaborates nicely. There is one place of
[Inaudible] there is less overhead,
faster iteration. But the downside is,
breakages have big impact, because if you break something,
the whole main line dies. Your traditional workflow
would be, you code for a bit, you hopefully review your changes, then it goes to the main line. Only then do you have
the validation from the build machines. Now a better system
is the following: you start coding like usual, you review, but then you kind of go sideways and you do a personal type of validation. Was this validation successful? If not, the system just returns to the user saying, you goofed, something went bad, so you have a chance to fix it. Only if it is successful do you submit to the main line. Then it gets picked up
by the more general validation. Effectively there are
two systems working here. There is a frontend
that the coders use, and there is the backend,
what the build machine does. The frontend is just
a glorified tool that allows you
to indirectly submit or do kind of personal builds. You can do an
off-the-shelf solution; if you are using TeamCity, JetBrains provides a Visual Studio plugin for this. If you are using Jenkins, they support pre-commit out of the box, but only on Git. Or if you have
a small tools division, you can try to write your own. Whether you write your own or get it off the shelf, it basically looks like this -- two main choices. At the top you choose what you want to check against, and at the bottom, you see in yellow, it only submits if the test was successful. That means that on the backend, it is just a nice,
personal build system. Obviously, the more configurations you put in, the better it is, because you have a better chance of knowing that you did not break anything.
But at the same time, you might have more stress
on the Build Farm. It is a nice balance,
you have to be careful there. In order to help with this, we came up with a reasonable
kind of infrastructure for this, to cut down on the compilation time when you pre-commit. The idea is,
you rebuild kind of overnight all the participating
configurations. Then throughout the day,
you just incrementally build, using this kind of cache that
you have on the build machine. For example,
I want to test the Editor, I want to test the game on PC, and then the game on another
platform, let us say PS4. Normally, building this from scratch to know that nothing got affected would take, let us say, even two hours on the fastest machine. I do this overnight, and I leave the object files there. Then every day a developer participates with his changes, and he will do a non-unity, incremental build, and that kind of cuts it down -- three platforms, you can check them in 15 minutes. Obviously, it depends on
the extent of your changes; if you modified an Engine .h, then you are paying for it. But still, it is quite good.
The takeaway here is, this was a major
productivity booster for us. We experimented with it on one project for a year and a half, I think, and now we are spreading it, evangelizing it to the rest of the projects in the company. Obviously it is not
a silver bullet solution, but still, it helps quite a lot. Let us continue more on this
theme of automation and testing, and how can we help, creating a good system
that helps the developers. Still in commit territory: because we are using Perforce and you have history, it is nice to have a lot of info there. But nobody wants to write so much stuff every single time. We did a couple of tools that help the developer in this regard, like adding tags, links to your JIRA issue, or a title and description. It kind of looks like this,
it is just a glorified UI that allows you to kind of take out the guesswork and just concentrate on the description, tagging, and adding whoever buddied your code. On the other side, what happens when you want
to validate this commit? You want to make sure,
again in Perforce, that they follow
this formatting. Interestingly enough, we experimented with time of day; for example, we were cutting off commits after 5:00 PM to discourage commit-and-run, so you leave a bit of time to test. Or most importantly, you do not want to pile fire on top of fire, so if the build is broken, nobody can submit unless you have a special
BuildFix token. This is how it kind of looks to the user: Perforce shows you this nice dialog saying that I passed a couple of the rules, but the last one was not satisfied -- the build was broken at the time. You are not the fixer,
you cannot submit. This is another kind of
nice way to kind of gauge and make sure people
follow nice procedures. Now the black sheep
of game Dev -- unit testing. Everybody loves it,
but not that many people use it. Now, Epic themselves have a good framework, so there is good stuff to start with and use already. We enhanced it a bit. We followed a
given-when-then structure. Of course we have separate
coding standards for this test as well,
to show you how to write them. Because then, in Visual Studio, it is very easy: how should I write my unit test? Oh, I do not know, I will just bring up the unit test standard, look at it a bit, and get an idea. We integrated it with our orchestrator, like TeamCity, and we also did a bit of setup and tear-down support: you can programmatically create a small World, spawn Actors in it, and then it will automatically be destroyed.
Let us take a nice little example of how you would write a unit test in the Splash Damage framework. You start by having a couple of
syntactic sugar macros,
to kind of have your flags, your namings, groupings,
stuff like that. Then you are ready to just start writing the unit tests, like one line with a nice name. Then you can concentrate
on the logic. For example, given that the
Character has an attached Actor, when you set the Team ID
on this parent Character, then you expect that the child
Actor has the same Team ID, so you would test for that. In a couple of quick operations, you just have a unit test, and hopefully you will write more and more of these.
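Here is a sketch of that example written against the stock UE4 automation macro rather than Splash Damage's own syntactic-sugar macros; ATestCharacter, SetTeamId and GetTeamId are hypothetical gameplay code, and it reuses the scoped-world helper sketched above:

```cpp
// Given/when/then sketch using the built-in automation framework.
#include "Misc/AutomationTest.h"

IMPLEMENT_SIMPLE_AUTOMATION_TEST(FCharacterTeamIdTest,
    "Game.Character.TeamIdPropagatesToAttachedActors",
    EAutomationTestFlags::ApplicationContextMask | EAutomationTestFlags::ProductFilter)

bool FCharacterTeamIdTest::RunTest(const FString& Parameters)
{
    FScopedTestWorld TestWorld;

    // GIVEN: a Character with an attached child Actor.
    ATestCharacter* Character = TestWorld.World->SpawnActor<ATestCharacter>();
    ATestCharacter* Child = TestWorld.World->SpawnActor<ATestCharacter>();
    Child->AttachToActor(Character, FAttachmentTransformRules::KeepRelativeTransform);

    // WHEN: the Team ID is set on the parent Character.
    Character->SetTeamId(7);

    // THEN: the attached child Actor reports the same Team ID.
    TestEqual(TEXT("Attached actor inherits the team id"), Child->GetTeamId(), 7);
    return true;
}
```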
We also experimented with functional testing, which is a bit more involved. It needs some special setup
in the Editor, like some
specially-crafted levels, with Blueprint Actors in them.
They live on the content side. They follow Epic's
own conventions, but they are not
network-capable, so depending on your needs,
they might not be so useful. We did not go into too much detail, but now there is a new system, starting with 4.20, I think, called Gauntlet, which allows you to kind of have various functional scenarios in a client-server architecture. Check this out
if you are so inclined. Now, if you are writing tests, like I showed you earlier, you have this nice separation between the runtime gameplay logic and the tests. Obviously, from the test module you need to test the actual internal logic implementation, so you want to access this private data, but only from the tests, not from the rest of the modules or other games. You do not want to leak
the implementation details. Our solution was to piggyback on those module API macros -- if you are familiar with this, when you write your game modules, you interact with them through these decorators, like MODULENAME_API. We added a new one, which has TEST in it. For example, if I have a player runtime component, and somewhere deep in its implementation there is this camera target component, when I declare it, I use the player TEST API macro. Then in the equivalent player test module, when I want to test something about this camera target component, I just get it for free.
I can just interact with it. There is just a bit of magic you need to do in the .Build.cs file -- it is just a Boolean flag saying you want to use this -- and you have it. In this way, it is nicely encapsulated; no other game modules can access this. You are still nicely contained.
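One way to approximate that mechanism -- this is an assumption about the wiring; the module name, the WITH_PLAYER_TESTS flag, and PLAYERRUNTIME_TEST_API are all made up here -- is a conditional export decorator:

```cpp
// Hypothetical wiring for a test-only export decorator.
// In the player runtime module's API header:
#if WITH_PLAYER_TESTS
    // The test module is part of this build: expose the decorated symbols.
    #define PLAYERRUNTIME_TEST_API PLAYERRUNTIME_API
#else
    // Otherwise the decorator compiles away and nothing extra gets exported.
    #define PLAYERRUNTIME_TEST_API
#endif

// Deep inside the runtime module -- visible to the test module, but not
// exported to other game modules:
#include "Components/ActorComponent.h"
#include "CameraTargetComponent.generated.h"

UCLASS()
class PLAYERRUNTIME_TEST_API UCameraTargetComponent : public UActorComponent
{
    GENERATED_BODY()
    // Internal details the tests can now reach directly.
};
```

The Boolean flag mentioned above would then simply add the WITH_PLAYER_TESTS define in the .Build.cs when the test module is included -- again, an assumption about the wiring rather than the exact Splash Damage setup.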
This is an example of a couple of tests running on TeamCity, and they are all okay. You just take the log that the testing framework produces and map it to what TeamCity needs. Now, this was covered very nicely
in a previous talk yesterday, but it is quite
an interesting way of automating your workflow. Unreal Game Sync is used to distribute the game binaries -- you want to have a nice, controlled system for distributing binaries across the team. This is especially useful if you want to have this concept
of last known good, rather than always working
on the tip, like on the latest. The latest can be broken, even with all the tests -- let us say it is broken -- so you want to send out the last known good, marked as good either by QA or by some automated system. This is very useful
for non-programmers. We did a couple of modifications
to this Unreal Game Sync; again, we kind of leveraged BuildGraph, and we had more control over how you prepare the zipped binaries, insert the crash reporter into them, and so on and so forth, and then you send them out. But largely, it follows
what the Unreal Game Sync does out of the box, so I recommend
using this system. I guess the takeaway here is, you want to
invest in automation. It brings very nice
wins later on. But do it, if possible, early in the lifetime of a project -- even in a prototype. Do not leave it all the way
into full production, because the project
will have too much inertia. You can still add some stuff,
but it will be harder. Okay, what about Splash Engine? In our work,
we try to extract and re-use the stuff we do
in Unreal Engine. This is pretty much just things that become
kind of game-agnostic, enhancements
and fixes to the Engine, or we have some nice
UI component library, some audio work, a nice event system that kind of decouples the listeners from the broadcasters so they do not need to know about each other, async tasks, some small rendering features,
tech-art utilities, and so on and so forth.
That is the kind of collection that makes Splash Engine
together. The majority get seeded
from Splash Engine, and integration
happens in two ways; either "downstream," I call it,
when you want to merge the latest UE4
happenings in the Engine, and then in the projects. Or the other way, like a project
gets such a cool feature that you want to bubble
it up to the Engine, and then to all other projects. If we take
a closer look again, this is our Perforce
from previously. We have just a collection of
the drops that we get from Epic. This is Splash Engine, and this is where
the actual game projects live. Then the Epic drops are just a flat collection of whatever you download from Epic. Splash Engine has
an interesting format, though. There is this clean branch, I am going to show
how this works in a second. There is the main line, and then
there are various staging areas for every particular project
we have. In order to understand
this better, let us take a merge scenario. I want to update
the particular game project to the latest
UE4 version. Again, you start
with this structure, then you take
from the latest drop, you copy across
into this clean branch, and this gives you nice
incremental [Inaudible]. Only after this, you merge across into
the main line of Splash Engine, and this is the chance
to kind of fix any conflicts, or fix up plugins, for example.
We sometimes use Wwise for audio, or Houdini, so this is your chance to bring
those up to date as well. Now in the meantime, Project A evolved
at its own pace, right? It is time to bring the latest developments of Project A into the staging area, which has been kind of abandoned since the last integration. After you have done that, you are free to separately do
this merge from main line, which is now latest UE4,
into the staging area. You can work here
in nice isolation, solving conflicts as you go. Hopefully you do it
fast enough so Project A does not evolve too much, because you cannot stop
the development, you cannot just say, oh guys, I am going to merge now,
do not do any more work. You work in this
staging area, and this allows us -- I think the fastest we did was,
like, two weeks. After you completed the merge
in the staging area, then you are free to move it out
and merge into the actual game project itself. Hopefully, it did not evolve too much; you might have
you already did. You see the takeaway here,
this system kind of allows us to have nice,
quick integrations; decouples the integration work
from the main Dev work. But it needs to kind of --
you have to be careful, because having a structure
like this needs some dedicated resources,
otherwise it kind of atrophies. We have a small tech
sharing group that kind of takes care of this, and tries to always have
this ready, like for example,
that main line, all that stages, they need to have their own
build machines and stuff. All right, so we are nearing
the end of the talk, so I am going to leave you with
a nice kind of cautionary tale. We did the 4.21 integration
following all of this recipe, so we ate our own dog food. We did pre-commit all our tests
run, everything was nice. We gave ourself a pat
on the back. Then when the game project
was ready, we asked the guys
to continue their work. Then they open the Editor
and then they found out that every single Static Mesh
got turned into one triangle, which is the best
optimization ever. But because nobody figured
to actually, after the merge,
test the actual level. I guess the takeaway here is,
no automation system would save you
from having good due diligence. But I think this is the best
combination; having a nice plan
that you follow, and then complement that
with a nice automation system that catches errors as you go. With that, that is it. Thank you very much
for the opportunity. This is Splash Damage;
check our website and GitHub. [Applause] ♫ Unreal logo music ♫