In this tutorial we're going to look at features
of JUnit 5 that can make it easier for us to write effective and readable automated
tests. This tutorial uses Gradle; for information on how to add JUnit 5 via Maven, take a look at our video on migrating to JUnit 5 from JUnit 4. We can use Cmd and N or Ctrl and Enter to
add a new dependency to our Gradle build file. Typing JUnit in the artifact search box should give a list of possible dependencies; we're going to use junit-jupiter, which lets us create and run tests with JUnit 5. Version 5.6.2 is the most recent production
version at the moment, so let's pick that. We'll see this icon in the top right of the
Gradle build file when we've made changes to it; this is to remind us that IntelliJ IDEA will only apply these changes when we've told the IDE to load them. We can click the icon, or use Shift Cmd and I, or Ctrl Shift and O on Windows and Linux. Once the Gradle dependency changes have been
loaded, we can see the junit-jupiter dependencies in the External Libraries section of our project
window, which means IntelliJ IDEA has correctly picked up the dependency and downloaded the
libraries. There's one last step we need to do for Gradle in order to correctly use JUnit 5: we need to tell Gradle to use the JUnit Platform when running the tests. Without these lines in the build file, Gradle won't see our tests as valid tests to run.
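As a rough sketch, assuming a Groovy build.gradle (the Kotlin DSL equivalent is very similar), the relevant parts of the build file might look something like this:

    dependencies {
        // junit-jupiter pulls in the JUnit 5 API, parameterised test support and the test engine
        testImplementation 'org.junit.jupiter:junit-jupiter:5.6.2'
    }

    test {
        // tell Gradle to run tests on the JUnit Platform, otherwise JUnit 5 tests won't be picked up
        useJUnitPlatform()
    }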
Let's make sure we've loaded these changes too. Now that we've correctly set up our JUnit dependency,
we can create our first JUnit 5 test. Let's create an ExampleTest to experiment
with JUnit 5's testing features. We can use the shortcut that generates code
to get IntelliJ IDEA to generate a new valid test method for us. If you're familiar with JUnit 4, you'll see
the basic test method looks exactly the same, and we can give it whatever name format we usually use for our tests. The only difference with JUnit 5 is that it
uses the Test annotation from the jupiter package. JUnit 5 has an Assertions class for all the common assertions we might want to make. We can use partial completion to find the assertion that we want; let's pick a standard assertEquals.
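As a minimal sketch of what this first test might look like (the method name and values here are illustrative, not necessarily the ones used in the video):

    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    class ExampleTest {

        @Test
        void shouldShowASimpleAssertion() {
            // assertEquals takes the expected value first, then the actual value
            assertEquals(1, 1);
        }
    }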
Now we have our most basic test case, we can run it to make sure everything works the way we expect. I've used Run Context Configuration (Ctrl Shift R, or Ctrl Shift F10 on Windows) to run just this single test method. It doesn't make much difference in this test
class as it's only got one test method, but it's a useful tool for running a single test
method inside a larger test class. IntelliJ IDEA runs the test method and it
passes. If the details of the passing tests are hidden,
we can show all the tests that passed by clicking on the tick in the top left. Double clicking on the test method name will
take us back to that method in the code. Now let's see what happens when a test fails. IntelliJ IDEA shows the failing test in amber
since the test failed an assertion, rather than causing an error which would be shown
in red. We can see the expected value and the actual
value side by side, and this should give us an idea of what failed and how. In our case the cause of the problem should
be quite clear since we intentionally put the wrong number in as the "actual" argument. Note that IntelliJ IDEA's parameter hints
feature is really helpful for assertion methods. It's not clear from the method signature which
argument is the expected result and which is the actual result being checked. IntelliJ IDEA shows the names of the method
parameters as hints so we can see at a glance which is which. If we decide this is too much noise in the
editor, we can turn off hints for a specific method using Alt and Enter and selecting "Do
not show hints for current method" and the method will be added to the exclude list. We can configure the parameter hints from
the IDE preferences, in Editor -> Inlay Hints -> Java -> Parameter hints. We can turn hints on or off, configure which
types of methods show hints, and it's here we can see the Exclude list and remove items from it if we decide we do want to see hints for a method after all. One thing to note for JUnit 5 tests is that
the test method doesn't need to be public in order to work. In fact, IntelliJ IDEA will help here and let us know if the visibility can be reduced with everything still working as expected, and we can use Alt and Enter to have the IDE make the change. If we fix the test and re-run it, we'll see
it passes as we expect it to. There's one more configuration item that may
be helpful to understand. We can configure how IntelliJ IDEA runs our
unit tests if we're using Gradle. By default IntelliJ IDEA uses Gradle to build
and run the code and tests in Gradle projects. This ensures that when we run the application
or tests in the IDE it works the same way as it would in other environments, like continuous
integration. It also ensures that any complex build or setup logic, or code generation, is performed before the tests run. However, we might choose to use the IntelliJ IDEA runner to run our tests. In some circumstances this can be quicker than going through Gradle, giving us a faster feedback loop. Let's continue our exploration of JUnit 5
features. Quite often we might want to flag a test as
something we don't want to run. This is quite common when we're doing Test
Driven Development, as our tests will, by definition, fail when we first write them. JUnit 5 supports this with a @Disabled annotation. We can add descriptive text to state why the test is not to be run.
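For example, a disabled test might look something like this (the reason text is just illustrative):

    import org.junit.jupiter.api.Disabled;
    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.fail;

    class DisabledExampleTest {

        @Test
        @Disabled("Not implemented yet")
        void shouldDoSomethingWeHaveNotBuiltYet() {
            // this assertion is never reached while the test is disabled
            fail("This feature has not been written yet");
        }
    }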
Like passing tests, IntelliJ IDEA usually
hides the full list of disabled tests so we can focus just on the ones that are failing. We can see all disabled tests by clicking
on the grey disabled icon. Clicking on the test name will show the reason
the test was disabled. Let's remove the Disabled annotation since
this test actually does work, and let's look at another useful annotation. JUnit 5 supports a @DisplayName for the test
method, so we can write a helpful descriptive name for the test. When we run the test, it's this DisplayName
that shows in the run window, so we have much more readable text for each test. Not only does this encourage us to be descriptive; since it's a text string and not a method name, it also supports special characters, which can also help readability.
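As a small sketch (the wording and the special character are just illustrative):

    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    class DisplayNameExampleTest {

        @Test
        @DisplayName("1 + 1 should equal 2 ✓")
        void onePlusOneShouldEqualTwo() {
            // it's the DisplayName, not the method name, that shows in the run window
            assertEquals(2, 1 + 1);
        }
    }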
If we have a standard template for new test
methods that we'd like to follow, we could change the default test method template in
IntelliJ IDEA, or we could write a Live Template which helps us to create new test methods
that look exactly the way we want. I want to always create a new test method with a DisplayName that is then converted into a CamelCase method name. This encourages me to use the DisplayName annotation to write readable test descriptions, and uses those descriptions to create valid method names, so the method name is also helpful. I also have my live template automatically
insert a fail into the generated method as any test should fail first even if we haven't
finished writing it yet. Let's take a look at how this Live Template
was defined. Live Templates can be found in the preferences,
under Editor -> Live Templates. I've created a new live template group called
Test and put my test live template here. I'm not going to go into this template in detail here; the details will be in a blog post on the IntelliJ IDEA blog. The important point is that we can use Live Templates and configure the variables using built-in functions to give us the result we
want when we create a new test method. Let's focus on writing the tests themselves. As we already saw, JUnit 5 supports standard
assertions that may be familiar if we've used other testing frameworks. In the real world, we often have to check
more than one thing to prove to ourselves that something worked the way we expected. Take this list, for example. If we wanted to check every item in it was correct, we might write multiple assertions to check each value.
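As a sketch, with a list whose contents are purely illustrative, that might look like this:

    import java.util.List;

    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    class MultipleAssertionsTest {

        @Test
        void shouldContainTheExpectedItems() {
            List<String> items = List.of("apple", "banana", "cherry");

            // each assertion runs in turn; the test stops at the first failure
            assertEquals("apple", items.get(0));
            assertEquals("banana", items.get(1));
            assertEquals("cherry", items.get(2));
        }
    }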
This works; it will certainly pass if all the items in the list are as expected. The problem comes when one of the assertions
fails. In this case, the first assertion fails, and
we can see from the output why that was, and jump to the problem. What we don't know though is whether the other
assertions passed or failed, because JUnit won't run the assertions after the first failure. Let's see that by making sure all the assertions
would fail - when we run the test, we see once again that only the first assertion is shown to fail; we have no idea the others are also broken. This could be a problem, as we'd go back and
fix the first assertion, re-run the test, have to fix the next one, re-run the test,
and so-on. This is not the fast feedback we're looking
for. JUnit 5 supports an assertAll assertion. This will check every assertion even if one
of them fails. We do this by putting all of the assertions
we want to group together into the assertAll call as a series of lambda expressions. I'm going to move all of our current assertions into the assertAll method call, using a combination of smart completion and Join Lines to create compiling code. Finally I use Complete Statement to correctly close off the brackets and the statement, which also formats the code.
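Reworked with assertAll, and using the same illustrative list as before, the test might end up looking something like this:

    import java.util.List;

    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertAll;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class AssertAllExampleTest {

        @Test
        void shouldContainTheExpectedItems() {
            List<String> items = List.of("apple", "banana", "cherry");

            // each assertion is wrapped in a lambda, so assertAll can run them all
            // and report every failure, not just the first one
            assertAll(
                    () -> assertEquals("apple", items.get(0)),
                    () -> assertEquals("banana", items.get(1)),
                    () -> assertEquals("cherry", items.get(2))
            );
        }
    }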
Now JUnit will check all of these assertions when it runs the test. We can see the first failure, as we did before,
but now we can also see that all the assertions failed - they were all run even though the
first one failed. This makes it much easier for us as developers
to see the issues and fix them all in one pass, instead of having to repeatedly re-run
the test. Once we've made all the changes, re-running
the test shows we've fixed everything. Now let's look at assumptions. Later versions of JUnit 4 supported assumptions,
but those of us who are used to working with older tests might not have come across this
concept before. We may want to write tests that only run when some set of circumstances is true - for example, if we're using a particular type of storage,
or we're using a particular library version. This might be more applicable to system or
integration tests than unit tests. In these cases we can set an assumption at
the start of the test, and the test will only be run if the criteria for that assumption
are met. Let's write a test that should only be run if we're using an API version that's higher than ten. Again, for the purposes of this tutorial, we'll show this working with a very simple assertion that should pass.
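A minimal sketch of such a test, assuming some Fixture helper that exposes the API version (the helper and the assertion here are illustrative, not necessarily what the video uses):

    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assumptions.assumeTrue;

    class AssumptionsExampleTest {

        @Test
        void shouldOnlyRunOnNewerApiVersions() {
            // Fixture.apiVersion() is a stand-in for however the environment is queried;
            // if the assumption is not met, the test is skipped rather than failed
            assumeTrue(Fixture.apiVersion() > 10);

            assertEquals("a string", "a string");
        }
    }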
When we run the test, we see that it runs and passes as expected, because in our case the Fixture is returning an API version higher than 10. Let's flip the check in the assumption, so
the test only runs if the API version is less than 10, and re-run. Since our API version is higher than ten,
this check returns false, the assumption is not met, and the test is not run. It shows as a disabled or ignored test. Earlier we saw that we can use assertAll to
group a number of assertions and make sure they're all run. This is one way of performing multiple checks. There are other cases where we might want
to do the same set of checks on different sets of data. For this, we can use parameterised tests. Parameterised tests are where we can pass
data into the test as parameters, and with JUnit 5 there are a number of different ways
to do this. We're going to look at the simplest approach
to show how it works. We're going to use the @ValueSource annotation
to give the test method a series of individual values to test. This is an array, and JUnit 5 supports many different types; let's use an array of ints for this test. We can hard-code the values we want to pass
into the test here in this array. Each one of these values will be passed into
the method individually, so we can add a single int parameter to the test method with a useful
name to contain the value. IntelliJ IDEA can help us with parameterised
tests in JUnit 5: it lets us know that if we're using a ValueSource annotation we shouldn't be using a @Test annotation as well. We can use Alt and Enter to see suggestions, and accept the suggestion to use the ParameterizedTest annotation instead. Let's go ahead and create our test. We're going to create a Shape with the number of sides given to us, and we're going to check that the Shape has been created the way that we expect and can give us the correct number of sides.
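As a sketch (the specific int values and the numberOfSides() accessor on Shape are assumptions for illustration):

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    class ShapeTest {

        @ParameterizedTest
        @ValueSource(ints = {3, 4, 5, 8, 14})
        void shouldCreateShapesWithDifferentNumbersOfSides(int expectedNumberOfSides) {
            // each int in the ValueSource is passed into this parameter in turn
            Shape shape = new Shape(expectedNumberOfSides);

            assertEquals(expectedNumberOfSides, shape.numberOfSides());
        }
    }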
When we run the test, we can see that in fact the test runs more than once. The test is run for each one of the int values
we put into the ValueSource annotation. We can also change the way these individual
tests are shown in the results by creating a custom name in the ParameterizedTest annotation. For this test, I'm just going to show the value the test is being run with: the number of sides the shape is being created with. Since this name takes a String, we can create whatever meaningful name we want.
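For instance, the custom name might look something like this (the exact wording is just illustrative, and the body is the same as in the previous sketch):

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    class ShapeDisplayNameTest {

        // "{0}" is replaced with the current argument, so each run gets a readable name
        @ParameterizedTest(name = "a shape with {0} sides")
        @ValueSource(ints = {3, 4, 5, 8, 14})
        void shouldCreateShapesWithDifferentNumbersOfSides(int expectedNumberOfSides) {
            assertEquals(expectedNumberOfSides, new Shape(expectedNumberOfSides).numberOfSides());
        }
    }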
Parameterized tests are very helpful for testing
large sets of valid data, but they're also really useful for checking all the invalid
values with the same assertions. We can create a new test to check invalid
input. We'll set up a new ValueSource of ints, but
this time the int values will all be invalid numbers of sides for a polygon. Let's assume that as well as too few sides,
our code doesn't support creating Shapes with a very large number of sides. Once again we need to make sure this is a
ParameterizedTest instead of a standard test, and we need to pass in the number of sides
to the test method. At this point we should be asking ourselves:
"what's the expected behaviour when the input is invalid?". If we decide that the constructor should be
throwing an exception when it is passed invalid values, we can check that with an assertThrows. We tell it which Exception we expect to be thrown, and we use a lambda expression to pass in the code that we expect to throw the exception.
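Sketched out, again with illustrative invalid values, the test might look like this:

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;

    import static org.junit.jupiter.api.Assertions.assertThrows;

    class ShapeValidationTest {

        @ParameterizedTest
        @ValueSource(ints = {-1, 0, 1, 2, 1000})
        void shouldNotCreateShapesWithAnInvalidNumberOfSides(int invalidNumberOfSides) {
            // the lambda delays the constructor call so assertThrows can catch the exception
            assertThrows(IllegalArgumentException.class,
                         () -> new Shape(invalidNumberOfSides));
        }
    }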
When we run this new test, we see that all the values fail the test. None of the invalid values caused an IllegalArgumentException
to be thrown. To fix this we need to fix the actual code
under test, our Shape class. We need to add validation to the constructor, and have it throw the correct exception when the invalid values are passed in. Let's do the simplest thing to make the test pass.
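A minimal sketch of that validation (the exact bounds and the numberOfSides() accessor are assumptions, not necessarily what the video uses):

    public class Shape {

        private final int numberOfSides;

        public Shape(int numberOfSides) {
            // reject anything that can't be a sensible polygon
            if (numberOfSides < 3 || numberOfSides > 999) {
                throw new IllegalArgumentException(
                        "A shape must have between 3 and 999 sides, but was " + numberOfSides);
            }
            this.numberOfSides = numberOfSides;
        }

        public int numberOfSides() {
            return numberOfSides;
        }
    }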
When we re-run the test, we see all the different
cases now pass. In this final section we're going to look
at one of my favourite features of JUnit 5, nested tests. Nested tests allow us to group specific types
of tests together inside a larger class. There are lots of reasons we might want to
do this. For example, to group together tests with
similar setup or tear down, but that are not so different they need to be in their own
test file. We're going to use this feature to group together
all the tests that require a Shape to already be set up. We have to create an inner class, and add
the Nested annotation. We can also add a DisplayName to this the
same way we would to a test method. The nested class can contain fields, of course,
and we can use these to store values that all the tests inside this inner class will
need. Let's create a simple Shape to use in these
tests. We can even create Nested classes inside our
Nested class. This can be useful to do further grouping. We're going to use it in this example to group
together "Happy Path" tests, the tests that check everything works as expected under normal
circumstances. Now we can create our specific tests inside
our nested classes. With nested classes we'll probably want to
define a naming convention that makes sense when the test results are printed, which we'll
see in a minute. Let's make this first happy path test a simple
check that shows the Shape returns the correct number of sides. We can then create another test which checks
the correct description is returned for our shape. Now let's create a group for tests that show
what behaviour is NOT supported, or is not expected. Let's say that in our example, two Shapes with the same number of sides are not expected to be the same shape.
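Putting all of that together, the nested structure might look roughly like this (the class names, display names and values are just illustrative):

    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Nested;
    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertNotEquals;

    class ShapeTests {

        @Nested
        @DisplayName("When a shape has already been created")
        class WhenShapeExists {

            // a field on the nested class is available to all the tests inside it
            private final Shape shape = new Shape(4);

            @Nested
            @DisplayName("Happy path")
            class HappyPath {

                @Test
                @DisplayName("Should return the correct number of sides")
                void shouldReturnNumberOfSides() {
                    assertEquals(4, shape.numberOfSides());
                }
            }

            @Nested
            @DisplayName("Unsupported behaviour")
            class UnsupportedBehaviour {

                @Test
                @DisplayName("Two shapes with the same number of sides are not the same shape")
                void shapesWithSameNumberOfSidesAreNotEqual() {
                    assertNotEquals(shape, new Shape(4));
                }
            }
        }
    }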
We can see that IntelliJ IDEA offers suggestions for our assertions as well. IntelliJ IDEA suggests using assertNotEquals
instead because a specific assertion like that can be more readable or easier to reason
about when it fails than an assertFalse, which can sometimes be difficult to understand. If we run all the tests in the class - which I'm doing by moving my caret into the outermost class and using Run Context Configuration again, though we could also do this with Ctrl R or Shift F10 - we can see our nested tests in
the test results. We can see the grouping means the results
of similar tests are all grouped together. We can also see how the display name can help
us to understand the grouping of the tests. If all of these annotations are adding too
much noise to the editor, we can always collapse them by clicking the minus in the gutter, or by using the keyboard shortcut to fold code, Cmd and . or Ctrl and . - where dot is
the full stop or period on the keyboard. We can hover over the collapsed annotations
to see them. This video has just touched the surface of
the features offered by JUnit 5. To find out more, go to the JUnit documentation; it covers a huge range of topics, including the features we've seen in this video in more detail. It also covers the steps to take to migrate
to JUnit 5 from JUnit 4, which was also covered in another IntelliJ IDEA video. Thanks for watching!