[MUSIC PLAYING] RANA BHATTACHARYA:
Hello, everyone. I'm really happy to see
so many people here today to talk through the
journey Atom Bank's been on in terms of building
a bank on the cloud. Please do give feedback after the session. You've probably seen the slide. My name's Rana Bhattacharya. I'm the Chief Technology Officer at Atom Bank. And I'm joined by-- REG WILLIAMS: So, Reg Williams. I run the UK and Ireland architecture practice. RANA BHATTACHARYA: We're
going to talk today, really, around our journey
moving to the cloud using Google and
our rearchitecture to leverage Google. A lot of you probably
aren't familiar with Atom, so we thought we would start
with a bit of background around Atom. So Atom, as a concept, came about in April 2014. We were given a restricted license in June 2014 to actually build the first bank in the UK that was app only. We launched in April 2016 with a fixed-rate saver and an SME proposition. This was on iOS. A few months later, we launched an Android app. Towards the end of 2016,
we launched a mortgage proposition. Effectively, right now,
we're roughly $2 billion on both sides of the
balance sheet, both savings and lending. And more recently,
most relevantly in terms of this discussion, we
signed a contract with Google in February and launched our
initial workloads in Google in August this year. So I just explained that we
are a live, working bank. We went live in 2016. But last year, we
started a journey to actually move to the cloud
and actually rearchitect our bank. So you might be wondering, why did we do that if we can do
things like have $2 billion on both sides of
the balance sheet? So when we incepted
the bank, there wasn't really clear guidance from the regulators around the use of cloud. So as a bank, you're constrained
by a level of guidance. But as we were
creating our bank, we were very clear,
in the future, we wanted to embrace the use
of cloud and other technologies that were coming through. So we took an
opportunity last year to create a program of
work to basically start building a new bank alongside
the bank we already have. And what we're
going to talk about is really that program of work
that we've been going through. For us, it's very
important, because we have a vision of where
we want to take our bank and how we want to
serve our customers. We want to really drive
benefits for our customers, give them the service and
products that they need and want. And we felt this was best
served by us moving to cloud and creating a stack
that works in the cloud to really give those benefits. Quite often, you go,
what are the challenges you're trying to face? And some of the
challenges we have are being felt by a lot of
other banking institutions. So if you think about where we
started from-- so originally, because we couldn't
use the cloud, we leveraged third-party
data centers. If you go down that model-- and it's a model, not a specific vendor-- what you'll find, basically, is that it doesn't give you the flexibility or the agility you need, because you're data center centric. Similarly, quite often the challenge you find is using batch processes rather than real-time event data processing. Also, that whole model around
data centers and suppliers-- you often see over-reliance on third parties where, effectively, they are
running core functions for you. And because it's a
commercial relationship, what you find is you want
to change something, there's an impact assessment. There's a change request raised. You have to approve
the change request. Someone then goes and
buys some hardware. Someone then installs it. Someone then configures
it with operating systems, loads software. That takes time. That takes a lot of time. So we knew, from
our perspective, from-- both from a cost
and a speed perspective, that we wanted to be
more self-sufficient. And actually, leveraging
cloud helps us do that. So our response to
all these challenges was to create what we call the
Atom Banking Machine, which really is around
the technology which is our platform, our
people, and data-- because at the end of the day, you
could have the world's best technology, but if you
haven't got the people to use it properly in a
way that gives you agility, then you've just
got some technology. And at the heart
of a bank is really data in terms of its
customers, in terms of payments, et cetera. So if you don't actually
manage to fuse those three things together,
you're not going to be successful
in this journey. So at a very high
level, we're creating a state of the art
banking platform-- and the bedrock of that is leveraging Google Cloud. And what we're trying
to do is create one of the most advanced banking
machines in the industry. So we're using
Thought Machine, which is a cloud-native, smart-contract-based core banking platform. So I've been in banks where,
basically, you might spend a year developing a product. Here, basically,
through configuration, which is really
code, effectively, you could define what a product
looks like in days and weeks. Obviously, there's more to do beyond defining the product in order to launch it, but you're taking the pressures
away from product development by virtue of leveraging
this sort of technology. Again, we're also injecting
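As a purely illustrative sketch of that product-as-configuration idea-- this is not Thought Machine's actual smart contract API, and every name and field here is my own assumption-- a fixed-rate saver expressed as configuration rather than bespoke code might look like this:

```python
# Illustrative sketch only: a made-up product definition showing how a
# savings product can be expressed as configuration rather than a new
# codebase. None of these names come from Thought Machine's real API.
from dataclasses import dataclass


@dataclass(frozen=True)
class FixedRateSaver:
    name: str
    annual_rate: float   # e.g. 0.02 for 2% per year
    term_months: int
    min_deposit: int     # in pence

    def maturity_value(self, deposit_pence: int) -> int:
        """Simple annual compounding over the fixed term."""
        years = self.term_months / 12
        return round(deposit_pence * (1 + self.annual_rate) ** years)


# Launching a new product variant is a new configuration, not a new build.
one_year = FixedRateSaver("1 Year Fixed Saver", 0.02, 12, 5000)
```

The point of the sketch is the shape: the product is data plus a small amount of behavior, so a new variant is a matter of days of configuration, not months of development.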
event streaming, through the use of Kafka, into our platform. And again, using the benefits
of cloud and DevOps to give us the agility
across our estate. What we're trying to do
is create a plug and play partnering model leveraging APIs
and event-based architecture to basically give us a level of segregation and protect against vendor lock-in-- so basically, historically, in banking platforms in a lot of banks, once you integrate a third party, you're locked into that third party, because it's like performing heart surgery to try to move it out. Quite often, organizations
fall in and out of love with various vendors but
can't do anything about it because it will take you years
to decouple that relationship. And we didn't want
to fall into that. And one of the key important
points here, also, is for us, we wanted to be in
control, not vendors. So we've also been increasing
our own capability and capacity within the bank to take control
of more of our engineering-- so things like our own app
development, which was already there, but we've
bolstered the size of the team, our integration. Things like Kafka, our ability
to configure our own products on Thought Machine, et cetera-- we're going to do all that in house. The benefits of this would be speed and efficiency, allowing us to innovate and experiment without, for a start-up, costing the earth. And also, we're doing most of this through a level of configuration within Atom itself. The journey we've been on-- effectively, we
started last year where, basically, we
went on an RFP process with the usual suspects,
as you would think. And we went down a route of
selecting Google and Accenture to help us provide us with
some initial capability to create our own capability
and also start making progress on our delivery journey. And it was quite important
for us to do that, because we didn't want
to just organically grow and take a lot of
time and not make progress. We wanted to basically
organically grow, make progress. And what Accenture did to help us was basically give us some people who knew the right ways of working to take us forward in the short term and embed those ways of working while we were recruiting people in, as well. So we've created our
DevOps capability. We started doing design
for various phases of that transition of
building this bank, and also decommissioning
the old bank. Right now, we've
finished phase 0. And within the next few weeks,
we'll be completing phase 1. Then, throughout next year,
we're closing off phase 2. And we're looking
at phase 3, as well. And at the end of phase 2, we will have actually launched the full stack of our new bank. And phase 3 is really
just closing down the old bank within a
data center, effectively. So within the next few weeks,
we'll have a hybrid bank, let's say-- old
world, new world. And then we'll be adding more
componentry into the cloud and using SaaS services with a view to shutting down the old bank in a safe
manner when we're ready. In terms of our
transformation, we talked-- I use the word partner. And part of our
journey has been also to differentiate between
suppliers and partners. Because suppliers,
quite often, you have a transactional
relationship, where if they do something,
there's a contract, et cetera. But we've been looking for
partners, because partnership is more than a contract,
where basically, you have someone on your side who
is looking to help you succeed. And what we look for with
our relationship with Google, particularly, is having
a long-term partner who is actually there
to help us succeed. And we are here today as part of promoting the help Google has given us on our journey, as well. From Accenture's
standpoint, as well, we were looking for a partner
to help us on our transition. And from day one,
it was kind of, help us to create a
capability, but we want to be self-sufficient. And Accenture helped us do that. REG WILLIAMS: Yeah. I'd add on that that these
guys set the bar very high coming into this work. And we'll go on and see
infrastructure as code, soup to nuts in terms of
building these environments on the cloud, dev and
test, and production. And, of course, we've been
delighted to help build that capability inside Atom. But there's-- it's particularly
interesting in that this bank has taken that business
approach to bring a proposition to market quickly, employing a
lot of on premise capability, and then refactor onto the cloud
as part of their future growth strategy-- which obviously,
for our organization as well, is something that we
see a lot of our bigger clients wrestling with. They've got years
of technical debt, and they're trying to work
their way through all of that. So for us, it's a really
important partnership to work together,
to be successful, to help learn in the
market how we all address some of the key
challenges around that problem and moving stuff off large,
monolithic systems of record on traditional IT and moving onto the fast, new, cloud-based infrastructure. RANA BHATTACHARYA: Cool. We talked about-- it
was essential for us to build our own capability. And part of this was really
training, upskilling. So obviously, we're
running a bank. We've got people. But we wanted those people
to start embracing Google. What Google helped us do, actually, as part of the investment and the partnership-- they invested training into Atom. And not just Atom, but also into Accenture, where required. So Accenture's got some good people on areas like AWS, who have a good background in delivery. But also, just getting them up and ready to help us on Google was part of the investment Google put into this journey, as well, which was really refreshing to see. But on recruitment, again, through discussions with Google and Accenture and our own recruitment function, we started building our own Atom team, from a DevOps perspective as well, to be capable of running a full stack bank. Culture was quite important. I'll go back to
the point around, you can select all
the tools you want to, but if you haven't
got the people to run it in the right
way, a tool's just a tool. So Accenture brought in an experienced Scrum Master and an agile coach. And then we very quickly created
the right working approach of a high cadence delivery
approach where, basically, we're using sprints to
basically develop, via DevOps, the key capabilities
of the infrastructure, and the ability to deploy
applications in a repeatable and a robust fashion. We'll talk a bit more about what
that actually means shortly. Then, as part of the journey, as we are a bank moving to the cloud-- we just can't do that without consulting the regulators, as you would expect. So the regulator's job is to make sure that a bank works in a way that doesn't harm its customers or the broader country-- the national infrastructure, because a bank is part of national infrastructure. So we went into a discussion
with the regulator about our journey. And it's worthwhile
just talk about some of the high-level
discussion points. So there's something
called [INAUDIBLE], which is part of
the FCA handbook. And this really talks about if
you have a material outsource, there's certain
provisions that need to be in the contract around
audit rights, liability, et cetera. So to make sure the bank has
the right sort of contract for material outsource. So we had to make sure we had
a contract with Google that allowed us that
level of coverage. Similarly, at the same time, the EBA guidelines for cloud outsourcing. We had to make sure we had things around exit strategy, et cetera, et cetera, all bottomed out, so that we could demonstrate that we actually knew how to control the contract. More interesting discussions
we had with the regulators were around things like data
residency and data sovereignty. So obviously, we're a bank. Got customer data, got
payment data, et cetera. So it was essential
that the regulator understood that we understood
where data was going to reside. So none of our data is going
to be outside of Europe. It's going to be hosted
primarily in London. And we've got a failover site
in another location in Europe, as well. And data sovereignty is really
around who controls the data and who can see it. And we had to demonstrate
an understanding of how the data would
be held in Google and that Google
couldn't actually see it, because quite often,
there is a lot of noise in industry, really,
around cloud providers, big organizations being
able to use other people's data without permission. Google can't see
our data in terms of the way the encryption
works, et cetera, et cetera. We talked about exit plan. If you're going to
make a material move, irrespective of whether it's cloud or any other provider, you need to understand,
if it doesn't go well, how do you move away to
someone else or something else. Then governance. The governance point is
really, the regulator wants to understand, do
you, as an organization, understand what you are
saying you're going to do? And have you gone through
the right levels of approval internally and mitigated risk? And we had to basically show-- the governance process and get
our board, the Atom bank board, to attest to the
fact that we have gone through the right process
and the right risks are mitigated, and we have the
right controls in place to run a bank on the
cloud, effectively. So where will we end
up with our journey? So we'll have a new
banking platform. This will have
capabilities where we can define products
very quickly using things like Thought Machine. We'll have a
real-time data stream layer which will basically
allow us to create actions and real-time events. So things like-- if a certain
type of payment happens, we could send out
a notification, if we wanted to, in real
time, not relying on batch. It could provide, in the future, real-time customer insights, et cetera-- all of which, importantly,
is under Atom control. So our team will be
making changes and running the show of a bank rather
than being overly reliant on third parties. Obviously, we're going to,
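A minimal, framework-free sketch of that batch-versus-streaming difference-- the event shape and the notification text are my own assumptions, and in production this loop would be a Kafka consumer polling a topic rather than an in-memory list:

```python
# Sketch: the streaming path reacts to each payment event as it occurs,
# instead of waiting for an end-of-day batch run. Event shape and
# notification wording are illustrative assumptions, not Atom's schema.
def on_payment(event: dict) -> str:
    """Handle one event in real time -- e.g. push a customer notification."""
    return f"Payment of £{event['amount']:.2f} received"


def stream(events):
    # In production this would be a Kafka consumer loop; here the event
    # source is an in-memory stand-in so the sketch is self-contained.
    for event in events:
        yield on_payment(event)


sent = list(stream([{"amount": 25.0}, {"amount": 9.99}]))
```

Because each event is handled as it arrives, the notification (or fraud check, or insight) happens seconds after the payment, not hours later when a batch job runs.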
for some of our services, use things like SaaS, where we
believe that's the right thing. But for areas that are very
important for the bank, we want to be in control. And once we have
this in place, we'll be able to decommission
our legacy systems. And what that means is,
we'll be able to create a very superior customer
experience, the right products and services at pace and
the right cost point-- because what we're trying to
do is basically use technology to automate as much as possible
so our operating costs are low, which therefore means that we
could basically price products for our customers at a
more appropriate price and try and get them the
most value as possible. So we talked-- well, we're
here, at a Google conference. So it's worthwhile talking about
what's worked well with Google. I talked a lot about
partnership earlier. Right from the outset, we
said we wanted a partner, because 50% of our journey is
really around having a partner, because you will have
bumps in your journey-- any journey you will do. But it's really around having
someone you can work with to get over those obstacles. And that's what we have found
in our journey with Google. Support for
upskilling-- again, we talked about the training
investment, which was great. And also access to
other Googlers, as well, when you have point problems. And again, that doesn't
go into discussions around the contracts. We just say, can
we get some help? Someone appears, answers
questions, we move on. We're already
seeing the benefits around agility, scalability, and resilience. We'll come onto that
a bit more later. Cost is under our control. Quite often, a
bank is constrained around the number of
environments you can have, in a more traditional approach. We can spin up
environments very fast. And we'll talk a
bit more about that. But there's a cost to it. But we control it. We could also shut
down those environments where we don't need to. And then we've got the benefit
of, basically, the evergreening and trying to keep
things in support. Because if I think about
our current banking stack that we have live,
we have to coordinate various patch releases, security
releases, et cetera, et cetera. But behind the scenes,
Google is working on that. And you just have
to allow Google to take care of
that for you, which minimizes the activities
you have to coordinate, effectively, and the
risk around that. Some lessons learned
from this journey-- so security's a
big part of a bank. One of the security
capabilities within Google is CMEK, which is Customer
Managed Encryption Keys. We're leveraging
this because we want to be in control of the keys
as much as possible in terms of instantiation, rotation,
et cetera, et cetera. What we found was,
CMEK initially wasn't rolled out
across all the regions, as we assumed it would be. But for us, it's been rolled
out where we needed it to be, now. So that's great. And also, there's a road map of where we see CMEK being available within the various different services of Google. And we just needed to understand how best we could use the functions that were available without CMEK, plus where CMEK would be available, to allow us on our journey to go live. There's a point here on lack
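The idea behind customer-managed keys is envelope encryption: data is encrypted with a data key, and the data key is wrapped by a key-encryption key that only the customer controls, so the provider stores ciphertext it cannot read. The toy below only illustrates that structure-- XOR against a SHA-256 keystream is emphatically not real cryptography, and real CMEK uses Cloud KMS:

```python
# Toy illustration of envelope encryption (the structure behind CMEK).
# The XOR/SHA-256 "cipher" is NOT real cryptography -- it only shows who
# holds which key; real CMEK deployments use Cloud KMS and AES.
import hashlib
import secrets


def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))


customer_kek = secrets.token_bytes(32)  # held and rotated by the customer
data_key = secrets.token_bytes(32)      # per-object data key

record = b"sort-code=00-00-00;balance=1000"
ciphertext = keystream_xor(data_key, record)         # what the provider stores
wrapped_key = keystream_xor(customer_kek, data_key)  # data key, wrapped by the KEK

# Decryption requires the customer's KEK to unwrap the data key first.
plaintext = keystream_xor(keystream_xor(customer_kek, wrapped_key), ciphertext)
```

Because the provider holds only the ciphertext and the wrapped data key, revoking or rotating the customer-held KEK is what keeps control with the bank rather than the cloud.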
of geographical separation within the UK region. So if you're familiar
with Google regions, each Google region
typically has three zones. What we found was,
some of our zones were quite close
together for a bank, because we've got certain rules
around geographic dispersion. So effectively, if you wanted
to do a disaster recovery scenario, you can't have a-- two data centers too close
together, effectively. So the zones are
effectively data centers. But again, Google
are working on that. And as part of our deployment model, we've taken care of that issue, anyway. So we're good to go. The contracts took a
while longer to establish. Again, we talked about
the type of contracts we need for a bank-- [INAUDIBLE], EBA
guidance compliance. But we got through in the end. What was refreshing was, Google
worked with us through that process-- obviously,
to get our contract, but used it as a
lessons learnt process, and created what they
call the FS Addendum. So next time around, when a bank wants to contract with Google, it'll be more efficient in terms of the timeline, et cetera. But for us, also, it was really
understanding the roadmap. We talked about CMEK,
what's available when. Then you get a better
view of your journey and what you can use
when, when you've got certain criteria
of service required. So in terms of benefits
already realized, I talked about our journey,
where we've done phase 0 and we're about to
go live with phase 1. Already, we've built
our in-house capability. So it's fair to say
that Accenture's not with us anymore. So in terms of their
job, helping us build our capability--
that's been established. We've got our own SREs to
basically take us forward. Our lead time for provisioning environments is significantly reduced. And I will explain
what that means. And again, we are
getting the value of what we call enterprise agility. And I'm going to
demonstrate what that means to us right now. So historically, using a
third-party data center model, we probably would
have taken circa-- at best, around 41 weeks to
create another environment of a full stack bank. Right now, we could do that
under a week-- so five days. And we do this leveraging the capabilities in Google where, on day 1, we effectively create our base networking and our base identity access management capabilities and our DNS, using tools like Vault, Packer, and Terraform. Across days 2 and 3, we're basically using infrastructure as code to deploy the network infrastructure, as well as what we call the software architecture. And it's worth saying-- the type of tooling
we're using, we're using it so it's
repeatable and fast. And as a bank, we
want to have processes where we know, if we use them in one environment, they will work in another environment-- because with the same process, we get the same effects. Then, the last two days is
really around functional testing and making sure
everything is running. Then we hand it off to a project team to do the work they need to do after that and to develop the new features, et cetera, et cetera. Another example worth
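A schematic sketch of how such a phased, repeatable build might be scripted-- the module paths and the idea of shelling out to Terraform per phase are my assumptions, not Atom's actual pipeline:

```python
# Schematic only: an ordered, repeatable environment build. The phase
# ordering follows the talk (base networking/IAM/DNS first, then
# infrastructure as code for the network and software stack); the
# module paths are invented for illustration.
import subprocess

PHASES = [
    ("day-1-base", ["networking", "iam", "dns"]),
    ("day-2-3-infra", ["network-infrastructure", "software-architecture"]),
]


def plan(environment: str) -> list[list[str]]:
    """Expand the phase table into the ordered commands to run."""
    commands = []
    for phase, modules in PHASES:
        for module in modules:
            commands.append(
                ["terraform", "apply", "-auto-approve",
                 f"-var=env={environment}", f"modules/{phase}/{module}"])
    return commands


def provision(environment: str) -> None:
    # Same inputs, same order, same result -- which is what makes the
    # process trustworthy across dev, QA, and production.
    for cmd in plan(environment):
        subprocess.run(cmd, check=True)
```

Because the whole build is an ordered list derived from data, running it against a dev environment and a production environment exercises exactly the same steps.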
talking about when we talk about enterprise
agility is our ability to release change. So over the last few weeks, we've
been taking a change that's just been developed,
smoke testing it in a development
environment, putting it through a QA or a
test environment, then put it into
production the same day. And we've done this
multiple times. And when I say changes-- we might do it multiple
times, but each change is a baseline of changes. So you could be talking about 50 or 100 individual changes incorporated in what's going live. And we've been building out
our production environment-- which is live internally
for us right now-- on a daily basis and
adding more change in as we need to, with a view to going live over the next few weeks to our true customers externally. Here's a quick video of one of
the other benefits we've got through going to the cloud. So we've got our old app on the
left-hand side and our new app on the right-hand side. And through running workflows on
Google, the latency is reduced. And we're finding,
as you can see on the app on the
right-hand side, it's running a lot faster. We have built a new app. And this is the first time
we're showing it externally in terms of elements
of its look and feel. It runs fast, and
not just because we've built it natively, but also,
the actual interactions over the cloud-- the latency is removed
because traffic is-- well, the latency isn't
removed, but the traffic is prioritized over
Google compared to data center traffic, et
cetera, over the internet. So we get the benefits
of that, as well. So in reality, on the new app, we're talking about being six or seven seconds faster
on our log-on journey. Reg is going to pick up. REG WILLIAMS: Thank you. So I am, indeed, the last
man standing from Accenture. It's been a real
privilege to work with a team on the
ground at Atom Bank. They came to us two years ago. And as I said earlier,
they set a really high bar in terms of what they
wanted to automate and how their vision
of driving their future bank on Google Cloud Platform-- how they wanted to roll that
out through, essentially, building out the infrastructure,
the base infrastructure on Google Cloud Platform,
building all the DevOps pipes to initially play out the
middleware stack so that they could run a hybrid model between old and new-- the existing, on-prem core bank and the existing app in at least one transition state, then the new app onto the old bank, and then the new app onto the new bank. All facilitated through
full infrastructure as code-based environments on
Google Cloud Platform running Terraform and
Ansible, primarily, right across the full middleware
stack-- that I shall not name, but it's quite a lengthy
list of all the sorts of usual suspects in a modern
cloud-native architecture. And I wanted to
bring it up a level, because this is the thing
that I'm speaking to. I speak to a lot of clients
around their ambition for moving to cloud
native, high speed, at scale, successful
implementations of financial services solutions,
products, retail solutions-- really, it's right
across the market unit. And the thing that
we've observed is-- in fact, it's-- we
haven't observed it. It's not new. It's well-known. And anybody who's read
Fred Brooks would know-- would recognize the term
diseconomy of scale. It's intuitive to all of
us, as software engineers, that the bigger something gets,
the harder it gets to scale. The more expensive per
line of code it is, the more lines of code you have. So this graph is actually
based on some industry data. And it really shows you
that if you can break down a 100,000-day project into 10
discrete 10,000-day projects that are not connected
to one another-- that have only
production dependencies, then the benefit is as much
as, potentially, 40,000 days-- 40%. How about that for a business
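That arithmetic can be sketched with a simple diseconomy-of-scale model. The 1.2 exponent here is my own assumption (a common COCOMO-style figure), not a number from the slide, but it lands close to the claimed 40%:

```python
# If effort grows super-linearly with size, effort = k * size**b with b > 1,
# then splitting one big build into independent pieces cuts total effort.
# The exponent b = 1.2 is an assumed, COCOMO-style figure for illustration.
def effort(size_days: float, b: float = 1.2, k: float = 1.0) -> float:
    return k * size_days ** b


monolith = effort(100_000)
split = 10 * effort(10_000)      # ten independent 10,000-day projects

saving = 1 - split / monolith    # = 1 - 10**(1 - b) = 1 - 10**-0.2
print(f"saving ≈ {saving:.0%}")  # roughly 37%, in line with the ~40% claim
```

The saving depends only on the exponent, not the absolute sizes-- which is why the argument generalizes to any large programme split into genuinely independent pieces.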
case for microservices-- if you were ever wondering. Because the reality is, all of
that technology is not just-- it's easy to get into a sweet
shop mentality with this stuff. If I use APIs and I use PaaS and I use cloud and I've got a bit of agile-- oh, I've got DevOps, too. And I bang all of that stuff in,
over the next 18 months, bang, I'm going to be able to tell-- I'm going to be able to realize
all of the potential in terms of faster time to market,
lower cost of ownership, better quality. And quite a lot
of my clients are finding that that isn't true. Actually, what they're doing
is adding more technical debt, shadow IT, cottage industries,
losing control of various parts of their estate as a
result. But the key thing is, it's not getting
the value out. They're still going to
market every 18 months. Maybe for this group, this is not familiar. But for me-- we live this every day, more
or less, in big organizations-- big organizations
characterized by having 40 years of technical debt. So-- oh, go forwards. Go the right way. So what we want to be doing
is operating in the top zone. We want to get away from
boom and bust IT cycles. Big investments in change,
rack up some new middleware, the latest and greatest. Last year, it was ESB. This year, it's API Manager. Next year, it'll
be something else. And ironically, a lot of our
clients have got all of it, between the north face
and the south face. It's like the Jurassic
Coast of middleware, going all the way down
to the systems of record. And that is,
itself, adding drag. So it's like one of
those awful conditions you can get where, wherever you try to intervene to do something about
it, you make it worse. We want to be operating up in the top zone with agility and doing-- getting away
from that boom and bust thing into incremental transformation. So across all our
clients, we're all talking about the same stuff-- what's hot. PaaS is hot. Running a Kubernetes platform
so that I can modularize my code and scale it out and give
it to different teams is, obviously, really hot. APIs have been hot for a while. Use contract-based interactions
between your systems to isolate them
from one another. That's quite hot--
well, extremely hot. And what else is on here? And as we've been talking about
today, infrastructure as code. Automating your pipelines. Taking-- the quality you get
out of delivering something a thousand times
through the full stack means that by the time
you go into production, that stuff runs like clockwork. That's got a real
business benefit, and that's obviously very hot. Data lakes-- just a
sidebar on data lakes. A lot of people have been
moving their-- starting to address the data and
moving the data around. It's quite common
to see architectures employing a big data sink
to stream information out of your old legacy systems and
then read APIs on the lake, and then surface
that up onto your-- through your online channels
as a tactic to get around some of the drag in the legacy. So that's fairly hot-- quite hot. Certainly a lot of it going on. There's a risk around
some of that I'll come onto in terms of it
being a bit of a cul-de-sac unless you are really dealing
with the write path, the update path, because ultimately, it's another cache strategy, really. But the key point is, still
got months and not days. We still have digital channels
that are not truly 24/7. And we still rely on update cycles that can take many hours, particularly if there are complex systems on the back end. We're not seeing enough on-- or I don't think there's
enough progress, necessarily, in and around the data. I think there was a
survey I read recently that said 78% of CIOs feel that
their organizations are not exploiting their data properly. And so there are lots
of projects in flight that deal with moving
data around, but really, in terms of business value
being driven from that data, it's not quite such a-- there's not quite
such a good story. And ultimately-- and this is
the bit closest to my heart, because I'm a delivery guy,
at the end of the day-- we've still got too many
dependencies riddling our Gantt charts in all of the
software projects we're trying to put live. We haven't really achieved
that program-to-product pivot that
we talk a good game around-- lean teams, Spotify
model, yada yada yada. But actually, we
still, if you look at any of the work we're
doing, have big Gantt charts with lots of
dependencies running through the organization. So there's work to do. Now, the one thing
I've put in green is called Real Time, because
I think, actually, that is-- there's never a silver bullet. You wouldn't believe
me if I said that. But I really want to leave
you with the impression that eventing is a key part of
the success strategy for scale as well as for business
value, because you can have business events,
as Rana was talking about. But it can be in the DNA
of your architecture, which will fight some of these
diseconomies of scale and will allow you to
pivot more effectively. So I'll come onto that. So I think I've
probably covered that. The key point is, you can
spend a lot of money going to the cloud and still
end up with a pretty brittle application. It'll be quicker to get live. You can change it faster. But you've still got to have
quite a large organization wrapped around it to do
anything front to back that delivers value
to your clients. And I characterize that
as, DevOps and agile and, to some extent, cloud are
essential but not sufficient. You need to be thinking
about the architecture. We really need to stand up,
as architects in the room, and take responsibility
to work with our teams to define architectures that
will work nicely on the cloud, will allow our
organization to scale. We talk about that in Accenture
as digital decoupling. And because, obviously,
typically the clients who come to Accenture
aren't greenfield clients that have got the luxury of
putting all this stuff out there from scratch-- they're typically organizations
that have a load of legacy and technical debt-- quite
complex organizations. And we advocate taking
a value-driven approach to modernizing, which
basically means prioritizing your portfolio, and prioritizing
based on some KPIs that you are really clear about upfront-- things like average cadence
into production, total software development size in man-days--
because that should be getting smaller per release over time. And of course all the quality
stuff that's in there. My own plea when
I talk about this is to say, the enterprise
architects stand up, because you should be
inheriting this stuff. This is what enterprise
architecture is really about, is helping your organizations
to drive a portfolio of change that will move the needle
on your speed to market and your cost of change. Event-driven architecture
I've talked about. This is the secret sauce. I've been working in
architectures where the APIs are quite brittle. There's lots of them. You have a contract
dependency through the stack. You can be using modern APIs
and still be quite brittle. And the events allow you to
introduce new functionality without changing any of
the other functionality. And those events could
be domain events-- it could be Pub/Sub. But similarly, those sorts
of patterns, I think, we can see more of
in our architectures. And that will allow more
graceful, cost effective rollout of capability. Layers to ecosystem
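To make the eventing point concrete, here is a minimal, stdlib-only Go sketch of the pattern -- not Atom's implementation, and in production the bus would be something like Pub/Sub rather than this in-process stand-in. The publisher never references its consumers, so new functionality arrives as a new subscriber with no change to existing code:

```go
package main

import "fmt"

// Event is a minimal domain event; real systems would carry IDs,
// timestamps, and a versioned schema.
type Event struct {
	Name string
	Data string
}

// Bus fans each published event out to every subscriber. Publishers
// never know who is listening, which is the decoupling being described.
type Bus struct {
	subscribers []func(Event)
}

func (b *Bus) Subscribe(fn func(Event)) {
	b.subscribers = append(b.subscribers, fn)
}

func (b *Bus) Publish(e Event) {
	for _, fn := range b.subscribers {
		fn(e)
	}
}

func main() {
	bus := &Bus{}

	// Existing consumer: posts a ledger entry.
	bus.Subscribe(func(e Event) { fmt.Println("ledger:", e.Name) })

	// New functionality added later -- no change to the publisher
	// or to the ledger consumer above.
	bus.Subscribe(func(e Event) { fmt.Println("notify:", e.Name) })

	bus.Publish(Event{Name: "AccountOpened", Data: "acc-123"})
}
```

Adding the second subscriber is the whole point: the publisher and the first consumer are untouched, which is what lets capability roll out without a contract dependency rippling through the stack.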
builds on that. Particularly-- I don't know
if it's in financial services, but quite a lot of
the companies I've worked for, they are not
prepared to move the data. So the microservice is a proxy. And it's sitting above-- at the top of that
Jurassic Coast, synchronously plummeting all
the way down 40 feet of concrete into the DB2 mainframe
at the bottom. And then, if they want to do
something experience layer, they add extension
columns in DB2. And then it comes all the way
back up in a dependency graph, through the middleware
up to the top. So if your microservices
layer looks a bit like that, you're not going to drive value. You are still doing one speed
IT if you don't do anything about the data. You can tell yourselves
you're doing something else, but really, if you don't
move the data around, you're doing one speed IT. So the way I'd
leave you is, think about moving from
layers to ecosystem. Freestanding systems of
engagement, differentiation, and record have their own databases, and talk to each other in
a loosely coupled fashion [INAUDIBLE] events. RANA BHATTACHARYA:
You've actually probably created it-- made it
worse by adding another layer. REG WILLIAMS: Yeah. RANA BHATTACHARYA: Yeah. REG WILLIAMS: So I'm not
pretending that's easy. But these are the
kinds of things that I think will
drive value if you're thinking about architecture. Automate everything. We talked a lot about
infrastructure as code. I think the next
step for Atom is to really go soup to nuts on
the automation in the service management and really
get into being an SRE-- managing their operations,
and in the SRE. We'll come and talk
about that in a minute. Cloud native but portable-- Kubernetes is everywhere. Everybody's talking
about Kubernetes. And I'm getting a lot of--
into a lot of conversations about which Kubernetes. And actually, what's my cost
of making that a bad decision at a point in time? Do we really understand how costly it is to
reverse that and go in a different direction? I think a call out to
what Google are doing, enlightened as usual, with
Anthos and the idea that maybe you can make a decision but
hedge your bets a bit, I think, is a really grown-up
way to do it. I think the Red
Hat OpenShift stuff is simpler than using native
Kubernetes, in my experience. That's another good option. But you should be thinking
about the balance and the trade off between exploiting the
proven technology in the cloud with the flexibility
to run multi cloud and move those workloads around. When I say here, be clear on
your strategy for cloud native, cloud native is a
really overloaded term, because on the one hand, it
means container platforms-- to some people, mainly
infrastructure-y people. I've probably got that wrong. You know what I mean-- almost infrastructure-y people. It means using the
cloud's features so I don't have
to build anything. They've got their own databases
natively in the cloud. I can use their AI services. It can mean that, too. To me, it means like
building 12 factor apps. And that's all fine. But you need to understand,
in your organization, what you're actually
gunning for and not get confused around what
you mean by cloud native and what you're aiming to
get from cloud nativity. I'd add one more,
which is relevant here. We're using-- Rana's team have
been mainly focusing on Golang. And that's got
enormous advantages in terms of its
deployment unit size and reduced set of dependencies,
or no dependencies. It's a really-- I think, an important feature
of running a lean and cost efficient cloud platform. And so that's another
angle of cloud nativity, is are you actually using a
programming language construct that is tailored for the cloud,
or are you just shoveling loads of Java onto a JVM
that ends up with half a gig microservice runtime? So to bring it back, API first-- Rana talked about the
importance of APIs to the architecture
of Atom, it's how-- not only how the whole
thing is going to be-- is stitched together, but also,
it facilitates the transition states. So to gracefully get
from an on-prem version to a hybrid version to a
full cloud native version. An event-based
architecture loosely coupling the
different components as they are brought
into production, reducing development
time dependencies to avoid big projects emerging
which carry that weight. We've talked about
Go versus Java. We talked about
infrastructure as code. Where next? It's build out the story
through operations. Our experience has been, the bit
we didn't automate was the bit that added a load of drag--
which is kind of, shucks, we knew-- could have thought
that at the beginning. The bit we don't do is the
bit that caused the drag. So suddenly, it becomes all
about the change process, service management
process you wrap around building
your environments, commissioning your changes,
and all that stuff. Because we didn't automate that,
you could spin these things up in a minute or two. But if we're on spreadsheets
and emails the next three weeks trying to work out
what we're doing next, that is a wasted opportunity. So there's more work
to do around that. Rana, did you want to
pick up on modern ops? You want me to do it? RANA BHATTACHARYA: [INAUDIBLE]. REG WILLIAMS: Yeah, OK. So basically, this boils
down to integrating Jira and ServiceNow, for us. There's a bit more
to it than that. But having a seamless
transition between a ticket for a change and unrolling
that into a backlog for a development team
and then bringing it back into your release
management processes-- this is the frontline of
automation, in my view, because then you can really
bring the whole thing together and start using
that as a springboard to build out more focus on automation in run
and in operations, and really bring that kind of
site reliability set of practices to the fore. So that's broadly where the
journey has been with Atom. And they're going to do lots
more exciting things to come. RANA BHATTACHARYA: Yeah. So it's probably
worth pointing out-- so some of the tooling
sets we talked about, like Terraform and Vault-- so if you think about
static infrastructure versus
dynamic infrastructure-- in the old world, when you've
got static infrastructure in a data center,
you're using tickets to coordinate manual
workflows to get stuff done. In the new world, what you
want to have is people more in control of just, at
the press of a button, you've got an environment
being provisioned or a change happening. And so whilst you can
still instigate it via ticket-- and what
we're talking about, we'll use tools like
ServiceNow to be the mechanism of instigating a ticket. But that will integrate quite
seamlessly with the automation capability to allow you
to provision a change into an environment,
create a new environment, to kill an environment that
you don't need anymore. And those tools will
manage your workflow. But the bulk of the work
which used to be manual-- it's gone, because you're
doing it through your DevOps, via SREs, et cetera, et cetera. REG WILLIAMS: Yeah. And knowing what's in your
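The shape of that ticket-instigated automation can be sketched as follows. The type and field names here are hypothetical, and a real integration would call the service-management tool's APIs and drive something like Terraform or a pipeline rather than these stand-ins:

```go
package main

import "fmt"

// Ticket is a hypothetical, simplified change record of the kind a
// service-management tool raises; the fields are illustrative only.
type Ticket struct {
	ID     string
	Action string // "create" or "destroy"
	Env    string // target environment, e.g. "perf-test-3"
}

// Provisioner stands in for the press-of-a-button automation; in
// practice these functions would run Terraform plans or a pipeline.
type Provisioner struct {
	Apply   func(env string) error
	Destroy func(env string) error
}

// HandleTicket turns an approved ticket straight into an automated
// action: the ticket still coordinates the workflow, but no human
// performs the provisioning itself.
func HandleTicket(t Ticket, p Provisioner) error {
	switch t.Action {
	case "create":
		return p.Apply(t.Env)
	case "destroy":
		return p.Destroy(t.Env)
	default:
		return fmt.Errorf("ticket %s: unknown action %q", t.ID, t.Action)
	}
}

func main() {
	p := Provisioner{
		Apply:   func(env string) error { fmt.Println("provisioning", env); return nil },
		Destroy: func(env string) error { fmt.Println("tearing down", env); return nil },
	}
	if err := HandleTicket(Ticket{ID: "CHG0001", Action: "create", Env: "perf-test-3"}, p); err != nil {
		panic(err)
	}
}
```

The point of the shape is that the ticket is just a trigger and an audit record; the actual work is a repeatable, automated action.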
environment is actually the other thing. RANA BHATTACHARYA: Exactly. Exactly. REG WILLIAMS: So the CMDB
side of the feedback loop into your config
management database is a really important feature
of that integration, as well. So, final thoughts? RANA BHATTACHARYA: So from our
journey, when we started, we really needed
to think about what outcomes
we were looking for. And quite often,
you start a journey, and then you have
to remind yourself why you're doing something. And for us, it's
really, we wanted to be able to create the
products and services we want for our customers and
to create a bank that can operate at a lower cost
by leveraging technology to automate as much as possible. And the automation
wasn't just about cost, because we're a
regulated entity. We don't really like
making mistakes. So by having automation where
you have repeatable processes, it minimizes the human
factor of making mistakes. REG WILLIAMS: Yeah. DevOps and agile-- essential,
but not sufficient. I hope I made the point. Architecture's really important. You can have the best
DevOps and the best agile, but if you've got a
big, monolithic blob, you're not going
to drive agility. RANA BHATTACHARYA:
So I mentioned a number of times tools--
a tool's a tool, right? So do not forget reinvestment
in people and culture. Quite often, it's
not thought about. For us, it was
through the journey. A lot of work has happened
getting people together, breaking down silos
across different teams to have a more agile
way of working, and creating a culture of using
the tools in the right way to create outcomes rather
than trying to use the new tools in the way we
were using the old tools, which effectively wouldn't give
you the results you need, which is kind of the point Reg
was talking about, as well. So it's about the tools, the
architecture, the people, the culture to get you
the outcome you need. Then, I talked before about
suppliers and partners. For us, it's very important. We're using this journey
to basically lock into partners we want to work
with to give us the capability, and not really focus on a purely
transactional relationship, because one-- like I said before,
you will undoubtedly have blips on your journey even
when you're in live service. But it's really about
having a partner to work with you to get over that
in the best way possible for both sides. REG WILLIAMS: And the last
one-- and just to re-emphasize the point again, I think if we-- and this is not
purely around Atom. But I think in many examples,
if we started again, if we had the opportunity to
go back a year, 18 months, we'd have done more. We'd have gone,
well, that was great, but all the bits I
left out were the bits that are still slowing me down. So the one thing
I would leave you is, wherever you are
on your road map, see if you can do more. If you haven't got to
the infrastructure, automate the infrastructure. If you plan to automate the
infrastructure but you go, I'll deal with service
management next year-- this can take time. And you really
don't have the time. So automate that, too. Get your operational
automation in place. Think about how far you
can push the envelope. RANA BHATTACHARYA:
And I think, building on that, you're
never going to be done, in terms of, you'll have
a view of a milestone and-- for a journey and go,
right, I hit that milestone. But you're not really
done, because you've gone through a journey, you've
automated a number of things. Then you go, there's
still more to do. And if you stop
doing that, you'll stop getting further
benefits that you can get from your automation. And for a bank, it's
important, because the more you can automate into a repeatable
process, the fewer mistakes you make, and the fewer issues
you'll propagate to your
customers, et cetera, or create more work
within the bank, as well. So you just can't
think about projects done or milestones here. You have to think
about what's next. REG WILLIAMS: So
I think that's us. Thank you. We could do a couple of
questions now, or we can-- we'll hang around at
the side to have a chat. Any burning ones that
you want to raise? Yeah, at the front, here. Oh, sorry. Excuse me. Just one minute. Yes, please. AUDIENCE: So were
there any questions about concentration risk when
you spoke with the regulators? RANA BHATTACHARYA:
I think for us, because we were looking at
Google, which was, at that point in time, slightly different
from the other banks we
were looking at-- [INAUDIBLE] there were discussions. But the focus was, because
we're going to Google, how do we see it operating? Obviously, as part of various
discussions-- and internally, we have to think about lock
in, et cetera, et cetera. But Reg talked about a
good point around Anthos and the ability to-- the strategy of Google
to be able to move between different providers
or move back into on prem. So that was interesting. But at the point
we were contracting, Anthos wasn't really an option. So we looked at,
basically, an option of how we can move away
from Google if we had to. But that's because we needed
to have a documented exit strategy. REG WILLIAMS: And there was one
question from the lady here. Yes, please. AUDIENCE: This was
absolutely fascinating and really, really
useful for me. I work for a
financial institution, one of the incumbents. And it's extremely difficult
to get them to actually move toward being cloud native. It would be very
interesting to understand how you worked with
your executive board to get that buy-in to
move towards this vision. RANA BHATTACHARYA: So
it's quite interesting, because I was on a FS
panel yesterday at Google, and the same question came up. So I think it was
part of my job, as one of the executives
of the company on ExCo to help educate my colleagues
about why we're doing it and how we're going to do it. So I did a number of teaching
sessions at ExCo which talked
different flavors of Cloud, et cetera, and the benefits,
and some of the drawbacks. Similarly, we did that on
DevOps and SREs, et cetera. We talked about
operating model changes. And that was to get buy-in. So when we talked about
discussions with a regulator, those sort of
discussions are led by one of my colleagues,
Chris Sparks, who's our chief risk officer. So he and I went to
see the regulators with members of our team. But I needed to be confident
that he was happy-- yeah, probably not
the right word-- but he understood what
we were trying to do, and we had his
approval in terms of, he understood we're doing
this in a very safe way. We understood the risks. We've got controls over any
risks, the mitigations, et cetera. And one of the differentiations,
I think, at Atom is, I didn't have to go
through many hurdles. It's got a flat
organization structure. And we also had the sponsorship
of my boss, the CEO, to say, yeah, we're doing
this, and the board as well, because right at the
onset of Atom, it was about, we wanted to create a technology
organization that would give us the right capabilities to
support our customers really well. So it all fit together. But I think the education
and communication is key in terms of making sure
all the executives of the bank understand why you
want to do this and how you're going to do it. And keep reminding them, because
it's not a quick journey. REG WILLIAMS: If you're not
shutting down a data center, it is somewhat a leap of faith. And on that basis, it's got to
be sponsorship from the top, not trying to convince those
in the organization who have minded otherwise. Or at least-- clearly, if
that's your circumstances, I don't want to sound
a little hopeless. But it's much harder to
do-- to convince people when the business case depends
on whether you're actually retiring a data center or not. Other than that, you
can make the case. You can do some of
the things I did. They're a bit esoteric,
but you can make them. But it's not as clear. So you need sponsorship. RANA BHATTACHARYA:
Because a lot of it, for us, was around
speed and agility and the cost benefits of that. I think we're done. REG WILLIAMS: Are we all
right to take another one? No, we're out. So we will be at the side. And look forward to
talking to you then. Thank you. RANA BHATTACHARYA:
Thank you, guys. [APPLAUSE] REG WILLIAMS: Thank you. [MUSIC PLAYING]