Building Streaming Microservices with Apache Kafka - Tim Berglund

Captions
What time of day is it? It's morning. OK — I was about to say good morning, because it still felt like morning, but this is only my second day in Europe, so the time of day that it feels like to me is a moving target. You don't want to trust me on that question. But it is in fact morning, so good morning, and thank you for being here.

On the schedule — I don't know if it's been updated, but it may say that this talk is being delivered by a guy named Ben Stopford. Let me tell you a story about Ben Stopford: Ben is about to be the proud father of twins, and a travel schedule that he thought was prudent some months ago turned out not to be prudent, so late in the game he put a ban on travel. So today, the part of Ben Stopford will be played by Tim Berglund. If you saw that in the schedule, I want to make sure that's clear, because otherwise this would be weird and surprising. Also — this is important — I've got another talk later in the day with a similar title and fairly similar content. The Ben version of this and the Tim version of this are different, but you're going to get a very Tim flavour of this talk that will feel a lot like the talk later, so I want your expectations to be set. If you're planning on being in the Tim track, that's great, I love you, but those two talks will be not identical, but similar. You have been forewarned.

So anyway, hi. This talk is material that's really exciting to me. Who was in the KSQL talk downstairs just now? OK, yeah. I love that talk, because there's live coding, and because KSQL is just so easy to get: you do it and everybody gets it right away, and it's immediately obvious that this is powerful and that you want it to be a part of your life. What's not immediately obvious — and I hinted at this in that talk — is that Kafka has an agenda for you. It wants you to build things in a certain way. It makes you think it's like ActiveMQ but bigger — "I can scale out," and everybody wants to scale out, right? If you can do it on 20 servers, that's cooler than one. That's fine, but Kafka has an architectural agenda for you. It has designs on you, and it wants you to build systems in a different way. It's not just a big pipe, and this talk and the talk this afternoon are really getting at that: how is it that I'm supposed to build systems, and what does Kafka want out of me?

Now, there is a book that covers this. It's called Designing Event-Driven Systems — you can probably just google that — it's by Ben, and it is dense. Let me tell you: it's not long, but you may want to read it twice, and that's not because it's difficult or poorly written. I have read it; it's very well written. But the ideas in it are fairly meaty, and I think this stuff matters a lot, so I recommend the book as a follow-up.

Now, what have we got to say here? There are two things that come together. One is what we might call event-driven architectures, sometimes event sourcing, and this is thinking that happens in the domain-driven design community. I am something of an outsider to that community — I'm not a DDD guy; it's literally the truth that some of my best friends are domain-driven design people, but I'm really not one of them — but certainly we're aware that event sourcing is a thing that comes from the domain-driven design world. On the other side of that, there is this notion of stream processing.
These two things are closely linked, but they are optimizing for different goals. Event-driven architectures are thinking about: how can I deal with complex domain logic? Stream processing is: how can I deal with event streams at internet scale, with lots of servers — because that's better than few — with just big systems. Those are different things, but we're going to talk about how they come together; that's what this talk is about. And there are things that get big; there are things with complex domains that get pretty big. We have Netflix — the trouble with this slide is that you have to update it all the time, because the numbers just keep getting bigger, it's crazy — but as of fairly recently: 2.2 trillion messages per day, 400 microservices, and large clusters. 200 brokers is a big Kafka cluster; you don't see many clusters that big; 20 is more typical; 200 is huge. ING, the Dutch bank — you can tell because it's orange — a billion messages a day, 20,000 messages per second. Really, really big numbers.

So these things come together in events, and this talk is really about events: what they are, and, once you get that idea into your mind, how you build systems based on events. There are two roles that events play, two things that they do. One is notification. (When that slide came up, you heard the sound, didn't you? A little sound. That slide is triggering to me. Anyway.) Events serve as notification, and that is a temporally distinct indication that a thing has happened. But they can also carry state with them. So they are notification and they are data: temporally distinct and information-rich. Events have those two roles, and I'll come back to this topic a few times. There's some processing that happens in there to come up with the notification and to come up with the data, and the question is what sort of system you build to deal with these two things — notification and data replication — to deal with events.

Here's the canonical streaming platform architecture diagram. There's a lot to unpack in it. There's the suggestion that there's some large community of devices creating events that are being ingested into Kafka. If that's all you knew about Kafka, that would be the old-school big-giant-pipe view: I can ingest all these events and put them somewhere. That was the Kafka of five years ago — there are all these events, let me put them somewhere — and where is that somewhere? Well, that's an HDFS cluster, right? Quick, write them into HDFS so we can do something with them. That's not modern Kafka. There are other components in this diagram that we'll dig into, but this is the canonical streaming platform system that we're going to work with.

Let me give you an idea of what I mean by a streaming pipeline, with a fairly simple example. Take those mobile apps: there are a couple of kinds of events we're capturing, a couple of kinds of things we want to think about. One is apps being opened: I open the app, and it posts to some endpoint a little blob of JSON saying this person opened the app at this time. Also, when the app crashes, usually we can get some kind of help posting a notification — when the app restarts after it crashes, it can know that it crashed and say, hey, here's a crash report, and it posts that data to some other endpoint. So it produces those two kinds of events: apps opened and apps crashed.
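The talk doesn't show these payloads, but to make the example concrete, the two event types might look something like this — every field name here is a guess, not something from the slides:

    { "event": "app_opened",  "application_id": "com.example.mail", "user_id": "u-1234", "timestamp": "2018-07-17T09:04:11Z" }
    { "event": "app_crashed", "application_id": "com.example.mail", "user_id": "u-1234", "timestamp": "2018-07-17T09:05:02Z", "stack_trace": "..." }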
Now we can group those and window them, and come up with crashes per day and app opens per day. And if I have which apps get opened how much per day, and which apps crash how many times per day, I can join those two things and come up with a collection of applications that I will label unstable. This is a streaming pipeline, because the inputs are events — they're not database tables. I don't have to take events and write them into a table; we want these to be events, stored in Kafka and processed as streaming data. If you were in the KSQL talk, you're already ahead of the game: you can probably imagine what the KSQL would look like. There are other ways to skin that cat, which we'll talk about, but that's the basics of what I mean by a streaming pipeline.

All right, let's keep looking at this, and let's dig into this piece right here. In the event that you're not super up on Kafka and you weren't in the last talk, let me give you a little bit of an idea of what's going on inside Kafka. The fundamental abstraction inside Kafka is a log. When messages are produced, they're written to the end of the log — always to the end of the log — and they're immutable. It's just like an application log: you have some API that says, hey, let me produce this message, and there's no notion in that API of where it goes, because it always goes at the end of the file. You don't have to think about where it gets written; it gets appended to the end. And there's also no support, in any of the dozens of Java logging APIs — I think we're up to dozens now — for editing a log. Because what are you, if you're editing a log? A criminal, probably, or at least a conspirator; you're probably trying to cover something up. You don't edit logs. So these messages are immutable: once they're written, they stay there, and that enables all kinds of amazing functionality that we'll get to by the end.

There can be multiple readers of a log — or, in Kafka, what we call multiple consumers — and each consumer has its own offset. Fred and Sally and George here are independent applications doing different kinds of computations over the events in the log, and their offsets are all tracked independently. And you can rewind — that is, you can seek to a particular offset. The API allows you to say, hey, let's go back to this numerical offset, or this timestamp, and begin consuming messages from there. Otherwise, by default, when you start up as a brand-new consumer, you're going to get the latest messages — or you can say you'd like to start at the earliest message and reprocess all of history. Those are your options. But you don't really query a log; you just consume it, one message at a time. That, of course, would be a terrible way to live if it were the only abstraction you had — all you could do is read messages one after another — so we're going to have to do better than that. But this is a great building block, and turning that building block into a useful system is really what the rest of the talk is all about.
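As a concrete illustration of that consume-and-rewind model, here's a minimal consumer sketch; the topic and group names are made up for the example:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RewindingConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "crash-report-reader");         // hypothetical group
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("apps-crashed"));       // hypothetical topic
                consumer.poll(Duration.ZERO);                      // join the group, get partitions assigned
                consumer.seekToBeginning(consumer.assignment());   // rewind: reprocess all retained history
                while (true) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                        System.out.printf("offset %d: %s%n", rec.offset(), rec.value());
                    }
                }
            }
        }
    }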
Let me give you just a little bit more Kafka smarts. That gray box is really a collection of machines called brokers, and those brokers work together to be a Kafka cluster. They maintain a collection of what are called topics — and pretty much every other messaging system in the world calls them topics too, so you probably know what topics are: named queues of messages. But importantly, in Kafka they can be partitioned: I can take a single topic, split it up into pieces, and distribute those pieces over multiple brokers, so I'm able to scale out my topic management. Internally, Kafka handles replication of those partitions, and consistency between replicas, and all the stuff that comes up in being a distributed system — which is all terrible, and you're glad you don't have to write it. It's all there, and for our purposes we're content to literally wave hands, say that stuff works, and not worry about it. And of course that gives us things like fault tolerance: because there's replication, I can lose a broker and still have the data elsewhere. The same goes for my consuming services on the right there: if one of those goes down, Kafka gives me the ability to make them fault tolerant too. If one of my consumer instances dies, the work it was doing — the partitions it was processing — can fail over to one of the other consumers. All those basic building blocks are there, and that's nice, because this sort of snuck up on us in the last five or six years: everybody is all of a sudden a distributed systems developer. We're all supposed to be building these elastically scalable, fault-tolerant systems, and that's super hard, but Kafka makes a lot of it easy, through mechanisms I'll ask you to take on faith — we can talk about them this afternoon if you want to hear more.

Now, what did we zoom in on here? Basically, the consuming services. The brokers are just doing messaging; that Kafka cluster component is just doing pub/sub and storage. I was talking to somebody on the way up who said, hey, we want to build something where we don't ever get rid of data — we want storage to be infinite. I said: that's fine. Just partition, plan, buy disks, and scale out, and Kafka will keep it forever. It does not have to delete data. By default, Kafka retains data for seven days, but you're able to configure that, and you can make it infinity days if you want. So that's totally cool: Kafka is doing storage, and it's doing pub/sub.
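For instance, here's one way that "infinity days" might be configured when creating a topic with the Java AdminClient — a retention.ms of -1 means retain forever; the topic name and sizing are just for illustration:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class ForeverTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                NewTopic orders = new NewTopic("orders", 5, (short) 3)   // 5 partitions, 3 replicas
                        .configs(Map.of("retention.ms", "-1"));          // -1 = never delete
                admin.createTopics(List.of(orders)).all().get();
            }
        }
    }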
Those brokers never do any computation on data, though; that is the province of things like KSQL and Kafka Streams. Kafka Streams is a Java API — just a library that you code against, as long as you're writing in Java. If you're not writing Java, it's not super exciting, but if you are, it's actually very exciting. It's a really cool API that lets your application — whatever your application is, a Spring Boot thing, whatever framework you want to use — declare the Kafka Streams API as a dependency. It's a part of Apache Kafka, so it's completely open source, and you can build these sophisticated stream processing topologies that you deploy with your app. That code stays with your app; it's not off in some other processing cluster somewhere else; it's a part of your application. So mentally, the stream processing and whatever your app does are one thing — one program you're thinking about — and in terms of deployment, that's also one program that you deploy, and the clustering of that program gets done for you, again, by the consumer group machinery.

So, Kafka Streams is a Java API. How many Java people here — can I see hands? Yeah, hi, you're my people. And this is DevOps, I suppose — how many definitely-for-sure-not-Java people? All right, I salute your bravery, putting your hand up at Devoxx. Now, I'm going to be showing you examples in KSQL — KSQL is much, much easier to get — but I encourage you, fellow Java developers, to look at Kafka Streams: for the microservices case, if you're a Java developer in a Java shop already, there are lots of reasons to go the Kafka Streams route. So I encourage you to check that out; there are resources online on the Apache Kafka website, with some nice tutorial videos by this guy I know, who can tell you a little bit more about it.

So anyway, if you wanted to do this — remember I said we have apps opened; those are just raw application-opening events, and we want to turn that into apps opened per day — we would write something like the KSQL query below. We create a table called opened_per_day from a select statement. Let's walk through it a line at a time. We select the application ID and a count — apparently the schema of that apps_opened stream has at least an application ID, and maybe a time, I don't know, but at least an application ID — grouping by application ID, within one-day windows, so we basically reset the count every day. And we make this a persistent query, so it runs in the background in the KSQL engine, producing results into a topic called opened_per_day — the results of this query get spit out into that topic. KSQL just runs in its engine, doing its little thing at scale: taking these events in, processing them, managing state, handling failover, doing all the stuff that KSQL does for you, and producing the results into the output topic, which was called opened_per_day. That's roughly stream processing.
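The query itself doesn't survive in the captions, but reconstructed from that walkthrough it would look roughly like this — the stream, column, and table names are inferred from the talk, and newer ksqlDB versions would also want an EMIT CHANGES clause:

    CREATE TABLE opened_per_day AS
      SELECT application_id, COUNT(*)
      FROM apps_opened
      WINDOW TUMBLING (SIZE 1 DAY)
      GROUP BY application_id;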
Now let's look a little bit at event-driven architectures. This is a subtle point, and I think it's a thing event-driven architectures are trying to get you to. It also comes up if you get people talking about microservices — not merely in terms of the buzzword, but in terms of the human and organizational angle — once you get microservices into your blood. When I say we build ecosystems, what I mean is this: in a real-life ecosystem of living things, there are a bunch of competitors for resources, and they more or less balance each other out. Sometimes new things enter an ecosystem, and sometimes that's very bad — like rabbits that don't have enough predators and take over Australia, or this really terrible kind of weed I just read about that's growing in the southern United States, that has spines on it and will give you third-degree burns on your skin. Wow, that's great. It's not poison ivy; it's much worse than poison ivy — poison ivy would be like hand lotion compared to this thing. So yeah, sometimes bad things come into ecosystems — and we've all written that code, right? Certainly I have. But also, the rabbit that lives in my yard is not aware of what kind of weeds are growing in the field next to my yard. It doesn't think about that. The rabbit just does its thing, and if something changes over here, maybe the rabbit's behavior will change — fine — but the rabbit doesn't coordinate with the weeds. As an ecosystem, things happen and they balance out, but you don't have to worry about centralized management; it just kind of works. You could screw it up — it's not like you can't break ecosystems; we do that all the time — but there is no top-down control over it. And if you're building microservices, you're building, ideally, an ecosystem where you don't have top-down control over everything, but people are able to stand up new services, maybe without everybody knowing. Maybe there's somebody who knows, but maybe you're the developer of one set of services, and somebody else stands up another set that's interoperating with your data, and you don't even know it. You're the rabbit that doesn't know that weed is growing over there yet. Maybe that weed tastes good and you'll figure it out later, but nobody asks you. That's kind of what I mean by ecosystem.

So here's a diagram of some services. That font is a little small — I doubt you can read it in the back — but let me point things out. There's a UI, and there's a web server that the web front end talks to, and that web server issues requests to a payment service, an orders service, a shipping service, a stock service, and a customer service. Now, I could have put this diagram on a slide 15 years ago in an SOA presentation. It could be an event-driven architecture presentation; it could be a microservices presentation. It's a very generic kind of thing — and it has actually, for that whole time period, been somewhat difficult to build. It's easy to talk about breaking things into services; it's hard to do with data. Let me tell you what I mean by that. If you look at the things I'm dealing with here — customers, orders, and stock, or catalog — most services are not interested in just one part of that. Take the order-processing service: does it just care about orders? No — it cares about catalog, and it cares about customers. Everybody kind of needs a little bit of everything. That was easy back when we could have one database: when we built a monolith, we had one database, and all the modules in our monolith could just do whatever they needed to do with the data. Most of us have reached the understanding that that's probably a bad way to build large systems, and we're trying to stop. So when we break things into services, we hit this unfortunate realization that most of them share — as the slide says — the same set of core facts. They want to share data, and we have to come up with a way of dealing with that.
Now, recall what I said before: events wear these two hats. Events are notifications, and events are state — they are replication of data. So let's consider this simplified subset of our system: we're going to buy an iPad. We have the order service, the shipping service, and the customer service. We've broken our monolith down into microservices, because we went to a talk that said that was a good idea — and the talk wasn't wrong; I think it is a good idea — and we're doing it by integrating the services through RPC calls. The web server tells the order service: hey, somebody placed an order, here's the form submission, go deal with it. The order service says: all right, let me validate that — looks good — now I need to tell the shipping service to ship it, and so it makes a synchronous RPC call to the shipping service. The shipping service says: well, I need to make sure we have the things, and I need to know where to send them, so I'll go ask the customer service for the current address — and it brings the address back from the customer service. Those are all synchronous calls, and that works. I don't mean to set that up as too much of a strawman, because there are people who build microservice estates this way. It is a little bit finicky, in that it can be brittle: you can get failure cascades, and if you have one service that goes down, you have to have circuit breakers and things to make sure your world doesn't explode. So obviously a guy who's here to talk about Kafka is going to say something negative about this, but my point is that I don't want to be sloppy in that negativity: there are people who do this, and do it successfully. But I'd like to show you a more excellent way, and that's this.

Let's convert this a piece at a time. The order now comes into the order service, and that order service has local state in it — a local representation of orders that it can look up quickly if it needs to. But when it creates a new order — validates that order, creates it — it now publishes a message to a Kafka topic, saying: here's a new, valid order. The shipping service is just sitting around waiting for new valid orders to come in on that topic, and when it sees one, it consumes it. It's unfortunately still synchronously talking to the customer service — it'll go talk to that customer service and get the customer data — but now the orders and shipping services are decoupled. They're no longer synchronously coupled; they're asynchronous through the Kafka topic. And here's the nice thing about the multiple-consumers property: clearly the shipping service is an interested consumer of validated orders, but who else might be? I don't know — maybe you're writing a new email campaign and you want to send mail to people based on some sort of analytics you run over orders. Well, now you're a new consumer; you can stand up a new service. This is what I mean by ecosystem: the order service doesn't know who's going to read those orders, and doesn't need to care. You just get to innovate; things get to grow in your garden — new services. For example, a new repricing service might stand up and say: well, let me take a look at validated orders — oh hey, this is a special order, I need to tweak its pricing, because there's such-and-such a volume discount, or something about the customer. We get this ability, as I said, for new services to grow in the ecosystem.
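To make that handoff concrete, here's a minimal sketch of the order service's publish step; the topic name, the key choice, and the orderJson payload are assumptions, not from the talk:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class OrderPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            String orderId = "order-1001";   // hypothetical order
            String orderJson = "{\"orderId\":\"order-1001\",\"item\":\"iPad\",\"status\":\"VALIDATED\"}";

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Key by order id so all events for one order land in the same partition.
                producer.send(new ProducerRecord<>("validated-orders", orderId, orderJson));
            }
        }
    }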
Now let's talk about state a little more. As we do this, we encounter that difficult reality that many of our services are, as I said before, dealing with the same set of core facts — or, to put that less abstractly, the shipping service needs customers. So let's rewrite the customer service. Assume it has some REST interface on its outer surface that allows the web interface to change customers — customers editing their address or their name or whatever — and every time the customer service is aware that a customer has changed, it publishes that changed record to a Kafka topic. Now the shipping service, in addition to subscribing to validated orders, is going to subscribe to updated customers, and it's going to keep its own local materialized view of that customer data. The canonical location — the place that has primary responsibility for customer data — is the customer service, but I'm saying: no, it's okay, make a copy of that data and keep it inside the shipping service. Which at first blush sounds awful, because this is mutable data and now I have two copies of it — we don't want that at all. But the fact is: yes, customers are mutable, but the system of record is this topic in Kafka — the customers topic — and all of the events in that topic are immutable. So I am perfectly safe consuming that topic in whatever services I want — say, the shipping service — and making what looks like a mutable database of customers, because that mutable database is just a materialized view over the same log of immutable events. It's okay; you can make as many local copies of that as you want, because the system of record is an immutable event stream. And that's basically what we do here: notification and data replication — we've now seen both of those things happen.

All right, let's talk about state inside that shipping service. If it's written in Java, it's probably implemented as a Kafka Streams application — we might have a little KSQL running somewhere that helps us — and specifically, what that lets us do is create a thing called a KTable. In the Kafka Streams API, a KTable is an in-memory representation of the data in a Kafka topic. What would otherwise be a difficult process — consuming a bunch of messages and coming up with some in-memory way of storing them, some hash table that scales to large sizes or something — becomes: hey, Kafka Streams, I'd like to create a KTable out of this topic. And now that KTable object gives me an API for querying things by key — an efficient lookup mechanism. The partitioning of the Kafka topic holding the customer data matters here — yes, we're still talking about customers. Let's say that's a five-partition topic; customers isn't going to get too huge — I mean, unless you're Amazon, it's probably not going to be that big — but you want the ability for it to grow a little. Well, I can now deploy up to five instances of my shipping service, and the Kafka cluster will automatically assign each of the five customer partitions to one of the instances. So just by writing this Kafka Streams application, I've gotten a little scalable thing for free, and building up that local materialized view — that database — doesn't imply I had to deploy a little Postgres alongside my server to get it. I can have that for free from the Kafka Streams API. Now, a note on that: if you don't want to use Kafka Streams for this — and there are reasons; maybe you have sophisticated querying needs, you need all kinds of secondary indexes on things, and basically you do need a database — that's totally okay. From the messages in the topic, you can materialize an actual relational database with tables in it, and have that be a part of your service. There's nothing in the world wrong with that.
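Here's a hedged sketch of that KTable idea using Kafka Streams interactive queries; the topic, store, and key names are made up, and the StoreQueryParameters lookup shown is the API in recent Streams versions (in real code you'd wait for the app to reach the RUNNING state before querying):

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StoreQueryParameters;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    public class CustomerView {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "shipping-service");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            // Materialize the customers topic as a local, queryable table.
            KTable<String, String> customers =
                builder.table("customers", Materialized.as("customers-store"));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();

            // Query the local materialized view by key.
            ReadOnlyKeyValueStore<String, String> store = streams.store(
                StoreQueryParameters.fromNameAndType("customers-store",
                    QueryableStoreTypes.keyValueStore()));
            System.out.println(store.get("customer-42"));   // hypothetical key
        }
    }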
Now I'll show you a little bit of Kafka Streams code — we saw some KSQL before — and we'll apply these ideas. Imagine orders and customers being the two topics that we're consuming. Customers we're going to turn into a table — we'll create a KTable out of customers — because fundamentally, customers are a table. I can represent them in a topic; I can have a changelog topic with all my updated customers in it; but the basic structure there is that it's a table. Orders, maybe, I represent as events. The Kafka Streams code is just this Java API: I can say, look, I've got a topic called orders out there — let me make a stream out of that, let me join it to the customers table, basically join these two topics together — and then I can transform the resulting key-value pairs in any way I want, and persist the result to a new stream called shipments. This is clearly very pseudocode-y, because the details of Kafka Streams are a whole talk on their own, but I wanted you to at least see that code, so you can know it's just Java.
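The slide's code isn't in the captions, but a runnable version of the orders-to-customers join it describes might look like this; the topic names, the String value types, and the assumption that order records are keyed by customer id are all mine, not the talk's:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    public class ShipmentsTopology {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "shipments-builder");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> orders = builder.stream("orders");      // orders as an event stream
            KTable<String, String> customers = builder.table("customers");  // customers as a table

            orders
                // Assumes each order record is keyed by customer id, so the join lines up.
                .join(customers, (order, customer) -> order + " | ship to: " + customer)
                .to("shipments");                                            // persist to a new stream

            new KafkaStreams(builder.build(), props).start();
        }
    }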
I'll just digress for a moment: there is a bit of a learning curve to this. When I talk to teams that have gotten really good at Streams, there's always this sense of "hey, we got over the hump — and not many people have — and we're not going to stop now." They feel as if they have achieved a thing. It's not that hard, but there's certainly a curve, because it's a new kind of programming. Most of us — if you have a computer science degree, or formal training — took a class in how to do relational database design; or if you didn't take a class in it, you've done it anyway, and you've been educated in the school of life on how to do it. But we have not done that with streaming yet. So with the first Streams app you write, not only do you have to learn the API and its particulars and wrestle with the typing and all that kind of stuff, but the very concept of stream processing is kind of new for us.

Question? Yes — the question is: how is it that we have customers in that stream; how do those come to be? There's a topic called customers, and the customer service is producing updates into that topic when people edit their profiles or create new ones. And yeah — otherwise that join would not work. You need to make sure that to create an order, you've already created an account, so that the customer exists before the order exists. If it's an old account? Oh — if it's an old account, then you'd want a long retention period on that topic, or better yet, you'd want to make it a compacted topic, which is somewhat beyond our scope here. But the short story is that those messages don't have to get deleted: as I said, we can make Kafka store data indefinitely, and that's what you'd do. You're saying I've now got my entire database in Kafka? You do — and that's a good thing. More on that now.

Let's continue. What we've talked about so far is the log and the streaming engine, and we've touched briefly on the producer and the consumer. But there will also be databases in your system that are still relational databases, and all that data is not in Kafka yet. That's where the connectors come in. Kafka Connect is a system for getting legacy data into topics, and you find, once you start going down this path of trying to build an event-driven system, that you want more data to be in topics. You have a legacy database, and that's kind of a pain and hard to work with, and you would rather get it into topics. That's what those connectors are for: a standard way to get legacy data from outside the system inside.

All right, let's work through an example. We've got here a somewhat simplified version of what we were looking at before: there's an order service, and we have received orders, validated orders, and completed orders in Kafka. Now let's do a few things to this. Converting legacy databases to events using Kafka Connect: we might have, out here, this legacy stock database, and we'll use Kafka Connect to capture changes to that database and produce them into a topic called products. So now we've got data from a database we can't touch flowing into topics. This is Kafka Connect — and Connect is one of those frameworks you'd write yourself if somebody hadn't written it for you: you'd come up with this need to capture stuff from text files, or capture stuff from relational databases, and you'd write that as a little framework. Connect is already there, being that framework for you, and there are a number of ways to do change data capture from relational databases with Kafka Connect. So you get that legacy data into events — that's an important step.

Now that those events are in Kafka — all of my order events are in Kafka, all my products are there — I can say: well, it's okay for me to make this my single source of truth. Sure, there's this legacy product database, but I'm going to consider the version in Kafka to be the truth. And now I can write reporting services — which are themselves what? KSQL queries, Streams applications, consumer applications, whatever — because all my data is in topics, I can now do authoritative reporting on the contents of my cluster. I said this before, but just to make sure it's clear: inside each service, I have the freedom to create a materialized view, either using Kafka Streams, or using Postgres, or maybe I have some crazy thing that I need to use Cassandra for, I don't know. You always have permission to create a materialized view. Why? Because you have immutable events stored in Kafka, so every service knows it can always blow away its local cache of the data and rebuild it, and it's okay — you'll get a proper view of that data. If you use Kafka Streams for that, you get a few things for free: basically, free state management of the tables you create. If one of the instances of your service goes down — it's got this table of stock it's managing through the Kafka Streams API, and that service dies — you just spin up a new one to replace it. Kafka Streams will have persisted that local state to an internal topic inside the Kafka cluster, so when you stand up the replacement node, you get the state back for free. Or, as I said, you can just use a database. What you end up doing — and I'm going to concentrate more on this concept in my talk this afternoon — is that you have kind of created a database turned inside out. In other words: at the heart of every database there's always a log — there's always a commit log somewhere — and here, instead of one big database with all our data in it, we said, let's make one big commit log, and build a bunch of services, each with little copies of the data it needs, hanging off of that commit log. More on that idea this afternoon; I'm going to skip forward in the interest of time.

We also have a transactional API available in Kafka now, which you can program directly at the producer and consumer level. As of Kafka 0.11, when exactly-once semantics were released, that took the form of a couple of things, one of which is a transactional API: if I'm dealing purely with Kafka, I have the ability to wrap writes to multiple partitions inside a transaction. Streams does a lot of this for me for free, and if you're using the low-level producer and consumer APIs directly, you also have access to that transactional API.
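Here's a minimal sketch of that transactional producer API — the topic names and payloads are assumptions, but the calls themselves (initTransactions, beginTransaction, and so on) are the real producer API:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.KafkaException;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TransactionalWriter {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("transactional.id", "order-service-tx-1");  // stable id enables zombie fencing
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.initTransactions();
            try {
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("validated-orders", "order-1001", "{...}"));
                producer.send(new ProducerRecord<>("order-audit", "order-1001", "{...}"));
                producer.commitTransaction();   // both writes become visible atomically
            } catch (KafkaException e) {
                producer.abortTransaction();    // read_committed consumers see neither write
            } finally {
                producer.close();
            }
        }
    }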
And this kind of framework gives you that ecosystem I mentioned before: a bunch of independent services, all consuming immutable events in Kafka topics. Each service that you deploy doesn't need to know about the new services that might grow up on the data it's producing; you're free simply to build them, and you're truly decoupled. So, like I said, this whole event-driven architecture thing — the concepts we get from domain-driven design — and the stream processing thing are optimizing for different goals: one is trying to help you handle complex domains, the other is trying to help you do big things. But they are linked.

So what do you do? Well, as always: start simple and evolve. Build a service, and rather than having that service make synchronous calls, have the first thing that you refactor broadcast events — publish them out to a Kafka topic. Retain them in a log, so you have the ability to rewind history and reprocess things. Build new services based on that log; build reporting based on that log. Then evolve that event stream with new services, using Kafka Streams to do whatever sort of computation you need in those new services — you've got that rich API available to you. And when you need to query — because of course you can't query a log; that's a terrible experience — build up that local materialized view using whatever database functionality you need: if it's a KTable in Streams, great; if it's a Cassandra cluster because something really got big, also great. Whatever your service needs, you can build it.

All right — if you want more, that's a good slide to take a picture of: links to code, to Confluent Cloud, and to the book, which I solemnly charge you all to go read this weekend, or next week, or whenever you can get to it. And like I said, because of the Ben rescheduling thing, this afternoon's talk that I'm giving has some overlap with this one. If you come, I'll try to tell different jokes. They're not identical, but they are similar — so you can plan your afternoon accordingly. Thanks for being here; have a good one. [Applause]
Info
Channel: Devoxx
Views: 90,075
Rating: 4.9129128 out of 5
Keywords: DVXPL18
Id: Hlb-Ss3q3as
Length: 43min 51sec (2631 seconds)
Published: Tue Jul 17 2018