LangGraph Simplified: Master Custom AI Agent Creation

Captions
I recorded this video with the ambitious goal of helping you master the LangGraph framework. I know there's a lot of interest in developing agents with LangGraph, but the technical difficulties around understanding the framework have deterred many people from proceeding. I hope to alleviate that problem for you by first explaining the philosophy and key concepts behind the LangGraph framework, and then concretizing that knowledge by demonstrating a custom web search agent that I've developed with LangGraph. By the end of this video you'll be comfortable developing your own agents with the framework. If you're joining me on this journey, the code for the custom web search agent is in a GitHub repo linked in the description of this video, so you can go and pull it down if you want. If you enjoy this content, give the video a thumbs up, share, comment, and subscribe to the channel for more large language model and AI engineering content.

Let's talk about some of the key concepts in LangGraph. These concepts are really important to understand if you want to develop your own agent workflows, and I think a lot of people get confused by them. The first is the state and the second is the graph. I'll go into more detail about what each one is, but for now hold in mind that there are two key abstractions you should be aware of: a state and a graph. The graph reads from and writes to the state: it can read what's written in the state, and it can also write things to it. The purpose of the state is to keep track of all of the activity of the agent system; you can think of it as a record, which is why I've outlined it here like a document. It's a record of the agent activity that has happened in your workflow, the graph can read from it and write to it, and we'll talk about the specifics of how that happens in the next few slides.

So what's a graph? If you've studied mathematics you'll already know, but for everyone else, let me break it down. A graph consists of two things: edges and nodes. In LangGraph, nodes can be agents or tools. Agents are essentially powered by large language models, while tools can be normal Python functions (or functions in whatever language you're working in) that do something specific, like a web search or sending an email; tools are usually deterministic, whereas agents can take on more flexible tasks. So your nodes comprise tools and agents, and your edges link the nodes together. You can see that outlined here: we have the planner, which is an agent, and it has an edge linking it to the web search tool. That edge determines the sequence of events in your agent workflow: the planner executes whatever task it has been set first, then the web search tool executes its task, and then another edge connects the web search to the researcher, which executes its task in turn.
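To make those two abstractions concrete, here is a minimal, hedged sketch of a LangGraph graph: a tiny state, two nodes, and the edges between them. The node names and state keys are invented for illustration; StateGraph, add_node, add_edge, END, and compile are the standard LangGraph API.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

# The state: the record the graph reads from and writes to.
class State(TypedDict):
    question: str
    answer: str

# Nodes are just callables (an agent or a tool) that take the state
# and return the keys they want to update.
def planner(state: State) -> dict:
    return {"answer": f"plan for: {state['question']}"}

def web_search(state: State) -> dict:
    return {"answer": state["answer"] + " -> search results"}

graph = StateGraph(State)
graph.add_node("planner", planner)
graph.add_node("web_search", web_search)
graph.set_entry_point("planner")          # where execution starts
graph.add_edge("planner", "web_search")   # planner runs first, then web_search
graph.add_edge("web_search", END)         # END marks the exit of the workflow
workflow = graph.compile()

print(workflow.invoke({"question": "When did the capital of Nigeria change?", "answer": ""}))
```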
One of the key concepts around graphs in LangGraph is that you can have feedback loops: an edge can feed back to an earlier node, and these are handled with conditional edges. For example, a reviewer might review an output from the reporter and then decide: either the output goes forward as a final report, or it goes back to the researcher because we need to get more information from the web search, or it goes back to the planner because we need to adjust our approach. These are conditional edges, and you can set them up in LangGraph. The edge is traversed based on some kind of condition being satisfied, and the conditions can be flexible: they can be programmatic, where it's just an if-else statement, or they can be determined by an agent, i.e. a large language model. In this case the reviewer reads an output, decides whether it passes quality assurance, and sends it back to whichever agent it believes should make the amendments.

So that's the concept of a graph: graphs consist of nodes, which can be agents or tools, and edges, which determine the sequence of events. There are two types of edges: normal deterministic edges and conditional edges, and the conditional edges are traversed based on some criteria you set; that criteria can be programmatic, like if-else statements, or it can be determined by a large language model, which is what I do here.

The web search agent I've built is simple, and those of you who have seen my other videos will be familiar with the approach. You have a planner; the planner comes up with search queries, which are inputs to the web search tool. The web search tool takes a search query and returns a search engine results page, which has the URLs and descriptions of what's on those pages, much like what you'd see when you search Google yourself. After that, the researcher takes the search engine results page and determines the best page to use to answer the query. That is passed to the scraping tool, which scrapes the content from that URL. The content and the source (the URL) are passed to the reporter, and the reporter delivers a response based on the information from the scraping tool. That response to the query is then passed to the reviewer. If the reviewer passes the response, it's fired off as the final report and the graph is complete; if the reviewer doesn't pass the response, it rejects it and sends it back to the researcher, the planner, or the reporter, depending on what needs to change.

Just as I was explaining the graph, I realised I'd actually missed a node, so let me explain what the reviewer does again. The reviewer receives a draft report and decides whether to pass it or not. If it doesn't pass the report for any reason, it gives some feedback as to why, and then decides which agent the execution should go back to in order to get the response it wants: maybe back to the planner, so we do some more research, run another web search, and perhaps change our approach slightly; maybe the researcher just chose a bad source and needs to pick another one; or maybe the reporter hasn't included a citation.
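As a hedged illustration of conditional edges and feedback loops, here is a minimal sketch with a purely programmatic (if/else) router. The node names and the quality_ok state key are invented for the example; the project described in this video routes with a large language model instead, as covered later.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    draft: str
    quality_ok: bool   # hypothetical flag the reviewer sets

def reviewer(state: State) -> dict:
    # Stand-in for the real reviewer agent: pretend short drafts fail QA.
    return {"quality_ok": len(state["draft"]) > 40}

def researcher(state: State) -> dict:
    return {"draft": state["draft"] + " + more research"}

def final_report(state: State) -> dict:
    return {"draft": "FINAL: " + state["draft"]}

def route_after_review(state: State) -> str:
    # A conditional edge is driven by a function that reads the state and
    # returns the name of the node that should run next.
    return "final_report" if state["quality_ok"] else "researcher"

graph = StateGraph(State)
graph.add_node("reviewer", reviewer)
graph.add_node("researcher", researcher)
graph.add_node("final_report", final_report)
graph.set_entry_point("reviewer")
graph.add_conditional_edges("reviewer", route_after_review,
                            {"final_report": "final_report", "researcher": "researcher"})
graph.add_edge("researcher", "reviewer")   # the feedback loop back to the reviewer
graph.add_edge("final_report", END)
workflow = graph.compile()
```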
Ultimately, if the reviewer does pass the report, it goes to a final report node, which is delivered to the user, and then the workflow exits. That's the approach, and as I said, some of you will already be familiar with it if you've followed my other videos.

Let's go on to understand what the state does. I mentioned before that the state is like a record of the activities that have happened in the agent workflow. Each node in the graph represents either an agent or a tool, and every time a node is executed - every time an agent performs an activity or a tool is used - the output is written to the state. The state is defined as a dictionary. So, remember we have a researcher: the researcher might pull some context from a Wikipedia page, for example, and that research is written to the state, and the web page it came from is also written to the state. The reporter might then write a report to deliver an output, and that's written to the state, and the reviewer's response can be written to the state as well. There is flexibility over how you write things to the state: you might want to overwrite what's already there, or you might just want to append whatever the agents or the tools have just produced.

The whole purpose of the state is to let you keep track of all of the activities of the system. For example, if we deliver a report and the reviewer doesn't pass it, we want to be able to read from the state the reason it wasn't passed. So when the reviewer sends the work back to the researcher, the researcher can pick up from the state what the reviewer said about the report, and that lets the researcher adjust its approach. The state is a way of keeping track of all the activity across the entire agent workflow. If you've used AutoGen before, AutoGen handles this in a crude way by just sharing everything with every single agent: it doesn't really have a concept of state, it just shares the entire context of the agent workflow with every agent, which is token intensive. With LangGraph you have much better control over what each agent in your workflow sees and what it doesn't see, which is less token intensive than sharing the entire context with every agent, and I think it makes the workflows more effective.
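Here is a hedged sketch of what that state-as-a-dictionary can look like in LangGraph, showing the two update behaviours just described: keys that get overwritten and keys that accumulate. The key names are invented for illustration, and operator.add is one standard way to say "append instead of overwrite" (the repo itself uses an add_messages helper for the same purpose, discussed later).

```python
import operator
from typing import Annotated, TypedDict

class AgentState(TypedDict):
    # Plain keys: each write from a node replaces the previous value.
    research_question: str
    selected_page_url: str

    # Keys annotated with a reducer: each write is appended to what is
    # already there, so you keep a running record across feedback loops.
    reporter_drafts: Annotated[list, operator.add]
    reviewer_feedback: Annotated[list, operator.add]

# A node only reads the keys it needs; e.g. the researcher can pick up just
# the reviewer's latest feedback instead of the whole workflow history.
def researcher(state: AgentState) -> dict:
    feedback = state["reviewer_feedback"][-1] if state["reviewer_feedback"] else ""
    # ... choose a better source based on that feedback ...
    return {"selected_page_url": "https://example.com/better-source"}
```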
Before we move on to looking at the Python code, I'd like to give you some tips for building your own agent workflow. My number one tip: do not dive straight into coding. With any agent workflow it's important to first understand the graph - how do you want your agents to communicate with each other? Start by mapping out your graph and make sure it's clear in your head, then map out your state: what pieces of information do you want recorded, and what information will your agents need in order to execute their tasks? Those are my key tips, and if you follow them and understand the concepts of state and graph, the rest is just syntax - and the project I'm sharing with you should help you with some of that syntax. That's it; hopefully that was simple enough. Let's concretize it by looking at the Python code for this web search agent I've developed with LangGraph.

Let's explore the code. You might be able to see from the file directory for this project on the left-hand side that there are quite a lot of scripts in here. I'm not going to run through the details of every script, because first of all I don't think that's useful, and second, we'd be here all day. You'll have the GitHub repo available to you, so you can explore it in your own time. What I am going to cover are the most important scripts and the key concepts in the code that you should be aware of when building your own LangGraph project. The most important scripts are: graph.py, which is where the agent graph is defined - we'll talk about how that's set up so you get a much better idea of how to construct your own graphs; tools.py, where the tools are defined - I won't go through every detail, because they're just basic tools (a web search tool and a scraper tool), but there are some key concepts about how tools are set up that you should know; agents.py, where I set up my agents, with some key concepts of its own; state.py, where I define the state - we'll talk through that; prompts.py, which ties into agents.py and is where I actually write the prompts that guide those agents - I like to keep those separate, it makes the project neater and easier to digest; and app.py, the front end where you interact with everything else, and where you set parameters like the maximum number of recursions or iterations over the workflow and the model you want to use.
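For orientation, here is a rough sketch of how a project like this might be laid out. The file names come from the walkthrough above, but the exact directory structure of the repo may differ, so treat this as a guess.

```python
# Hypothetical layout, inferred from the scripts mentioned in the video:
#
#   graph.py    - builds the LangGraph graph: nodes, edges, conditional edges, compile
#   agents.py   - agent node functions (planner, researcher, reporter, reviewer)
#   tools.py    - tool node functions (search engine results page tool, scraper tool)
#   state.py    - the shared state definition the graph reads from and writes to
#   prompts.py  - prompt templates that guide each agent
#   app.py      - command-line front end: choose the model, set the iteration limit,
#                 compile the workflow, and take user questions
```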
Let's start by looking at graph.py. graph.py is where you set up your agent workflow. There is a lot of boilerplate in here, but the first thing you want to do is define your graph, and this is made a lot easier if you already have your graph mapped out schematically, like the one I showed you earlier. Essentially you start by defining a graph object, and to define a graph object you need to have a state object defined; I'll show you how to define that state object after I go through the graph script. Once you have your graph object you can begin to add nodes to it. To add a node you give it a name - that's how you'll refer to it going forward - and then you assign to it either an agent or a tool. You can see here I've assigned the planner agent, and the planner agent takes the inputs that are important for it: it takes the state, because we need to read the research; it takes the research question, which is the user input; it takes feedback, which comes from the state; it takes the previous plans; and it takes the model too. So that's an agent node, and I've done the same thing for the researcher. You're just adding nodes to the graph: give each one a label, then attach either an agent or a tool. Then I add the reporter - and note this doesn't have to be in the order of operations of your graph yet, we define that later - and the reviewer, and you can see where those nodes are added. Now I'm adding the tools, because remember, graph nodes can be either agents or tools: we've got the search engine results page tool and the scraper tool I've created, and you can see those added here. Finally I add the final report node and the end node. The end node is just where all the operations finish; you define a start node and an end node, and you'll see how that works in a moment.

The next step is to define the edges in the graph. Remember, edges connect the nodes and define the sequence in which they operate. You set these on your graph object. You set your entry point - for me the entry point is the planner, so the planner receives the initial query from the user - and you set your finish point, which is the end node. I think it's always useful to have a separate end node that's just a finish point; mine doesn't do anything, it just returns the state, but it's a useful way to close out your graph. Then you add your edges, and the way you connect them is from left to right: the planner connects to the SERP tool (if you like, revisit the earlier diagram and compare it to the code), the SERP tool connects to the researcher, the researcher connects to the scraper tool, the scraper tool connects to the reporter, and the reporter connects to the reviewer. Then there's the conditional edge - remember, the conditional edge routes to other nodes based on a condition being satisfied.
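Here is a hedged sketch of what that graph-building step might look like for this workflow. The node names follow the diagram from the video, but the stand-in node functions (make_node) and the state definition are placeholders of mine, not the repo's actual code.

```python
import operator
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph

class AgentGraphState(TypedDict):
    messages: Annotated[list, operator.add]

# Stand-in node functions; in the project these live in agents.py and tools.py.
def make_node(name: str):
    return lambda state: {"messages": [f"{name} ran"]}

graph = StateGraph(AgentGraphState)      # the graph object needs the state definition
for name in ["planner", "serp_tool", "researcher", "scraper_tool",
             "reporter", "reviewer", "final_report", "end"]:
    graph.add_node(name, make_node(name))   # a label plus the agent or tool to run

graph.set_entry_point("planner")   # the planner receives the initial user query
graph.set_finish_point("end")      # a separate end node, just to close out the graph

# Deterministic edges, left to right.
graph.add_edge("planner", "serp_tool")
graph.add_edge("serp_tool", "researcher")
graph.add_edge("researcher", "scraper_tool")
graph.add_edge("scraper_tool", "reporter")
graph.add_edge("reporter", "reviewer")
graph.add_edge("final_report", "end")
# The conditional edge out of the reviewer is added next (see the following sketch).
```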
Well, for me the conditional edge starts at the reviewer, because you define the starting point of the edge first, and then you pass a function that decides which agent or node will continue the operation. I have a function called pass_review, and I'll show you quickly what it looks like. It uses a prompt that basically says: if the review agent says it's a pass, go to the final report; if it's not a pass, select the agent to go back to, which could be the planner, the researcher, or the reporter. Now, you don't have to do this with a large language model. I've chosen to partly out of laziness, but also to make things easier in case the results I get back from the reviewer aren't standardised. All this function is doing is taking the response from the reviewer and outputting which agent the graph should pass to next: it reads the reviewer's response and decides, okay, I need to go to the researcher, or I need to go to the final node. You can see that function being passed in when the conditional edge is added. Then finally I have the final report, which delivers to the end node, from which we exit the graph. All you need to do from there is compile your graph, and that's it: you've compiled your graph and you have your workflow.
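Continuing the sketch above, here is roughly what an LLM-driven router like that pass_review function could look like, together with the conditional edge and the compile step. The prompt text and routing targets are illustrative and the state access is a guess; the OpenAI chat-completions call and the add_conditional_edges / compile calls are real APIs.

```python
import json
from openai import OpenAI

client = OpenAI()
NEXT_NODES = ["planner", "researcher", "reporter", "final_report"]

def pass_review(state) -> str:
    """Read the reviewer's latest output from the state and pick the next node."""
    review = state["messages"][-1] if state["messages"] else ""
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                "Here is a review of a draft report:\n" + str(review) +
                "\nIf the review is a pass, choose 'final_report'. Otherwise choose the "
                "agent that should fix it, one of: " + ", ".join(NEXT_NODES) +
                '. Reply as JSON: {"next_node": "..."}'
            ),
        }],
    )
    choice = json.loads(resp.choices[0].message.content).get("next_node", "final_report")
    return choice if choice in NEXT_NODES else "final_report"

graph.add_conditional_edges("reviewer", pass_review,
                            {name: name for name in NEXT_NODES})
workflow = graph.compile()   # compile the graph and you have your workflow
```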
The next thing I should talk about is the state and how we define it, because remember, to set up your graph object you need a state defined, so let's move to the state tab. There is a class you can use to define the state object; I use this dictionary class, and all I'm doing is defining the keys of that dictionary and adding some constraints - some data validation - to those keys. So I want this one to be a list, and then there's this add_messages function. Remember I mentioned that you have control over how your state is updated: where it's add_messages, I'm appending the new messages or responses onto the old ones, so I'm building up a library of all the responses that have ever happened in this agent workflow. Remember, the workflow can be cyclic - there are feedback loops - so sometimes you might want to refer to something the workflow did several iterations ago, and that's why I append the responses rather than overwriting them. Depending on what you want to do, you may choose to overwrite instead. So that's how you define the agent graph state, and that state is read into the graph object. Once you have that, this function here is just a helper that helps me read the state in different scenarios; I'm not going to go into detail, you can have a look at it in your own time.

Let's move on to agents.py, because that is quite an important part of the project. The way you set up your agents is really key, and you need to be careful about what your agents return; I think this is where some people go wrong with LangGraph. When you're setting up agents in LangGraph, you must take the state as an input, and you must write to the state as an output. The reason you want to take the state as an input is that you'll want to read from it at some point: recall the animation I showed - the reporter may read from the state to understand what the reviewer has told it previously, so it can adjust its approach - but it will also need to write to the state. So each agent, and each node in your graph, must read from the state and write to the state, and how you define your agents is critical. In my agent here you can see where it writes to the state - you see the planner response and the message. The rest is pretty standard stuff if you've ever used the OpenAI API; it's just calling that, but when it gets a response, it writes that response to the state. It's important that your function takes the state as an input and also returns the state - I've found that my graph, the way I've set it up, doesn't work unless I return the state - so you must return the state as well. You want to read from the state because you might take actions based on certain keys, and then you write to the state and return it. I've done exactly the same thing for the researcher: you can see the state as an input, and then I write the response to the state, and that update happens according to how you've defined your state object - if you defined it to overwrite, it will overwrite; if you defined it to append, it will append. All my agents are defined here, and you can go through them in detail in your own time with the code in front of you.

The next important thing, while we're talking about reading and writing, is the tools, so let me bring up the tools - I have one open already, the basic scraper tool. The key thing for tools is that they also must read the state: you take the state in as an argument to your function, and you must also write to the state, because some agent down the line will be picking up what the tool has written. You can see here where I write to the state for the scraper - you see the scraper response being written. All of these other bits are just exception handling for different errors, but the most important thing to understand is that whatever tool you define should take the state in and write to the state at the end.
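As a hedged sketch of that pattern - read the state, do the work, write back to the state - here is roughly what an agent node and a tool node can look like. The state keys, prompt text, and function names are illustrative; the OpenAI call is standard chat-completions usage, and in LangGraph the dictionary a node returns is merged into the state (appended or overwritten depending on how the state was defined).

```python
import requests
from openai import OpenAI

client = OpenAI()

def planner_agent(state, model: str = "gpt-4o") -> dict:
    """Agent node: read the question and any feedback from the state, call the
    LLM, and write the plan back to the state."""
    question = state["research_question"]
    feedback = state["reviewer_feedback"][-1] if state.get("reviewer_feedback") else "none"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": f"Plan a web search for: {question}\nReviewer feedback: {feedback}"}],
    )
    # Returning a dict of state keys is how the node writes to the state.
    return {"planner_response": [resp.choices[0].message.content]}

def scraper_tool(state) -> dict:
    """Tool node: read the chosen URL from the state and write the scraped text back."""
    url = state["selected_page_url"]
    try:
        page = requests.get(url, timeout=10)
        content = page.text[:8000]          # crude truncation for the example
    except requests.RequestException as err:
        content = f"scrape failed: {err}"
    return {"scraper_response": [{"source": url, "content": content}]}
```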
Let's look at the prompts.py script. If you've watched any of my videos before, you'll recognise this approach: I like to keep my prompts in a separate script, because mixing the programming and the prompts gets confusing and you end up with really long scripts, so I keep the prompts separate and read them in wherever I need them. In prompts.py you have the prompts for the planner, the prompts for the researcher, and so on, and these are prompt templates, because we want the prompts to be dynamic: remember, there will be a different search engine results page each time, the feedback will change, and the previous selections will change. We have the reporter prompt template and the reviewer prompt template too. Some of these are best returned as JSON: for example, I've asked the planner to return a JSON object giving the search term, an overall strategy, and some additional information. I've found that useful because I can pick up the search term and use it directly in the search engine tool, which is less hassle. Something like the reporter is not returned as JSON; it's returned simply as a response formatted to provide citations, and we ask it to reference those citations within the text.

Navigating back to agents.py: if you need your agent to respond in JSON format, you actually set that on the model itself. I keep the model code separate, so if I navigate to the OpenAI models script, there are two model functions I've defined: one returns the standard text-generation response, and the other I've asked to return JSON. If you need your model to return JSON, you must define that when calling the OpenAI model, or whatever model you're using. Right now this project only integrates with OpenAI; I'll be looking to adapt it to work with Ollama and with your own hosted inference server - I'd usually go down the route of using vLLM as the inference server - so let me know in the comments if that's something you'd be interested in seeing and I'll make those adaptations to the script, but for now it only works with OpenAI.

Lastly, we can look at the app front end. It's very basic - not really a front end at all, it all works in the command prompt. You can set the model you want to work with, and you can set the number of iterations, so the run will time out after it hits that number; if you ask a really complex question it might time out - here it will take 14 iterations before it does - and you can set this to whatever number you want. You can use any of the OpenAI models here. I've found GPT-4o to be the most effective, especially on more complicated questions; GPT-3.5 Turbo can handle the simple questions; GPT-4 preview isn't worth using, because I think it's actually more expensive than GPT-4o and it's slower, so if you want GPT-4, just use GPT-4o, and otherwise use GPT-3.5 Turbo but don't expect great results - it can answer basic questions, but that's about it. When you execute, I've done all the setup here: I've defined the agent graph with all the nodes and edges, and all the front end does is read in the graph and compile the workflow.
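Two hedged sketches of those last pieces in one block: a JSON-mode model helper (the response_format flag is the real OpenAI chat-completions option; the function name is mine) and the kind of command-line loop app.py describes - compile the workflow, set an iteration limit, and keep taking questions until the user types exit. The state keys and the recursion-limit-as-iterations mapping are assumptions.

```python
import json
from openai import OpenAI

client = OpenAI()

def get_open_ai_json(prompt: str, model: str = "gpt-4o") -> dict:
    """Model helper that forces a JSON response (useful for the planner)."""
    resp = client.chat.completions.create(
        model=model,
        response_format={"type": "json_object"},   # ask the API for valid JSON
        messages=[{"role": "user", "content": prompt + "\nRespond in JSON."}],
    )
    return json.loads(resp.choices[0].message.content)

def run_app(workflow, iterations: int = 14) -> None:
    """Minimal command-line front end around a compiled LangGraph workflow."""
    print("Creating graph and compiling workflow... done.")
    while True:
        question = input("Enter a research question (or 'exit'): ")
        if question.strip().lower() == "exit":
            break
        final_state = workflow.invoke(
            {"research_question": question},
            {"recursion_limit": iterations},   # times out after this many steps
        )
        print(final_state.get("final_report", final_state))
```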
So when you initiate the app, you'll see 'creating graph' and 'compiling workflow', then 'graph and workflow created', and you're able to input your queries in the PowerShell or command prompt you're using. And that's it - you can go over the code in much more detail yourself, but hopefully I've concretized some of those abstractions for you, because I think that's where people struggle, and you'll see how it all works by playing around with it yourself, so I really implore you to do that if it's something you're interested in. Next I'm going to give you a little demo of how it all works together.

Let's demo this LangGraph web search agent. To get started, open up a command prompt or PowerShell - I'm working with Anaconda PowerShell - and get your environment set up by following the README in the GitHub repo. Then I run python -m app.app (the -m because we're running it as a module), and that starts by compiling the agent graph, so you can see 'creating graph and compiling workflow', and then it takes a few seconds before you're able to ask a question. You can ask any question you like here, and when you're finished you can simply exit by typing exit. So I'm going to ask a basic question - 'when did the capital of Nigeria change' - and hit enter. You can see we've entered the planner, and remember I told you it returns JSON: it's given us an overall strategy, but most importantly it's given us a search term, 'when did the capital of Nigeria change'. The researcher has then selected from the search engine results page - we've got a selected page URL here, Britannica - the reporter has given us a response based on that, and it's the correct response, with a citation. The reviewer has reviewed it and passed the review, and then we get our final response. Apologies, I know this is written in blue so it might be a bit hard to see, but you can always go through it yourself when you run the agent. And just to show you that the response cites a real source, we can click through and pull up the website for the citation - it's just asking me to accept some privacy-policy stuff - and there you go, that's the website for the citation, so you can check the accuracy of this stuff yourself when you're running it.

I've found it to work pretty well; in some cases I've actually found it to work better than the custom agent I built previously, and I'll show you one such example. If you watched my previous videos, you'll know I did a matrix ranking the different models with my custom agent, and one of the tougher questions was about the weather and the Premier League, so I'm going to paste it in. The question was: what is the current weather forecast in the largest city north of the city where the team that finished second in the 2023-2024 Premier League season played its last match? I found that most of the time my custom agent didn't get an appropriate response to this, so let's see how the LangGraph agent does. Right away - and I like the speed of this thing; obviously it's working off GPT-4o, but I find it to be pretty speedy.
The planner has come up with a plan, so let's see what we get as an answer. I'm not going to go through every single step here, because that would take a while and I want to keep this video relatively short, but just to demonstrate that we can actually get to an answer: there we go, we do get to an answer with the LangGraph approach. If you want to investigate for yourself how we got there, I implore you to go and do that. We've also got two citations here, so we've got dual citations. What could be improved is that the citations should be referenced within the text, and we're not actually doing that here - sometimes it does it, sometimes it doesn't - and that's probably more to do with how I've prompted it than anything else, because LangGraph itself is really flexible.

One last thing I'd like to show you: how the state gets updated for all of these questions. I'm going to hit exit and go back to my application, where you can set a verbose option to true in order to return the state. The output will be a lot messier, because we're actually printing that state dictionary, but I want to show you what it looks like so you get an idea of how the state updates. I've done that, so I'll bring back my PowerShell and rerun the application with python -m app.app - again, we're running a module of the Python project. I'm going to ask the same question, because it goes through several iterations and you can see how the state grows. Everything you see in white here is the actual state - the state dictionary - and you can see it starts off with just the research question, then we get a search term, and the state keeps growing because we're appending more and more information to it, until we get to a final answer; I think the final report is delivered here. Essentially, that is the state of the entire agent workflow, and you're free to go through what the state looks like in your own time. We're keeping track of all the websites we visited and all the previous responses - all of that is in this state dictionary - and we don't pass the entire thing to every agent; we just pass the relevant parts to the relevant agents. That's what's good about LangGraph: you can really customise at that level, and you can make things very deterministic as well.
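If you want to watch the state grow like that in your own project, here is a hedged sketch: compiled LangGraph graphs expose a stream method that yields each node's update as it runs, so you can print the state contribution after every step. The state key and the recursion-limit config are assumptions carried over from the earlier sketches.

```python
def run_verbose(workflow, question: str, iterations: int = 14) -> None:
    """Print each node's contribution to the state as the graph executes."""
    initial_state = {"research_question": question}
    for step in workflow.stream(initial_state,
                                {"recursion_limit": iterations},
                                stream_mode="updates"):
        # Each step is a dict of {node_name: state_update_from_that_node}.
        for node_name, update in step.items():
            print(f"--- {node_name} ---")
            print(update)
```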
Okay, I want to talk a little bit about my experiences building with LangGraph, and where I'd place it in comparison with other agent programming paradigms I've used, like CrewAI, AutoGen, Agency Swarm, and building your own custom agents, so stay tuned if you'd like to hear my opinion on that. My verdict on LangGraph is that it's a rather impressive framework for building agent workflows. There is a technical hurdle initially - it does take some effort to understand how the framework works, and there are abstractions that are confusing at first, which I think is a common theme with things that have come out of the LangChain group of frameworks - but investing a little time to understand the framework and those abstractions (hopefully this video helps you on the way) is worthwhile, because of all the frameworks I've reviewed, LangGraph is one of the most customizable. I would certainly put it above CrewAI, and I'd certainly put it above AutoGen. When it comes to simplicity, Agency Swarm is definitely a simpler framework to use, but I believe LangGraph beats Agency Swarm in certain areas. Agency Swarm is built on the premise of using the Assistants API from OpenAI, which is currently in beta. Using the Assistants API is a brilliant idea, because it comes with things like embedded retrieval - retrieval is native to the assistants - but the problem is that assistants are in beta right now; that's the issue. LangGraph, by contrast, isn't designed specifically to work off the Assistants API. I know Agency Swarm has recently added functionality to use open-source models, so that option exists, but the designer of Agency Swarm says himself that it's meant to be used with the Assistants API - that's what he recommends, and that makes a lot of sense.

The other thing I think LangGraph does really well is that it's really obvious how to direct specific context to each agent. When you use AutoGen, there's this group chat manager, and it just fires the entire context of the workflow out to every agent, which is token intensive - if you've ever developed with AutoGen, you end up using a lot of tokens and it costs you a lot - and I think CrewAI must do something similar. With LangGraph it's really intuitive exactly how to direct specific bits of context to specific agents; you don't need to send the entire state to every single agent in your chain. If you just want to pass on the research, you can pass on the research, or you can pass on the search engine results page, to give an example from my use case. So I think that part of LangGraph is fantastic. The other advantage of LangGraph is all of the integrations you get with the LangChain library. I know LangChain is a bit Marmite - some people like it, others don't - but one thing I will say is that they have a lot of integrations with various tools and services, and sometimes it can save you time to use the integrations they've already built, especially if you're building a proof of concept or an MVP and you don't want to spend time coding those integrations yourself. LangGraph gives you that option to work with LangChain and use those integrations out of the box, and that's not something to underestimate if you need to get something up and running quickly.

My final verdict: I would skip CrewAI and skip AutoGen in favour of LangGraph, as long as you're willing to put in the time to understand the principles - understand the key concepts - and as long as you know it's going to be more complex to use than CrewAI and AutoGen. I would go directly to LangGraph because of that ability to customise your workflow; I haven't really seen that level of customisation outside of LangGraph and Agency Swarm. So LangGraph is a thumbs up from me - I like the framework. I invested maybe about a day building this demo for you, so hopefully it's useful to you.
Let me know what your thoughts are: if you like this content, give the video a thumbs up; if you've used LangGraph before, give us your opinion in the comments section, along with your opinion on the other frameworks; and share and subscribe to the channel for more large language model and AI engineering content. I will see you in the next video.
Info
Channel: Data Centric
Views: 18,127
Id: R-o_a6dvzQM
Length: 43min 51sec (2631 seconds)
Published: Sat Jun 08 2024