10 A.I. Breakthroughs in 2024 That Will CHANGE EVERYTHING

Video Statistics and Information

Captions
Happy New Year, everybody! 2024 is here, and I have so many exciting things planned for this year. If you thought 2023 was a big year for AI, just wait until you see what's coming in 2024. In this video I'm going to talk about my 10 biggest AI predictions for 2024, so let's go.

First: Llama 3. I believe Llama 3 is going to drop in the first half of 2024, and it is going to be incredible. It is going to close the gap between open-source models and cutting-edge proprietary models like GPT-4, and I'll talk more about how open-source models are rapidly catching up with proprietary models a little later in this video. It's crazy to think that less than a year ago Llama 1 came out; it wasn't really released so much as leaked by somebody, and it changed the trajectory of open-source large language models. Suddenly we had an incredibly capable large language model that anybody could download, modify, fine-tune, and run locally on consumer-grade hardware. It was really influential on me and my channel, and I'm so glad that Meta jumped into the open-source game. Then they released Llama 2, which was even better and closed the gap with GPT-4 even more. Mark Zuckerberg has already teased Llama 3 and said his team is working on it. Take a look at this clip where he and Lex Fridman talk about it:

"Llama 2 is incredible. There's already been a lot of exciting development around it. What's your sense about its release, and is there a Llama 3 in the future?"

"Yeah. On the last podcast we did together, we were talking about the debate we were having around open-sourcing Llama 2, and I'm glad that we did. At this point, the value of open-sourcing a foundation model like Llama 2 is significantly greater than the risks, in my view. We spent a lot of time doing a
very rigorous assessment of that and red-teaming it, and I'm very glad that we released Llama 2. The reception has been really exciting to see; it's gotten way more downloads and usage than I would have expected, and I was pretty optimistic about it. So that's been great. Llama 3? There's always another model that we're training. We built and trained Llama 2 and released it as an open-source model, and right now the priority is building that into a bunch of consumer products, all the different AIs and a bunch of other products we're building, because Llama 2 by itself is not a consumer product; it's more a piece of infrastructure that people can build things with. So the big priority is continuing to fine-tune Llama 2 and the branches we've built off of it, and getting them ready for consumer products that hopefully hundreds of millions of people will enjoy using, billions one day. But yeah, we're also working on the future foundation models, and I don't have any news on that; I don't know exactly when it's going to be ready. Just like we had a debate around open-sourcing Llama 2, I think we'll need a similar debate and process to red-team this and make sure it's safe. My hope is that we'll be able to open-source this next version when it's ready too, but we're not close to doing that this month; we're still somewhat early in working on it."

So there you go: they still need to red-team Llama 3 and make sure that it is the best that
it can be before they release it to the public. Meta has really benefited from going open source with AI; they've made a name for themselves in the AI community and the broader AI industry specifically because they've done such incredible work with open-source AI, and I love that they're taking that approach. If you had asked me a year ago whether I thought Meta would be a major player in open-source artificial intelligence, I would have called you crazy. It's definitely in their best interest to keep pushing on open source; this is the tried-and-true Microsoft playbook of giving away the product for free when you're not the leader in a category, and of course humanity benefits when that happens. They serve as a counterbalance to closed-source offerings like Perplexity, GPT-4, and Gemini from Google, and I love seeing it. Yann LeCun, the head of AI at Meta, is a huge proponent of open source, so between him and Zuck both championing open-source AI, it's only a matter of time until Llama 3 happens, and I do believe it's going to happen in the first half of 2024. As you heard, though, what they're currently working on is integrating Llama 2 into all of their different products, which is super impressive to hear. Even if you didn't think Llama 2 was nearly as capable as GPT-4, it is completely production-ready, as proven by the fact that Meta is building it into products that serve billions of users.

Check out this clip where Zuckerberg also teases AI Studio, his vision that any consumer can create AI models as easily as they create UGC content on Facebook:

"We're also working on this platform that we call AI Studio that's going to make it so that, over time, anyone will be able to create one of these AIs, almost like they create any other UGC content across the platform. I'm excited about that. I think that to some degree we're not going to see the full potential of this until you just have the full
creativity of the whole community being able to build stuff, but there's a lot of stuff that we need to get right."

So I'm super excited to see what Meta delivers for open-source AI in 2024.

Thank you to the sponsor of this video, MindStudio by YouAi. MindStudio is an incredible AI platform that lets you build workflows, automations, and chatbots, all powered by the top LLMs out there, and they just added a bunch of new integrations, including Zapier, which I've been a fan of for years. Zapier lets you plug basically any third-party application directly into MindStudio, making it incredibly powerful. Just as an example, you can plug your Google Calendar into MindStudio, have it read your calendar and summarize the meetings you have coming up that day or that week, and you can even have it go out, grab the LinkedIn profile of each attendee, and give you a summary of the attendees before the meeting. All of this can be built in MindStudio, so check out MindStudio by YouAi; I'll drop the link in the description below. Get building with it; it's incredible.

Next, and this is an obvious one, not really even a prediction: Gemini Ultra is going to be released in 2024 by Google. Gemini made waves at the end of 2023 because Google released a bunch of demo videos and research papers showing that Gemini Ultra was an extremely capable model, as good as GPT-4. I did an entire video review of it; check it out in the description below. But there was also controversy, because it came out that the demo video they released was highly edited and not really a good representation of what Gemini was capable of. Let me show you that clip now. Here we go:

"Tell me what you see."

"I see you placing a piece of paper on the table. I see a squiggly line."

"What about now?"

"The contour lines are smooth and flowing, with no sharp angles or jagged edges. It looks like a bird to me."

"Hmm, what if I add this?"

"The bird is swimming in the water. It has a
long neck and beak. It is a duck."

"Yes! A duck is a type of waterfowl in the family Anatidae. Ducks are closely related to geese and swans. The duck appears to be blue. This is not a common color for ducks; however, there are some breeds of blue ducks, though these are typically not as common as other duck breeds. It is more common for ducks to be brown, black, or white."

"Are you sure?"

"What the quack! I was just talking about a blue duck, and now you're holding one. It looks like blue ducks are more common than I thought."

"Would this float?"

"I'm not sure what material it's made of, but it looks like it might be rubber or plastic."

"What if I tell you it's making a squeaking sound?"

"Oh, if it's squeaking, it's definitely going to float. Nice! It is a rubber duck. It is made of a material that is less dense than water."

"Can you teach me how to say this in a different language and give me a few options? Ooh, can you explain how to pronounce the Mandarin one? Is it 'ya'?"

"'Ya' is pronounced with a first tone. In Mandarin there are four tones, which are essential to distinguish the meaning of words. The first tone is a high, level tone."

"Oh, okay."

Okay, so in that clip it looks like somebody is just talking naturally with the AI, but what really happened involved a lot of editing, a lot of cutting, and a lot of prompting; it's not actually as clean and polished as it looks in that video. However, the research paper and the performance detailed in it are still very real, so I'm extremely excited to see what Gemini Ultra is capable of, and I'm all for more competition in the closed-source proprietary model space: GPT-4 is going to have some strong competition from Google. As already mentioned, Gemini is going to come in three sizes: Nano, Pro, and Ultra. Nano is for edge devices; it will live on-device, like on the Google Pixel, and run entirely from that device with no internet connection required. Then you have Gemini Pro, which is more on the level of a Llama 2, and it
needs pretty beefy consumer hardware to run. And then you have Gemini Ultra, which will only be runnable on cloud servers. Google is going to invest heavily to make sure developers are utilizing the Gemini models; that is their big play. They want developers to build on top of Gemini, and so does OpenAI with GPT-4. That is how you build a thriving ecosystem: you offer a very compelling developer product. Apple might release more AI products this year, which I'll talk about in a bit, and Apple already has a very strong developer community, so Google's Gemini and OpenAI's GPT-4 are going to have to compete heavily with Apple as well to capture developer mindshare. Between Apple, Meta, Google, Microsoft, and OpenAI, the winner might be whoever can lure the most developers over to build incredible apps on top of their AI platform. So: Google Gemini Ultra will be released in the first half of 2024, that's my prediction, and it will come with a lot of problems initially (hallucinations, vulnerabilities, bugs), but it will quickly improve as soon as it gets into the hands of consumers and developers.

My next prediction is about robots. Humanoid robots, and other types of robots, are going to continue to evolve and get better, and more companies are going to release robot products in 2024. The main player in the space is still Boston Dynamics; they've been doing this for 30-plus years and have shown incredible demo videos of their robots. But last year Tesla made huge strides toward building their own humanoid robot, Optimus. Not long ago, the only demo Tesla had was a person dancing around in a spandex robot costume; now they actually have a working and really impressive prototype of the Optimus robot, so I think Tesla is going to make huge progress with Optimus in 2024. I found a couple of videos by a robotics expert who said a few things I found interesting. First: if you're going
to use a humanoid robot in a factory, 3 miles per hour is the minimum speed you need for the robot to be really effective, 5 miles per hour isn't even needed, and the current speed estimate for Optimus is about 2 miles per hour, so it's almost there.

"Some of them have even specced only about 3 miles per hour, and that's because that's really all you need for a walking robot. Five is really brisk; very few people walk at five miles per hour, you almost have to break into a jog to go that fast. So if you're doing useful work or anything like that, 3 miles per hour is more than enough; even that is quick compared with what you'd need for most tasks, and you can go slower than that."

"So right now, what is the speed? Because I remember in the last video you had actually measured it. What did you get, Scott?"

"Yes. I looked at it several times, counting the number of steps it took and looking at the size of the feet to estimate the distance it traveled, and it looks like it's traveling probably about 15 feet in a little less than 8 seconds. So it's probably at a walking speed of about a mile and a half per hour, probably just shy of two miles per hour; it's going to be right in that area."

He also predicts that AI Day 3 will be in Q1 of 2024, where Tesla will probably announce a bunch of updates and publish more demos showing the progress they're making on Optimus. He also predicts that dozens of Optimus robots will be produced by Q1, which I think is a bit aggressive; any time you're talking about Tesla, Elon Musk, and timelines, you can probably take it with a grain of salt. But he also says there will be hundreds by the end of the year. Tesla is quickly becoming much more than an electric vehicle company. I already knew they were an energy storage and production company, but now they're going to
be a robotics company as well, and it's super exciting to see where this company goes. It seems the hardest part of Optimus right now is getting the actuators to work properly; those are the mechanisms behind joint movement.

"Of course, that's probably the major limiting factor on the bot. It's not going to be the castings, it's not going to be the batteries (they've got more than enough batteries), it's not going to be the FSD chips; it's totally going to be the actuators. And while the actuators are sophisticated, they're not going to be that difficult to mass-produce once they get the line up and going."

But once they get those actuators working well, production will accelerate quickly. And as mentioned, a lot of other robotics companies will be releasing new products in 2024, but I still believe Optimus by Tesla will evolve the fastest.

Now let's talk about open-source large language models, which is the thing I'm most excited about in 2024. I already mentioned that open-source models are catching up with closed-source proprietary models like GPT-4, but let me show you just how quickly. Look at this trend curve. The dotted purple line is closed-source models over time, starting around 2019 when GPT-2 was released, then GPT-3 in 2020, and continuing through Chinchilla, PaLM, Claude, PaLM 2, GPT-4, and Gemini Ultra. The graph shows MMLU performance over time; MMLU is one of the benchmarks large language models are tested on. Now look at the bottom black dotted line: the curve is much steeper, and toward the end the distance between the purple and black dotted lines is closing rapidly, from Flan-T5 through Llama 65B, Llama 2, Mixtral, and Yi-34B. When Llama 3 comes out, I think it's really going to make that gap even smaller. I am extremely bullish on open source in 2024, if that wasn't already obvious, and I'm going to bring you all the news and all the tutorials for open-source models in 2024. And let
me talk about a couple of other things. Right now there are over 325,000 open-source AI models on Hugging Face; that is an astounding number. Parameter sizes will continue to increase, although of course parameter count isn't everything, and bigger isn't always better. Quantization techniques will continue to improve, and I think that's one technology that was greatly underappreciated in 2023: with some of the more aggressive quantization techniques, you can run pretty much any model on most consumer hardware with very minimal quality loss.

I also predict mixture of experts will become the gold standard for open-source models. We already know that GPT-4 is believed to use a mixture of experts, and we know Mixtral was the first open-source model to leverage that technology, and it performed incredibly well; see my testing videos on Mixtral, which I'll drop in the description below if you haven't seen them. The reason mixture of experts is so important for open source going into 2024 is that it lets a huge model run really efficiently: you don't need to use the entire model at inference time, you just route to the pieces of the model best suited to the prompt and run inference on a subset of the total parameters. So it's a large model, but you can run it really efficiently on consumer hardware, and when you add in quantization techniques, that's where it gets really exciting.

Meta is not the only company developing open-source AI. NASA and IBM are both doing so with geospatial data, and then we're going to have a lot of industry-specific models, like FinGPT in finance. Many of those might be fine-tunes of existing open-source foundation models, but some might be entirely new models. As mentioned, Meta is still leading the charge in open-source AI, and even Apple is getting into the open-source game: a model
that kind of went under the radar (I didn't even make a video on it) was released by Apple at the end of last year. That model is called Ferret, and it is a completely open-source multimodal model. The fact that Apple released it as open source is pretty shocking, to be honest; everything they do is shrouded in secrecy, but I welcome it. It also leads me to believe that the way they're thinking about AI and large language models is that they want them to live on-device, and that would make a lot of sense, because Apple silicon is extremely good at running large language models. A lot of software has come out lately leveraging the power of Apple silicon, specifically the M2 and M3 chips, to run these models extremely efficiently and extremely well. In fact, my M2 Ultra machine runs models better than my RTX 4090 GPU; maybe I haven't done enough optimization on the 4090, but for now I still use my Mac to run all of my open-source models. And I love the idea of having a model locally on my computer or my phone; even if it's closed-source, I still love the idea of not having to connect to the internet to talk to my model.

Now, I know I'm a huge proponent of open source, but let's talk about a potential counterargument to open source proliferating in 2024. Check out this clip from Jaron Lanier, which addresses exactly this topic:

"I think the open-source idea comes from a really good place. The people who believe in it believe that it makes things more open and democratic and honest and safe and all that. The problem with it is the mathematics of network effects. Let's say you have a bunch of people who share stuff for free; it might be music, it might be computer code, it could be all kinds of things, and you're saying, oh, we're being very communitarian here, we're sharing, share-alike, it's a giant barter system. The thing is that the exchange of all that free stuff is going to tend toward monopoly because
of mathematics, and so then it'll become the greater glory of something like a Google or whatever it is. So you end up not with decentralization but with hyper-centralization, and then that hub is incentivized to keep certain things very secret and proprietary, like its algorithms or the derived data of how things correlate, which is actually very expensive to generate. So I think this idea that opening things up leads to decentralization is just mathematically false."

Now, in a previous video I talked about The New York Times' lawsuit against OpenAI and related issues, as well as Twitter shutting down their API and Reddit shutting down their API. All of these companies are going to start protecting their data by any means necessary, and what that means is that open-source model builders are going to find it harder and harder to assemble datasets for future models; only companies with vast funding will be able to acquire the data to train them. But there might be a solution to this: synthetic data, which I'll talk about in a little bit.

Next: if you've seen any of my videos, you know I'm also very bullish on AI agents, and I think 2024 is going to be the year of AI agents. They're not only going to get much better because models are getting better; the software powering agents and allowing them to collaborate with each other will also continue to improve. Most importantly, they're going to start finding real-world use cases. We're going to figure out what works and what doesn't and put together, essentially, formulas or templates for people to use to solve real-world problems. Whether that's coding, research, or assembling datasets, all of the use cases that AI agents and AI agent teams can serve will start to be well known and well documented. But not only that: we're going to start seeing more emergent behavior from AI agents. I made a video a long time ago about the generative
agents paper out of Stanford, and that video was extremely well received. Basically, what the paper showed is that when you have a simulated town full of 25 different AI agents, each powered by GPT-4, they start to exhibit human-like behavior: they start to have habits, they start to form relationships. I think that's what we're going to continue to see; AI agents will keep surprising us with how human-like they really are, and we're going to have to start asking ourselves what it actually means to be human. It might be a little early to ask these questions, but I find it fascinating. Really, what is consciousness, if these AI agents can behave in a simulated world just like humans, such that someone with no prior knowledge could watch it and think they were seeing actual humans? What's the difference between them and real humans?

I also think agents are going to help us predict what can happen in certain situations. For example, we can put them in situations that test game theory, such as the prisoner's dilemma; obviously we already know the optimal strategy there, but it would be interesting to see how AI handles it. Check out this video by a16z that talks about exactly this topic:

"I think generative agents and tools like large language models could be used to advance social science, which to a large extent is the quest to understand who we are, and there are a lot of really interesting applications that can come out of that that will empower different communities and societies. Given how difficult it was to evaluate what it means to be believable, I think this notion of accuracy raises a lot of interesting questions. What does it mean to accurately reflect human behavior? It could be that we try to match the distribution of human behavior: in a given context, people have a certain probability of behaving a certain way. Let's say it's 10 p.m.:
what are the chances that I will be asleep or awake? What are the chances that I'm working or not working? I think ultimately getting to that degree of accuracy in the simulation might be the next step for this kind of simulation-based work. So to some extent these tools can be used as predictive tools, looking into the future of what might happen in our own communities, and I think those are the ways we'll see this field unfold over the next few years."

We're going to be able to place agents in situations to predict how humans will behave, whether that's for advertising, political polling, psychiatry, psychology, or dating. Instead of having to run human trials, we can do an initial run with AI agents to see how they perform, and that will let us scale trials to much greater levels than we could before. Another prediction: we're going to see a lot more tooling built in 2024 to help wrangle AI agent teams. Right now the hardest part of running AI agent teams is not implementing them but finding the right definitions (system messages, prompts, roles, and so on) so that the agents actually perform well together and produce the output you expect. There's a new AI agent project called CrewAI, which I plan to review soon.

My next prediction: there will not be AGI this year. But really, how do you define AGI? Mark Zuckerberg and Lex Fridman talk about exactly this in this clip:

"What year do you think we'll have a superintelligence?"

"I don't know. That's pure speculation. I think it's very clear, just taking a step back, that we had a big breakthrough in the last year. I still don't think I have any particular insight on when a singular AI system that is a general intelligence will be created. But one thing that most people in this discourse haven't really grappled with is that we do seem to have
organizations and structures in the world that exhibit greater-than-human intelligence already. One example is a company: it acts as an entity, it has a singular brand; obviously it's a collection of people, but I certainly hope that Meta, with tens of thousands of people, makes smarter decisions than one person. Another example, even further removed from the personification of intelligence that's often implied in these questions, is something like the stock market. The stock market takes inputs; it's a distributed system, like a cybernetic organism, where millions of people around the world are basically voting every day by choosing what to invest in. It's a structure smarter than any individual, which we use to allocate capital as efficiently as possible around the world."

Just about a week ago, Sam Altman put out a poll on X asking what features people want to see from OpenAI in 2024, and by far the top request was AGI. He replied: "Wow, way more requests in the first 2 minutes for AGI than expected. I am sorry to disappoint, but I do not think we could deliver that in 2024." Ray Kurzweil and Elon Musk both seem to think AGI is coming around 2029, so it's interesting to see those two predictions align, but my prediction is: no AGI in 2024.

Next, I think 2024 is also going to be the year of synthetic data. We already talked about how data is becoming much more valuable and much more closed off; if a company has an extremely valuable dataset, they're not going to want to share it, as has basically been the case since the internet was born. The solution to that is synthetic data. Synthetic data means large language models are actually creating new data for future models to be trained on. Whether they can do that effectively is still
a big question, but I think we're going to start seeing a lot of synthetic data being used to train models, and I've already seen some pretty strong examples of this in the real world. For example, Tesla is so far ahead with full self-driving because of how much data they've collected from the cameras on Tesla vehicles over the years; I'm talking about millions and millions of miles of real-world data. Other automakers don't have that data. Audi, BMW, and Toyota haven't outfitted their vehicles with cameras at the same level Tesla has, so they have three options: they can put cameras on their vehicles and start collecting real-world data from scratch right now; they can purchase third-party datasets, which are really hard to come by because you actually need cameras in the field; or they can create synthetic data. And I've seen examples of companies creating synthetic real-world driving data. Synthetic data is going to be especially prominent in industries where privacy is a big concern, like healthcare and finance, where you don't want to use user data because if that data leaks you are legally exposed. If we can solve synthetic data, that really is one of the key foundational technologies needed to create AGI, because humans simply won't be able to create enough data manually, and when a company has the data, they're not going to share it. So synthetic data is probably one of the key ingredients of AGI.

Next: multimodal will become the default. No more text-only; all models will be multimodal by nature and trained as multimodal. That's already what Gemini is, GPT-4's successors will be trained that way as well, and Apple's Ferret model that I talked about is also multimodal. It won't matter whether the input or output is images, video, audio, or text; the model will accept everything. But multimodal isn't perfect; check out this clip from a
Stanford lecture that talks about some of the problems with multimodal:

"If all of this multimodal stuff is cool and sort of useful and doesn't look that difficult, why aren't we all doing multimodal things? Why do we focus on specific modalities? There are a couple of problems to be aware of. One is that modalities can sometimes dominate; text in particular is much more dominant than vision or audio in many use cases. You can end up with a model that picks up on the text signal and basically learns to ignore the image completely, which actually happened, embarrassingly, for visual question answering (we'll get to that): you could do visual question answering without actually looking at the picture. Additional modalities can also add a lot of noise, which makes your machine learning problem more difficult. You don't always have full coverage: if you look at Facebook posts, sometimes you have text, sometimes you have a picture, sometimes you have both, but there's no guarantee you always have both, so how do you deal with that? And in many cases we just really weren't ready; it was too complicated to implement, and in general, designing your model to combine all the information is actually quite complicated."

So there they talk about how one modality can dominate the others, how additional modalities add noise, how full coverage of a specific modality is not guaranteed, and other issues. But I am still very bullish on multimodal making big progress in 2024.

All right, next, on the security front, and this is a sad one: bots will become essentially impossible to detect, and synthetic data can be a key contributor to this. That's the double-edged sword of synthetic data: when we have all of this data, the models get better, so the bots get better, and evil bots, bots trained for scamming and spamming, those will get better as well, and
bots already solve CAPTCHAs and other prove-you're-human tests better than humans do. So we're really not going to be able to tell what is AI and what is not, and that's an especially big problem going into the 2024 election cycle. Spam on X, on Meta's platforms, on Instagram, plus deepfakes: all of these things are going to be so good in 2024 that it's going to be a huge problem. Now, Elon Musk has already outlined the approach X is taking to prevent spam and bots, and that's charging for access. X might cost a dollar a month or a dollar a year to use, and that makes a lot of sense. It seems like nothing, but multiply that dollar across a bot network that might have hundreds of thousands of bots and the math behind spamming the network no longer works; it's not profitable anymore. So let's look at the X example. If you require payment, a few things happen. First, the cost to run a bot grows by orders of magnitude, so the cost-benefit equation no longer makes sense. Second, you're piggybacking on the financial network's fraud filters: when you go out and get a credit card, you have to provide all of your information, your address, your Social Security number, and those things are really hard to come by; if you're going to acquire them on the black market, they're expensive, and even stolen credit cards bought straight from the black market are expensive. So it really messes with the incentive structure of bot networks. Take a look at this clip from Elon Musk about how X is going to prevent bots by making users pay:

"Maybe the single most important reason we're moving to having a small monthly payment for use of the X system is that it's the only way I can think of to combat vast armies of bots. A bot costs a fraction of a penny, call it a tenth of a penny, but if somebody even has to pay a few dollars or something, some minor amount, the effective cost of bots is very high. And then you
also have to get a new payment method every time you have a new bot."

Going into this election year, with deepfakes and with bots being very difficult if not impossible to detect, we're going to need a lot of education to help people evaluate the information they see online and decide for themselves whether to believe it.

And last: I believe GPT-4.5 will drop, most likely in Q1 but possibly in Q2 of this year. GPT-4.5 will be a big evolution in a lot of ways, but it won't be the step function up to GPT-5, which Sam Altman has already said they're not currently working on (who knows if that's actually true). GPT-4.5 will be much better, much faster, and much cheaper, but it will still be based on the existing GPT-4 model and architecture. A few weeks ago there were a lot of rumors that GPT-4.5 was already out in the field, but it turns out that was all AI hallucination and speculation; basically, people convinced the AI to say it was using GPT-4.5 when it really wasn't.

So those are all of my predictions for AI in 2024. What an exciting year ahead of us! I have so many incredible tutorials planned, research papers I plan to review, and broader topics to cover, and I'm so excited to bring you all of it. Let me know what you think in the comments. If you liked this video, please consider giving it a like and subscribing, and I'll see you in the next one.
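The mixture-of-experts inference described in the predictions above (run only the experts best suited to the input, not the whole model) can be sketched in a few lines of Python. This is a toy illustration with random weights, not Mixtral's or GPT-4's actual code; the expert count, top-k value, and dimensions are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total experts in the (toy) model
TOP_K = 2         # experts actually executed per token
DIM = 16          # toy hidden dimension

# Each "expert" here is just a random linear layer standing in for a feed-forward block.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))  # the gating network

def moe_forward(x):
    """Route a token vector through only TOP_K of NUM_EXPERTS experts."""
    logits = x @ router                         # score every expert for this token
    top = np.argsort(logits)[-TOP_K:]           # indices of the best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                    # softmax over the chosen experts only
    # Only TOP_K expert matmuls run here, not all NUM_EXPERTS of them.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
print(out.shape)
```

The point of the sketch is the cost structure: the router is tiny, and per token you pay for only TOP_K expert computations, so total parameter count can grow without inference cost growing with it; quantizing the expert weights then shrinks memory on top of that.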
Info
Channel: Matthew Berman
Views: 165,512
Keywords: ai news, ai predictions, open source ai, llm, ai, artificial intelligence, llms, chatgpt, gpt4.5, robots, optimus, synthetic data, tesla, tesla robot, bots, deepfakes
Id: Fbbu_GQcrwc
Length: 35min 22sec (2122 seconds)
Published: Wed Jan 03 2024