ClickHouse v23.8 Release Webinar

Captions
I see it has started, we've started. I'm just sending the link out now to everybody. Hello and welcome to today's release call. We are joined by ClickHouse co-founder and original creator Alexey, and also Dale, who is going to be his able assistant when it comes to asking questions. So without further ado, I'll hand you off today to Alexey.

Thank you. By the way, you said "creator" or "creative"? I'm not sure what a ClickHouse "creative" is. The original ClickHouse creator. Perfect, but it sounds funny. We will start in about two minutes, and I see many people are joining. If your friends are not in Zoom, they can join on YouTube; we have a live stream on our channel, with a bit of delay. I don't like delay, even two seconds, so I would appreciate it if everyone joins live in Zoom without any delay.

And welcome to our regular attendees. Welcome Alexander; welcome Ellen, nice to see you again; welcome Artur, Boris, Dennis Ghee, or Dennis G, sorry if I'm pronouncing it incorrectly, but welcome anyway; welcome Emra; welcome John Kennedy, what an interesting name by the way; Jonathan Ackerman; Acker Michael, welcome, our regular contributor, and by the way, yes, I can read your second name; Patrick; Ricky Salzer, welcome again; Robert Swanson, welcome; my colleague Rupa, I am so glad you joined; Todd Brown and Roger, then Brie. Brilliant, nice.

So we have so many features and such a big release, so let's start. Let me share the presentation. Okay, this is about the ClickHouse 23.8 LTS release. What does it mean? 23 is the current year, 8 is the current month, and LTS is a long-term support release. Twice a year we have a special release that is supported with patches and bug fixes for at least one year, so you don't have to be afraid of starting with this version: you can just install a new patch, then another patch, without installing the subsequent versions like 23.9, 23.10 and so on. It is for someone who needs long-term support.

I will spend most of the time on new features, and hopefully we will have just a few minutes for your questions. If you have questions, you can post them in Zoom or on YouTube, and my colleagues Nick and Dale will ask these questions live.

So what do we have? 29 new features (it should be 30 actually, but 29 is good anyway), 19 performance optimizations, and 63 bug fixes. A lot, a lot of bug fixes. And when will this version be released? Today. I would appreciate it if you install it as soon as possible; there is so much nice stuff.

So what do we have? Let's start with some uninteresting features, uninspiring features, unexciting features, features that are, I would say, just boring. And I invited so many people, like hundreds of people, just to watch how I will explain these uninspiring features. What is the most uninspiring? By the way, if you find anything actually interesting or inspiring, please tell me in the chat, because I will not think so unless you tell me.

The first one is arithmetic operations on vectors. ClickHouse has great support for arrays: you define arrays just with square brackets, like this. And also, and this is unique to ClickHouse compared to other database management systems, we have higher-order functions with lambda functions, and you can apply these functions to arrays. This is something ClickHouse has had since the year 2013. For example, with the arrayMap function you can easily sum two arrays. This already worked, it worked fine, and it looks smart; if you write this query, you will also look smart, so ClickHouse gives you at least some advantage. But in version 23.8 you can sum arrays without looking smart at all: just write one array plus another array. If the arrays are of the same size you will get the result; if not, you will get an error message. It is implemented for the plus and minus operators.
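To make that concrete, here is a small sketch of the two ways to sum arrays described above. The arrayMap form has worked for years; the plain plus and minus operators are the 23.8 addition. The literal arrays are just made-up examples:

    SELECT arrayMap((x, y) -> x + y, [1, 2, 3], [10, 20, 30]);  -- the old, smart-looking way: [11, 22, 33]
    SELECT [1, 2, 3] + [10, 20, 30];                            -- new in 23.8: [11, 22, 33]
    SELECT [1, 2, 3] - [10, 20, 30];                            -- minus works too: [-9, -18, -27]
    SELECT [1, 2, 3] + [10, 20];                                -- arrays of different sizes: this is an error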
It is not implemented for the multiplication operator, and I want to ask you: why didn't we implement it for multiplication? Why? If you type the answer in the chat and your answer is correct, I will give you this t-shirt. I want your answer, and I will give you just three seconds to type it. I want it not because I want to know what the answer is, but because this t-shirt is so great that I want to make a gift of it. So please.

Do we have any answers? We have one answer so far, a little over your three seconds. The first answer, whose author's name I can't pronounce because it's in Russian, apologies, is that it would be a matrix in most cases, and I suppose they're implying that wouldn't be logical to display. The other answer is that products of two arrays need to be transposed; not sure that's a limitation. So we've only got two answers, and I'm going to imagine neither of those... ah, here is a good answer, which I think is the right one: because it's not defined whether it's a dot product or vector multiplication.

Absolutely, the third answer is correct, as I thought, and that's from Mikhail, our team's Mikhail, and he can reach out to Tyler for that t-shirt. Perfect. The second answer was partially correct, because you can multiply vectors in different ways: a scalar product is one way, an outer product, which gives something like a matrix, is another; there are multiple ways to do it.

Okay, what is quite similar, and I'm not sure if it is useful: concatenation of tuples. We have the concatenation operator, and a different syntax with the same meaning, the concat function, and you can concatenate strings. Now you can also concatenate tuples. What will it do? You can take two tuples with two elements each, and you will get one tuple with four elements. Nice. Do you need this? I am not sure.

Another uninteresting feature, it is so uninteresting, is about the cluster and clusterAllReplicas table functions. In ClickHouse you can create a distributed table and query your cluster as a single table; the query will be distributed. If you don't want to create a distributed table explicitly, you can also use a table function: there are table functions such as remote, remoteSecure, cluster, and clusterAllReplicas. The clusterAllReplicas table function has one argument with the name of your cluster and another argument with the table name, and it will query every server in your cluster as a shard, collect the results, summarize them, whatever. The typical usage of this function is for simple queries where you just want to get a summary of your cluster; for example, you query the hostname, version, and uptime of every server in your cluster. But you had to specify the table name, and what is this table? It is like the dual table in Oracle or MySQL: just a table containing a single record. So a single record will be received from every server, and you will ignore it and instead calculate these functions. It's quite unusual, you have to explain what it is, and I don't want to explain. So in the new version you just omit this argument and it is system.one by default; it looks like this. You can also remove all arguments, and the default cluster name will be used. Sounds nice.
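As a rough illustration of the new defaults (the cluster name 'default' here is a placeholder for whatever cluster is defined in your configuration):

    -- before: you had to name a dummy table explicitly
    SELECT hostName(), version(), uptime() FROM clusterAllReplicas('default', system.one);

    -- 23.8: the table argument can be omitted, system.one is implied
    SELECT hostName(), version(), uptime() FROM clusterAllReplicas('default');

    -- and, as described above, the cluster argument can be omitted too, falling back to the default cluster
    SELECT hostName(), version(), uptime() FROM clusterAllReplicas();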
And this one is much more about usability: automatic suggestions to correct misspellings in table and database names. If you mistype "checks", it will tell you: this does not exist, maybe you meant "checks". If you make a typo in a database name, like in the word "default", you will also get a suggestion. It is nice, and it is again unsurprising, because we already had this feature for function names, data type names, aggregate function names, table function names, everything, but now it also works for tables and databases.

Another two boring features. TRUNCATE DATABASE: you know there is TRUNCATE TABLE, which makes a table empty but doesn't drop it, and now there is also TRUNCATE DATABASE. What is the purpose? Why isn't it enough to just drop the database and then create it again? Two reasons. The first reason: maybe you forgot how to create your database; suppose it has a long definition with a database engine and you don't want to bother. Okay, it looks like that is not the main motivation. There is another use case: imagine there are many clients connecting to ClickHouse and doing CREATE TABLE IF NOT EXISTS. If you drop the database, these clients will receive errors; if you truncate the database, these clients will automatically recreate the tables. So it's a good example to motivate this feature.

And yet another boring feature is the azureBlobStorageCluster table function. Actually, when I see a table function containing four words it starts to be a little bit scary; compare it to something like s3Cluster, which is just two words, not four. So what is it? ClickHouse has an integration with Microsoft Azure, and Azure has a service named Blob Storage. It is similar to S3 but different, because they have everything a little bit different. You can use a table function named azureBlobStorage to just read or write files and process them, but if you also add "cluster", so it becomes azureBlobStorageCluster, and specify your cluster name, it can process many, many files using the resources of the whole cluster. So imagine you have a hundred ClickHouse servers and a million files, and you just type this simple query, and your cluster will use all the power, all the hardware, all the network. Maybe your network will explode after this; maybe some people from the data center will say that the data center is on fire, but ClickHouse will work.
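A hedged sketch of what such a query might look like; the cluster name, connection string, container, and path are all placeholders, and the exact argument order should be checked against the azureBlobStorageCluster documentation:

    SELECT count()
    FROM azureBlobStorageCluster(
        'my_cluster',                                                            -- ClickHouse cluster to spread the work over (placeholder)
        'DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...',   -- Azure connection string (placeholder)
        'my-container',                                                          -- container name (placeholder)
        'data/*.parquet');                                                       -- blob path with a glob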
Okay, now about performance optimizations, something that I like the most. ClickHouse is about performance, and for performance, and the best performance, and everything about performance. We have many improvements for reading of files, for data import, for data processing on the fly, say ETL, or ELT, or whatever. If you have a data lake, you can use ClickHouse to get the most out of your lake.

The first feature is an optimization that allows counting records without reading any columns. Let's look at this query: it globs multiple files on the file system, but it could just as well use S3, or url, or Azure, and we ask to simply get the number of records. In the previous version, this query selected some column, read it, and counted the number of records, and for this query, with a hundred files and a hundred million records, it took about 100 milliseconds. You might have thought that is fast, but it is not fast compared to the new release, 23.8, because the new release doesn't read any columns at all: it only reads the metadata. It still has to spend some time opening files, doing seeks, and reading that metadata, but 22 milliseconds is five times faster. And it works not only for Parquet; it works even for TSV, and CSV, and JSON. You might ask: but TSV doesn't have metadata, and CSV as well. Nevertheless, it will simply count the number of lines, and that is even simpler than reading a column and counting records.

Another optimization: not reading files when we don't have to read them. To explain it, there is the following example; by the way, the data set is public, and you can just copy-paste and reproduce it on your own. When you read multiple files, say a hundred of them, ClickHouse also provides two virtual columns, _file and _path, and you can use these columns to filter by a file name or a full path. The previous ClickHouse version already supported this query, and it supposedly worked, but it worked in the following way: ClickHouse would read all the files and only then filter by file name, which is entirely inefficient. In the new version it filters the file names first and then reads only the remaining files. So it looks like we simply swapped two lines of code, right? And now it is infinitely faster. Infinite, because I tested the latest version and it works pretty well, but for some reason the previous version did not work at all. I don't know why and I don't want to bother; it should have worked, but no, it did not.
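A minimal sketch of both of these optimizations with the file() table function; the file names are placeholders, and the same applies to s3(), url(), and azureBlobStorage():

    -- 23.8 answers this from file metadata (or by counting lines for TSV/CSV/JSON),
    -- without reading any column data:
    SELECT count() FROM file('hits_*.parquet');

    -- filters on the _file and _path virtual columns are now applied first,
    -- so files that don't match are never opened:
    SELECT count() FROM file('hits_*.parquet') WHERE _file LIKE '%2023%';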
Okay, but what if we read just a single file, or multiple files, and we skip something inside them by filtering? Some data formats are quite smart, and a format like Parquet is slowly approaching the capabilities of ClickHouse. You might think that Parquet is something like ClickHouse; it uses many of the same ideas. It is not as powerful as MergeTree, but it still has an embedded index to skip data by minimum and maximum conditions and by unique values. In this example, let's just go straight to the numbers. In the previous version this query took 0.7 seconds and it processed all the data, almost 100 million records, at a speed of 20 gigabytes per second. 20 gigabytes per second, that is fast. The new version is slower, with just 3.48 gigabytes per second, so it is processing data more slowly, but it spent less time, about six or seven times less, 0.1 seconds, because it processes a smaller number of records: it just filtered out the irrelevant ones.

So we have all these optimizations for external data formats like Parquet. Let's look at the benchmark. We have ClickBench, and the result is about a 40 percent improvement on average on ClickBench, and you can see that one query sped up three times. Most queries did not change, so where is the difference? The difference is here: we have 43 benchmark queries, and seven out of 43 could use the index, and these seven queries sped up by up to 20 times. So on average we have just about 1.4 times, a 40 percent difference, and this is huge.

Okay, what else? Let me show you even more numbers; I like performance comparisons. This one is about just-in-time query compilation for the ARM architecture. If you read boring academic papers, you might find statements that analytical databases, if they want to be fast, have to implement vectorized query processing or just-in-time compilation of queries, so that a query produces ideal code for the specific CPU and specific loops are optimized and vectorized for maximum performance. But the actual truth from these academic papers is that if you want your analytical database to be fast, you should just take ClickHouse, throw away your analytical database, and use ClickHouse instead, because ClickHouse has support for both vectorized query execution and query compilation with a just-in-time compiler, all of this at once, together. Until recently, just-in-time compilation was not enabled for ARM, only for x86. Now we have it for ARM, and there is up to a 1.4, almost 1.5 times improvement in performance on some queries. Maybe not the most representative queries, but at least something.

Okay, now about something really interesting: non-boring features, maybe experimental features, maybe just unusual features. The first one is support for importing directly from archives. You know that ClickHouse can read from external files, say file.csv, and it can also read many kinds of compressed formats: gzip, zstd, lz4, snappy, xz, bzip2. This has been available for two years maybe, this is not new. Say you have data.csv.gz, and ClickHouse will easily process this data. But these are just compression formats for a single file. What if you have archives like zip, 7z, or tarballs like tar.zst? These archives contain multiple files; they can contain a directory structure with arbitrary complexity and nesting. The new feature: you can process data from inside archives directly. It looks like this: you specify a path to the archive, then a special separator that separates the path to the archive in your file system from the path inside the archive. This feature has support for everything: you can use glob matching for this path, you can process all files, you can filter files.
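For readers following along in text, here is a rough sketch of the archive syntax ahead of the demo below. The archive and file names are placeholders based on that demo; the part before '::' is the path to the archive on the local file system, and the part after it is a path or glob inside the archive:

    DESCRIBE file('book-genome.zip :: reviews.json');               -- inspect one file inside the archive
    SELECT count() FROM file('book-genome.zip :: *.json');          -- glob over files inside the archive
    SELECT _path FROM file('book-genome.zip :: **/*.csv') LIMIT 10; -- '**' matches nested directories; _path works here too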
Let me better show you how it works. Let me make it larger, and I will run clickhouse-local, and I will process some files from my Downloads folder. I have something named book genome.zip, I don't know what it is, and there is a file inside this archive, reviews.json. First of all, I want to check whether this syntax even works. It works, unsurprisingly; we have it in the new release, and it works: it automatically derives the structure of the JSON, automatically derives the types and column names, and now we can analyze this data. Even more: if I don't know the exact file name, or I want to process multiple files, I will use this star glob. It works perfectly, and it works fast. Okay, did you know that we also support the double-star glob? ClickHouse has a lot of stars. It matches any sequence of directories, and it works, again, well.

Okay, that was not interesting for you? What about this: we will use a double star, slash, star dot csv, so we will collect all CSV files from this zip, and we will not read them; we will simply output their paths. So it also has support for the _path virtual column. Nice. Not so fast, but here are some files, and people even included this macOS metadata folder; I don't want that file. By the way, what else? We can use this DESCRIBE query on the file and we get type inference: it will tell us what the file is and what it contains. Another example: let's look inside this CSV. Nice, we have tags, identifiers, and scores. I don't know, but maybe it is about books, book reviews maybe. I already forgot how and why I downloaded this file; I hope it was legal to download, because I downloaded it from some random website named something like "best data sets for you".

Okay, let's do some analytics. Nice, so we have an average score for tags, sorted by popularity. The most popular tag about books is "strange", the second most popular is "Asia", and the third is "treasure". So people should read books about strange results found in Asia during road trips that were cheesy, and I will not continue reading.

Okay, what about other interesting features? This is something I have dreamed about a lot: streaming consumption from S3. And now my dream has come true; it is included in the 23.8 release. So what is it? We have a new table engine, S3Queue. It has parameters similar to S3: you specify the files, possibly hints about the format and the data structure. But it also has a lot of interesting settings, like mode = 'unordered'. What will it do? It will look at this bunch of files and schedule them for consumption; it will consume and read these files one after another, but only if you subscribe to this table with a materialized view. The materialized view can do processing of this data with any kind of SELECT query and write the result to the destination table. So it is similar to how the Kafka or RabbitMQ table engines work, you use it in the same way, and it will constantly check whether there are new files on S3 and consume them.

It has a lot of tuning options. For example, it can keep the files as is: it will only track the state of which files were processed and which were not, and continue to consume new files. Or it can delete files after consumption. The state is stored inside ClickHouse Keeper, and you define a path inside ClickHouse Keeper for this state, which enables parallel and distributed consumption: you can create this table on all servers of your cluster, and they will split the files between them and consume different files without duplicates. There are two modes, unordered and ordered. With ordered mode it tracks the maximum processed file name and processes only the files that are greater, say lexicographically greater, than the latest processed file; for example, if files contain a timestamp in their name. With unordered mode it tracks the whole set of processed files, regardless of how big it is. So it is quite nice, and I want to say this feature is very fresh, it is experimental; it will be included in today's release, 23.8, but at the same time my colleagues are preparing patches for this feature, so it will be production-ready today, hopefully before midnight. If not... no, it will be ready before midnight, my colleagues promised that.
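Since the engine is brand new and experimental, treat the following as a sketch of the shape described above rather than a reference: the bucket URL, column names, and Keeper path are placeholders, the setting names should be checked against the documentation, and as an experimental feature it may first need to be enabled with the corresponding allow_experimental setting.

    -- the queue: watches the bucket and schedules new files for consumption
    CREATE TABLE events_queue (ts DateTime, message String)
    ENGINE = S3Queue('https://mybucket.s3.amazonaws.com/data/*.json.gz', 'JSONEachRow')
    SETTINGS mode = 'unordered', keeper_path = '/clickhouse/s3queue/events';

    -- an ordinary destination table
    CREATE TABLE events (ts DateTime, message String)
    ENGINE = MergeTree ORDER BY ts;

    -- the materialized view is what actually drives consumption, as explained above
    CREATE MATERIALIZED VIEW events_consumer TO events
    AS SELECT ts, message FROM events_queue;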
Okay, what do we have for a bonus? Something even less usual. What is the most unusual way to run ClickHouse? Let's take a look at different CPU architectures. We have x86_64: it is a boring architecture, it is so old, and sometimes inefficient. There is ARM, or AArch64: until recently it was mostly for mobile phones, but now there are servers for ARM, and ClickHouse runs in production on this architecture. There is RISC-V 64, even newer, mostly used for small embedded devices that are even less powerful than mobile phones, like smart cameras or drones probably, and you can put ClickHouse there and it will work. There is PowerPC, with multiple different options. But what is new: now we have a cross-compiled build for big iron, big like on this picture. Imagine how powerful ClickHouse will be on this mainframe. The support was contributed by that big company, but the cross-compilation build was added by my colleague Yakov, and we had to add it because we received so many pull requests for support of this architecture that we started to ask how we are going to test this build and ensure it does not get broken, and the only way is to include it in the automated builds. I'm not sure if you need this feature, but I just want to brag: now ClickHouse works on s390x.

What is the opposite of big mainframes? The opposite is serverless. If you hear that serverless ClickHouse has launched on AWS Lambda, what will you think? Probably you will think something different, but you know, AWS has wonderful customer support, including solution architects, and I was discussing one idea with one of these solution architects, and this idea received some enthusiasm, so it was tested. Okay, let me explain. The idea is as follows. Suppose you have a data set, or multiple files, inside your S3 buckets, say CSV, TSV, Parquet, ORC, JSON, just a bunch of files: data lake, data mesh, data mess, whatever. And you don't want to just download these files, you want to query them directly. This is easy: install clickhouse-local and query them; or install clickhouse-server and query this data in a distributed fashion; or start using ClickHouse Cloud, get a cluster, and also query this data on a distributed cluster. It is easy. But there is one option to make it even more seamless, even more elegant: you create a Lambda function, you create a special URL endpoint for this Lambda function, and you query this URL endpoint with the bucket and object name, and you also POST a SELECT query. This query is processed by ClickHouse located basically nowhere, or maybe somewhere, or you don't care; it exists, but it is serverless. And out of nowhere your data is processed by this query, and your bucket, instead of just a bunch of files, becomes like a database. It is not a distributed query: it uses just a single Lambda invocation, the resources of a single Lambda. It is similar to another feature from AWS that already exists, S3 Select, but with this feature you get the full ClickHouse query language, all the ClickHouse power, and maybe it is faster than S3 Select; most likely, because ClickHouse is such a nice technology. Here is a repository; I advise you to take a look and try it. Let's look at a demo. You don't have to install anything to use this feature: you can query it from curl, you can query it from Postman. It looks like a real SQL UI; it is not a SQL UI, just a simple tool, and with just a few clicks we write a query and it works.

Okay, another feature, one of my favorites, is about dashboards in ClickHouse. You know, I'm a ClickHouse developer, I'm a backend, server-side developer, I like bytes and bits. What about JavaScript? When I pronounce "JavaScript" my mood goes down, so I will not pronounce JavaScript frequently.
But nevertheless, on weekends I'm actually a front-end developer, and I developed this advanced dashboard. Let's take a look, here it is. The first thing you might notice is that this dashboard is absolutely gorgeous. It is so nice that when I'm looking at it I want to, I don't know, lick it, it is so nice. It is fast, it even has a dark theme. But what is new in this dashboard? First of all, mass editing: you press this button and it gives you the full configuration; you can copy it, paste it, and edit the dashboard accordingly. It will not save it for you, you will have to copy and paste. Another feature: you can maximize a chart, and this is new in the release. And now you can even drag these charts; see how beautiful it is. It is just amazing: I can drag this chart and play with it for hours, I can forget about the workload on this server... okay, that's enough, I like it so much. Let's continue, because I could play for such a long time and we have only ten minutes.

What about something new in ClickHouse Cloud? The first item is the most interesting: a new table engine, SharedMergeTree. This is a table engine designed specifically for ClickHouse Cloud, specifically for clusters with shared storage that can scale dynamically: say, this minute you have a cluster with three replicas, one minute later you have a cluster with ten replicas, and it should change the configuration quickly and easily. SharedMergeTree is a special table engine with separated metadata, and actually it is a simplification: in contrast to ReplicatedMergeTree it provides better scalability, better performance of inserts and merges, it allows us to save costs in ClickHouse Cloud, and consequently it allows us to offer a better price to you, so ClickHouse Cloud will be the best service, basically compared to everything else. It also gives faster server startup, and it even reduces complexity. The only thing to note is that this is a special kind of feature that is available exclusively in ClickHouse Cloud and some partner cloud providers, so don't expect it to be in open source anytime soon, and I appreciate your understanding. If you use a self-managed setup, that is fine most of the time, because typically you just provision your servers and use them with a specified sizing, not with, say, dynamic scaling over shared storage and full separation of metadata.

What else? Now the MySQL protocol is also available in ClickHouse Cloud. It is not exclusive to ClickHouse Cloud: ClickHouse Server has the MySQL protocol among others, and actually, because ClickHouse is a polyglot database, it has a RESTful interface, the native protocol, gRPC, MySQL and PostgreSQL protocols, and even ODBC and JDBC drivers and a lot of other drivers. The question is: what do you want us to implement next? Maybe ADBC, not to be confused with ODBC. Maybe you want the MongoDB protocol, or a Redis interface, why not? Tell me and we will try to implement it.

Okay, what about integrations? Now we have the official connector for Power BI. Power BI is a business intelligence system for Windows. It is nice, but it has some, let's say, disadvantages: for example, you connect it to a data source and it tries to fetch all the data, and if you connect it to ClickHouse with petabytes of data, Power BI will often just choke. But with the official connector you switch to direct queries and everything works nicely: it constructs queries for you, it does not try to load the whole data set, it uses the power of ClickHouse.
And what to read on our blog, and what to watch on our YouTube channel? For example, the integration of ClickHouse and Hugging Face: on Hugging Face, you know, there are a lot of data sets for AI, and typically you want to use ClickHouse for data analysis and data preparation, and it is easy. What about interesting case studies? Klaviyo, or Clavio, or Claudio, how do you pronounce it? Okay, it does not matter, it does not matter at all, as long as they are using ClickHouse, and they do use ClickHouse and have a lot of benefits. What about Instacart? There are many, many companies starting with "insta": Instacart, Instabug, Instana, Instagram, and most of them are using ClickHouse. I'm not sure about Instagram, but I'm sure about everything else. Message Board, One Gauge, they also use ClickHouse, and they share how nice it actually is. Okay, and there are a few technical articles about the internals of asynchronous inserts. If you prefer video content, there is a channel; this live stream will also be published on that channel, so subscribe and share with friends. And I'm ready to answer your questions. We have just five minutes.

Okay, I think we're going to have enough time with five minutes, a fabulous amount of time. So, lots of questions; well, not a huge number of questions this week, but some pretty good ones. Nikolai has asked a few, which I think you've addressed; one is about open-sourcing of SharedMergeTree. I think you've covered that pretty well: for now it's just in Cloud, it won't be in open source. Alexey, do you want to add anything to that?

Yes, it is currently exclusive to ClickHouse Cloud; currently we are not going to open-source it. But nevertheless, we are collecting your feedback about possible applications, because potentially there could be more applications of this feature than we see today, and maybe there could be something even bigger than this particular feature. But specifically for now, it is just proprietary.

Okay. We have a question with regard to something I think we could document better, actually: what about compatibility between ClickHouse and ClickHouse Keeper? If someone is upgrading ClickHouse every single month, should they be keeping ClickHouse Keeper up to date? What are our compatibility guarantees between versions of ClickHouse Keeper and ClickHouse?

Basically there is one hundred percent forward and backward compatibility with Keeper at all times. Just install whatever the latest version is and forget about it, and it will continue to work for years. ClickHouse Keeper has, I would say, a much smaller surface, so sometimes it receives some new features that could be used to optimize some internals of ClickHouse, but compatibility is always guaranteed, and ClickHouse Keeper is smaller, simpler, and absolutely reliable; no problem if you install it and, say, forget about it for five years.

Yeah, I think if we had any breaking changes there we would let you know. And obviously you get the improvements that we make; ClickHouse Keeper is continuously undergoing improvement, and I know the team was talking today about making further performance improvements, so you always benefit if you do upgrade.

The next question is about support for unique constraints: any plans regarding this?

Yeah, we listed this feature in the roadmap for this year, but as I see it, most likely we will not have it this year; we will have to postpone it, and it will be included in the next year's roadmap. It is not easy to implement, there are some tricks required, but for now, at least, it is included in the roadmap.

Okay, that's good, at least we're thinking about it. And a few things regarding the dashboard. This is a really quick, easy question: do the dashboards themselves have any kind of impact on ClickHouse itself in terms of storage, CPU, RAM, disk space? I know they're very, very lightweight, but maybe you just want to reiterate that.

By default ClickHouse collects some telemetry about itself and stores this telemetry locally inside ClickHouse. This is the ClickHouse way: everything that you do, like logs, metrics, traces, is stored in the same ClickHouse server. There are smaller tables like system.query_log, larger tables like system.text_log and system.metric_log, and another table, slightly larger, like system.asynchronous_metric_log. My recommendation: by default, don't worry about these tables at all. Even if you don't need them, just keep them, just in case; at least they contain some interesting data that you might need later. But if you run ClickHouse on a very constrained setup, say just tens of gigabytes of storage, very small disks, free tiers, sometimes you will find that these tables are large, like a few gigabytes, and you might want to disable something like system.asynchronous_metric_log. That is also normal: you will not get this data on the dashboard, but it will not use the disk space.
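If you want to check what those tables actually cost on your own server, a query like this against the standard system.parts table shows the on-disk size of everything in the system database:

    SELECT table, formatReadableSize(sum(bytes_on_disk)) AS size_on_disk
    FROM system.parts
    WHERE database = 'system' AND active
    GROUP BY table
    ORDER BY sum(bytes_on_disk) DESC;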
Okay, thanks Alexey. We have, and I know this is something that I think you've just rediscovered, or I asked you this once, what about faster Map support? At the moment we obviously read the whole map when we need to access a key. I know there were some plans around here; can you give us an update on that?

This is an interesting question, because we already tried at least one implementation, from an external contributor, to optimize maps with hash tables, but the results were paradoxical. The results were as follows: when you have a small map, it doesn't help, it makes things even worse. When you have a large map, say at least 10,000 entries, these large maps become slow on their own, because they are large values inside tables, and sometimes large values are not good for various reasons: you have to read a bunch of records, and this block takes a lot of memory, more CPU, more memory bandwidth and memory pressure to process. And we found out that even if specific indexing of map values speeds this up a little, the scenario where it is beneficial is not good at all. So this feature is not merged and not implemented. But we can still think about it from a different perspective: we should not take the map as is and add an index into the value; instead, we should implement a different data type that will automatically split maps into different values, into different streams of data on disk. This feature is in active development; you can check it, it is named sharded maps.

Okay, that's great. There are a few other questions; unfortunately I don't think we have time to answer everything. Just a shout-out to Ramazan, who has actually made some good suggestions: we had suggestions for Cypher support, and for GraphQL as an alternative to MySQL; probably not going to do that anytime soon, but we'd welcome a community PR. And also a few things around a comment that it would have been nice to support plus as a concatenation operator, as a more intuitive way for people to concatenate strings, but I appreciate it's probably already overloaded elsewhere.
Okay, first about Cypher and GraphQL. Hmm, maybe we want to implement them, but I don't want an overflow of query languages; that would just be too many. Do you remember a database named ArangoDB?

Yeah, I do actually; I have a friend who was looking at it recently.

And yeah, sometimes when you do too many things, it is as if you do nothing, nothing actually well. Okay, and the second question was about... what was the second question?

The second question was regarding concatenation operators, but it's a double bar, not a plus.

Yeah, it is controversial, because when you have this double bar it explicitly says that this is concatenation. That's why we can use this concatenation operator not only for strings but also for tuples, and it will not be ambiguous with the case when you want an element-wise operation on tuples. Also, SQL is originally a weakly typed language, so you can easily apply the plus operator to add two strings that actually contain numbers. Not every database does it, but, say, SQLite does, and it is fully standard compliant. So if we used plus for concatenation, it would introduce just a bit of confusion. I don't like confusion, so I'm trying to be quite conservative in introducing overloads, syntax sugar, whatever.

Okay, that's probably it, unfortunately. Thanks to everyone for listening. I don't know, Nick, have you got some final words if you're going to close it out? But thanks everyone for visiting, thanks Alexey for your time and for answering these questions; we ran a little bit over, but I think there were some great answers.

Ah, I forgot to announce: on September 7th, September 7th, yes, we will have a special event named the ClickHouse Cloud webinar, or ClickHouse Cloud community call, whatever. If you want to know about ClickHouse Cloud, if you want to ask provocative questions, unusual questions, special questions about cost efficiency, about pricing, about features, about "do you want to, say, move your focus entirely to ClickHouse Cloud now" (I don't suggest you ask this question), then I advise you to join our webinar on September 7th. Dale, do we have a link? I'm sure you can come up with one quickly: if you go to our home page and go to events, it should be there.

Okay, just type clickhouse.com and you will find it.

Yeah, it's under company, then news and events, so I'll put it in the chat. Okay, thank you, thanks everyone, see you in just a week.
Info
Channel: ClickHouse
Views: 3,754
Id: d1_pyoWcydk
Length: 67min 49sec (4069 seconds)
Published: Thu Aug 31 2023