Live Coding: Uncovering Geologic Layers with Wolfram Language

Captions
My name is Dan, and today we're going to be looking at a live coding case study: uncovering the geologic layers of South Asia with Wolfram Language.

We'll be following a fairly random problem that I thought up out of the blue. I wanted it to be something I'm not completely comfortable with, and I wanted to study it in a real-world sense — to analyze the data the way a data scientist would. Whenever you see one of our webinars, typically we've looked at the data beforehand, written some code, rewritten it, checked it, made it a bit more efficient, and you see the final result. I wanted a webinar that's a bit messy, where some of the code may be wrong, and we see where we go. I'm seeing a question asking why I chose this data set: basically, as I've said, because I don't know much about geology and I thought it would be a challenge.

So, getting right into it: these are the steps a data scientist might follow to solve a problem. It might be company data, or data you've obtained yourself — perhaps as a geologist you've taken rock samples from a volcano and you want to find out what age they are, or what impact they had in some era. First, you clearly define the question you want to answer; in that case it might be "from what geological period do these rocks come?" Then you wrangle the data: you clean it up and make sure it's in the right format for the tools you're using — in our case Wolfram Language, so it wants to be something like lists or associations. Then you explore the data, typically just to get a sense of what's actually there: how large is the data set, are there bad or missing values, what does it look like, can we plot it? Once you've done any further cleanup and your exploration, it's down to analysis — this is where you actually learn about the data. You might produce a diagram of the globe plotting where your rock sample has typically appeared, or a historical geological timeline. Finally, you communicate your results. Maybe you're part of a research team and your team leader wants to know all about this rock, so you need to communicate it to them — a presentation, a report, or simply you talking it through, with some document to reference. That step comes at the end.

So without further ado, I'm going to turn off my camera to give you a bit more room to see my notebook, and I'll get right into it. Excellent. So what is this data we're looking at today? It comes from the USGS website — the United States Geological Survey.
Again, this was chosen fairly at random, and the particular data I'm looking at today can be found here. Funnily enough, they seem to have a maintenance notice, so you may or may not be able to download the data for yourself if you want to follow along. As part of this being a real-world introduction to the data science process, I'd like to show you how I actually got the data. It turned out to be really, really simple, but that's not always the case.

To begin the project, I first went to the website and read a little about the data. As I've already said, I knew nothing about geology, so I had to read up: apparently it contains the digital geologic layer for the map of South Asia. I didn't really know what that meant, but the data includes arcs, polygons, labels, and attributes for faults and rivers — quite interesting. After having done this analysis I put that sentence in, so that you can learn more about these data types yourselves, but before the analysis I didn't know what they were. Obviously I know about JPEG and XML, but SHP, DBF, PRJ, SHX — I'd never seen them before, so that was quite fun. The goal of the project is simply: what can we learn from the data? That's the question I'd like to answer today. I'm certainly not going to learn everything there is to know about this data — I'm going to do a fairly restricted analysis on just part of it — so I'd highly encourage you to take a look at the data yourself after this webinar, and I may post some questions afterwards for you to have a go at.

The next step is to wrangle, and this is how I got the data: I simply clicked Download All — as simple as that — and luckily for me there was no cleanup required, so we're already breezing through the steps in our data science pipeline. Normally you'll find that actually getting the data can be quite tricky — it's quite common to have to download it in an unwanted format — but in this case, as it was part of my analysis, I was quite happy to take what I could get. Seeing these file types, I thought: let's just have a go and see what we can do with them. So all I did was click Download All, unzip, and then my first step in wrangling was to check the contents. This was a somewhat laborious process, because I decided to go through every file to see what it contained. As I've said, I didn't know what an SHP file was. I could have done an online search — I could have gone to Wikipedia, and in the end I actually did — but first of all, it's always helpful to ask an LLM.

Now, you may be wondering where I got this particular cell from. As of version 13.3 you can go to File > New > Chat-Enabled Notebook, which brings up a notebook that lets you write chat queries. This requires an API key, and it will cost you money — pennies per token, that sort of thing. I've got a work account, so that's all sorted, but just to warn you: it will cost you if you try this yourself. What I did was take the style sheet from that chat notebook.
So I went to Edit Stylesheet — the style sheet for the chat notebook comes from this chatbook notebook here — and I simply put that into the style sheet of this presentation notebook, which enabled me to insert my own chat cells. This particular cell is a side chat cell, which means it won't take into account any of the content above it; it's completely standalone. And it told me that an SHP file, or shapefile — useful to know — is for geospatial vector data. Interesting. So I learned a bit more about it, but to be honest I tried to keep myself from getting too deep into the details, because I wanted to be somewhat naive in my analysis and see where it took me.

As I begin with the code here, I'm seeing a question asking whether there are any other useful file system functions. Good question. My first step in almost any analysis is to locate where I am, and to put any files I've downloaded in a location I know about. So I find my notebook directory — NotebookDirectory is a very useful function — and I've made an "additional files" folder, which means I can put files like this SHP and SHX stuff in there, out of the way. You then use FileNameJoin on the directory you're in to take yourself to that additional files folder. The reason you use FileNameJoin is that you want your file delimiters to be operating system independent: you'll notice I have backslashes here, whereas if you were to evaluate this on a Mac you'd get forward slashes. By using FileNameJoin in and out, you stay OS independent.

There are a couple of guide pages in the documentation that are very useful here. The main one is probably the File Operations guide; possibly more useful still is Directories and Directory Operations, which helps you locate where you want to be, whereas File Operations is more about finding the files themselves. File Name Operations is also very, very useful — I find myself splitting up file names all the time: dropping the file extension, for example, or using FileBaseName to get just the name itself. All very handy stuff. We'll be posting these links somewhere for you, but essentially you can just search the documentation for "operations on file names" or "directories" and it'll take you there.
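A minimal sketch of that directory setup — "additional files" is just the folder name I used here, and the shapefile name is illustrative; adjust to your own layout:

    (* point the session at a subfolder sitting next to this notebook *)
    dir = FileNameJoin[{NotebookDirectory[], "additional files"}];
    SetDirectory[dir];

    (* FileNameJoin keeps paths OS independent *)
    FileNameJoin[{"data", "g08alg.shp"}]
    (* "data\\g08alg.shp" on Windows, "data/g08alg.shp" on macOS/Linux *)

    (* handy file name operations *)
    FileBaseName["g08alg.shp"]     (* "g08alg" *)
    FileExtension["g08alg.shp"]    (* "shp" *)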
Excellent. So, as I said, I set my directory — Mathematica is now pointing at that folder — and now I'm going to learn a bit about the files. I looked first at the shape file, and I can see a load of stuff, far too much for me to take in, though the data element interests me in particular, because I'll probably be using that. What I probably should have done first of all is look at the summary. Getting the file summary is incredibly useful whenever you've got a large file, because it tells you how big it is — and it does not import the whole file and then find its size; it checks the file size first. That's really helpful when you're dealing with large data, because it saves you from accidentally importing something that's several gigabytes. In this case it's only four megabytes, so not too much to deal with.

What I did was dig down a little into the shape file, and I found that it's comprised of Lines of GeoPositions. This is great — already I'm seeing why it's called a shape file: I've got a bunch of geo positions, and I would hope they were forming polygons, but each seems to be just a partial edge. Perhaps that's the "arc" the description on the web page was referring to.

Then I went through the rest of the files. I didn't know what SHX was, but it didn't really matter, because it's actually not supported in Mathematica — and that's something you may well find happens. We do support an awful lot of different data types, but you never know; it might not be there, and that's worth knowing. I could have looked up alternate file types, but as it happens, when I asked my LLM what the difference was, it turns out the SHX holds essentially the same data arranged for faster retrieval — faster queries, basically. In a way that's a downside for this webinar, because we'll have to go with sluggish data retrieval, but it's not a big deal: we know we can get the data from the SHP file anyway. The DBF file turned out to be a database, as I've already mentioned, and as we'll see later, it actually has some useful stuff in it — in particular the labels, one of which relates to geological layers, so that turned out to be quite helpful. PRJ is just a text file, so I won't really go into that; it didn't turn out to be very useful.

Then I moved on to the second folder. I probably should have renamed these, to be honest — if I scroll back up, this is probably a bit confusing on the eyes: we have g08alg and g08apg, and why they're called that, I don't know. Looking in the shape file for that one, I found it tended to contain things like polygons, as opposed to simply arcs. This was where I thought: great, I'm actually receiving some useful data here. I hadn't totally given up on the arcs, but I wasn't convinced they were going to give me contours, which is what I really want — some sort of contour around each geological layer. We'll see what the data comes up with. The SHX, as I've already said, isn't supported, but in the DBF I found some potentially useful knowledge. I didn't know what these values referred to — I suppose I could have looked in the elements file, and that may have helped — but this is where I realized I'd struck gold, because I recognized some of them. Well, I say recognized: H2O makes sense, although there seems to be a duplicate there, so we might have to deal with that. Given that we're in the context of geological layers, some of these looked promising.
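Pulling those inspection steps into one place — a sketch, assuming a local copy of the files (the file name is illustrative, and "Summary" and "Elements" are general Import elements in recent versions):

    FileSize["g08alg.shp"]               (* size on disk, no import needed *)
    Import["g08alg.shp", "Summary"]      (* quick overview without loading everything *)
    Import["g08alg.shp", "Elements"]     (* what's available: "Data", "Graphics", ... *)
    shp = Import["g08alg.shp", "Data"];  (* import just the data element *)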
I thought perhaps they might refer to things like the Cretaceous or the Jurassic — we'll find out more about that later on. Finally, there were just two extra additional files; the most useful one told me what I was actually going to obtain from the data in the end, which was some sort of graphic of India and Sri Lanka, and something about the geology.

So that's the slow, boring bit, where I had to slog through the data and learn more about it. Now let's get on to the interesting part: the exploration. Obviously I have to go back into these files again, but this time I'm going to try to do a bit more with them. I imported my shape file again, looked at the elements, and had a little look through them. I found, for example, that the majority of the content was mostly related to the map projection — all of these components here. Most of this stuff ended up being things like "GCS_WGS_1984", which, as I found out through my research, relates to a coordinate system, and it had a reference model in there. I already knew the radius of the Earth was something like that, so I did a double check. Let's evaluate "earth radius" in one of these cells — I don't know what you call them, actually; let's have a look... they are "linguistic assistant" template cells, supposedly. I just pressed Ctrl+Shift+E there, by the way, which let me see inside the cell. These cells let you make a natural language query that outputs code: I typed "earth radius", hit Enter, it gave me code, and I accepted that interpretation. Evaluating it gives this value — but not in the units I want, so why don't we convert it? I can use UnitConvert on the answer — I'm going to be lazy and just use the percent symbol, which simply means the last output — and I want it in meters, because that's what these other values are in. And as you can see: 6.371 × 10^6 — 6.3-something × 10^6.
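Roughly what that check looks like as code — PlanetData stands in for the natural language cell here, and the property name is my assumption:

    r = PlanetData["Earth", "Radius"]   (* a Quantity; property name assumed *)
    UnitConvert[r, "Meters"]            (* ≈ 6.371*10^6 m *)
    UnitConvert[%, "Kilometers"]        (* % refers to the last output *)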
These values are probably the different radii of the Earth — it's not completely spherical, it's an oblate spheroid — so this is probably the Earth at its minimum point and at its maximum point. That was quite interesting, but what really caught my eye was the graphics element. As I'm evaluating this for the first time in this notebook, it's going to download the geo tiles. That always happens when you plot something geographic for the first time, and if you change the geo range it has to download new tiles, because it downloads different tiles at different resolutions. Anyway, this was really exciting, because I've now got what I suspected from the first folder — the g08alg one, I think. This is the line data I was talking about, and I got quite excited, because it looks like contours to me, which is fantastic: contours are exactly what I want. Perhaps if I can do a one-to-one mapping between these contours and those acronyms that I suspected were geological layers, then maybe I can produce some kind of plot.

What I want to do now is reproduce this properly. All I've done here is extract the graphics component from the files I imported; that's not quite what I want. I want to produce it myself, so that I can tweak it — change the color scheme, remove some of the contours, whatever. So I thought: why don't I import the actual data component, since presumably all the geological data is within that, and then see if I can convert that data into that graphic?

Now, this was quite a lot of data, and I had to use Shallow around it. The form I've written here is slightly unusual, so I'm going to rewrite it. If you're not aware of what this means: this is a pure function, with a slot (#) and an ampersand (&) — if you've not used those before, it might be slightly confusing. I'm also using the double forward slash (//), postfix evaluation, which again you may not have seen. I can write exactly the same code the ordinary way and it does the same thing; this was just more convenient at the time — I was being a bit lazy, really. Shallow is similar to Short; I'd recommend checking out both.

What it tells me is that we have some further components: we've dug down into the shape file's data component and found a layer name, geometry, labels, and label data. Clearly the geometry one has all the good stuff, because it's absolutely massive. The downside is that this data is not quite in the format I'd like: it's rule data — it's got these Rules, as we call them, which you make with ->, and behind the scenes that is Rule of the two things. Rule data is good because it lets you use functions like Lookup, or Keys — this is great — but it's not quite ideal. What I'd really like is a combination of lists and associations, primarily associations.
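The notation just mentioned, plus a preview of the association behavior that matters in the next step, on toy input (expr is a placeholder):

    (* pure function: slot # plus ampersand & ; // is postfix application *)
    Shallow[#] &[expr]     (* same as Shallow[expr] *)
    expr // Shallow        (* also the same as Shallow[expr] *)

    (* rules: -> is shorthand for Rule *)
    Rule["a", 1]                        (* displays as "a" -> 1 *)
    Lookup[{"a" -> 1, "b" -> 2}, "b"]   (* 2 — Lookup works on rule lists too *)

    (* Association keeps the LAST value for a duplicated key *)
    Association[{"a" -> 1, "a" -> 2}]   (* <|"a" -> 2|> *)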
I'm noticing a question in the chat asking about the ideal data structure. Associations let you do key lookups in a cleaner fashion, and there's a whole suite of functions designed to work with them. If we go to the documentation and scroll down, we'll find some of them: Dataset, of course, which is a wrapper for a combination of lists and associations; Keys; Values; KeyValuePattern — all really useful stuff. So I wanted to restructure this as an association of the rule data. The rule data is already done for me, so I literally just removed the first list — it's doubly nested in lists and doesn't need to be — and applied Association to that. Now, Association will delete duplicated keys: if you do Association of, say, "a" -> 1, "a" -> 2, it uses the latter one — that takes precedence and wipes out the first — so that's something to be aware of. But I can see we have unique keys here, so it's not a problem. So I'm cleaning up my data, and that was easy enough.

Now let's look at the label data component. Remember, we've already drilled down to our data, and now we're looking at the label data aspect of it. This had a bunch of keys I wasn't 100% sure about. FNODE, TNODE — presumably vertices in something; I can guess what "poly" means, possibly left polygon and right polygon, but I wasn't sure. So again I asked an LLM, and in this case it knew. Unless it's hallucinating and completely making it up — which is a perfectly reasonable possibility — these are probably fairly common keys in geological data, and it tells me that FNODE and TNODE represent the start and end nodes of a line or arc. Good to know: I could probably combine those with the arc data somehow, and that might enable me to construct my contours. Good news.

Moving on, what I really wanted to dig into was the geometry, and this, as it turned out, was an absolutely huge amount of data. We've already seen from the Shallow output that it's nearly ten thousand items long, and if we dig down and look at the first three items, these are not small objects — each holds maybe five or six pieces of information — so we've probably got 50,000 data points in here, more or less. That's why I decided to take only the first few items, just to have a peek, and it's something I'd definitely recommend: I would generally never import something and immediately look at the whole thing at once. Always take a sample — you could even use RandomSample and take a random sample of one, just to get some sense of what's in there. It doesn't guarantee anything: I might do this random sample and find it's always Line and GeoPosition when there's other stuff in there too. So I should also check the dimensions — and according to Dimensions, this is one flat list.
Now, this is not guaranteed: Dimensions works with rectangular data, not ragged data. If I do Dimensions[{{1, 2}, {3, 4}}], it says it's a 2×2 matrix — spot on — but if I remove an element and make it a ragged array, it no longer recognizes the structure; it's essentially indistinguishable from a flat list. That's just something to be aware of. There's also a function called ArrayDepth; I'm not sure whether that also requires the data to be non-ragged — worth checking out. Anyway, by this point I was pretty confident the data was one long flat list, so I wasn't really going to be able to do much with it. I realize now that I could have used the FNODE and TNODE data to connect those arcs up somehow, but at the time I was thinking: actually, I'm a bit stuffed with this data; I should probably look at the polygon data. Nevertheless, I did a last double check to see if there was anything but line data in there — nope, one flat list.

Here I'm using a replacement at level one, which is just a fast way of doing it. I could equally well have used ReplaceAll, with no level specification, but generally speaking, if you have heavily nested data, that's going to be a bit slow; if you only want to replace at one level, I'd recommend Replace with a level specification.

As it says here, I decided this data wasn't particularly useful to me, but nevertheless I plotted it: every arc in a separate random color, using MapThread to thread a line thickness over all the lines. As you can see — and as I thought at the time — it's fairly arbitrary where one arc ends and the next begins, so I rather gave up on the line data at that point. Not that I'd ever give up on it completely, but I wanted to move on and learn more about the polygons I had seen.

I'm seeing a question about how best to use MapThread. MapThread is basically a combination of Map and Thread — it's in the name. I won't go into too much detail, but essentially Map applies one function to a list of things. (That took a little while to evaluate — I suspect this image is very large and causing my front end to lag, so I'm just going to get rid of it.) Thread, meanwhile, is ideal for joining lists together. I typically use it to zip lists — one with four, two with five, three with six — and for quite a long time I thought that was the only way to use it. As it happens, that's just one aspect of Thread: the List head you see there is actually a function, and you can replace it with something else — you can thread any function over the zipped data. MapThread does pretty much the same thing: it threads the lists together and then maps the function across them. That's basically how it works. Incidentally, there's also Through, which is kind of like the opposite of Map: Map is one function applied to many things, whereas Through is many functions applied to one thing.
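Those four on toy input:

    Map[f, {1, 2, 3}]                     (* {f[1], f[2], f[3]} — one function, many things *)
    Thread[{{1, 2, 3}, {4, 5, 6}}]        (* {{1, 4}, {2, 5}, {3, 6}} — the zip *)
    Thread[f[{1, 2, 3}, {4, 5, 6}]]       (* {f[1, 4], f[2, 5], f[3, 6]} — any head *)
    MapThread[f, {{1, 2, 3}, {4, 5, 6}}]  (* {f[1, 4], f[2, 5], f[3, 6]} — thread, then map *)
    Through[{f, g}[x]]                    (* {f[x], g[x]} — many functions, one thing *)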
Anyway, I did discover that I could plot that graphic I produced earlier all the same — there you go, this one — but by this point I'd sort of given up on that, so let's have a look at the polygon data. Now we're in the APG folder, the second folder, where I thought I'd actually find the good stuff: the actual polygons. Again I did a quick look into the data, and I noticed there was a graphics component, so once again I'll be able to see what it should look like once I've done a bit of digging. At first sight I was a bit disappointed, because it looks like it's simply produced one giant polygon of the whole region, which would not be very useful to me. However, if you double-click on the graphic, you discover that it's actually made up of lots of separate polygons — fantastic news; exactly what I want. So let's get digging.

I begin by importing just the data aspect of the APG shape file. It's pretty much the same as the previous one — it has the same keys — and this time there are only 3,000 elements in the geometry aspect. That tells me I'm on the right track: there might be 10,000 arcs but only 3,000 polygons, which kind of makes sense. I did pretty much the same restructuring as before, since the data has exactly the same format, and once again it was very nice, very clean data — which doesn't normally happen. I was very pleased with whoever set up this data set; I suppose that's what happens when it comes from the government.

The labels are different this time, and a bit more self-explanatory, but once again there were a couple I didn't quite recognize. I asked the LLM, but as it happened these were not particularly common: it understood what "area" and "perimeter" referred to, but not the rest. That's fine — nothing to worry about. I did a little more research, tried not to go too deep, and wanted to figure out the data a bit more myself. So I dug right in and pulled out just the first three elements. Again, I could have done something like RandomSample. I'd also recommend RandomChoice — and note the distinction: RandomChoice samples with replacement, whereas RandomSample samples without replacement. So with RandomChoice it's possible to get the same element more than once in your sample — very unlikely, but possible — and if you want to remove that possibility, RandomSample is the one to use.
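The distinction, demonstrated:

    SeedRandom[7];
    RandomChoice[{a, b, c}, 5]   (* with replacement: repeats possible, e.g. {b, a, b, c, b} *)
    RandomSample[{a, b, c}, 3]   (* without replacement: a permutation, never repeats *)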
When I looked at this geometry data, I saw Polygons of GeoPositions, so at first I thought: this is absolutely ideal, I'm finally getting somewhere and I can finally plot my data. As it turned out, that was not quite the case: it's not entirely made up of polygons; it's a mixture of Polygons and FilledCurves. CountsBy is an excellent way of determining the structure of fairly simple data: I counted by head across the data, and that's how I knew I had Polygons and FilledCurves. What I really wanted to do was unify the data — all polygons or all filled curves — because that makes future analysis that bit easier.

First I did a little inspection to see how many points made up the perimeter of each polygon, and by the looks of it the vast majority are fairly small. Of course, that doesn't guarantee the polygon is small — it could be very large but simple, and hence have few points — but it's suggestive that they're mostly small polygons. I did this with Cases, and I'm seeing a suggestion to elaborate on ways to extract things from data, so I will. I used Cases here because Cases is a way of narrowing down your data: I have all the geometry data, but I only want the points from within those GeoPositions. Now, I could map a function across the whole of the data — say Map First across my sample, which digs down inside the polygons and finds the GeoPositions, and then take the first of the first; probably cleaner to use the part specification {1, 1}. That would equally well dig down to the points, but for one thing the output is going to be huge — I'd be scrolling for days, with far too much data on screen — and for another it's going to be a bit slow. It's an equally valid way of doing it; I just happened to choose Cases. What Cases does is let you narrow down the data without having to deal with the entirety of it first — you narrow down before you display, if you will. We take only the cases of the form GeoPosition, replace each GeoPosition with just the points inside, and do that at every level, from 0 to Infinity.

I could have used something else instead of Cases: Replace, as you've seen earlier, very handy, or ReplaceAll, or indeed ReplacePart. Another favorite of mine is Select. Select is great as well, but where Cases works with patterns — those underscores indicate a pattern — Select works with what the documentation calls criteria, which really means functions. With Select you can do things like: select everything from a list that's greater than 2.5 — there I have a function instead of a pattern. So anyway, those are a couple of ways of extracting things from data.
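The same idioms on toy data:

    geom = {Polygon[GeoPosition[{{10., 75.}, {11., 76.}}]],
            Line[GeoPosition[{{12., 77.}}]]};

    CountsBy[geom, Head]       (* <|Polygon -> 1, Line -> 1|> *)

    (* Cases: pattern-based extraction at any level *)
    Cases[geom, GeoPosition[pts_] :> pts, {0, Infinity}]
    (* {{{10., 75.}, {11., 76.}}, {{12., 77.}}} *)

    (* Select: function-based filtering *)
    Select[{1, 2, 3, 4}, # > 2.5 &]    (* {3, 4} *)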
Anyway, I decided to plot the regions, just to see what I was really dealing with. I already knew I had some nice polygons in here, having clicked on them in the graphic output earlier, but this really showed that we were getting somewhere — it already feels like we're near the final result: we've managed to dig deep into our data and pull out the individual polygons. That said, I also wanted to get rid of these sea areas. I know this is India and Sri Lanka, and I wanted to drop the ocean. I did this in a rather funky way — not the best suggestion: I decided to delete polygons by area, which is a pretty terrible idea, really. But when you're running through this sort of analysis, these kinds of ideas come to you, and without thinking too carefully you can end up going down some rabbit hole and only later realize it was a terrible idea. So I'd recommend you give this problem a go yourself: remove these light blue and light grayish polygons, but by geo position instead. There's RegionWithin, there's RegionIntersection — you could use a function like that to say: if the polygon overlaps with a geo position here, or here, then remove it. It's not the quickest way, but it would probably work; a sketch follows.
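A rough sketch of that location-based cleanup, under assumptions: the polygons have been converted to plain {lat, lon} coordinates (RegionMember doesn't take GeoPosition wrappers directly, as far as I know), and oceanPts is a hypothetical list of points known to lie in the sea:

    (* True if the polygon contains any known ocean point *)
    oceanQ[poly_Polygon, oceanPts_List] := AnyTrue[oceanPts, RegionMember[poly, #] &]

    (* keep only polygons containing none of them; polys is your polygon list *)
    landPolys = Select[polys, ! oceanQ[#, oceanPts] &];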
Nevertheless, I didn't do that. I decided to remove the sea — or rather, extract the land regions — by finding the largest polygons. And to do this I needed to unify my data, as I've said: I have my Polygons and my FilledCurves, and I just want Polygons, because then I can use a function like TakeLargest or TakeLargestBy, and that only really works if everything is of the same form. I could brute-force it otherwise, but things would be a whole lot easier with just one or the other.

First of all, I wanted to see why some were Polygons and some were FilledCurves. I found the Cases of the geometry data that were FilledCurves, took a random sample of three, and used pretty much the same code as earlier to plot them each in a random color. At first it didn't click — I didn't quite know what was special about them — but after evaluating a couple of times I realized: all of these have holes in them, and that seems to be it. If we scroll down and plot all the Polygons, we'll see that none of them have holes; and if I go back up and evaluate all the FilledCurves, this should be the perfect complement. In fact, if I collapse the cells, you can see that the filled curves jigsaw-puzzle their way perfectly into the polygons.

So then I had a predicament: what do I do about these holes? How do I turn filled curves into polygons? My first idea was yet another terrible one: simply convert them — take every FilledCurve and mash it into a Polygon. The way I did that was with a replacement: take the GeoPositions inside each FilledCurve I'd selected and just wrap a Polygon around them. FilledCurves, it turns out, tend to be a collection of GeoPositions: one collection for each hole, and one for the entirety of the outer shape. So I was wrapping Polygon around the whole lot, and what that does is simply plot all of the polygons on top of each other. I did a couple of tests — we have multiple lists of points, and everything seemed to work fine. As always, I tested on just a small number of things, two filled curves at a time, and that seemed fine, so then I ran it on all of the regions. Of course, anything that's already a Polygon remains a Polygon, so this ought to work flawlessly, in theory. I did a double check that at the very least everything is a Polygon now — or at least has an outer wrapper of Polygon. That doesn't mean it necessarily is a polygon: it could be Polygon of nonsense and it would still register as having head Polygon. But I was pretty confident it had worked, so I took the largest two polygons by area — they look correct, and the areas certainly look rather large — and plotted them with the same GeoGraphics code as before, using MapThread.

And this looks great — this is exactly what I want, except for these holes. They're duplicated: these are layers of polygons now, which is not really ideal; I want a perfect jigsaw puzzle that's only one level deep. Funnily enough, that's sort of analogous to having layers of Earth — we've got our own geological plot here — but never mind.

The second approach, once I realized my blunder, was to drop the holes completely. I went into the FilledCurve documentation and found this nice little fact about how they work: they construct a curve, then a hole within that; you fill the outer curve and don't fill the hole, then you fill any island within that, and don't fill any lake within that island, and so on — it's alternating. So in theory, if I can create an alternating sum, I can apply it to my geo positions: here are the positions for the outer part, drop the ones for the inner part, add the ones for the part inside that, then drop, and so on. I kind of thought this would work. I'm not 100% sure it's the right approach, to be honest — it might have been better to simply find the outermost curves, the ones with the largest cross-section, and just drop everything within. But oh well, this is what I did; that's part of this webinar — it's all a bit hacky.

I wanted this function to work fairly quickly, because I've got a lot of filled curves — a couple of thousand of them — and it was potentially going to be a lot of data, and I also wanted something that would work in general. So I decided to create a handful of alternating-sum functions.
Now, I realized once I had written this webinar that these aren't even all right, and I'm seeing a comment asking me to explain how these functions work in more detail, which I'll do very happily, because I know at least two of them don't even work as I wanted them to. What they should do is take a list and alternate between adding and subtracting. They appear to work perfectly — but if I add some duplicates, you'll see where it all falls apart. Numbers two and three seem to be doing the right thing, but number one is clearly just completely wrong; and if we add another "a", none of them match. What it ought to be, with {a, b, a}, is a - b + a — minus, plus, minus by position — and with another duplicate the signs still just alternate by position. Checking against that, number two is the one that's correct.

So what's wrong with numbers one and three? Where did I go wrong? My first mistake was using FirstPosition. I'm multiplying by a power of minus one to do my plusing and minusing — that's absolutely fine, a good way of doing it — and I'm using Sum, which makes perfect sense. The problem was my attempt to locate each item in the list using FirstPosition, which, of course, only finds the first position. In this list, it finds the first "a", notes that it's in position one, and raises to the power of one — but when it reaches the second "a", it also raises to the power of one, and the same for the third. Duplicates never alternate, so that's just plain wrong. The second version, I believe, is correct: I assign each item in the list a unique index and then raise minus one to the power of that index. So for a list {a, b, c}, it's effectively a·(-1)² + b·(-1)³ + c·(-1)⁴ — the signs alternate by position, regardless of duplicates. The third one went wrong because of what I mentioned earlier about associations always keeping just the last occurrence of a key: AssociationThread does a normal Thread and then converts to an association, so with any duplicates we lose all the former ones. Clearly that's just going to go wrong, and it does.

I did some timing tests. They're not really so relevant at this point — a bit of a pointless exercise, since we know two of the three are doing the wrong thing — but as it happens, number two is actually doing pretty well, and that's the one we want. What I decided to do in the end was a quick check online to see if anybody else had written a similar function, and luckily for me, some smarty-pants on Stack Exchange had done it already — as an extension of the Fold function. Where I was simply plusing and minusing through a list, they were folding in any functions you like, whether f and g or, in this case, Subtract and Plus. So I decided to go down that route, and I actually ended up making my own resource function out of it — this ended up going way beyond what really should have happened.
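Here's a corrected sketch of both ideas — the index-based alternating sum (what variant two does) and a fold over a rotating list of functions, in the spirit of the Stack Exchange approach. The names are mine, not the published resource function:

    (* alternating sum a - b + c - ..., robust to duplicates *)
    alternatingSum[list_List] := Sum[(-1)^(i + 1) list[[i]], {i, Length[list]}]

    alternatingSum[{a, b, a}]   (* a - b + a — duplicates handled correctly *)

    (* fold a rotating list of functions across a list, pairing each item with its index *)
    foldRotate[fns_List, init_, list_List] :=
      Fold[fns[[Mod[#2[[1]] - 1, Length[fns]] + 1]][#1, #2[[2]]] &,
        init, Transpose[{Range[Length[list]], list}]]

    foldRotate[{Plus, Subtract}, 0, {a, b, c}]   (* ((0 + a) - b) + c *)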
Incidentally, I can give you a brief overview of how to actually create your own resource function, just in case you're interested. You go to the Wolfram Function Repository — this website here; we'll post that in chat as well — and click "Submit a new function", which downloads a blank notebook. You fill it out, using the built-in templates to help you: you highlight a word, click the button, and it does what it says it's going to do. Once you've filled it out, you run through various checks and then submit to the repository. If I were to submit this one, it would probably give me suggestions — add a full stop here, add this there, that sort of thing — it's trying to help you all the way. Then, once it gets verified by the people at Wolfram, your function is published.

Anyway, I submitted my function, and — ah, it seems I never actually evaluated the test data, which is why those cells were so rapid. If we rerun it, we can see that the first version was wrong and slow, and the last one was unfortunately quite fast even though it's wrong. But if I run my FoldRotate function, you can see it's reasonably fast — not perfect, but fine — and it actually does what we're expecting it to do. Good for us.

So then I applied my FoldRotate function to the filled curves: adding the outer polygon, removing any inner ones, adding any inner-inner ones — which, as I've now realized, may not be quite the right thing, but that's what I did. I quickly checked all the areas and saw that the vast majority were fairly small, which is a good sanity check for me: it corresponds with what I saw in the graphic at the beginning, where we had only a handful of very large polygons and the vast majority were quite small. Then I found the areas of the regular polygons and did a sanity check on those as well, just out of interest — similar results.

Finally, I was able to put all of that code together. It's fairly complicated, and pretty hard to read, because we're doing lots of replacements — an outer Replace, but then within our Cases we're doing another replace. This is not the sort of code I'd recommend writing yourself, but again, this case study is about real-world code: sometimes you have to submit what you end up writing; you don't have time to polish it. If it works, it works. And if all I want is the final output, I don't really care what the code looks like or how fast it runs — I only care about that sort of thing if the code has to run many, many times, or to work in generality on some other data set. For the time being, this was absolutely fine. I did have to do a quick conversion of units, because some of the areas were in square kilometers, but now we have everything in a nice uniform structure: the areas of all our polygons and all our filled regions. And we can then simply drop the two largest.
I suppose I might have been able to do a sort of inverse TakeLargestBy to get the same data, but it's fine. And with any luck — there we go: we now have exactly the same plot as before, but we've dropped most of the ocean. Not everything, of course: this island, to my knowledge, does not exist — it's presumably some kind of underwater mountain — so that's where the algorithm I employed falls apart. It's not really ideal.

So that was the shape data. Then I thought: great, I've got all my polygons — but if I go to the bottom of this slide, they don't yet refer to any geological regions, any periods in time. There are 3,000 polygons, but surely there aren't 3,000 or so different geological ages. So I knew this wasn't the final product: I needed to merge some of these somehow — a correspondence between the polygons and some list of geological layers. That's what I set about doing in this next section. I went to the APG file, looked back in the shape file, and found this labeled data component, and when I dug down further into it, I came across the data we saw at the beginning. This GLG is probably "geological layer", I guess, and as I said, the values looked somewhat like geological layers. I wasn't 100% sure, so I asked the LLM again, but this time I decided to do so programmatically — we have a lot of these LLM functions in version 13.3 now. We're at the whim of OpenAI here, in a way, since we've got to make a call to an API, so the time was spent simply going along and asking — and it had mixed results: some of these are clearly not geological layers. It got vaguely geological, but not ideal.

What I'm going to do now is convert that cell to a bitmap, which is something I like to do when a large output is in the way but I don't want to get rid of it completely: I convert it to an image, which I can then reduce in size. Very handy. As for the various LLM things we now have: if you type ?LLM* — the question mark says "find all the functions of the form LLM-something" — there they are. I'm not going to go through them all, but the one I used was LLMFunction, which is a bit like StringTemplate, in that it lets you insert values into a template. That meant I was able to insert my list of acronyms into the query really easily.

As I said, that first attempt didn't really help, so I did a bit more research on the USGS website and found this great poster, which I imported — and as you can see, we've got a nice list here. Obviously the sensible thing to do would be to simply select that text and copy it into Mathematica, but being a programmer, I thought: why would I spend five minutes doing that manually when I could spend an hour doing it programmatically? So I decided to grab those acronyms with the LLM again, giving it a bit more context this time: I told it these were geological layers and asked for their full forms, hoping to get back something like what's on the poster.
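A sketch of that kind of call — it requires 13.3+ and a configured LLM service with an API key, and the prompt wording here is illustrative:

    (* the template slot `` works like StringTemplate: it's filled by the argument *)
    expand = LLMFunction[
      "These acronyms label geological layers on a map of South Asia: ``. Give the full form of each."];

    expand[{"Qs", "Mz", "Pz"}]   (* returns a string produced by the model *)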
Now, as it happens, I know the output is not going to be the same every time, so some of the code to follow won't necessarily work on a rerun — that's fine. What it produced was something that looks like an association, and it seems to have done a pretty good job: it certainly matches some of the things on the poster, though not all of them — and it's produced some duplicates. For a little while, as you'll soon see, I thought what we really want is to remove any duplicates — to dig down a bit and get something finer. For example, on the poster we have "Quaternary", but then "Quaternary sediments" and "Quaternary sand and dune" — there's a deeper level to it — so I thought I'd give that a go.

Note that this output is not actually an association: it's a string that looks like an association. These LLMs can only output strings, really; any time you see output that's not a string, it's because additional processing has been applied — which is what I'm going to do here. I'm simply going to make it into an expression. That's not guaranteed to work: we're trusting that the LLM put it in the right format. I gave it the exact structure I wanted — the <| symbol, then a string, a rule to another string, and so on, up to the |> — but LLMs are not reliable; even at temperature zero they've been shown to be not a hundred percent deterministic. So this wasn't guaranteed to work, but I was pretty sure it would, and converting to a Dataset confirms it worked perfectly.

I didn't know for sure whether I had lost any data. You may have noticed, if you've been following along, that within this list of acronyms there's actually an empty string — for whatever reason — and it looks like the LLM has dropped it, which is a sensible thing to do; something to be aware of. I also wanted to check that it hadn't hallucinated any new acronyms, and it hadn't, which is fantastic. There are, as I can see, even more duplicates, though, so I decided to try to get rid of those. The following code won't reproduce exactly, because the LLM output differs every time, but this is the general gist of how you might approach the problem. I first found the duplicated values, just to verify programmatically that there were duplicates. Note that I had to use a resource function here, because for some reason we don't have a built-in list-the-duplicates function — or at least not one I'm aware of. If I search for duplicate functions, there's DeleteAdjacentDuplicates and DeleteDuplicates, but nothing that just lists the duplicates. Unfortunate — I'm sure it'll make it into the language fairly soon. So I used the resource function DuplicatesList, which simply lists them all, and then I selected those entries of the LLM output whose values are members of that duplicates list.
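The conversion-and-check steps, sketched. ToExpression simply trusts the string, so it's only as reliable as the model's formatting; the Tally-based helper is a stand-in for the resource function:

    (* string that looks like an association -> actual association *)
    assoc = ToExpression["<|\"Q\" -> \"Quaternary\", \"T\" -> \"Tertiary\"|>"]

    (* list values that occur more than once *)
    duplicates[list_List] := Select[Tally[list], Last[#] > 1 &][[All, 1]]

    duplicates[{"x", "y", "x", "z", "y"}]   (* {"x", "y"} *)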
That gives me all the entries that are duplicated — at least twice, rather: the keys are unique, but the values contain duplicates. So how do we actually deal with these? This is where I turned to the poster. Although I had looked at it, I hadn't actually done anything with it, so I decided to import its plain text — very simple to do — and then find the first position where the actual tabular data begins. If I scroll up, quite a long way, you can see this sort of column of data; I wanted to find where it began, because when you import something as plain text you get strings and lists back, and the particular data you want is usually easy enough to find, as a list three levels down or something. For example, if I evaluate this line, you can see it begins at string positions 192 to 196 — I'm looking for this, or this — and ends at these positions. So I knew that if I did a StringTake of the raw plain text from 192 to 1542, I'd have all the text I wanted. Sure enough, I evaluated that, converted it to an association, sorted by key — and there's my data. Then I can map between that and what I got from the LLM, and we ought to have a nice clean list of geological layers and their acronyms — or initialisms, whatever you want to call them.

So, as I say, I did that: I compared them, using Complement to see what was in one list and not in the other, and then I had to do some manual cleanup. This manual cleanup only applied to the particular LLM output I produced a week or two ago when I wrote this webinar, so it won't actually be correct in this run — but this is the sort of output I'm looking for.

What I would say is that after this webinar, I'd recommend you take some of these periods and try to map them to the built-in entities we have in Wolfram Language, because we actually have a lot of these. If I type "Cretaceous" in a natural language cell, we get the Cretaceous Period — that's pretty much the only sensible interpretation — and it has a bunch of properties: things like rock layers, which sounds perfect; continental plates, so maybe that will give me some of its geometry and geography — oh, there you go, "bedrock polygon", which is exactly what I want. I believe the majority of this data is for the US at the moment — I'm assuming we'll expand to global data — but this would have been absolutely fantastic for this particular case study, had it existed for South Asia. Anyway, I'd highly recommend you take this association and clean it up in your own time.
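For reference, the entity lookup might go something like this — the entity type and property names are my guesses from the natural language interpretation, so treat them as assumptions:

    period = Entity["GeologicalPeriod", "Cretaceous"];   (* type name assumed *)
    period["Properties"]                                 (* list what's available *)
    period["StartDate"]                                  (* property name assumed *)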
Were I to do this again, I would probably not bother being so specific when it comes to the periods. For example, I would happily have Mesozoic and call it a day; I wouldn't need to dig down into Mesozoic and Paleozoic rocks, because it's far more likely that there exists an entity for the entire era rather than for that specific thing. In fact, if I press Ctrl+= and type Mesozoic, there you go: the Mesozoic era. So have a go.

Once I had that, my next goal was to do a mapping between those acronyms and the polygons I'd obtained. As it happens, I did find that there was some more geometry data within the file; for some reason it came under a key that reads something like "g08 AG ID". Not sure why this is taking a little while, it seems to be struggling... there we go. This was fine, but there seemed to be two problems with it. On the one hand, I don't believe this covers all of the geological ages within the data; I might be wrong, but I would have expected more rock types there. The other issue is that all I did was assign each layer an arbitrary value, and nothing more, so this coloring is not ideal: it's a gradient, and what I want is a swatch legend, a block of color referring to each part of the plot. Basically, GeoRegionValuePlot is not the right tool for the job.

So I decided to look elsewhere in the data to make my one-to-one mapping. I looked inside the database file and found what appeared to be my acronyms again; this was going to be my way of finding the connection between the geological layers and the acronyms. I first had to double-check that the acronyms in this file were the same as before, and of course they were, since all this data has been wonderfully cleaned up for me. Now I can check for the correspondence I'm looking for between the layers and the acronyms: I've got my layers here, and I'm mapping them to the acronyms. If I evaluate this, we should get a similar plot to the last slide, but with any luck it will be the actual, correct geological layers that are plotted; I genuinely don't know which layers were plotted on the last one. So here we go. I have no idea what it's complaining about, but as you can see we have more layers here, or at least what I think are more layers, and it's beautiful looking, rather like oil on water.

So I'm reaching my communicate stage now: my analysis is almost complete, and all I need to do is fix my legend. Currently each layer is just attributed to a value; I say a random value, but really it's a value from one to however many acronyms there were, about 50 or so. So I need to change that, which means changing the plot type: it won't be GeoRegionValuePlot, which, as you might have guessed, plots a value for each region; it will probably be regular GeoGraphics. As I said, I then thought: this is great, I'm now at the point where I can finally communicate my data. First, though, here's a hedged sketch of the entity lookup and the gradient issue just described.
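A minimal sketch, with the caveats that the exact entity property names should be checked against EntityValue, and that "polygons" stands for the polygon list imported from the USGS file:

```wolfram
(* Look up a geologic-time entity and list the properties it offers;
   the transcript mentions rock-layer and bedrock-polygon properties *)
mesozoic = Entity["GeologicalPeriod", "Mesozoic"];
EntityValue[mesozoic, "Properties"]

(* Assigning each region an arbitrary integer, as in the webinar, colors
   the map on a continuous gradient rather than giving a swatch legend;
   "polygons" is the list of Polygon[GeoPosition[...]] regions *)
GeoRegionValuePlot[Thread[polygons -> Range[Length[polygons]]]]
```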
So first of all, I had to group the regions that are in the same geological age, and that's where GroupBy comes in; GroupBy, I believe, returns an association. Let's first check what we're actually grouping. If I grab the first couple of those, yes, those are the acronyms. I suppose I could have used the acronym list I obtained before, but I can't guarantee that it's in the same order, so just to be sure I grabbed these. Then I grabbed my polygons: this is my one-to-one mapping, where I map my polygons to all of my acronyms. Notice that this is why I couldn't use the earlier acronym list: that one contains only the unique acronyms, whereas this is all of them, so there are going to be duplicates in here. The unique list has length 50, but I can go beyond 50 here and see, for example, the next five entries after 50. Obviously there are duplicates, because many polygons belong to the same geological age; I just want to assign each one an acronym. So this is looking great, and then I group by the last element, meaning I group by acronym, and then dig down into the polygon data. Let's see what this grouped data looks like; I have no idea how big it's going to be... quite big, by the looks of it. As you can see, we've now got a nice, clean association where each acronym maps to a list of polygons. This is really nice.

To follow on from earlier, when I said that associations are an ideal data type: one of the reasons, perhaps the main reason, is that you can select from the data by key rather than by position. For example, if I want just the Pg data, all I have to do is type that key and it reduces down to that data, simple as that. Whereas if this were list data, something like {{a, b}, {c, d}}, and I wanted to get d, I would have to dig down into the second sublist and then dig further into its second element. With an association, I just ask for the key, which is much easier, because you don't have to keep track of level specifications or anything like that. A short sketch of this grouping-and-lookup pattern follows below.

Anyway, I've got my grouped data: every polygon has now been assigned an acronym, so that's pretty much the hard work over. Now I need to make it look nice, so I'm going to give each entry a tooltip. Any acronym I can't look up gets the label "unknown"; in other words, if it doesn't exist, I just replace it with unknown, which is pretty self-explanatory. So I show my acronyms along with the full descriptions I was able to obtain from the poster and the LLM output. Just remember that this is not actually perfect; as we've said, you can improve it yourself, and some of these may be misattributed, but that's fine. Then I needed to construct a legend. You'll notice that I'm producing quite a lot of global variables here.
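A minimal sketch of the grouping and of key-versus-position lookup; "polygons" and "allAcronyms" are illustrative names for the parallel lists pulled from the USGS database file:

```wolfram
(* Pair every polygon with its acronym, then group the pairs by acronym,
   keeping just the polygons in each group *)
grouped = GroupBy[Thread[polygons -> allAcronyms], Last -> First];

(* With an association, selection is by key: no level specs to track *)
grouped["Pg"]                  (* every polygon tagged "Pg" *)

(* Compare with positional access into nested lists *)
nested = {{"a", "b"}, {"c", "d"}};
nested[[2, 2]]                 (* "d": you must remember where it lives *)
```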
It's probably fine to have this grouped data as a global variable, because it's quite an important thing, but generally speaking I try not to create global variables: you end up with so many of them, and you may end up overwriting them. It's quite common for beginners to make mistakes this way. For example, if you assign x = 1, forget about it, and then try to solve x == 2 for x, Solve can't do it, because it's actually being asked to solve 1 == 2, not x == 2. You could argue that Solve ought to localize its variables, but that's just how it works. So, setting aside the fact that these are all global variables and I should really localize them, when I'm prototyping I like to make them global just so I can keep them in nice separate parts of my notebook.

Then I decided I needed a color scheme for my swatch legend. My first thought was a rainbow color scheme, because that would look really cool, but that's obviously not sensible. I think the right thing to do would have been to look up the standardized colors for the different geological layers, but at the end of the day all I really wanted was to make each layer stand out, so I decided to give each one a random color. I don't know for sure whether RandomColor is like RandomSample, or whether it can in theory produce the same color more than once, but it's fine for this result: there are only about 50 colors, so chances are any two adjacent polygons will get different colors, and if they don't, I can simply re-evaluate. Now that I had my colors and my acronyms, with any empty ones replaced, I could produce a legend. I probably could have chosen better colors, but that's fine.

With my legend and my data all prepared (remember, I've got my grouped data), all that was left was to put it together. I have my random colors, and I make the face of each polygon one of those colors; I make the edge form gray, so we get gray-edged polygons; then I plot my data, which is this part here; and finally each part of the data gets a tooltip, which is this part here. Remember those tooltips: the result is a nice graphic where, when you hover over one of the polygons, it shows you the full description. So if something in the plot legend doesn't tell you much, you mouse over and it tells you: H2O stands for water, who knew? There you go: this is apparently Quaternary, and this is still Quaternary but a different type. There's a lot going on here; I probably could have chosen a different plot style, with thinner lines, and that's something you could try, but this is pretty much it: my final result after my analysis of this geological data. A hedged sketch of this legend-and-tooltip assembly, and of the Solve pitfall above, follows below.
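A minimal sketch of both points; "grouped" and "posterAssoc" carry over from the sketches above and are illustrative, since the webinar's exact plotting code isn't shown in the captions:

```wolfram
(* The global-variable pitfall: Solve is handed 1 == 2, not x == 2 *)
x = 1;
Solve[x == 2, x]   (* fails, because x has already been replaced by 1 *)
Clear[x]           (* clearing the global restores the symbolic x *)

(* One random color per geological layer; with continuous RGB values,
   an accidental repeat is vanishingly unlikely *)
colors = RandomColor[Length[grouped]];

(* Full names for the tooltips, falling back to "Unknown" where missing *)
labels = Lookup[posterAssoc, Keys[grouped], "Unknown"];

(* Gray-edged, colored, tooltipped polygons with a swatch legend *)
Legended[
 GeoGraphics[{EdgeForm[Gray],
   MapThread[
    Tooltip[{FaceForm[#1], #2}, #3] &,
    {colors, Values[grouped], labels}]}],
 SwatchLegend[colors, Keys[grouped]]]
```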
didn't I do that ah see this is what I mean so foolish so that would be very sensible now how would you best do this so you so I mean I guess my first option for actually extracting these colors would probably so I obviously I don't want these background colors here this green here I don't want that I just want this yellow orangey pinky greeny colors so I guess I would personally probably crop the image to this column of rectangles and then run through it with something like table um and say you know take every sort of split the image after every X pixels you know whatever the depth of these are minus the padding and that would essentially give me a list of like colors that would look it would look a bit like this um I've literally lost my mouse so it looked something like like this um and if I were to evaluate that you know I would get a block of colors although each one would actually be an image then I would probably do something like dominant dominant colors to extract the dominant color of each image and then I would do my mapping back to my acronyms so it's a very good point Ian thank you very much I feel very foolish now all right so um sorry going back to this summary so what we've actually covered in this webinar so I began with a question a very open question just explore the data and I feel that I did that reasonably well but obviously there's so much more I could have done um and I highly recommend that you give it a go uh use the notebook yourself and you know expand upon it I then wrangled with the data I mean it's a bit of an odd term so what I mean you know I mean restructure clean up delete duplicates delete uh anomalies we didn't really do so much of this step actually and but nevertheless that gave us more time to do the exploring so this is where you'd maybe do a few plots of the data typically using things like like list plot or or histogram uh not not that so I tend to use those two functions an awful lot just to plot the data just to get a general gist of it this is a great way of seeing outliers in your data and then then you go into deeper analysis so this is not just finding the length of the data but this is like finding the depth of the data finding out which what the distribution of the data is so something like you can find what the distribution of the data points are and you might perform a hypothesis test where the distribution fit test to see if it fits a particular distribution you know you can do all sorts of analysis and then finally we communicated the results so I produced that uh pretty plot at the end I have left you some resources here because this because this is where the data has been presented by somebody else and this is actually where a somebody else on community possibly I will from colleague actually has basically yeah has basically done a similar analysis to me but because they're in the US because they've used us data they have all of that entity information built in so they do a fantastic analysis going to a lot more detail than I do and actually do a proper uh look at the different geological eras now before I go I'd just like to um Now remind you that this is just part one in a series of these live coding events we've got another two coming uh each Friday I believe the next one is on traffic analysis so finding how to avoid traffic going through London and then the one after that I believe is looking at images from Mars and trying to uh trying to take static images and produce a nice smooth video from that and maybe some object uh 
All right, so, going back to the summary of what we've actually covered in this webinar. I began with a question, a very open one: just explore the data. I feel that I did that reasonably well, but obviously there's so much more I could have done, and I highly recommend that you give it a go yourself: take the notebook and expand upon it. I then wrangled with the data. It's a bit of an odd term; what I mean is restructure, clean up, delete duplicates, delete anomalies. We didn't do very much of that step, actually, but it gave us more time for the exploring. That's where you might do a few plots of the data, typically with things like ListPlot or Histogram; I use those two functions an awful lot just to get a general gist of the data, and they're a great way of spotting outliers. Then you go into deeper analysis: not just finding the length of the data, but its depth, and what the distribution of the data is. You can find the distribution of the data points, and you might perform a hypothesis test, with DistributionFitTest, to see whether it fits a particular distribution; you can do all sorts of analysis. And finally, we communicated the results: I produced that pretty plot at the end.

I have left you some resources here, because this data has been analyzed by somebody else: somebody on Wolfram Community, possibly a Wolfram colleague, has done a similar analysis to mine, but because they used US data, they had all of that entity information built in, so they do a fantastic analysis that goes into a lot more detail than I do, with a proper look at the different geological eras.

Before I go, I'd just like to remind you that this is part one in a series of these live coding events; we've got another two coming, each Friday. I believe the next one is on traffic analysis, finding how to avoid traffic going through London, and the one after that looks at images from Mars, taking static images and producing a nice smooth video, and maybe some object recognition within that data.

All right, I'll just hang about a little bit longer to see if anyone has any questions. If you do and we don't manage to get back to you, not to worry: we can always respond afterwards, either to posts on YouTube or Twitch, or perhaps on Wolfram Community, or possibly by email, so we will get back to you. Okay, I don't see any more, so thank you very much, everybody, for coming. I hope you enjoyed that, and have a wonderful day.
Info
Channel: Wolfram
Views: 2,225
Id: vs1WAaD9sck
Length: 88min 26sec (5306 seconds)
Published: Sat Aug 05 2023