GopherCon 2021: Ben Johnson - Building Production Applications Using Go & SQLite

Captions
I'm Ben Johnson. I work a lot in Go and databases, and I'm going to be talking about writing production applications with Go using this great database called SQLite.

Just a quick background on me: I've been writing Go for quite a while now, about eight years. I write a lot of open source as well. BoltDB is one example, a key-value store used in projects like etcd and Consul. I also have a project called Litestream that we'll talk a little about later. A lot of my focus right now is getting SQLite really production ready and getting it deployed in all kinds of projects. I do a lot of blogging as well, so you can find posts about Go project structure on Go Beyond, a blog I have. And finally, I just really love embedded databases. I think they're awesome, so this talk is going to be very biased, but I hope you follow along with me.

Whenever you start a project with Go, the biggest question is always which database you choose. There's a myriad of options out there: traditional choices like Postgres, MySQL, SQL Server, and Oracle, but a lot of new ones as well, from Amazon's RDS and Aurora to PlanetScale. Just tons of database options. So what do you choose?

All of these databases have benefits and trade-offs. Many are familiar; a lot of you have probably already used Postgres. Some of these databases have been around for 40 years, so they're quite robust and have tons of features; they just collect features over the years. But you get trade-offs with all that as well. One is expense: if you run the database on a separate server, that's an extra server to operate, and cloud databases charge per operation. A lot of these databases also get bloated over time with all those features. The last trade-off, and I think one of the most important, is high latency: connecting to another process or another server can eat up a lot of your processing and query time.

In addition to the client/server and cloud options, one that most people don't consider is SQLite. For a long time it was considered a toy database or a test database, but it's really so much more. It's an in-process, embedded database, which means it gets compiled into your application and deployed with it everywhere it goes. It's super stable and super safe, and it runs on all kinds of devices: your cell phone has it installed, it's installed on airplanes, it's installed in all kinds of places. It's been around for about 20 years now, first released back in 2000.

SQLite provides a lot of great guarantees, like any other database. It's ACID compliant: atomicity, consistency, isolation, and durability, which means that when you write transactional data it gets saved safely and you can trust it. It also has a lot of SQL support you might not expect: window functions, common table expressions, recursive queries, subqueries. Most of the things you expect from a SQL database are in there. And it has some great non-SQL support too. It has built-in full-text search, so you don't always need something like Elasticsearch. It also has JSON support, so if for some reason you're storing JSON in your database, you can query it, though I don't generally recommend that.

As for when you should really consider SQLite: historically it was used in embedded software. That was its biggest use case, along with edge databases and small, resource-constrained servers, and that was its purpose for a long time. But there have been a lot of improvements over those 20 years, specifically in concurrency, so it now really does make sense as a multi-process or multi-threaded database. Because of that, it's really useful for read-heavy workloads. You might consider it for something like an e-commerce website: someone browsing a lot of pages generates read queries, and every once in a while adding to a cart or checking out generates write queries. It's good for small to moderate request loads, tens or hundreds of requests per second, which probably covers most applications out there. SQLite has grown so much over the years that it can be considered a general-purpose database for a lot of applications. Obviously no tool is right for everything, and they all have their specialties, but I consider it a good default to start with. Historically I always considered Postgres my default and then thought about why I'd use something else; these days I really look at SQLite as my default database and then consider why I'd use something else.

This talk is split into three parts: the development side, the testing side, and finally the production side, and what that really looks like from a durability and performance standpoint.

When you connect to SQLite from Go, you have basically four options. There are probably some extras, but these are the most popular. The most common library out there is mattn's go-sqlite3, and honestly this should just be your default. It's used in all kinds of places, it's been around for years, and it's stable and great. I have no complaints about it, and it's the one I typically use. There's also a project from modernc where they've mechanically transpiled all of the SQLite code into Go, so you can compile it as a pure Go library and avoid the cgo overhead and any issues around cross-compiling. Next, there's David Crawshaw's low-level SQLite package, if you want to avoid the database/sql library. And finally, Tailscale is working on an implementation right now with improved concurrency over the mattn library. It's a work in progress, but definitely check it out; they're doing some good work.

Connecting to SQLite is actually pretty brief. You use it like any other driver for the database/sql library: you import it with a leading underscore, since you're not referencing the library's name directly, and then in sql.Open you pass the driver name, "sqlite3". For the data source name, instead of a URL or network path, you pass the path to the file on disk. From then on, you can treat that database connection like any other.

I generally consider SQLite pretty configuration free. There are a couple of exceptions worth noting, and we'll go over those. Once you get these set, there are some knobs you can tune later if you really want to, but generally it just works.

The first one is probably the most important: the journal mode. Journaling is how a database safely writes a transaction to disk. The default, original mode is the rollback journal: whenever you write data, the old data is copied into a rollback journal, the new data is written to the database, and then the rollback journal is deleted. Since 2010 there's been a newer mode called WAL, which stands for write-ahead log. With WAL, all your new data is written to a separate file instead of the main database, and you get a lot of concurrency benefits from this: you get point-in-time snapshots for every transaction, and you can read and write at the same time. There is a limitation in SQLite that you can only have one writer at a time, but you can have as many read transactions as you want.

Next, this is an odd one if you're coming from other databases: the busy timeout. The busy timeout sets how long a write transaction will wait to start. Again, SQLite only supports a single writer, so if one write transaction is active and another one starts, by default it just fails immediately. That's usually not what you want, so instead we set this to the number of milliseconds we want to wait. I usually use five seconds; sometimes you can use 30. As long as it's more than zero, that's the biggest thing. We use a pragma for this, which is essentially how you set settings in SQLite: PRAGMA busy_timeout = 5000 for five seconds.

The final option that's really important is foreign keys. A lot of these settings exist for historical reasons. Foreign keys are not enforced by default, which is pretty surprising to most people. SQLite started off on very resource-constrained systems, where you didn't necessarily want to spend processing power on foreign key constraint checking. So we set PRAGMA foreign_keys = ON, and it enables enforcement of your foreign keys.

Those are the main settings you want to set with SQLite. Another side of SQLite you really need to get used to is the type system. It's a little weird, but it's not as bad as a lot of people think.
The first thing to note is that the type system is pretty small. You have INTEGER; REAL, which is floating-point numbers; TEXT for readable text; and BLOB for binary data. This actually pairs well with the Go type system, where we have integers, floating-point numbers, strings, and byte slices. And finally there's NULL, which is no data at all.

The really weird part of SQLite's type system, the part that throws people off, is that a column isn't necessarily associated with a type. It doesn't have a strict type: every value you insert has its own type, and the type definitions on tables and their columns are basically ignored by default. For example, you can have a CREATE TABLE with an x INTEGER and a y TEXT, but SQLite largely ignores those types; you can insert text into x or integers into y and it will just let you. Personally that seems insane to me, but that's how they set it up. Luckily, just two weeks ago they added support for strict mode, which actually enforces the column types, along with a couple of other options. I don't have time to get into it in this presentation, but it's something to look into.

The last caveat you have to worry about when developing is that SQLite is missing some types I really wish it had: timestamp and decimal types. For timestamps, while there's no type in the type system, there are date-and-time functions, and they work with three different formats. The first is ISO 8601, known in Go as RFC 3339. This is a string format, and it's nice because it's naturally lexicographically sortable: you can just sort the strings, use them in indexes, and they work as you'd expect. The other option is the Unix epoch, the number of seconds since 1970.
Epoch timestamps give you an integer, which shrinks the amount of space used for each column. And finally there's something called the Julian day. I'd generally advise against this one because most people have no idea what it is: it's the number of days since roughly 4700 BC. It's pretty obscure, and while there are some applications for it, I would generally avoid it.

So on the development side, once you get SQLite connected and up and running, it's really quite a pleasure to use once you get past those caveats in the type system.

Another piece that's super nice with SQLite is that testing is just super fast. If you come from an ORM that can run against multiple databases, you'll know a lot of people already run their tests against SQLite because it's so fast; again, it's compiled right into your application. But the bigger thing is that it has native support for memory-backed databases. That means no slow disk access and no fsync calls; everything lives on the heap. We do this by setting the data source name (DSN) to ":memory:", and from that point on everything is in memory. You use that database like any other, except now it's screaming fast.

We can do some benchmarking here. Again, this is very much loaded in favor of SQLite, but we're comparing go-sqlite3 in memory versus a standard Postgres setup that people use for testing, both with default configurations, just to understand how the performance differs. With Postgres there are certain operations you need that you don't need with an in-memory database: you don't need DROP TABLE IF EXISTS, for example, because the in-memory database is wiped every time you close it. For Postgres we're talking about five and a half milliseconds for CREATE TABLE, around three times as long as the in-memory version, and some of that is just connection time. When you get down to inserts and selects, it's again screaming fast: about 25 microseconds for inserts and selects in SQLite versus around half a millisecond on Postgres. So it's not quite apples to apples, but it gives a good sense of the scale of difference between the two.

Another thing you can do is parallelize your tests. Because SQLite runs in memory, there's a shared-nothing approach between the databases in each of your tests, so you can use t.Parallel to run tests in parallel, and there are flags on go test to control the level of parallelization. The one caveat is that go-sqlite3 has a global lock on its connection, so it will actually slow down parallel tests. If you want better performance for parallel tests, you need to use the Tailscale implementation. Here's a quick benchmark to illustrate the point: with go-sqlite3 and the Tailscale library, both run about 10,000 small tests in just under a second and a half. Each test opens a database, creates a table, does an insert and a select — some very basic testing — then closes, about 10,000 times, all in memory. But as you scale this out over more cores, go-sqlite3 actually ramps up to three seconds and gets slower, whereas the Tailscale library speeds up quite significantly.

So on the development side we've seen it can be pretty easy to get up and running, and on the testing side we find it's screaming fast, so we can constantly test and get the feedback we need. But the thing people generally worry about is the production experience, and that factors into two sides: durability and performance.

Before we get too deep into durability with SQLite specifically, a quick aside about durability in general. Over the last 10 to 20 years there have been a lot of conversations around availability, where people had expectations of 100% uptime, and that's not realistic. Really you're looking at 99% or 99.9% uptime, and there's a cost associated with pushing uptime higher and higher, so it's a trade-off between how much money you want to spend, the complexity of the system, and the level of uptime you expect. On the data durability side we don't really have that same conversation, and before we go any further it's important to note that there's no such thing as 100% durable data. You can make as many copies of your data as you want; you can still lose them all. Sad, but true.

There's a spectrum of durability when we're talking about hard drives and systems. When people think about running SQLite for an application, sometimes they picture a server in a closet with a single hard drive, and that's not very durable. Backblaze puts out an annual report where they check all their hard drives and see how often they fail, and they find that about 1 in 100 fails annually; that's 99% durability. A 1-in-100 chance isn't too bad for a lot of things, but when it means losing all your data, you probably want something safer. You can move up to cloud servers, which tend to have somewhat higher durability; their drives can be RAIDed, and they have other options too. Most EBS volume types (Amazon's Elastic Block Store) have three nines of durability, which means about one in a thousand volumes is lost each year. That's better than a single hard drive, but still not amazing when you're considering losing all your data. You can go higher, like io2 on EBS, which has five nines of durability: one in a hundred thousand volumes fails annually. That's much more comfortable; you probably have a bigger chance of dropping the database or deleting a table than one in a hundred thousand, so it's something you can get comfortable with in data durability terms. Finally we have S3, on the far end of durability, with eleven nines, which is crazy: you lose something like one in a gazillion objects every year. Generally, when you put something in S3, you consider it safe.

When we talk about data durability beyond a single disk, we start talking about one or more servers. Obviously a single server is your fastest, cheapest option, but it's less durable. You have options for a primary/replica setup, where changes stream from a primary down to a replica. This is more complex; you have double the number of servers, so there's additional cost, and it's more durable: two servers instead of one. There are also more involved setups like distributed consensus, where a cluster of servers must confirm writes before they continue. This is really complex, though; if you've ever run, say, etcd, you can appreciate how much complexity and performance you're trading off, but on the other hand it's really quite durable: you can lose multiple nodes and possibly still have your data. To end this little aside, a quick story. The idea here is that no data loss is good, but not all data loss is catastrophic.
um i'm not trying to throw anyone under the bus here i think gitlab did a great job documenting a an incident they had four years ago so in 2017 they lost about six hours of data which sounds you know that's rough no one's going to have a good day with that it's unfortunate but you know what happens sometimes they had a primary postgres replica setup so streaming data from one uh primary to a replica however the replica just stopped getting updates for some reason they were trying to figure it out and during this uh while they're trying to figure it out the operator accidentally deleted the data on the primary when they were deleted on the replica and they lost six hours so terrible but you know get lab still around and they haven't fizzled out in fact they actually ipo this year for 16 billion dollars so again this is really a trade-off no data loss is good but it's not necessarily catastrophic so i'm not encouraging data loss by any means but i think it's something to consider how much your data costs and what you're willing to trade off for it in terms of durability and performance and cost so without that all said let's talk about data durability in sql lite specifically so uh your your first option you should probably consider is just honestly regular backups um you know if you're talking about running something like io2 on ebs you're already getting you know tons of durability with that in the first place um and then if you're regularly backing up then you really have a fallback plan anyway after that if it fails so regular backups uh the pros here is it's it's super fast uh super cheap and it's really hard to mess up it's not a lot of configuration uh the main tradeoffs here is that you start to get a bigger data loss window so a data loss window is when you just have data that hasn't been backed up and you lose your primary data set anytime in between the last backup and your current data set is your data loss window now you can set up you know a regular 
hourly backup for daily backup to say s3 and again you know sqlite is just a file so it's really simple to work with we can back it up uh we can compress it shove it up on s3 and don't worry about it which is nice we can use a time based file naming schemes here so if you want to name it after the current hour you can just have it have replace the current or the previous snapshot from yesterday and just have a rolling 24-hour backup which honestly works great for a lot of applications uh the other thing too is b tree databases which is most sql databases uh they compress really well there's usually a lot of extra space in there so that they don't shift around records when they update and insert all the time and it means that if you have you know say a gigabyte database and compress down to you know a couple hundred meg and it's not as painful uh also something else to note with s3 that's uh interesting is that they make it so it's very cheap to get data into s3 but very expensive to pull it out of uh so it works great as a backup solution you don't do a lot of operations where you're restoring constantly but those operations pushing it up can be quite cheap so sqlite has a command line sequel light 3 and here you can see at sqlite3 we'll pass in the name of the database that's mydb and then we pass in the full command here in the double quotes which is going to be dot backup and then the name of the file we want to backup to so again back it up to another file compress it push it up on s3 and uh yeah generally actually works pretty well for a lot of applications now if you can't handle that data loss window of an hour you know that it can be pretty long for a lot of people even if you have you know again say five nines of durability on io2 um you can look into other options so this is lightstream this is a tool that i created um so obviously i think it's the best but you know every tool has its it's a situation where it works well in so the idea with light stream 
is that because s3 is super cheap to back up to um is that we can actually continuously back up to light stream which is really nice uh sorry to s3 so what lightstream will do is it'll snapshot your data at the current point in time and usually once a day and then whenever you make changes to your database it'll read those off that right ahead log that we talked about earlier and then it will compress those push them up to s3 and if you ever need to recover your database it can pull down that snapshot replay all the pages from the wall after that and you'll have the exact same bike for by database afterward you know this works too if you have catastrophic failure you can set it up to automatically restart and reload and go with that so again a bit more complex than your regular backups but you know it's a pretty nice option uh one user of this michael lynch he's written a blog post on this um where you know he's looked at moving from different uh cloud services over to lightstream and he found that he only spends about three cents a month on these uh the s3 backups so he finds it as a great super cheap option so something to keep in mind if you do download it lightstream actually runs as a separate process sqlite supports multiple processes connecting at the same time and your your actual application has no knowledge of light stream uh it's really like an ops consideration so here we have a light stream command uh there's a sub command called replicate and we'll pass on the path to the database which again is just a file and then we'll pass in the uh the s3 bucket uh here's just kind of a visualization of lightstream so we have our database on our application server it whenever it has changes it attacks those onto the wall and then lightstream is you know pushing up snapshots and continuously pushing up new wall pages um as soon as they happen so this is again a good option for reducing that that window of data loss but again it'll trade off of a little more 
complexity finally on the durability side you know the most extreme example once you get into data durability is clustered sequel light which is a bit of an oxymoron it kind of seems but it works for some people so the pros to this is that it's super durable i mean you're going to run a cluster of servers so the cons here obviously that's going to be more expensive you know three servers or five servers is going to be more expensive than one and it's significantly more complex if you've ever run fcd before and try to manage that you know it can be sometimes a bit of a headache so keep that in mind now and these kind of get split out into a few different types we have raft-based so raft is a distributed consensus protocol etcd uses raft for example but rq-lite was written by philip o'toole and that's a great option he's done a great job with that canonical also wrote their own implementation of a distributed sql light called dqlite and then we have uh on a primary replica setup where we stream from a primary server to a separate replica server we have something called light replica this one i haven't used this personally it does have a gpl license so watch out for that if that's a consideration you need but they do have a commercial license you can check out too and then finally there's kind of a crazy project from expensify called bedrock i haven't personally used this one it's blockchain based and i have no idea how they do it but it sounds very complex apparently it works for them so kudos to them but i have not used it personally so you know on the performance side you know we got durability and if you're comfortable with the durability story then performance is kind of the next thing you really need to consider when you're talking about performance or sorry production now performance between sqlite databases is always hard to kind of quantify you always see synthetic benchmarks out there where one database is x amount faster or x percentage slower but really 
kind of have to test it with your own workload to know what is actually going to happen in the real world because they're all against synthetic benchmarks now the caveat here is that most sql or sql databases in general are b3 based that means they have the same underlying data structure and because of that a lot of performance profile is generally going to be fairly similar you know some are going to have a bit better performance some are going to have a bit less performance and it's really hard to say for each situation one might be better than another however the one consistent thing is that network latency can be a huge component of performance and this is really where sequel light shines because it basically skips over the network piece entirely because it lives right in your application so doing some benchmarks here uh against sqlite versus postgres um we're doing point queries in this instance here uh so it's running about 10 000 point queries on a fairly large database and it is we're calculating the time it takes on average for each query so for sql lite we see it's way down there at only about 18 microseconds which is screaming fast and then you know once we start going out of process to another process over some kind of network connection uh even on the same box so postgres running on the same box uh still takes 10 times as long just to get that one point query again this is you know connection overhead this is serialization there's all kinds of you know overhead here that you need to consider you just don't have a sql light again it's a it's a lightweight database but you know again has some features too uh once we move out of the same server into the same availability zone here we see that it jumps up it basically doubles this is all running aws but we see that it goes up to 300 microseconds you know again not a ton of time but if you're running hundreds of these queries or even tens of these queries that adds up to real numbers and then finally 
running it within the same region we see it gets three times as slow we're now up to almost a whole millisecond at 900 microseconds so going down from sql light at 18 microseconds to you know same region at 900 microseconds is a huge jump now you know that that's how most people set up their applications when they run their own database server if you start running against a cloud server that's a whole different story it might not even be in the same region as you so you need to really consider latency at that point so once we add in cross region latency we really start seeing big numbers now this is adding in us east one so virginia to usc's two ohio which is not very far away uh but it still takes about 11 milliseconds you know you can't even see sql light at this point on the graph it's so tiny um but you know it's something to really consider this network latency is going to be a big part of your performance now you might consider you know that queries can run in parallel and that works in some situations for sure but we see we when we do parallelize them we you know we get better performance generally especially as we have more network overhead but we really don't get anywhere close to that latency of sqlite because again it's in process and it's so close to your data now a lot of times you can scale up a single database on a single server quite large you know we have aws where it can scale out to you know 96 cores or some ungodly amount of ram which is great you know our machines keep getting faster every day uh if you ever do reach your maximum of what you can store on a single server and process on a single server it's time to start thinking about moving horizontally and horizontal scaling sqlite is not an often talked about topic because it's kind of you know seems antithetical since it is an in-process database however uh the benefit here with sql lite is that it's so flexible you know it really is just a file that's that's all it is uh so we can do some 
So we can do some pretty interesting things that you might have a harder time doing in other databases. For example, if you want to isolate your tenants, you can put a single tenant per database, which can improve your security. They're all just individual files, so we can move them around: put them onto a cluster of servers and distribute the tenants based on something like a consistent hashing scheme. We don't have time in this talk to go into consistent hashing schemes, but it's something to consider if you actually do want to fan your tenants and their data out over a cluster. So there are options out there if you really do need to scale to those bigger data sets.

Now, as much as I love embedded databases and SQLite, there are always times when you just shouldn't use a tool, and I want to get into that; it's not all sunshine and roses out there, and there are considerations here as well. By far the biggest case where you don't want to use SQLite is when you already have a working system. Please, please do not rip out your database and put in SQLite; your colleagues will hate you, and you'll never find happiness doing that. A good way to adopt it is to start off small: get going with a smaller application, and if you really like it, grow from there. There are always operational pieces you need to learn around any database, so it's good to get comfortable with those at a smaller scale first.

Another thing to avoid is long-running transactions. Usually you don't want those in any database, but with SQLite it's really a requirement, because SQLite has the restriction of a single writer. Other databases get around it with optimistic locking schemes, which can work to some degree, but generally you want to avoid long-running transactions.
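The tenant-to-server mapping mentioned above could look roughly like the following. This is a deliberately minimal consistent-hash ring (my own illustrative names; real deployments typically add virtual nodes for better balance): each server hashes to a point on a ring, and a tenant's database file lives on the first server at or after the tenant's own hash.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

type point struct {
	hash uint32
	node string
}

// Ring maps tenant database files to servers via consistent hashing.
type Ring struct {
	points []point // sorted by hash
}

func hashKey(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func NewRing(nodes []string) *Ring {
	r := &Ring{}
	for _, n := range nodes {
		r.points = append(r.points, point{hashKey(n), n})
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i].hash < r.points[j].hash })
	return r
}

// Node returns the server that owns the given tenant's database file:
// the first point on the ring at or after the tenant's hash.
func (r *Ring) Node(tenant string) string {
	h := hashKey(tenant)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i].hash >= h })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.points[i].node
}

func main() {
	ring := NewRing([]string{"db1.internal", "db2.internal", "db3.internal"})
	fmt.Println("tenant-42 lives on", ring.Node("tenant-42"))
}
```

The appeal for one-file-per-tenant SQLite is that adding or removing a server only moves the tenants adjacent to it on the ring, and moving a tenant is literally copying one file.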
Another time to really avoid it is a truly ephemeral serverless environment. If you don't have a disk, or your disk disappears when your service is done running, you're going to lose all your data, so that doesn't work great. There are some options out there: services like Fly.io work somewhat like Heroku, with that dyno-style, serverless feel where you push up a Git repo and deploy, but they also provide persistent disks, which really helps a lot. So that's something to consider.

So, just to wrap up a little: SQLite is great in a lot of different ways, and it's a different paradigm from what you're probably used to with a client-server database or a cloud database. On the development side, it's great because there are no dependencies at all; you import a library and you're basically done. There's not really any configuration, and you don't have to worry about users or grants or privileges; it just works. What little configuration there is comes down to those few pragmas we set, like the journal mode and foreign keys, so make sure you set those; they're important. The type system is a little weird and esoteric, but it actually works pretty well alongside Go's type system and enforcing your types at the application layer. And as far as types go, timestamps and decimals are always going to be tricky, but generally I'd say RFC 3339 timestamps work pretty well.

On the testing side, the in-memory support is fantastic. Your tests run super fast, so you can run them constantly as you make code changes; instead of waiting seconds or minutes, you get feedback in under a second, even for hundreds of tests.
There is parallelization you can use, but again, there's a caveat: the go-sqlite3 library doesn't work as well with parallel tests, and you may want to evaluate the Tailscale implementation, even though it's still a work in progress.

On the production side, once you actually put it out there, you have a couple of different options. You have your regular backups; some people may scoff at this, but it's really a great option, especially now that drives are highly durable anyway. You have a known data-loss window, but it still works great and puts you at pretty low risk of data loss in general. If that doesn't work for you, options like Litestream, where you can stream your backups up to S3, also work pretty well; you're going to trade off a bit of complexity for that, so know that going in. And finally, there's clustered SQLite, which is at the extreme end: Raft-based replication through something like rqlite or dqlite, Litereplica, which is a primary-replica setup, or Bedrock, which is the blockchain-based one. Although once you get into clustered SQLite, there's not as much difference from just using a regular client-server database at that point, so that's something to consider too.

On the performance side, single-node performance is just screaming fast; you can't beat it. Once you remove that network latency and serialization overhead, you can really do so much and bring your request times down significantly. A lot of the time you just don't have to worry about things like N+1 queries; there's a whole class of performance problems you generally don't have to worry about with SQLite. And you can scale it up; there are pretty large nodes out there on AWS or whatever service you use. You can also scale it horizontally in some creative ways.
Again, SQLite is just a file, so it's super flexible.

So, in conclusion, I hope you give SQLite a try. I love this database. It's not great for everything; maybe you'll hate it, maybe you'll love it, but I think it's worth a shot, and I really like it a lot. Thank you for listening to my talk, I really appreciate it. You can reach me by email, find me on Twitter at @benbjohnson, or pretty much anywhere on the internet as benbjohnson. Thank you very much.
Info
Channel: Gopher Academy
Views: 1,020
Id: XcAYkriuQ1o
Length: 39min 18sec (2358 seconds)
Published: Fri Dec 17 2021