"Introduction to SQLAlchemy 2020 (Tutorial)" by: Mike Bayer

Captions
[Music] So now you guys can see the SQLAlchemy logo, I hope. There's a lot of material in this thing, and in the past it's taken me a long time to get through it and I've had to rush, so I'm going to try not to rush too much. The material is all on GitHub, and people are in the Slack; I put the link to the GitHub repo that has all the materials, so if my entire setup blows up here you can always just go read all the content on GitHub.

I wanted to point out that when you download SQLAlchemy you actually get a free Space Invaders game. People should see a black screen here: this is actually a Space Invaders demo running on SQLAlchemy. You might not know what you're looking at; it's an ASCII-art Space Invaders, and if you can see to the left of me shooting these spaceships, there's a whole lot of SQL going by. That's because this Space Invaders game stores all of its state in a SQLite in-memory database and uses the ORM to save and retrieve the coordinates of all the glyphs on the screen. I wrote this a long time ago for whatever conference I was doing. If I can get through this screen, then we can actually start talking about SQLAlchemy. When you download it there's an examples folder, and this is the one example where, if you read through how it works, you would pretty much get the entire Zen of the SQLAlchemy ORM. And that's how that looks; because it's using curses it's all messy, but you can see over here there's a whole lot of SQL that happened in the logs. It's just doing a lot of SELECT queries, getting ranges on coordinates and things like that, and it does it fast enough that you can actually make a game out of it. So, if anyone happens to like what I just showed you: usually when I do these talks I'm in person, so I can see people; now I have no idea if people enjoyed that or not. Is there a clap button in there? Anyway.

Okay, so, on to the slides. The way this talk works is that there are some introductory slides, just quick ones to get you oriented; then after the slides we go to the part where there are a lot of code examples that we'll run through, and then we go back to some slides.

Here's a slide, let's see if this works: prerequisite knowledge. When we do this talk more formally, there are some things that are good to know that make the talk more understandable. Basic knowledge of SQL, in order of importance: if you know about CREATE TABLE, how tables and columns work, querying rows with SELECT, and modifying data with INSERT, UPDATE, and DELETE, that's good. The general idea of a transaction is good to know, and joins. Joins and grouping are not that important for what we're talking about. These are just things I'm going to show how to do in SQLAlchemy, and if you don't know what they are in SQL it will make less sense, but you can always review later.

Overview of SQLAlchemy (let me make sure I'm watching the time). We call ourselves the database toolkit for Python, and it's been around since 2005. It's an end-to-end system for working with the Python Database API, relational databases, and the SQL language. You'll notice that I switch between pronouncing SQL as "sequel" and "S-Q-L"; I tend to say "sequel", but I think "S-Q-L" is probably more correct, so you're going to see me switching. We're currently on 1.3.17, and 1.4 is the version I'm working on quite a lot; that's considered to be what I'm
calling a transitional version toward SQLAlchemy 2.0, which is basically the Python-3-only version with a lot of other rethinks of some fundamental assumptions.

Goals of SQLAlchemy: provide helpers, tools, and components to assist with and automate database application development; provide a consistent, fully featured facade over the Python Database API; provide an industrial-strength but optional object relational mapper. SQLAlchemy is not just an ORM; we've always made sure we're not just the ORM, and if you don't like ORMs you don't have to use it. It also acts as the foundation for any number of third-party or in-house tools.

Philosophies: make the use of different databases and interfaces as consistent as possible, but expose the distinct behaviors and features of each backend, and they all have them; Postgres probably has the craziest features, and we try to expose them. We try to let you use a database to the fullest extent that database allows, while still having the majority of your code be agnostic between different databases. We don't hide the database; we're not about the idea of an abstraction layer hiding the database, and you need to know SQL to work with SQLAlchemy. What we're trying to provide is an automation layer. DRY means "don't repeat yourself". We also try to favor declarative and compositional patterns over flags and switches whenever possible, and you'll see that when we do some code examples: hopefully there's not too much prevalence of one big function with lots and lots of flags; you're going to see more compositional patterns, where X becomes Y becomes Z, so it looks a little more natural. And we really want to stay true to SQL, which is something other tools don't necessarily do: we don't want to invent a new query language or a new relational paradigm. When you work with SQLAlchemy, we want you to feel like you're working with SQL, just in a way that's more automated and has less boilerplate.

This diagram is on the website, so people who have read the docs have seen it. It's what you'd call a pancake diagram, because you have these different layers stacked on top of each other. The highest layer is the Object Relational Mapper, which builds on top of SQLAlchemy Core. Core's major components are the schema system, the SQL expression language, the engine, the connection pool, and the dialect. All of that talks to the Python Database API, which is your driver, like psycopg2, mysqlclient, PyMySQL, or cx_Oracle.

Some bullets for those pieces: the Engine is a factory for database connections, which are maintained by a pool. The dialect sits underneath the engine; it's usually hidden, and it's the thing that knows you're using MySQL, or you're using SQLite, and knows how to generate SQL for that backend. The connection pool is basically where connectivity is maintained; it's not necessarily a pool, though it usually is one that stores connections in memory, but the examples today will be using a pool called NullPool, which doesn't pool at all; it just connects and disconnects each time you use a connection. The SQL expression language uses Python to create SQL queries. The schema system uses Python objects to represent DDL concepts: tables, columns, data types. And then there's the ORM side, which hopefully we'll get to near the end of the talk (11 o'clock is when we hope to do the ORM, maybe), where you have the construction of Python objects which can be mapped to relational database tables. So
basically this is where you're creating business objects, like for your official banking application or whatever you're building, and your objects have data that lives in the database; they can be persisted. (Someone's got a microphone on with breathing.) The ORM transparently persists objects into their corresponding database tables using the unit-of-work pattern, and it provides a query system which loads objects and attributes using SQL that is generated from the mappings. The ORM builds on top of Core, and it presents a slightly more object-centric perspective, as opposed to a schema-centric perspective. Sometimes I have to move things; you might not see it, but I have a little Zoom group chat and I have to move it to see the end of my slide, and then I've got to change the focus quickly.

Other subsystems that we're not going to do much with in this talk, but that are there, just to know about: there's an event system, with event hooks throughout SQLAlchemy that tell you when things are happening so you can modify how things work. The inspection system provides insight into various objects, like engines and mapped objects. The sqlalchemy.ext extension package contains a lot of extensions, mostly ORM-related. If you're writing a third-party dialect for another database, say some database that's not included (there are about 20 or 30 third-party dialects on the site now), you would want to use the sqlalchemy.testing package; it has a compliance suite, so your test suite would build on that suite. And there's the SQLAlchemy examples folder, a package that has an in-depth performance suite, and it's also where you get your free Space Invaders game; so rush right out and get your free Space Invaders game.

The big thing I'm working on, and it was kind of inspired by Python 2 being retired and Python 3 having a lot of great stuff, is 1.4 and 2.0. With the standardization on Python 3, SQLAlchemy is on the path to an all-new 2.0 release. Major features include a streamlining of APIs, and it's going to be a pretty deep cut into pretty much every legacy pattern that has ever bothered me. A lot of SQLAlchemy's patterns still date back to the 0.1 and 0.2 days when we had a completely different way of working; we changed the APIs pretty dramatically over time, but we still kept things around from the earlier ways of thinking, and it's time for those to just go. When you upgrade to 1.4, code that runs on SQLAlchemy now, on 1.3, will continue to work for the most part; there are a couple of things, like if you have some internal extension stuff, that might change a little bit. 1.4 works like 1.3, but it has all the 2.0 APIs present within it; sometimes you'll have to import from sqlalchemy.future, and I'll show how that looks, and then you get the 2.0 version. There's also basically a deprecation mode, so that you can migrate your application to be totally 2.0 style; and then, hopefully by the time 2.0 is in beta, you can try 2.0, and you'll have what I hope is much more readable code, and faster code, because it includes caching.

Major changes for 2.0: it'll be Python 3 only; we're still doing Python 2 for 1.4, because we're nice like that, and people are still straggling on Python 2. The Engine is going to get very refined: engine-level autocommit will be removed, but there will be another way to do autocommit, and "connectionless" execution is removed. The result set is very refined; it's got many new features, rows are completely tuple-like, and there's a lot of nice stuff with results. The vast majority of SQL
compilation is going to be cached, meaning that when we generate the SQL strings, all the Python overhead of that is usually going to be cached for most of the regular bread-and-butter SQL statements. The ORM Query is now unified with select; in master this has been committed, so you can actually use a select object instead of Query to get ORM rows. Whether ORM Query goes away entirely is still a little bit up for grabs, but technically ORM Query won't be needed; you can use select. And when you use select in the new style, you get back the same result sets for both Core and ORM queries, with all of the same ability to slice up rows, unique-ify them, get certain columns only, or get scalars. There have been a lot of requests for more flexibility in how ORM results are returned, so this will give it to you.

This is the onion diagram; it shows the four components of SQLAlchemy, or "sequel alchemy" as I usually call it in my head. These colors actually match the colors in the first pancake diagram, the four main components, and we're going to start from the inside. Let me see how the time's going... okay, this is good: I'm speaking fairly relaxed and I'm actually ahead of the game, so maybe we'll have time to get through all of the material. Also, I'll see questions in the chat if people have any curiosities about anything.

So level one is the green circle: the Engine, connections, and transactions. First, some top-level points about what the SQLAlchemy Engine is trying to do. The first thing we have to deal with is the Python Database API, which is PEP 249, the system that most people use for providing Python database interfaces. There are lots of them; most databases have more than one, and they're very, very different. Even though there's a spec, the DBAPIs are super different from each other.

This is what a DBAPI looks like; this is the psycopg2 DBAPI, which is pretty popular. It has some performance issues, but it's the one most people use right now for Postgres; there are some others that are more asyncio-oriented, and there's actually a psycopg3 in the works that will also do async. For now, a DBAPI looks a lot like this: there's a connect() function, where the arguments are kind of open-ended for different drivers; you get a cursor to work with SQL, and the cursor is also how you fetch rows; the cursor represents an active resource, so you want to close it at the end; and then the connection itself has a commit() method. The Python Database API does not have any autocommit turned on by default. This is not SQLAlchemy; this is the Python Database API that SQLAlchemy is going to use internally in order to do things.

Important DBAPI facts: the DBAPI is not autocommitting by default; there's always a transaction, and you have to call commit() or rollback(). Call commit() when you want your data to go in. DBAPIs achieve this by employing an "autobegin": when you first run a statement, or first connect (sometimes, depending on how they want to do it), a transaction has begun and it's open. Most DBAPIs now have an autocommit feature that turns this off, and SQLAlchemy supports that. There are also six different bound-parameter formats across DBAPIs.
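As a rough sketch of the bare DBAPI pattern being described, here's the same shape using the standard library's sqlite3 driver instead of psycopg2, purely so it's self-contained and runnable:

```python
import sqlite3

# connect() arguments are driver-specific; sqlite3 just takes a filename or ":memory:"
connection = sqlite3.connect(":memory:")

cursor = connection.cursor()
cursor.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, emp_name VARCHAR)")

# bound parameters: sqlite3 uses the "qmark" paramstyle, one of the six PEP 249 formats
cursor.execute("INSERT INTO employee (emp_name) VALUES (?)", ("spongebob",))
cursor.execute("SELECT emp_id, emp_name FROM employee WHERE emp_name = ?", ("spongebob",))
print(cursor.fetchall())

# the cursor is an active resource, so close it when done
cursor.close()

# no autocommit by default: nothing is persisted until commit() is called
connection.commit()
connection.close()
```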
Bound parameters are obviously very important to use, because that's how you take numeric, date, and textual values, any kind of user input that is part of a SQL query, and pass them separately from the statement, so that the driver can make sure the value is sent to the database correctly and safely escaped; it removes the possibility of SQL injection attacks. It also has a lot to do with coercing typing information, like a date needing to be formatted a certain way, or a binary needing to be sent in a certain encoding, things like that. But the Python Database API decided to have six different parameter formats, for no reason I can imagine, and it's kind of inconvenient. And as I already said, the DBAPIs have lots of inconsistencies, and in my opinion it's not possible to write non-trivial DBAPI-agnostic code without using libraries. Meaning, when people say "I'll just use the DBAPI", fine, but you're going to be hard-coding to a certain database for sure for anything non-trivial, because they're all very different in how even simple things are done.

A quick sample of DBAPI inconsistencies: there is a standard exception hierarchy, but you have to import those classes from the driver's own module, so it's not easy to catch an exception in an agnostic way across two different DBAPIs. With SQLite there are no datetimes; we have to turn them into strings, whereas other DBAPIs will take dates directly. If you're on pyodbc with SQL Server, you might find that sometimes your indexes don't get used, because Python 3 Unicode strings are going to force it to use NVARCHAR for some data types.

(What is that artifact on my screen? Oh, is that the group chat, in the lower right? Yep. Right, I'm looking at the chat so people can talk to me; where should I put it? Is that better? Better, all right. Because if anyone's talking to me I won't know unless I see it coming up in there. So yeah, this is the technology, and I've got to change that.)

psycopg2's executemany is slow, and there are special steps to make it faster; you just wouldn't know this if you were coding to the DBAPI in an agnostic way. cx_Oracle is totally crazy; you have to do so many things to make it act normally; it's just nuts, I don't even want to get into it. The MySQL drivers handle rowcount differently, and people who work with MySQL will probably notice that rowcount is different there; there are special flags to change it. None of this is important in itself; it's just to show that there are all kinds of silly details that differ across DBAPIs.

So what is SQLAlchemy doing with the DBAPI? (Let me see what my time for this section is... okay.) SQLAlchemy's first goal is to tame the DBAPI inconsistencies. It provides a consistent URL-based connectivity pattern; anyone who has already used SQLAlchemy knows this. It provides an extensible type system; this is the big thing people don't notice, but it is very complicated. When you insert rows in a database, usually (not always, but often) you have integer primary keys that auto-generate from a sequence, an auto-increment, or an identity column. The act of creating that new identity and then getting it back from the row is a completely crazy thing that is different on nearly every driver; there are different techniques that you need to use for different databases and different drivers.
One of SQLAlchemy's very first goals was to normalize all of that; it's just a complicated thing. It also gives you a single bound-parameter format and a fixed exception hierarchy, though we don't normalize the messages: if a certain database spits out a certain exception message, it's not going to be the same as another database's. There are ways to normalize that too, but that's not part of SQLAlchemy.

And then, when we come to a slide that has this slide-runner thing on it, that's where we go to the slide runner, if I can get there. So now you should see a terminal window. This is what it looks like when you create a SQLAlchemy Engine; anyone who has already used SQLAlchemy has done this, because this is basically ground zero for how to get to your database. There's a URL for SQLite here; if you were connecting to other kinds of databases there'd be a hostname, something like mysql://scott:tiger@hostname/databasename; those are the other kinds of URLs. When you do create_engine, it doesn't actually connect to anything: it didn't create any socket connection, and it didn't open the SQLite file yet. It just verified that your URL makes sense, loaded up a driver, loaded up a dialect (the SQLite dialect), and set up the pool inside, the NullPool, which hasn't done anything yet, so it hasn't connected. It's just setting up what you're going to be doing.

Then, for the purposes of these slides (you're going to see this in a couple of places), there's this thing called "future", and this is where I'm also going to try to show some differences between the 2.0 style of doing things and the 1.x style, as we'll call it. There's a package called sqlalchemy.future that's in master; it's not in 1.3, it's only in master, which will be version 1.4, and it has basically a create_engine function and a select function in it. That's all it's got, two functions, and that makes all the difference. So we're going to make another engine, which I'm only going to refer to a little bit, which is the future engine: basically it's "from sqlalchemy.future import create_engine", the same create_engine with a little flag set on it, and we'll call it future_create_engine and make a second engine called "future". The reason I'm doing this is so that later, when I show you how to get result sets, I can show you the slight differences between the two engines. Does that make sense to people? I wasn't sure how much of this to introduce in a beginner tutorial, but I also wanted to show the people who already know SQLAlchemy, here's the 2.0 stuff. So anything "future" is 2.0, yes; however, this package will be in SQLAlchemy 1.4, yes, the future package will be in 1.4. And when you go to 2.0, the future package will still be there, but it'll just be a synonym for importing sqlalchemy directly. Understood? Does that make sense? It's a little weird, right? The thing is, the future package ended up being so small that I might not even do it like this; it's literally two little things, so maybe it won't happen. For now, if you get master from GitHub and you want to use the future stuff, this is how you get it.
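A minimal sketch of the engine setup being demonstrated; the file name is illustrative, and the NullPool and echo settings reflect how the talk's examples are described rather than any required configuration:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# lazy initialization: nothing connects here; the URL is parsed, the SQLite
# dialect is loaded, and a pool (NullPool in this talk) is set up for later
engine = create_engine("sqlite:///some.db", poolclass=NullPool, echo=True)

# other backends only change the URL, e.g.
# create_engine("mysql://scott:tiger@hostname/databasename")

# the "future" (2.0-style) engine available in master / 1.4
from sqlalchemy.future import create_engine as future_create_engine
future_engine = future_create_engine("sqlite:///some.db", poolclass=NullPool, echo=True)
```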
So, back to our normal engine without the future stuff: now we're going to actually connect. Now we have this object called the SQLAlchemy Connection, and what's happened is that we actually made a SQLite connection using the pysqlite driver, which in Python comes packaged as sqlite3; it's a Python built-in library. The Connection is a facade around the SQLite connection, or a proxy if you want to see it that way. You can say connection.connection.connection; there are two layers in between: "connection" is the SQLAlchemy Connection that we work with; connection.connection is a little bit of magic that you don't need to know about, called the connection fairy (people who have debugged SQLAlchemy applications have certainly seen it in tracebacks; it's a little wrapper that tracks when the connection goes out of scope); and connection.connection.connection is the actual SQLite connection. Yes, fairies. The reason it's called a fairy is that it was meant to be an ephemeral object: when you connect to the database there's this little wrapper thing with a weak reference to it, and then it just kind of magically vanishes; that's why it's called that. Anyway.

One thing that happens in SQLAlchemy 2.0 is that we don't let you send a plain string straight to execute() anymore. Over the years there have been people upset about strings being passed in certain places, and we want to make it very clear that when you're sending a string to the database, you know you're sending a raw string to the database, so there's no chance, we hope, that you're taking a string from someone's web form and letting them own your database. When you want to execute a string, the correct way to do it is to use the text() construct, which will also normalize the bound-parameter format. So we say something like statement = text("SELECT emp_id, emp_name FROM employee WHERE emp_id = :emp_id"). I should also note that this slide-runner thing actually created some tables for us, which is why this is going to work; the SQLite database has a table created already. So basically: from sqlalchemy import text, create a statement object with some string in it, and then we execute it, passing a parameter of 2 for emp_id, and we get a result back. When I press return, you'll see the SQL logging that shows what we just did. One thing to note is that the bound parameter in the executed SQL is a question mark; that's called "qmark" format, one of the six DBAPI formats, and SQLAlchemy in this example converts from its internal colon format to the question mark. So far so good.

We have a result object back. It acts a little bit like the Python DBAPI cursor: it has methods like fetchone(), and its rows come back as tuples. They're actually not quite tuples: first and foremost they are named tuples, meaning the names of the columns are available on the row as names you can access, and those names are the keys, so emp_id and emp_name are the keys here.
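Here is a small sketch of the text() pattern just described; the table and sample rows are created inline so the example stands alone, and the exact column names are reconstructed from the talk:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://", echo=True)

with engine.connect() as connection:
    connection.execute(text(
        "CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, emp_name VARCHAR)"))
    connection.execute(
        text("INSERT INTO employee (emp_name) VALUES (:name)"),
        [{"name": "spongebob"}, {"name": "sandy"}],
    )

    # text() always uses the :named format; SQLAlchemy converts it to the
    # driver's own paramstyle (qmark for SQLite) when the statement runs
    stmt = text("SELECT emp_id, emp_name FROM employee WHERE emp_id = :emp_id")
    result = connection.execute(stmt, {"emp_id": 2})

    row = result.fetchone()
    print(row)           # rows behave like named tuples
    print(row.emp_name)  # attribute access by column name
```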
This is actually the first place where something is changing in SQLAlchemy 2.0: rows are becoming even more like tuples. Right now, and in SQLAlchemy ever since 2005, if you have a row you can also use it like a dictionary, with dictionary brackets and the key name. The problem is that a Python dictionary, a mapping, is not actually fully compatible with being a tuple in every way possible. The named-tuple access is always fine; as people might know, Python has a built-in class called namedtuple, where you have a tuple with any number of elements in it and you can access them by name, like it's an object. But the thing with the brackets is a little more controversial. (Let me check the time: ten minutes.) I'll show you why in a minute. So in SQLAlchemy 1.4, the old way will be deprecated; when you want to do it the new way and you want to use a string name to get at a column, you say row._mapping first to get the dictionary view of the row, and if you don't use ._mapping, you're on the tuple view of the row. In 2.0 this will be strict. (My phone started talking to me; thank you, phone.)

So that's that part, and then you have the named tuple. A 1.3-to-1.4 migration note for the future mode: the __contains__ method of this tuple, which in Python is how you implement the "in" operator (people may know that "x in y" is implemented using a method called __contains__, or maybe people have no idea, whatever), is deprecated in its old form in 1.4. In classic SQLAlchemy, since 2005, and this is straight-up my own stupid mistake, __contains__ looks for the keys, the names of the columns, in the row. That's actually kind of useless, because you've already written a SELECT statement, so you know what the column names in the row are; it's silly that it works this way. And that's why row["emp_name"] in this format is also kind of incompatible: if you assume row["emp_name"] returns a value, then you would assume "emp_name" is in the row; but it's a tuple, and it really shouldn't work that way. So in 1.4 future mode, if you use your future engine, this is reversed, so that "emp_name" is not in the row, and the value "sandy" actually is in the row. It's a small change; it's not hard to make on the SQLAlchemy side, it's easy for me to make, but if there's some big application relying on the old behavior, it could be a huge deal, so we're making a really gentle upgrade path. Anyway, this is what happens; so far so good. (How's the time? Now I'm running out of time.)

The result object actually has a cursor inside of it, so it does hold a resource that can be closed. The result normally closes the cursor automatically in almost all cases, so you normally don't have to call result.close(), but it's there in case you're doing something like cutting off your results early, which you wouldn't normally do. You can iterate a result: you can just execute a SELECT * FROM employee and loop over what comes back. People might notice what the subject of our sample data is today: these are characters from SpongeBob SquarePants. We'll be working with spongebob, Sandy Cheeks, Squidward Tentacles (that's his full name), and Patrick will be making an appearance as well. You can do fetchall(), which is another DBAPI method; same thing, it gives you a list. And in 1.4 there will be more fancy methods: for example, I want to get just the emp_name column by itself, without tuples, and have it give me all of them back, so I get just a list of strings. This will also apply to the ORM, because the ORM will give you this same result object, so you can do all kinds of slice-and-dice things that I think people will appreciate, because they've been asking for it; there's a sketch of these access styles below.
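A sketch of the row access styles and result conveniences described here; it assumes the engine and employee table from the earlier text() sketch, and the ._mapping and .scalars() calls specifically assume SQLAlchemy 1.4, where those APIs exist:

```python
from sqlalchemy import text

with engine.connect() as connection:
    result = connection.execute(text("SELECT emp_id, emp_name FROM employee"))
    for row in result:
        print(row.emp_name)              # named-tuple style, by attribute
        print(row[1])                    # plain tuple style, by position
        print(row._mapping["emp_name"])  # 1.4 dictionary view of the row

    # 1.4 result conveniences: a single column's values as plain scalars
    names = connection.execute(
        text("SELECT emp_name FROM employee")).scalars().all()
    print(names)  # e.g. ["spongebob", "sandy"]
```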
The Connection itself has a close() method, and this is a more important method: it does a thing we call "release", meaning it doesn't necessarily actually close the database connection. In this case it will, because this SQLite setup is not really pooling, but when you use a connection pool, releasing the connection means you are no longer using it; it can go back to the pool and be put back in the connection pool to be used again. If the connection pool is configured such that it's already full and doesn't need this connection, it'll close it; if the connection pool is configured to have unlimited connections, then it won't close it; and if you're using the pool we have here, the NullPool, then it will close it. The point is, when you release, when you say connection.close(), from the SQLAlchemy Engine side the connection is released to the pool, so that the pool can manage that connection in whatever way it's programmed to, and you don't have to worry about it at the programmatic level; it's more of a configuration-time thing.

Now, you probably won't be calling connection.close() directly, because in modern SQLAlchemy and modern Python you're going to use a context manager. I was just talking on the Slack chat with Paul Everitt about how, when SQLAlchemy came out in 2005, Python was really different: context managers, meaning the "with" statement, were not introduced until Python 2.5, and SQLAlchemy was running on Python 2.3, so we had a couple of years before we even had the ability to use a context manager. A lot of SQLAlchemy's patterns were not well formed around context managers, because that was something added to Python later. But nowadays we should be good: if you want to use an engine directly, connect, do something on a connection, and then close it, you should use a context manager, as in the sketch below.

Also, you might notice in the log output, and this will become more interesting later, that the output from "SELECT * FROM employee" shows the statement was cached. That doesn't mean the results were cached; it means the string compilation was cached, which in this case seems a little funny because we gave it the exact string already, but later you'll see there are other kinds of objects, not strings, that get cached. So look for that log output to see it happening.

Someone has a question: will the result object close because it goes out of scope? Yes, it will close, because the cursor goes out of scope, and the database driver usually arranges for the cursor to close when it goes out of scope. The Connection, in modern SQLAlchemy, is handled too: if you were to drop the connection, the connection pool has logic that will receive it, as in "oh, you lost scope on this thing"; it uses a weakref callback and then returns the connection to the pool automatically. However, this is an anti-pattern: you should always make sure things are handled, probably by context managers, and make sure things are closed. When you let the garbage collector do things, you get problems, like other threads running GC collections sometimes, and it's not a good idea; so we're probably going to have warnings popping up when you don't close your connection and the pool grabs it. Cursors are less interesting; they tend to not even be real objects in some database drivers, so it's not as critical, but for scoping it's definitely better that things are explicitly cleaned up in the way you write your code. So context managers, and fully exhausting your result sets, is the best way. If the result set does not return any rows, like an INSERT that returns nothing, the cursor will already be closed by the time you get the result, so cursor closing is pretty foolproof, mostly.
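The context-manager pattern being recommended, as a small sketch (same assumed engine and table as above):

```python
from sqlalchemy import text

# the "with" block releases the connection back to the pool when it exits,
# even if an exception is raised inside the block
with engine.connect() as connection:
    result = connection.execute(text("SELECT * FROM employee"))
    print(result.fetchall())
# no explicit connection.close() needed here
```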
Should a connection be global? No; a connection ought never to be global. The thing that's global is the Engine: the Engine is the factory for connections, so you make the engine a global object, and then when you want to do some work with a connection, well, it depends on what kind of application you're writing. If you're writing a console script that's just going to run through a bunch of data and be done, then maybe you could have the connection be global; otherwise you generally want to use the connection kind of like a file handle. In Python you don't want to open a file and leave it open throughout the whole program; you open the file, read the data, and close it, or you open the file, write whatever you're writing, and you're done with it. So you should try to limit the scope of connections, hopefully to not be global; using a context manager is a pretty good way to display visually what the scope of things is. (18:25; okay, now I'm falling behind.)

So, some words on autocommit. The DBAPI does not have autocommit; SQLAlchemy has implemented its own autocommit, which we're going to be changing. If you run an INSERT statement like this, there's a regular expression that detects the INSERT, and you can see in the logging, when I run that statement with connect(), that it does a COMMIT. That's going away in the future engine, because it's an anti-pattern: people usually run lots of statements that are all autocommitting without realizing it, and it's not efficient, it's not a good way to do things. I've come to realize that the DBAPI's design, where commit is not automatic, is a good thing. If you want true autocommit, you should use database-level autocommit, which is a different option in SQLAlchemy that I'm not covering here; if you want to do a bunch of things with true autocommit, there's a flag you can set for that. But when you work with the engine normally, it will not autocommit.

The future engine instead has a little commit-as-you-go method right on the Connection, which is not there in current SQLAlchemy. And if you want to explicitly demarcate where you're beginning and committing with a connection: first of all, you can begin right from the engine, and this is probably the best pattern; I use it if I'm running a bunch of transactional SQL: "with engine.begin() as connection", then run some stuff on the connection. You can also run a transaction from the Connection itself, without a context manager: you connect, and then you say trans = connection.begin(), do some things, and commit it, or you can roll it back. Here's where Patrick becomes an employee and tries to become employee of the month, but we roll it back. (I'm moving a little quick because I'm falling behind my time a little bit.) So there's the SQL for that; so far so good. Some more context-manager stuff: if you have your connection, "with engine.connect() as connection", you can say "with connection.begin() as trans"; this is the best pattern if you want to run multiple transactions on the same connection, and running them like that is kind of trivial.
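A rough sketch of the transaction patterns just walked through; the statements and names are illustrative, and future_engine refers to the 1.4-style engine from the earlier sketch, whose connections have the commit-as-you-go method:

```python
from sqlalchemy import text

# "begin once": the block commits on success and rolls back on exception
with engine.begin() as connection:
    connection.execute(
        text("INSERT INTO employee (emp_name) VALUES (:name)"),
        {"name": "patrick"},
    )

# explicit Transaction object on a Connection
with engine.connect() as connection:
    trans = connection.begin()
    connection.execute(
        text("UPDATE employee SET emp_name = :name WHERE emp_name = 'patrick'"),
        {"name": "patrick, employee of the month"},
    )
    trans.rollback()   # or trans.commit()

# commit-as-you-go, available on the 1.4 "future" engine
# (assumes this database also has the employee table)
with future_engine.connect() as connection:
    connection.execute(
        text("INSERT INTO employee (emp_name) VALUES (:name)"),
        {"name": "squidward"},
    )
    connection.commit()
```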
The next thing is what's called a nested transaction, which is how we represent a SAVEPOINT in SQLAlchemy. (Let me just pick up my schedule that I dropped on the floor... okay.) In SQL, a savepoint is like a named marker inside a transaction that you can roll back to in the middle of the transaction. So you can have a transaction where you've put in a bunch of stuff, you make a savepoint, and then you can roll back everything subsequent to that savepoint, back to that savepoint, so it really ends up acting like a nested transaction; we call it begin_nested(). Here, both examples use begin_nested: the first one does an explicit begin_nested() and then rolls it back, and the second one uses begin_nested() as a context manager. One thing I forgot to note about these context managers: when you use a transaction as a context manager and the block completes, it automatically commits the transaction for you; if the operation you're doing throws an exception, the context manager emits a ROLLBACK for you. That's the important thing about them: unlike the connection context manager, which only makes sure the connection is returned to the pool, the transactional context manager will also roll back the transaction if anything goes wrong. So here are savepoints, and you can see that savepoints have a name: when it runs, SQLAlchemy creates a savepoint, SAVEPOINT sa_savepoint_1, does some things, then rolls back to that savepoint; everything that happened there is gone, but we're still in the transaction. Then we do some more stuff on savepoint two, and then we release it, so savepoint two's work actually stays part of the transaction; then we commit, and the data goes in. So that was the engine chapter; is there anything else, any other questions, before we do the next part? I'm basically watching the Zoom group chat. I don't know the level of the programmers who are on here; this is an intro to SQLAlchemy, and it was originally for people who know nothing about it.

So the question is, do you recommend building to "future" at this point? For 1.4, I'm hoping that betas will be happening by the end of the summer; I've got most of the hard stuff done. When the 1.4 beta comes out, I hope people will try the betas and experiment with the future stuff. When 1.4 goes final, which will hopefully be sometime in the autumn, people will upgrade to 1.4, and then you can have fun with that. The next step is to make sure your application is totally on Python 3, and once you're on Python 3, you can begin migrating to the future stuff, which by then should have had a lot of critique and people correcting what we're going to do. The migration notes for 1.4, which are already live on the website, have a lot of information on how this upgrade process should work. Right now 1.4 is still very much under development; I have been working on this stuff for over a year, but it is under development, so I would encourage people to play with it in master if you're feeling experimental. Yeah, that was verbose; so, build to future probably when 1.4 is out is what you want. You certainly want to look at 1.4 and future and see what it's going to look like, and if you see any major issue you're going to have with it, let us know, so I can think about it.
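Before moving on, here's a minimal sketch of the two begin_nested() savepoint patterns described above (explicit rollback, then the context-manager form); the inserted names are illustrative, and note that the default pysqlite driver has known quirks with SAVEPOINT, so on SQLite this may require the workarounds described in the SQLAlchemy docs:

```python
from sqlalchemy import text

with engine.begin() as connection:
    # explicit SAVEPOINT, then rolled back: this work is discarded,
    # but the enclosing transaction stays open
    savepoint = connection.begin_nested()
    connection.execute(
        text("INSERT INTO employee (emp_name) VALUES (:name)"), {"name": "plankton"})
    savepoint.rollback()

    # SAVEPOINT as a context manager: released (kept) on success,
    # rolled back automatically if the block raises
    with connection.begin_nested():
        connection.execute(
            text("INSERT INTO employee (emp_name) VALUES (:name)"), {"name": "gary"})
# the outer engine.begin() block commits here, so only the second insert goes in
```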
There are certain decisions that are going to be inconvenient either way: things that would be inconsistent in one direction or inconvenient in the other direction, so we have to decide between consistency and convenience for a few things. Anyway, we'll move to the next section, and after this section, the Core SQL part... well, kids, we have two more sections. So that slide show is over; then we go back to the slides so I can introduce... well, there was some more engine stuff, actually; I forgot. Now, level two. Okay, level two: now we're on the blue circle. We have an engine, we know how to connect to the database and emit SQL; next is table metadata, reflection, and DDL. (Just checking something here... yep, okay.)

So what is metadata? I got this name from Martin Fowler, who wrote a book called Patterns of Enterprise Application Architecture. It describes the structure of a database, meaning tables, columns, and constraints, in terms of Python data structures. It serves as the basis for SQL generation and object relational mapping. We can create table metadata and generate a schema, meaning we emit DDL, which is Data Definition Language. We can also take an existing database, introspect it, and get Python metadata objects back. And it forms the basis of migration tools like Alembic; there's a line at the back, you can see the word Alembic, that's our migration tool. And then we can show some metadata stuff.

This is what it looks like when you define the metadata for a table in your database. It intentionally looks a lot like (stepping away, okay), it intentionally looks a lot like the SQL CREATE TABLE statement. It starts with the Table, then the name of the table, and then the columns are set up in a list; it's meant to read like a CREATE TABLE statement. Here we have a couple of data types, Integer and String, and there's also this object called MetaData, which is actually a collection: MetaData is a collection of Table objects (and other things too, but mostly tables). As you make each Table, you tell it "hey, I'm going to be called user, and I want to be part of this collection called metadata", and MetaData is basically a place where we can manage a group of tables.

When you have a Table object, you can look at its name, and you can look at the columns, which are available from an attribute called .c; that's the Column object we just created, living inside the table in an associative collection under .c. The .c attribute can give you the list of columns; it's kind of like a dictionary, but not totally: it's a namespace. When you have a Column itself, you can look at the column's name and at its data type. The Table also has things like the primary key constraint; it has indexes, constraints, uniques, and foreign keys; sequences are sometimes associated with a table, although they're not part of the DDL for the table itself. When you have a Table object you can do things like create a SELECT statement, and we'll show how to do that more later, but as a preview: you can take a table, send it to this construct called select(), and you get a select statement when you print it or send it to the database.

Now we're going to create the table: we have this MetaData, and we say "hey, metadata, create all the tables using this database connection". We use a transaction because a lot of databases have what's called transactional DDL, meaning that when you emit CREATE TABLE you also have to commit the transaction; see the sketch below.
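A sketch of the table-definition pattern being described; the column list loosely follows the talk's user table and may not match the demo exactly, and the engine is assumed from the earlier sketches:

```python
from sqlalchemy import MetaData, Table, Column, Integer, String

metadata = MetaData()

# meant to read a lot like a CREATE TABLE statement
user_table = Table(
    "user",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("username", String(50)),
    Column("fullname", String(100)),
)

print(user_table.name)          # "user"
print(user_table.c.username)    # a Column, accessed through the .c namespace
print(list(user_table.c))       # all the columns
print(user_table.primary_key)   # the PrimaryKeyConstraint

# emit CREATE TABLE inside a transaction
with engine.begin() as connection:
    metadata.create_all(connection)
```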
SQLite itself, I think, has transactional DDL, but the SQLite driver turns it off; so basically, using engine.begin() is a good idea when you create your tables. So it creates the user table, and you can see SQLAlchemy did a few things to check whether "user" already existed, using SQLite's PRAGMA commands, and then you have the user table: where we used the String data type you see VARCHAR came out, and you can see our primary key.

Here's a fancier table with more stuff: a DateTime column, a precision Numeric column, a String used as a primary key (if you want; do whatever you want), and here's an Enum. SQLite doesn't have an ENUM datatype, so when it does the CREATE TABLE you see VARCHAR(1); it's of size 1 because the Enum had one-character elements in it. So that's a fancy table. Now that we have the MetaData object and we've made two tables, we can actually see those tables inside the metadata, in a dictionary called .tables. When we look at the keys of .tables we see "user" and "fancy", and we can pull up metadata.tables["user"], or look at the fancy table the same way, and that's basically the same object as the fancy table we made; it's inside a dictionary.

Another table, to show some more things off, and this will be one of the tables we use in the examples, is a table to store email addresses for each person that has a user account. This one has something called a ForeignKey, and I'm hoping people know what a foreign key is. A foreign key in a database is a constraint on a column or set of columns which means that every value in that column must exist somewhere else, typically as the primary key of a different table (or of the same table). A foreign key usually refers to the primary key of another table; some databases like Postgres also allow you to refer to a unique constraint, but in the vast majority of cases a foreign key is a constraint that makes sure a value in this user_id column is definitely present in the id column of the user table. We set this up, again, in a way that looks like DDL: the user_id column gets a ForeignKey to user.id, named by its string name, and then we create the address table; you can see at the bottom, FOREIGN KEY(user_id) REFERENCES user (id).

To show a composite foreign key: a composite foreign key is a foreign key that has more than one column, and you use it when the primary key you're referring to has more than one column, which is called a composite primary key. You want to create that constraint as one constraint, not two separate constraints on each column, so a foreign key constraint on multiple columns is declared at the table level using the ForeignKeyConstraint object. You can also use ForeignKeyConstraint for a single-column key; it's just more convenient to use ForeignKey. These two tables are not going to be part of our examples, but just to show some more stuff: this "story" table has a primary key for each story, but then it has different versions, so the primary key is composite; I assume people kind of know about composite primary keys. In the CREATE TABLE for that, you can see the foreign key at the bottom has the two columns in it, referencing the two columns; there's a sketch of both forms below. (Checking the time: for this section we have until 9:55, okay.) Reflection: that was DDL, how to create tables and run DDL; reflection is the other direction. I'm going to try to get this done by 9:55. Let's make a brand-new MetaData object that's empty.
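An illustration of the ForeignKey and composite ForeignKeyConstraint forms just described; the table and column names approximate the slides rather than reproduce them:

```python
from sqlalchemy import (
    MetaData, Table, Column, Integer, String, ForeignKey, ForeignKeyConstraint)

metadata = MetaData()

user_table = Table(
    "user", metadata,
    Column("id", Integer, primary_key=True),
    Column("username", String(50)),
)

# single-column foreign key, referring to "tablename.columnname" by string
address_table = Table(
    "address", metadata,
    Column("id", Integer, primary_key=True),
    Column("user_id", ForeignKey("user.id"), nullable=False),
    Column("email_address", String(100), nullable=False),
)

# composite primary key...
story_table = Table(
    "story", metadata,
    Column("story_id", Integer, primary_key=True),
    Column("version_id", Integer, primary_key=True),
    Column("headline", String(100)),
)

# ...and a composite foreign key declared at the table level
published_table = Table(
    "published", metadata,
    Column("pub_id", Integer, primary_key=True),
    Column("story_id", Integer),
    Column("version_id", Integer),
    ForeignKeyConstraint(
        ["story_id", "version_id"], ["story.story_id", "story.version_id"]),
)
```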
Question: are there plans to make Alembic a built-in feature of SQLAlchemy? Well, no. Alembic is very tightly linked to SQLAlchemy, but it's a separate package; it's certainly very joined with it as far as how it does things, but it's always going to be separate, because it's its own thing, and there are other ways to do migrations too. So yeah, it'll stay separate.

Back to metadata: we make a new MetaData collection, which is empty, and we put a new Table into it, but we don't give the table any columns; we just say it's empty, and then we say "hey, Table, can you please autoload the information about your columns and indexes from the database". Here we just use engine.connect(); we don't have to use a transaction or say begin, because this is a read-only operation. So this Table object, when it gets created, loads its columns from that connection, and then you can see a bunch of SQLite PRAGMA stuff. It's a little redundant; the reflection logic is not well optimized yet for SQLite and some other databases, and it's a long-term goal to make it more optimized. Anyway, the user table we just reflected is a brand-new object, and it pulled up all the information about its columns, its primary key constraint, and whatever other constraints it might have had, using database introspection.

If we want to do more fine-grained inspection, we can use (excuse me, that's breakfast) a function called sqlalchemy.inspect. You run inspect() on the engine and it handles connecting for you; it's a read-only operation. The inspector can give you things like the names of tables, the names of columns for a specific table, or the list of foreign keys for a table. It gives you the data in a raw, kind of JSON-ish format; it's actually Python dictionaries, so it's not strings. And then there's a feature people like a lot, which is that you can take another blank MetaData, or one that already has tables in it, and tell it to reflect every table you don't already have, getting every table into the MetaData all at once: you say metadata.reflect() with a connection, and you can see it ran a lot of queries, PRAGMAs for all the different tables. That's because the first thing it did was SELECT name FROM sqlite_master WHERE type='table', which got it all the names.

Two questions. The one about backrefs: backrefs have to do with the ORM, so backref does not apply to what we're doing right now; it's only when you use the ORM that you might want a backref, and the answer is that backref is always optional, you don't need to use backrefs. Actually, if and when we get all the way to the ORM part of this tutorial, I'm not going to use backref; I'm going to use a different keyword called back_populates, which is a little more explicit than backref. What is reflection used for? Well, first of all, the Alembic tool uses it, because it compares your model code to what's in the database. It can also be used if you have an existing database, like a WordPress database or some PHP app's database, and you want to just play with that data, migrate it, and move things around; it's very handy to reflect all the tables and just work with them, rather than having to redeclare all those tables. That's basically the use case I see for it. Caching reflected metadata: I think people try that, but I don't really like that use case.
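A rough sketch of the reflection patterns described here; autoload_with is the 1.4 spelling (1.3 pairs it with autoload=True), and the table names assume the tables defined in the earlier sketches already exist in the database:

```python
from sqlalchemy import MetaData, Table, inspect

# reflect a single table: columns, primary key, and constraints are loaded
# from the database via introspection; a plain connect() is enough
metadata = MetaData()
with engine.connect() as connection:
    user_reflected = Table("user", metadata, autoload_with=connection)
print(user_reflected.c.keys())

# fine-grained inspection; inspect() handles connecting for us
inspector = inspect(engine)
print(inspector.get_table_names())
print(inspector.get_columns("address"))       # list of dicts, one per column
print(inspector.get_foreign_keys("address"))  # list of dicts, one per FK

# reflect everything at once into a MetaData collection
metadata_all = MetaData()
with engine.connect() as connection:
    metadata_all.reflect(connection)
print(metadata_all.tables.keys())
```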
If your application is coded to a set of tables, and those are the tables you're using, I would have the MetaData be first and foremost in your application; that's the canonical place where your database metadata is represented. If you're doing something with a database that your DBA manages, then maybe you want to use reflection. Reflection is a little bit slow, though. So yes, you can reflect all the tables, and it's good for migration work. I mostly use reflection for migration stuff and quick-and-dirty stuff; I don't use it for main application development, but some people do. SQLAlchemy is trying to be open-ended about what you want to do, so once you've reflected the tables, you can do whatever you want with them. We reflected the whole database just now, so the MetaData has all the tables we created inside of it; they're all there, you can get them all back, and you can now use those tables to make SQL statements. So that is the end of the metadata part. These first two sections are intentionally kind of quick, because when we get into the ORM stuff it's going to be slower; there will be a lot more questions.

There are a few more slides to do with metadata. Here are some basic types: Integer; String; Unicode, which is a string but will generate different DDL and may also have different handling inside of it for how bound parameters and result sets come back (particularly in Python 2 we had to do a lot of Unicode conversion; in Python 3 it's pretty much taken care of). Will String go away for Python 3? That's a question; no, because String is VARCHAR. This is where we're honoring what the database does: your database has a VARCHAR type and it might have an NVARCHAR type, and they're different. So if you honestly want to work with VARCHAR on Oracle, you want to use String and not Unicode; or if you want to work with VARCHAR on SQL Server, which also has NVARCHAR, you'd want String. The thing is, the String datatype will accept a Python Unicode string just fine, so the String/Unicode distinction is a little bit fungible; use the Unicode type if you definitely want to store Unicode and you want the NVARCHAR type on those databases that support it. Boolean is tricky, because databases don't all have a boolean type, so you've got to use TINYINT or BIT; there's some juggling happening there. DateTime returns Python datetime objects and receives Python datetime objects; there's also Date; and separately there are floating-point values, which are different from Numeric; that's the difference between getting back a Python Decimal object or a Python float. We also have JSON: Postgres, MySQL, and SQLite all have a JSON type now, so use JSON for that; Postgres also has JSONB, which is like a binary JSON that's more efficient or something like that; Postgres also has an ARRAY data type.

This is also a quick rundown of the create and drop methods. When you have a MetaData object you can say metadata.create_all(); you can say table.create(); you can say metadata.drop_all() or table.drop(). They all accept a database connection as their argument, plus an optional checkfirst, which determines whether the CREATE or DROP runs unconditionally, in which case it fails if the object is already present or not present, or, with checkfirst, runs an introspection first to see whether the table exists, as appropriate. You can also pass an Engine to create_all, drop_all, create, and drop; however, I might be changing that for 2.0.
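A brief sketch pulling together the datatypes and the create/drop helpers mentioned here; the column set is invented for illustration and the engine is assumed from earlier:

```python
from sqlalchemy import (
    MetaData, Table, Column, String, Unicode, Boolean, DateTime, Numeric, JSON)

metadata = MetaData()

fancy_table = Table(
    "fancy", metadata,
    Column("key", String(50), primary_key=True),  # a string primary key is fine
    Column("timestamp", DateTime),                # Python datetime in and out
    Column("amount", Numeric(10, 2)),             # Python Decimal in and out
    Column("flag", Boolean),                      # emulated with INT/BIT where needed
    Column("title", Unicode(200)),                # NVARCHAR on backends that have it
    Column("data", JSON),                         # native JSON on PG / MySQL / SQLite
)

with engine.begin() as connection:
    # checkfirst introspects for existing objects before emitting DDL
    metadata.create_all(connection, checkfirst=True)
    fancy_table.drop(connection, checkfirst=True)
    fancy_table.create(connection)
    metadata.drop_all(connection, checkfirst=True)
```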
It's better to send the Connection, because that way you are defining the scope of the connection and the transaction and things like that. It's a bit up in the air whether these methods will continue to accept an Engine or whether a Connection will be required.

That brings us to the Core SQL expression language, and this is where the slides start to get longer, because there's a lot to talk about in Core. The SQL expression language builds upon the table metadata in order to compose SQL statements in Python. We build Python objects that represent the individual SQL strings we would send to the database. These objects are composed of other objects; they use the compositional pattern, each one representing some unit of SQL, like a comparison, a SELECT statement, a conjunction, or a cast, all kinds of things. Anything you can imagine, we have some version of it. (Let me move this Zoom group chat away... sorry, I've been terrible with the microphone; thank you.) We work with these objects in Python, and they become strings when we execute them, as well as when we print them. SQL expressions, in both the Core and ORM variants, rely heavily on the method-chaining programming pattern, which means you have an object and you say object.method().method().method(); you keep getting a new object back (or the same object, depending on the kind of method chaining), but either way you end up with object dot method dot method dot method.

All right, we have two sets of slides for this; let's go to the basic one, and after this basic one we'll probably have a five-minute break so we can hydrate a little. Let's again begin with the Table object we had before, not quite the same but close enough, and then create that table in our SQLite database. This should all be old hat now; so there's our user table. Now we have a table, and as we saw earlier it has this collection of Column objects on the .c attribute. What can we do with .c? With user_table.c.username we can make an expression: user_table.c.username == "spongebob". Unlike plain Python, where 1 == 2 gives you False and 1 == 1 gives you True, when we did this with user_table we got a whole new object called BinaryExpression, which is unusual. That's because Python has magic methods called __eq__ and __ne__ and __gt__, and you can make an object in Python that overrides these to do something new, and that's kind of SQLAlchemy's main thing: you make expressions using overloaded Python operators. I got this from a tool years and years ago called SQLObject, which was first written by Ian Bicking, and SQLObject still exists today; the idea of building expressions out of Python operators was ripped off from Ian Bicking by me, so Ian Bicking gets credit for the double-equals-sign thing.

When you have this expression, if you stringify it you get SQL, which interestingly does not say "spongebob"; it has a bound parameter inside of it. That's weird, so keep in mind that with these little objects you can call compile() on them to get the compiled object; it's a SQLCompiler, and the compiled object keeps the parameters separate from the expression. So if you do the compile, there's also... the string... yeah, there it is. We have a question, and it says "the DB is doing the expression, not Python?" We haven't done anything with the DB yet; this is all happening in SQLAlchemy. You basically have
You basically have this expression object. Let's say expr = user_table.c.username == 'spongebob'. expr is a BinaryExpression: it has a left element, it has a right element, and it has an operator. Then there's this thing called the compiler in SQLAlchemy that takes that expression tree and does this thing called compile, and we get a SQLCompiler object, which is all Python; it's got a whole bunch of state in it, and we're just in Python taking objects and converting them to strings, basically the way you'd take a JSON structure and dump it to a string (back in the old days it was the XML dump). There's no database yet, but when we make these strings, those strings are what we will send to the database.

Here's a more elaborate statement that uses the Python bitwise OR operator to do things, and another one using the explicit and_() and or_() functions, and you can see this builds up a bigger expression. As for comparisons, different things can happen with columns: you can do a comparison to None, and SQLAlchemy says 'oh, you used None, let me render IS NULL or IS NOT NULL'. You actually don't want to use the equals or not-equals comparison against NULL in SQL, because it will just return NULL; in SQL, NULL is not a value, it means 'we don't know the value'.

Operators are also type-sensitive. If you have an integer and you use the plus sign, you get the addition operator; if you have a string and you use the plus sign, you get the concatenation operator, which for the default compiler is the double-pipe || operator. There's also an in_() operator, which does something more complicated in 1.4. You have the same syntax in 1.3, but in 1.4 IN operators are 'expanding', which is a little hard to explain: you send a list of values to in_() and they are stored as a single bound parameter, which is then expanded into individual bound parameters when you execute the statement. I think we'll see that later, so don't worry too much about IN; just know that IN is getting better in SQLAlchemy 1.4. Databases also don't have an empty IN, and literally sending an empty list would be an error, so SQLAlchemy implements that case by rendering a special SELECT inside the IN that returns an empty set; this logic is available in 1.3, and in 1.4 it's automatic. That's fairly arcane stuff for people who have dealt with empty IN in SQLAlchemy before; we've had some bumpy history there.

Also, how an expression renders depends on what kind of dialect you're using, meaning whether you're connecting to MySQL or Postgres; there's a database dialect in use underneath the engine. Say we have this expression that is a string concatenation, and I compile it against Postgres. I'm not actually running against a database here; I've just created a dialect object, the PostgreSQL dialect, which is a Python object inside SQLAlchemy that knows how to generate SQL for Postgres. It generates the SQL with a certain bound parameter format and the || symbol. If I compile the same exact expression against a MySQL dialect, I get different SQL.
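Continuing the sketch from above (same assumed user_table; the email string is made up), this shows the conjunctions, the None comparison, in_(), and compiling one expression against two different dialects:

    from sqlalchemy import and_, or_
    from sqlalchemy.dialects import postgresql, mysql

    # conjunctions: the | and & operators, or the explicit functions
    criteria = or_(
        user_table.c.username == "spongebob",
        and_(user_table.c.fullname == "Patrick Star", user_table.c.id > 1),
    )
    print(criteria)

    # comparing to None renders IS NULL
    print(user_table.c.fullname == None)     # "user".fullname IS NULL
    print(user_table.c.fullname.is_(None))   # same thing, spelled explicitly

    # IN with a list of values
    print(user_table.c.username.in_(["spongebob", "sandy"]))

    # the same concatenation expression compiled for two different dialects
    concat_expr = user_table.c.username + "@example.com"
    print(concat_expr.compile(dialect=postgresql.dialect()))  # renders with ||
    print(concat_expr.compile(dialect=mysql.dialect()))       # renders with concat()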
MySQL maybe supports it now, but for a long time it didn't have the double-pipe operator, so the MySQL dialect renders the concat() function instead. So you can write an expression in Python that is database-agnostic, and this is a very small example of that.

Because we want to work with some data and select it, I'm going to first insert some data. This introduces a construct called the insert construct. Given our user table, which represents the columns and the name of this table, we can say .insert(), which returns an Insert object, and then we tell that insert object to insert these two values, which gives us a statement object, kind of like when we made a text() object before. Then we run that statement with connection.execute() inside our transaction with engine.begin(). So we wrote some Python code that has all the nice autocomplete support in your IDE, it looks pretty nice, and it created this SQL for us. It actually did return a result, but the result is automatically closed, so you don't have to worry about the cursor being closed. It created bound parameters and put 'spongebob' and 'Spongebob Squarepants' into the table as our first row, and then, because we ran it inside engine.begin(), it did a commit. Let me see what time I need to be at; oh boy, we're a little behind.

We can also run multiple statements. The insert construct, and also the update construct which we'll see later, normally generate their VALUES and/or SET clause from the list of parameters. Here we have user_table.insert() and we didn't say .values(); we just said 'execute this insert statement, and here are some parameters'. For inserts and updates, SQLAlchemy will automatically figure out the VALUES and/or SET clause from the parameters you pass. That makes it convenient when you want to insert multiple rows: you send a list of dictionaries whose keys are named after the columns, and the insert will just work. So we have some rows being put in, and now we have all four of the characters we're going to be using in the database.

When you send a list of multiple dictionaries to connection.execute(), the DBAPI has a method called executemany() that allows the backend driver to optimize the invocation of multiple statements. The MySQL driver does a good job at this optimization; the psycopg2 driver by default does not. However, SQLAlchemy includes some options to enable those optimizations, and in SQLAlchemy 1.4 I will probably make them the default, so that psycopg2 inserts should be way, way faster; they really had a very bad performance issue for many years, for which they now provide workarounds.
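Here's a small sketch of both insert styles just described, continuing with the user_table and metadata from the earlier sketch and assuming a fresh in-memory SQLite engine (the extra names are made up):

    from sqlalchemy import create_engine

    engine = create_engine("sqlite://")
    metadata.create_all(engine)

    # single-row insert with an explicit VALUES clause
    with engine.begin() as conn:
        conn.execute(
            user_table.insert().values(
                username="spongebob", fullname="Spongebob Squarepants"
            )
        )

    # "executemany" style: no .values(); the VALUES clause is derived
    # from the keys of the dictionaries we pass
    with engine.begin() as conn:
        conn.execute(
            user_table.insert(),
            [
                {"username": "patrick", "fullname": "Patrick Star"},
                {"username": "sandy", "fullname": "Sandy Cheeks"},
                {"username": "squidward", "fullname": "Squidward Tentacles"},
            ],
        )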
All right, now that we have some rows (that's why I did the inserts first, so we have data to select), there's another construct called select, and we can build up a SELECT. The legacy version of select() is passed a list of the tables and/or columns you want to select from. The list is kind of a legacy thing, because the legacy select() also accepts keyword arguments for things like the WHERE clause and the ORDER BY clause; that's all going away. select() will only take the columns or tables you want to select, and then when you want to add the WHERE clause, the ORDER BY, the GROUP BY and so on, you do .where(), .order_by(), .group_by(), etc.; it's going to be method chaining only. Most people use select() this way anyway, because I changed all the tutorials years ago to only use this style.

So here's a select statement: we create the statement with the username and fullname columns, we add a nice WHERE clause with our expression, the whole thing is built up, and then we run it and get the row back. And, I'm wasting time here, but if I run it again you can see that on the second run it was actually cached; that's the new caching in 1.4, the one that took me a year and a half to develop. Here's how to select all columns from a table: you pass just the table object, and I get everything back, all the columns in the rows. I can add a WHERE clause that has OR in it, and an ORDER BY; these are examples.

Okay, a question: are the methods chained in the same order they would appear in SQL? No, and that's a thing to note here: even though I'm writing these in the official SQL order, you can do .where() and .order_by() and .group_by() and .having() in whatever order you want, and they will be assembled into the statement in the correct order. That is not the same thing as subqueries: if you're nesting queries, then obviously you have to build the inner query first and then nest it. But I can take the user table and do it all in the 'wrong' order; watch me get all the names and IDs wrong... okay, print; did I get it right? There you go. So you can use whatever order you want, and it knows where the WHERE clause goes and where the FROM clause goes.

One thing to note that's happening here, which I don't even mention because it's so normal to me, is that we're not even telling it about the FROM clause. The FROM clause is usually derived automatically from the things we're selecting, and if we were selecting from one table while saying WHERE something about another table, it would put both tables in the FROM clause. It's an implicitly generated FROM clause, and it actually has some nice side effects that we'll show off in subsequent slides. We want to be done in four minutes; good, we're in good shape.

You can also give multiple .where() clauses, and they'll be joined by AND, so you can have WHERE this AND that; you can use the and_() function directly; you can mix and match, and it should usually do what you'd expect.

An update: we've seen insert and select, and an update looks a lot like the insert, but it has a WHERE clause (which the SELECT also has). We're going to update the user table where the username is 'patrick', and set fullname to 'Patrick Star'. In this case there are two different bound parameters for this UPDATE statement: one of them is derived from the WHERE clause, and the other one comes from the parameters we passed as the SET clause. So the SET clause normally comes from what we just sent; we could also say update().values(), and you can do that too. An UPDATE statement does not have a VALUES clause in SQL, but we use the values() method, which is the same values() method that insert uses. We might eventually add a .set() method to make it clearer, but generally the SET clause of the update comes from .values() or from the parameters that you pass to execute().
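A quick sketch of the two ways the SET clause can be supplied, under the same assumptions as the previous sketches:

    # SET from .values(), WHERE from .where()
    stmt = (
        user_table.update()
        .where(user_table.c.username == "patrick")
        .values(fullname="Patrick Star")
    )
    with engine.begin() as conn:
        conn.execute(stmt)

    # equivalently, let the SET clause be derived from the execute() parameters
    with engine.begin() as conn:
        conn.execute(
            user_table.update().where(user_table.c.username == "patrick"),
            {"fullname": "Patrick Star"},
        )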
And then finally I thought it would show a delete; nope, first we're going to do an update with some expressions, because values() can also take expressions. We want to set the fullname column of the table to be the concatenation of username and fullname, which doesn't make much sense operationally, but it shows that you can put SQL expressions in the VALUES clause. There's also something I'm not going to show here: some databases allow you to update a sub-element of a JSON column, and we support that too; it's a pretty edge case. And then a delete, again, looks like an update except there's no VALUES or SET clause: we want to delete Patrick, we run that, and it runs the DELETE statement.

So that is the end of SQL part 1. If there are no remaining questions, this is the five-to-ten-minute break where I'm going to first go to the bathroom and then probably go on Slack and chitchat. If anybody has any questions, definitely feel free to unmute yourself and ask. Let me see what's happening in Slack; when I pull it up you'll see my Slack now, right? Okay. I don't use Slack much. What is this little wiggly thing that everyone's doing, the bird? I don't really know the origin; I didn't know it until I started really using Slack. To be frank I'm more of a Discord guy myself, but I guess it's just a meme and it looks funny. I also didn't mention that I made the source code for all of this public on the GitHub link; I see someone asking for the link, and you should see it a few messages from the bottom. You don't have to run the code; back when we used to do this tutorial in person, we would have people running it, and obviously it's not always easy for everyone to get it installed. So I'm going to take a quick break for about two minutes and come right back; you'll just see my empty chair, is that cool? I'll throw some music on.

[Music]

Okay, I think we actually have some questions here. Do you want to do some questions? I'm pretty happy, I made pretty good time; when I've done this talk in the past I've been at the limit right to the end, so I hope we're not doing that, because silly me, I put even more content in this time. If you want to answer a few questions right now, this is definitely the time to do it.

Someone asks about SQLAlchemy 2.0 compatibility. Yes: SQLAlchemy 2.0 has a couple of hard incompatibilities, which is why the 1.4 version has a transitional mode. 1.4 is backwards compatible, and then there will be an environment variable you can turn on where 1.4 will act kind of like... I don't know if people remember when Python 3 came out, there was a Python 3 warnings mode that would warn you in Python 2 about all the things you were doing that were not Python 3 compliant. This is simpler than that, not nearly as bad: there's a mode in 1.4 where these warnings will come out saying 'this won't work in 2.0, you have to do it this way'.
So you'll be able to write your application, you'll be able to upgrade to 1.4, then you'll be able to migrate your application to be 1.4 future-compatible, and then the theory is (and we'll obviously be testing the crap out of this) that you can then move to 2.0. So 2.0 is compatible with 1.4 if you've done 1.4 in the future mode. There's an upgrade path. We've had so many big, hard upgrade paths before; we had 0.1 to 0.2 to 0.3, and we changed the API a lot, so we've done this a lot. This is probably the biggest one, but it's going to be fine. Sorry, that answer was really long. What was that, are we trying to get started at, like, 10:30? Okay. If anybody has any other questions, whether you're in the Zoom or using Slido or Slack or whatever, definitely feel free to ask.

Question: with the expression language, do you have left join and right join methods? Yes, the next chapter is going to show joins, but funny story: we don't actually have a right join. We have join(), which is an inner join, and outerjoin(), which is a left outer join, and we have a full join. A right join is kind of the controversial black sheep of SQL, because it's just a left join in reverse, so for a right join you simply reverse the order of your tables. I never understood right joins; when I see 'right' it's too much for my brain, so please just reverse it. There's probably some Oracle query-planning edge case that only works with a right join, but I haven't heard about it yet. Reading left to right makes more sense for most of us; there's enough left-right, right-left stuff that I've spent years making second nature that right join doesn't fit in my brain. Maybe it makes sense for people who read right to left, like Arabic speakers. Anybody have any other questions real quick?

One quick one: I was migrating data from an Oracle database into Postgres. I had a result object of my Oracle data, and the only way I thought I could make sure each one of those rows was committed was to iterate over the result object one by one and do a commit after each iteration, and it turns out that's really, really slow. Is there a way I can commit a result object into another table without having to iterate over individual rows? So, if you're transferring from Oracle to Postgres, the data has to pass through your Python app. You take the rows (they're basically tuples, but you can turn them into dictionaries) and then you pass them to connection.execute(), or whatever you're using to execute. The Python database API can take a single statement with any number of parameter sets passed to it at once, in an executemany-style call, which I showed in one of the previous slides. You probably want to do batching, meaning if you have 20 million rows you wouldn't want all 20 million rows in memory at once; you want to fetch chunks of maybe ten thousand rows at a time, and you can send those ten thousand rows at a time to a single connection.execute() call, and they'll be passed to the psycopg2 driver. As I mentioned before, psycopg2 is a little slow at this by default, but there are some flags you can turn on in SQLAlchemy to use the faster insert style, which actually converts the
query internally, and some other stuff. But yeah: when you iterate a result set, you would not want to commit after each row; even if you were running multiple INSERT statements per row, you'd still want to fetch the results and run the statements within a single transaction, using engine.begin() if you're using Core directly. Good question. And Mike, if you want to just take over and drive again... all right, let me screen share again; I stopped it, okay, good, cool. So now you see my Slack, and now you see the slides.

Okay, so now we're going to go to SQL Advanced: joins and subqueries and common table expressions. This is the first version of this tutorial that has common table expressions in it; if you saw the old one on YouTube, there were no CTEs. Again we're going to start with a table like we had before (we keep using the same table so it's familiar), and then, because we're going to be doing joins, there's the second table that we've seen, with the foreign key; the columns are in a different order here, which doesn't matter. Then we create those tables, so we have these two tables, user and address. Is the bandwidth okay? I'm noticing that my image is herky-jerky all of a sudden; are you getting me okay? Thumbs up, good.

So, user and address and the foreign key, and then we're going to put some data in, and now that we know how to do this, it's a bigger set of data. For the person that asked the question about insert and executemany: if you have a lot of rows, you can convert them to dictionaries like this and execute once, sending the whole list of rows. We're going to put in first a bunch of rows in the user table, and then some rows that refer back to those rows in the address table. Now, SQLite by default does not actually enforce foreign key constraints (there's a pragma you need to turn on), but normally you want to insert the dependent rows second: you put the primary rows in first, so that those primary keys are present, and then the next set of rows are the ones that point back with the foreign keys. When we use the ORM, this detail is all taken care of for us. Anyway, we're using explicit user IDs one, two and three, and then the second bunch of rows sets some email addresses for some of our users. We run that and you get all the inserts; it's actually more succinct, because you can see the statement is emitted only once, with the individual sets of rows put in, and the parameter formatting all gets done for us.

And like a couple of slides back, where we had a future engine... actually no, I'm wrong, I'm reading my slide wrong: we're going to have a future select. There are two things in 1.4 right now that are part of the quote-unquote 'future' package: there's a new create_engine that gives you a slightly different engine, and there's a new select that gives you a slightly different select. So we're going to compare and contrast the current select object, which everyone who uses SQLAlchemy probably knows, with the new one, which we'll name here as future_select: from sqlalchemy import select, and from sqlalchemy.future import select as future_select.
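As a sketch of the two-table setup and the parent-first, multi-row insert being described (a fresh in-memory SQLite database here, and the email values are made up):

    from sqlalchemy import ForeignKey

    address_table = Table(
        "address", metadata,
        Column("id", Integer, primary_key=True),
        Column("user_id", Integer, ForeignKey("user.id"), nullable=False),
        Column("email_address", String(100), nullable=False),
    )

    engine = create_engine("sqlite://")   # fresh database for this part
    metadata.create_all(engine)

    with engine.begin() as conn:
        # parent rows first, with explicit primary keys...
        conn.execute(
            user_table.insert(),
            [
                {"id": 1, "username": "spongebob", "fullname": "Spongebob Squarepants"},
                {"id": 2, "username": "sandy", "fullname": "Sandy Cheeks"},
                {"id": 3, "username": "patrick", "fullname": "Patrick Star"},
            ],
        )
        # ...then the dependent rows that point back via the foreign key
        conn.execute(
            address_table.insert(),
            [
                {"user_id": 1, "email_address": "spongebob@example.com"},
                {"user_id": 2, "email_address": "sandy@example.com"},
                {"user_id": 2, "email_address": "sandy@squirrelpower.org"},
            ],
        )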
Then we're just going to connect to the engine; we're not going to use a context manager, because we're going to run a lot of queries and the indentation of the context manager doesn't work too well in the slide runner here. The first thing we can do with the two tables is join them. This is the inner join we talked about, and I could also say user_table.outerjoin(address_table), and if I print that you get LEFT OUTER JOIN. Already people might notice something: there's an ON clause in this join. In the one up above, I said the join object was user_table joined to address_table, joining on user_table.c.id == address_table.c.user_id, and when you print that you get this JOIN. In the second one I didn't give the ON clause, and that's because user_table and address_table have a foreign key between them; SQLAlchemy already knows how to join these two tables by default. There are other ways to join the two tables, and if you want to join on, say, the name matching the email or something like that, you can state that explicitly. But if you don't give it an ON clause, and the two tables have a non-ambiguous foreign key relationship between them (meaning there aren't multiple foreign keys), it will generate the ON clause for you, and people who use SQLAlchemy pretty much know it does this. The next example shows what I just told you.

Using the classic 1.x version of select(), if we want to select from that join, recall that earlier I was saying the FROM is implicit, so we have to tell it: select a column from the user table, select a column from the address table, but don't select from those two tables separately, select from the join of them that we just created. If we run that, we see a statement that is SELECT username, email_address FROM user JOIN address. There's another kind of statement where you could have done what's called an implicit join; let me move the Zoom group chat over so I can type. What I just did there is called an implicit join, meaning I'm selecting from the user and address tables but there's no JOIN keyword; it just says FROM user, address, and then in the WHERE clause I join them. The actual SQL JOIN keyword is more flexible, because it includes the ability to do what's called an OUTER JOIN, which, as everyone here who knows SQL would know, means it will also include rows in the primary (left) table that do not have a corresponding row in the right table. An implicit join using the WHERE clause is basically an inner join. So you can do it either way; SQLAlchemy is very explicit, and we're not trying to guess how you want to do this. If you say you want JOIN, you get JOIN; if you say you want WHERE one column equals the other column, you get the WHERE clause. It's very WYSIWYG about that.

Now, the future select() includes a feature that users of the SQLAlchemy ORM would know very well, which is that the ORM Query object (which we'll see later) has a method called join(); in the future version of select() we now have a join() method on the select object itself. The reason join() is only part of the future version is that there actually is a join() method on the current SQLAlchemy select construct, but it does something fairly useless and nobody uses it.
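A sketch of the join forms just described, continuing with the tables and data from the earlier sketch (output comments are approximate):

    from sqlalchemy import select

    # explicit ON clause
    join_obj = user_table.join(
        address_table, user_table.c.id == address_table.c.user_id
    )

    # ON clause derived from the ForeignKey
    join_obj = user_table.join(address_table)
    print(join_obj)
    # "user" JOIN address ON "user".id = address.user_id

    print(user_table.outerjoin(address_table))
    # "user" LEFT OUTER JOIN address ON "user".id = address.user_id

    # select FROM the join rather than from the two tables separately
    stmt = select(
        [user_table.c.username, address_table.c.email_address]
    ).select_from(join_obj)

    with engine.begin() as conn:
        for row in conn.execute(stmt):
            print(row)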
So in 1.4, at the moment, I actually have the default select.join() just raise an exception, because nobody uses it, and when you do use it, it ends up making a query that's very broken. For the moment the new join() is only on the future select. I might move it; I'm not really sure how to handle this, because the thing is that .join() does behave differently in the future version of select() than in the classic version; it's just that the old API is very broken and useless. So when I use the future select and I want to join between two tables, I have options: I can still use the other form, select_from() with the join, or I can select from the user table and the address table and say .join(address_table), which means 'join from the first table to the other table', and this is how the ORM Query does it (there are also ways to make this more explicit). That will produce the same statement: username and email address, user JOIN address. Part of the reason it works this way is that, as I mentioned before, and as you'll see if we get all the way to the end of the slides, this future select object can also select ORM objects, and it actually acts as a complete replacement for the ORM Query. So it had to have a join() method, because the join() method on the ORM Query is pretty important. But for people who don't know the ORM yet, I'm getting ahead of myself; I haven't shown the join() method on Query yet, and it is more automatic.

What's next: when we work in SQL with tables and we want to select from the same table multiple times, there's this thing called an alias, which is like SELECT FROM table AS some_name, or AS some other name. SQLAlchemy has a construct called alias(), which will generate a quote-unquote 'anonymous' name for you, or you can give it a name. In this example we're selecting from the user table and then joining twice to the address table as aliases: we want to join to the address table as one alias and look for a certain email address, and then join to it again with a different alias. To do that, we call the alias() method and assign each alias to a new variable.

This is something about SQLAlchemy that is honestly hard for me to even notice anymore: when you're writing a statement and you refer to the different tables and things you're selecting from, they're tracked in the statement by their Python object identity. Any time you refer to the address_alias_one object in your statement, it will render appropriately for that clause in the SQL, and any time you refer to the address table plainly, it renders as that. You can have any number of Python objects that represent tables, or aliases of tables, or subqueries, and as you mix them into the Select object we're creating, the expression system, when it compiles, knows that the different Python identities refer to the different elements of the SELECT. This is not something you really get in raw SQL, because in SQL you're writing a string: you say SELECT * FROM address AS foo WHERE foo.x = y, and it's about knowing the string name 'foo'. Here it's not about the string name, it's about knowing the object address_alias_one, so you don't have to worry about the strings; that's why, when you make an alias, you don't need to give it a name. SQLAlchemy will give it a name for you, and it'll just work.
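For example, a sketch of joining to the address table twice via two aliases; the email values assume the made-up data from the earlier insert sketch, so the result shown is only illustrative:

    a1 = address_table.alias()
    a2 = address_table.alias()

    stmt = (
        select([user_table.c.username])
        .select_from(
            user_table
            .join(a1, user_table.c.id == a1.c.user_id)
            .join(a2, user_table.c.id == a2.c.user_id)
        )
        .where(a1.c.email_address == "sandy@example.com")
        .where(a2.c.email_address == "sandy@squirrelpower.org")
    )

    with engine.begin() as conn:
        print(conn.execute(stmt).fetchall())   # e.g. [('sandy',)]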
So when we run this statement, and let me move the group chat over... if you can see the SQL at the bottom of my screen, it has the SELECT, it has user joined to address as address_1 and joined to address again as address_2, and then it says WHERE address_1's email is one thing and WHERE address_2's email is the other. Basically we looked in the database for the one row that has those two particular email addresses at the same time, and we got that one row back. Let me check the time; this section runs to about 2:11, okay.

A subquery works kind of similarly; it's kind of like taking a table, an alias, and the select object all together. If you take a select object and call .alias() on it, that will actually make a subquery, meaning the object you're dealing with is another table-like object. This is a thing about SQL: SQL is composable. In relational algebra they're called 'relations': a relation is normally what we call a table, but a relation can also be a subquery, or anything that returns rows (not to be confused with a SQLAlchemy ORM relationship). So, just as Table objects have a .c on them that gives you the columns, you can take any select statement and call .alias(), or in 1.4 .subquery(), which is more descriptive, and you get a new object that has a .c on it. The select subquery here is going to have a .c with both username and email_address columns on it. These are not the same column objects as user_table.c.username; they're the column objects that come from the subquery. So when I select from my subquery as if it were a table, I get SELECT anon_1.something FROM the subquery, where anon_1 is the subquery's anonymous name, and that's different from selecting address_table.c.email_address directly. What I'm trying to illustrate is that there's an address table that has a .c.email_address, and there's a subquery that also has a .c.email_address, and when you select from one versus the other you get a very different query. That's a bit of a conceptual leap for people who haven't seen it before.

Once we have a subquery, we can really go nuts; we really have the power of SQL here. We can make a query that selects how many addresses each distinct user ID has, using GROUP BY and the count function. When you want to use SQL functions in SQLAlchemy (usually aggregate functions, but there are all kinds of functions, especially on Postgres, which has a crazy mass of them), they are all implicitly generated from this namespace called func: I can say func.xyz and get the xyz function. Someone the other day asked about something like bit shift; I can say func.bit_shift and select from it. I don't know what that does, it's a fake function, I'm just making it up, and it means nothing, but it renders. So up here we have an address_select that is going to select the user_id from the address table along with the count of address IDs, and it's going to group by user_id. That's a select statement, and we're going to tell SQLAlchemy to turn this thing into a subquery, meaning give it a .c attribute, give it a namespace.
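A sketch of that aggregate subquery joined back to the user table, under the same assumptions as before (the exact result depends on the made-up data):

    from sqlalchemy import func

    address_select = (
        select([
            address_table.c.user_id,
            func.count(address_table.c.id).label("address_count"),
        ])
        .group_by(address_table.c.user_id)
    )

    address_subq = address_select.alias()   # .subquery() in 1.4

    stmt = (
        select([user_table.c.username, address_subq.c.address_count])
        .select_from(user_table.join(address_subq))
    )

    with engine.begin() as conn:
        print(conn.execute(stmt).fetchall())
        # e.g. [('spongebob', 1), ('sandy', 2)]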
One change for people who already know this stuff (newcomers won't notice it): in SQLAlchemy 1.3 and before, the Select object itself has a .c attribute on it. That is deprecated, because it doesn't actually make sense in SQL; it's actually possible syntax in some databases, but in general most databases require that a subquery have a name and parentheses around it. So if you want the .c attribute from a select statement, you need to make an alias of it first, or a subquery. That's kind of a big design change in SQLAlchemy 1.4, and you'll see it in the migration notes. It shouldn't affect people much, because people have already been using subquery and alias anyway; databases like Postgres will not accept the query if you don't. We've just made it more strict in 1.4.

So we're going to make another query: we take that address subquery we just created with the count (it selects all the user IDs and how many addresses each has), and then we join that to the user table. Once again, because the user table and the address table have a foreign key between them, SQLAlchemy even knows how to join to the subquery, even though the subquery doesn't actually have a foreign key in it. SQLAlchemy has a rather elaborate system by which it knows the subquery is derived from a table that has a foreign key, and it knows how to generate the derived ON clause between the user table and the address subquery, so that we can get the username of each user from the user table along with how many email addresses they have. Does anybody have questions about that? It took a bit of a conceptual jump.

A lot of people love this thing called a common table expression, or CTE for short, which at its basic level is really a lot like a subquery. It supports some other tricks, like recursive queries, and some exotic stuff that I find hard to understand, but we support it. If you have a subquery, you can instead take the address_select (notice that address_select here is the select statement, not the subquery) and say address_select.cte(), and instead of creating an alias or subquery object, this creates a CTE object. We can then take the CTE object and use it in exactly the same way as we did before: we select the username and then the count, and instead of getting a subquery in the FROM clause, we get a common table expression, which means the SQL statement has a WITH clause at the top that names one or more SELECT statements (on Postgres they can actually be INSERTs and UPDATEs too), which are then addressed in the query. From SQLAlchemy's point of view this is kind of a cultural thing: once CTEs became well known, people kept asking 'where are the CTEs, I want to use a CTE', and my answer is that SQLAlchemy doesn't care; from a syntactical point of view in Python it's exactly the same as a subquery, and you don't do anything different. The reasons people want CTEs are that they can affect the efficiency of the query, they change the query plan, and they help you avoid cartesian products; there are reasons I came to appreciate them. So if you want your subquery to be a CTE instead, you do .cte(), you refer to that object the same way as though it were a subquery, and the SQL will all work out for you.
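The same query spelled as a CTE, continuing the sketch (the rendered SQL in the comment is approximate):

    address_cte = address_select.cte("address_counts")

    stmt = (
        select([user_table.c.username, address_cte.c.address_count])
        .select_from(user_table.join(address_cte))
    )
    print(stmt)
    # roughly:
    # WITH address_counts AS
    #   (SELECT address.user_id AS user_id, count(address.id) AS address_count
    #    FROM address GROUP BY address.user_id)
    # SELECT "user".username, address_counts.address_count
    # FROM "user" JOIN address_counts ON "user".id = address_counts.user_id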
I'm just watching the Zoom chat; no one's asking questions, okay. Another kind of subquery is a scalar subquery, which is usually also a correlated subquery (it doesn't have to be correlated). The subqueries we've shown so far I might call FROM-clause subqueries, meaning a subquery that returns one or more columns and one or more rows; you can take those queries, treat them like tables, put them in the FROM clause and join to them. There's another kind of subquery, the scalar subquery, that you can use in the WHERE clause of a SELECT statement, or in the columns clause. The requirement of SQL on most databases (with a couple of exceptions that I know about, but nothing generally available) is that a scalar subquery needs to return exactly one row and one column. If your scalar subquery does not return one row and one column, you'll get an error at runtime when you run the query; the database will give you that error, because SQLAlchemy is not going to police your scalar subquery. I probably didn't mention this, but one of the things about SQLAlchemy is that we do not try to reinvent what the database does. If the database is going to parse your SQL and tell you whether it's good or not, we let the database do that as much as possible; we don't try to pre-judge your SQL, with some exceptions. If you write SQL that's doing something questionable, we try to let the database tell you; we let the database do as much of the work as possible.

Anyway, when you write a scalar subquery, selecting from an aggregate function is a good way to make sure you only get one row, because an aggregate function takes all the rows that match and rolls them up into a single value. So here we write a query that's going to select the count of addresses, and it also has a WHERE clause against the user table, which is a little strange, because we don't really seem to need the user table here. When we print this query by itself, it says: select the count of addresses from the address table, and also from the user table, where user.id equals address.user_id. On its own that's kind of a pointless query, because the address.user_id column is a foreign key to user.id and it's not nullable, so it will always be there. So this seems weird. We're going to take this scalar subquery, which we've denoted by calling this method called as_scalar(); and again, in SQLAlchemy 1.4 you can use scalar_subquery().

There's a question that just came in, which is a good one: why is the table 'user' quoted? That is because in SQL, when you use a reserved word as the name of a column or a table, it has to be quoted. When we print these statements, SQLAlchemy is using a default compiler that is agnostic of all the databases, and it has a list of words that are reserved; if I had a table called 'select', it would quote that. The default compiler treats the word 'user' as a reserved word, so it quotes it. But the funny thing is that when we run the statement on the SQLite database, SQLite does not have 'user' as a reserved word, and that dialect actually does not quote it. So you'll notice a difference (good observation) between the stringified queries, which quote the word 'user' as a table name, and what actually runs on SQLite, which does not quote it.
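A sketch of building that scalar subquery and embedding it in an enclosing SELECT, which also previews the auto-correlation discussed next; it assumes the earlier made-up data, so the result comment is only illustrative:

    address_count = (
        select([func.count(address_table.c.id)])
        .where(user_table.c.id == address_table.c.user_id)
        .as_scalar()   # spelled .scalar_subquery() in 1.4
    )

    # embedded in an enclosing SELECT, the user table is auto-correlated:
    # it drops out of the inner FROM clause
    stmt = select([user_table.c.username, address_count.label("address_count")])
    print(stmt)

    with engine.begin() as conn:
        print(conn.execute(stmt).fetchall())
        # e.g. [('spongebob', 1), ('sandy', 2), ('patrick', 0)]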
If you were to run this on Postgres, I think it probably would have to quote it, because Postgres has 'user' as a reserved word as well; it's part of the CREATE USER statement that most databases support, but not SQLite. That's why 'user' is quoted. Anyway, what we're showing here is this subquery by itself; it's not really a subquery yet, even though we've declared it to be one. But when we stick it inside of another query, saying 'select for me the username from the user table, and also select for me this correlated subquery, address_count', SQLAlchemy will auto-correlate the user table. If you look at the two different SELECT statements being printed here: the first one, the count from address, has FROM address, user in it; the second one just has FROM address, with no comma user, but it still has the WHERE clause referring to user. That is because this is a correlated subquery. Basically, when you take a scalar subquery and embed it in the columns clause or the WHERE clause of another SELECT, the auto-correlation will look for any of those tables in the immediately enclosing SELECT. If you're trying to correlate to a grandparent or ancestor SELECT, you have to use some special methods; there's a .correlate() method where you can explicitly tell it what to correlate to, or what not to correlate to. But the default behavior is that if you put your scalar subquery inside a SELECT, it will auto-correlate to the immediately enclosing SELECT, which basically means it omits that table from the subquery's FROM clause.

And that is the end of the SQL Advanced tutorial part, and then we get to the ORM. The timing is right on, and I don't feel like I'm talking too fast either; I'm talking at a fairly relaxed clip, even though I'll probably have bronchitis from talking for three hours, which always happens to me. Anything we want to do before we get into the lightning round of the ORM? I'm just watching the Zoom... okay.

Now, object-relational mapping. This is where everyone will have all of their questions, and we're now at the highest level of SQLAlchemy, and we have a lot of slides; let me move this little window over so you can see them. Object-relational mapping, ORM for short, is the process of associating object-oriented classes, which we call domain objects, with database tables. We refer to the set of object-oriented classes as a domain model. The most basic task of an ORM, the very first thing any ORM needs to do, is to be able to translate between a domain object and a table row. And the thing I want to note about ORMs is that I feel you have to be using object-oriented classes: when people talk about an 'ORM' that is just passing around dictionaries, that's not an ORM, that's just dictionaries and namespaces. I come from the old-school Java, OOP, Gang of Four culture, where an object meant you had an object, damn it. So by domain object I mean you're using the Python class statement, and you have class User, class whatever, and it has methods and attributes on it, and those attributes are going to be persisted in a row of the database and can then be retrieved back. ORMs can also represent arbitrary rows: your domain object doesn't have to load only from a table row, it can load from a SELECT statement, and SQLAlchemy and a lot of other ORMs do that too. Then there's a quick diagram that you can just look at in the GitHub materials; it's a diagram of how to represent many-to-one and
one-to-many using foreign key associations. On the right side is what a database has: rows associated to a parent row by a foreign key reference. On the object-oriented side this becomes a bidirectional relationship: whether you mean it or not, your parent object has a one-to-many relationship to those child objects, and the child objects have a many-to-one back to the parent. And that's kind of the whole ballgame right there. I hope people get this chart, because when you deal with the ORM, this is where it all happens: relationships between tables, and loading them and storing them and updating them and everything else.

Other things ORMs do: they provide a means of querying the database in terms of the domain model's structure. That bullet point is really the 'writing the SQL' part. I'm sure everyone has heard, or maybe said, 'I don't need an ORM, I want to write my own SQL'; well, great, that one bullet is the writing-the-SQL part, and everything else is not necessarily about writing SQL. There's a lot going on that is outside of writing your own SQL statements. Some ORMs can represent class inheritance using a variety of schemes; SQLAlchemy supports three of them right now. ORMs can do sharding, if you want your data separated across different backends: vertical partitioning, meaning different tables are in different databases, or horizontal partitioning, meaning different groups of rows are in different database backends for scalability; ORMs can sometimes handle that transparently. There are concurrency patterns, row versioning patterns (ways to check that your row is the current row and that you're not looking at a stale row), data validation and coercion, and all kinds of fancy object-oriented stuff integrated with the database.

Two terms we hear a lot with ORMs are 'active record' and 'data mapper'. I actually had to look in Fowler's book just to make sure I was getting these two terms right, because I find them a little ambiguous. Active record means that the object representing a row in the database is responsible for managing its own persistence and loading: user_record.save(), user_record fetches itself. Every object is basically on its own and represents a database row all by itself, and there's no coordination between multiple rows and multiple objects. A data-mapper-style ORM is different: you have objects, and then there's some kind of central coordinating system that makes sure everything matches up correctly. SQLAlchemy does this using a pattern called the unit of work. There are also different configurational patterns. Basically every ORM supports the all-at-once style, where you have classes, and on the classes you name a bunch of attributes, and the attributes become database columns. SQLAlchemy also supports a classical mapping style, which is what we had in the original versions, where the table is separate: you say 'here's my class, and over here is my table metadata', and then you map them together. You can still do that in SQLAlchemy, although there's really not much reason to. So the SQLAlchemy ORM is a data mapper, and modern versions use declarative configuration. The ORM builds upon the SQL expression language: Core presents a schema-centric view of the data, while the ORM presents a domain-model-centric view of the data. Key ORM patterns: the unit of work, a system that tracks
changes over the course of a transaction and flushes those changes out periodically; the identity map, meaning multiple lookups of the same primary key identity in the same table give you the same object, so that different parts of your application that modify that object within the transaction are modifying the same object and changes are tracked correctly; lazy loading, where some attributes of an object may emit additional SQL queries when they're accessed; and eager loading, where attributes are loaded immediately, with related tables loaded either using joins into the primary SELECT statement or with additional queries. And then we go to the ORM walkthrough. I'm going to take a two-minute bathroom break. Okay, I want to take this time to remind everybody that if you also have to go to the bathroom or get up and stretch, now is a great time to do that; sitting down for a long time is not very good for you, so make sure you're periodically taking time to grab some water, grab some food, and stretch.

All right, ORM basics. When we do object-relational mapping, we start with this object called the declarative base. The declarative base is basically a class that we're going to subclass, and when we subclass it, it has a table metadata collection that tracks tables for us, and it also keeps track of all the classes that we create. The first class we make here is the User class, which is going to represent the user table that we've been looking at all this time. We give it the name of the table and we give it columns. You'll note that we don't have to name the columns inside the Column itself; we can just assign the attribute name, username = Column(String), and the declarative system will assign the names to the columns for us automatically. The __repr__ method here is just so we can see objects coming back nicely in our examples; you don't need to write a __repr__, it's optional.

User now has a Table object stuck onto it as __table__, and that's the same kind of Table object we were using in the SQL expression tutorial, along with the MetaData; all of that gets created for you by the declarative system. There's also an object that you don't really need to worry about called the mapper. The Mapper object is what's actually doing the mapping: it mediates between the User class and the attributes on the class, and the Table, and as the ORM does things it's constantly asking the mapper for information about how to map this class and how to query from it. That mapper is also the object I showed over here in the 'flavors of ORM' slide: this is what SQLAlchemy classical mapping looks like, where you have the class separate and then you call this mapper() yourself. The declarative system does that for you, and it has shown itself to be probably the superior way to do things.

When you use declarative you also get a default constructor, that is, your __init__ method, and you can see here we're going to make a new User object, again with spongebob and 'Spongebob Squarepants'. This is kind of like when we had a row as a dictionary that we would insert using Core; instead, when we want to represent a new row, we create an object.
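A minimal sketch of the declarative mapping being described; the class mirrors the tutorial's user table, while the __repr__ format is my own:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "user"

        id = Column(Integer, primary_key=True)
        username = Column(String(50))
        fullname = Column(String(100))

        def __repr__(self):
            return f"<User({self.username!r}, {self.fullname!r})>"

    # the declarative system built a Table and a Mapper for us
    print(User.__table__)              # same kind of Table object as in Core
    print(Base.metadata.tables.keys())

    # the default constructor accepts the mapped attribute names
    spongebob = User(username="spongebob", fullname="Spongebob Squarepants")
    print(spongebob)                   # nothing has touched a database yet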
Now, creating this object did not actually do anything to the database; nothing has happened to the database at all. We basically have this Python object sitting here; it's got a Python dictionary inside of it, and it's got this little magic object called the instance state, which you don't need to worry about (you can actually look at it using inspect() if you want to, but it's not needed for basic use). When we created the spongebob object, notice that the table has three columns, id, username and fullname, and we didn't give it an id. However, when we access .id we get something back: it's actually None. We get None because that's basically the default value for an attribute you didn't give a value for. Years ago, accessing .id like that would actually modify the state of the object, which was an annoying detail; now it does not do that, a change that went in around 1.0, about four or five years ago. Even though it says spongebob.id is None, there's no None in the state and there's no id key in the dictionary; it's strictly a default that's given to you each time you ask for it.

That might also reveal to some people that these attributes are kind of magical: they're not just regular Python attributes, they're what I call 'instrumented' by SQLAlchemy, using a technique in Python called the descriptor. If anyone has ever used the property decorator in Python, that's what's known as a descriptor: it's how you can put an attribute on a class such that when you access .foo or .bar on an instance, it actually calls a method on the class with the instance, which gives you a value; you can also have setters and deleters. That's what SQLAlchemy is doing with this class: all of the attributes that are mapped to a Column are implemented using Python descriptors.

Now that we have that User class, we can take the table metadata, the collection that's stuck onto the Base, and use it as we did before to create the tables, so we get the user table we've been working with so far. And then here's the new ORM thing. Remember I said that a data mapper ORM is going to have a mediation layer, a coordination layer, between the individual objects that represent table rows; it can have any number of these objects, and it makes sure they all get modified, updated, managed and loaded correctly. We also talked about these two patterns, the unit of work and the identity map. The place where all of that happens is an object called the Session. The Session, in most cases (there are options for this), is given a single Engine object, which the Session will use as its source of connectivity. Whereas in all the previous slides we had engine.begin() and engine.connect(), once you use the ORM, the Session does that job for us; we no longer deal with the Engine or the Connection directly in the vast majority of cases (there are cases where you can). Normally the Session takes over the job of starting transactions, committing transactions, emitting SQL, and interpreting results for you.

The Session has a queue of new objects, meaning objects that it should INSERT into the database; you access and modify this queue using the .add() method. So we have this spongebob object and we just did session.add(spongebob), and nothing has happened to the database; we haven't even connected to the database yet. Well, we did do create_all, but the Session hasn't even tried to use that engine yet, because nothing has told it that it needs to talk to the database.
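Continuing the sketch, here's what that setup might look like; echo=True is my addition so the emitted SQL is visible, and the printed repr in the comment is approximate:

    from sqlalchemy import create_engine
    from sqlalchemy.orm import Session

    engine = create_engine("sqlite://", echo=True)
    Base.metadata.create_all(engine)

    # the Session takes over the job the Engine / Connection had in Core
    session = Session(bind=engine)

    spongebob = User(username="spongebob", fullname="Spongebob Squarepants")
    session.add(spongebob)

    # nothing has been emitted to the database yet; the object is "pending"
    print(session.new)   # IdentitySet([<User('spongebob', 'Spongebob Squarepants')>])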
When we put the object in via the .add() method, it becomes part of a collection called .new, which really should be named .pending, and maybe I'll make that name change. We call this a pending object, and 'pending' is the specific name of a state in SQLAlchemy for objects that will be INSERTed but have not been INSERTed yet; they will be inserted when they need to be. Which is strange, because we told it to put spongebob in the database. The reason it didn't actually need to insert yet is that we're always working within a transaction, and the Session will not consider its work done until we call the .commit() method. If we were to say session.commit(), it would be like 'let me get all my pending work out the door and then commit the transaction', and in this case it would even have to begin the transaction first, because we haven't even started one.

Basically, SQLAlchemy has this pattern of late evaluation throughout. You'll notice that when we did create_engine(), we made an Engine, but it didn't actually connect to the database; it didn't do that until we accessed engine.connect() or engine.begin(). The Session is the same way: it's implicitly in a transaction, but nothing has happened yet. We didn't do anything that needs the database, so it didn't do any BEGIN and it didn't even try to talk to the engine. We added an object, but we didn't commit it; the Session only does something when you need it to do something that is a database operation.

So let's do a database operation, and we'll have a sneak peek at what a query in the ORM currently looks like. Again, this is a part that's going to be changing for SQLAlchemy 2.0, but for now, as everyone who's used the ORM knows, we don't work with the select() object directly in the ORM yet; we work with this thing called Query, which is a whole lot like a select but a little bit different, which is kind of annoying, which is why I'm unifying them. In any case, for now we're going to use the traditional SQLAlchemy ORM style, which is this Query object. We say to the Session: hey Session, we want a query for User objects, and we want to filter them to the ones where the username is 'spongebob'. Nothing has happened yet. When we say .first(), this is what happens; the Session goes: 'oh, you actually want a row? Okay, I don't have a database connection yet. Here's my engine, let me connect to it and start the transaction. Okay, I've got some rows pending, let me push those rows out; there go the INSERTs. Great, now we can run a query, SELECT from user WHERE username is spongebob. Here's your row. How'd I do?' (I did not rehearse that, either.) Everything happened all at once, because it's late-evaluating: you actually want a row, so it pushes what it has so far into the database in a new transaction, then selects the row for you and gives it back.

What has not happened yet is a COMMIT: the data has not been committed. If you were to start SQLite up in another terminal, and if this were a file-based database (which it isn't), SQLite would actually lock you out; well, it would probably give you read-only access, it depends what you're doing, but the database is locked right now, because we're basically sitting open in a transaction.
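Continuing the session sketch; the comments describe what I'd expect to happen on a fresh database:

    # building the Query emits nothing
    query = session.query(User).filter_by(username="spongebob")

    # .first() triggers: connect, BEGIN, flush the pending INSERT, then SELECT
    also_spongebob = query.first()
    print(also_spongebob)
    print(also_spongebob is spongebob)   # True: the identity map returns the same object
    print(spongebob.id)                  # now populated from the autoincremented primary key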
So right now we're working with rows that we just inserted, and we can select them back because we're inside our own transaction. Based on the transaction scoping and the isolation rules in effect, if we were on MySQL or Postgres, other transactions would not be able to see what we're doing; transaction isolation means you can't necessarily see what other transactions are doing, and they would only see this if dirty reads were allowed. Okay. So we began a transaction, put a row in, and selected it back, and there's our row: we got back a User object that looks like the one we had. spongebob is our original User object, and we assigned our query result to also_spongebob, two different Python variables, and I'll show you something about that in a minute. The spongebob object that had None for its id before all of a sudden has the number 1 for its id; that's the autoincremented primary key I talked about earlier, since SQLite has autoincrement behavior for INTEGER primary key columns.

And those two objects, spongebob and also_spongebob, are actually the same object: the id() of spongebob is the same as the id() of also_spongebob. That's because when the Session ran our query, it got the rows back, looked at the primary key represented by each row, and said: 'ah, I already saw that primary key; I already have an object locally, the one assigned to the variable spongebob, that is that same row. Let me return that instead of making a new object or a copy, so that any changes are all coordinated on the same object.' That's known as the identity map, and it's basically yet another dictionary, with keys that are the primary key identity of the object; you'll notice that the key includes the class of the object, because a primary key value of 1 is nice but doesn't tell you which table you're stored in, so the key is really something like (User, 1), and the values are the objects for those primary keys. That's the identity map pattern; it's in Martin Fowler's Patterns of Enterprise Application Architecture book, which I highly recommend.

Also, whenever you see the Session inserting rows or updating rows or deleting them, that occurs under a process known as the unit of work, which means the Session is tracking everything that's happening; it's tracking all of the objects that are pending. So here we're going to add more objects (this is just like session.add(), but we're adding a whole list of objects), and then we're also going to modify an attribute on our existing spongebob. What we have now are two objects that are pending, and another object that's in the quote-unquote 'dirty' state. Earlier, spongebob was a pending object; once the INSERT happened and the Session put it in, it became what's known as a persistent object. Persistent means it's persisted in the current transaction, not necessarily permanently in the whole database, but it's in the current transaction on the database side, meaning that if we were to commit the transaction, it would become permanent. When you have a persistent object and you modify any of the instrumented columns on it, it becomes dirty. So we have two pending and one dirty. We can look in the Session: hey Session, what objects are dirty? Oh look, this User has a changed attribute on it. And what's pending? Okay, the two others are pending.
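A sketch of that step, still using the session from the previous sketches (the names are the tutorial's characters, the values otherwise made up):

    # two new pending objects, plus a modification to a persistent one
    session.add_all([
        User(username="patrick", fullname="Patrick Star"),
        User(username="sandy", fullname="Sandy Cheeks"),
    ])
    spongebob.fullname = "Spongebob Jones"

    print(session.new)     # the two pending User objects
    print(session.dirty)   # IdentitySet([<User('spongebob', 'Spongebob Jones')>])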
The next time this session goes to put data into the database, that happens under a process known as a flush. There's actually a method, session.flush(), that we could call if we wanted to, but we don't need to because it's automatic; it would insert those two new rows and then update the other one. In this case we're just going to commit the transaction. What commit does is first run a flush of whatever work is remaining, and then commit the transaction. So we run that, and we have our new "Spongebob Jones" full name, we have two more rows inserted, and the transaction is committed.

Now we're in yet another state, called the expired state. This is all configurable: the fact that the session flushes automatically is configurable, and so is what the session does when you commit or roll back a transaction, which you can and sometimes will want to change. By default it automatically expires the state of everything it holds, meaning all of the column values that were loaded are erased, and all of the collections (we'll show collections later) are erased too; when you access them again they do what's called a lazy load, reloading that information in a new transaction. What's happening is that SQLAlchemy is trying to make sure that when you look at an object that represents a database row, that row is always part of a transaction; if there's no transaction in play, you shouldn't be looking at the row. That might be a little counterintuitive, but it's the default behavior. If you have something like a web application where you want to load a bunch of data, close the transaction, and then display it, you would turn this flag off. But to get the most correct behavior across multiple concurrent transactions, this is the default: once we commit the transaction and then look at spongebob.fullname again, any number of other transactions could have changed that value by now, or the row could have been deleted. Transactions are isolated from each other, but once you end one and start a new one, your data could have changed underneath you. So the default behavior is the most quote-unquote "correct" behavior, but it's also a little less performant. When I access spongebob.fullname, the session is going to start a new transaction, refresh spongebob, and show us the full name; that way, if spongebob had been deleted it would say "sorry, it's been deleted," and if some other transaction had modified it, we'd see the change.
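To make the expiration behavior concrete, here's a sketch; expire_on_commit is the actual Session parameter being described, while the data is just the running example:

    session.commit()

    # after the commit, spongebob's loaded state is expired; touching an
    # attribute starts a new transaction and refreshes the row
    print(spongebob.fullname)    # emits a SELECT, then prints "Spongebob Jones"

    # for e.g. web apps that load data, commit, then render it, the
    # expiration can be turned off when the Session is constructed
    from sqlalchemy.orm import Session
    render_after_commit_session = Session(bind=engine, expire_on_commit=False)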
Now let me look at the two questions in my window. Someone asks how well covered SQLAlchemy's functionality is, since in the samples it looks pretty much like plain SQL, and specifically how well something like the Oracle dialect is tested. The source code is 100% covered, and there are probably close to a thousand tests that run against the Oracle dialect, testing both Oracle-specific features and agnostic SQL features that we want to work on Oracle. There are roughly seven built-in dialects in SQLAlchemy, and the SQLite, MySQL, SQL Server, Postgres, and Oracle dialects are extremely well tested; when we run a CI job there are something like two hundred thousand test runs, because the same tests are run over and over for different database backends and different drivers. They're very, very highly covered.

Someone then asks: so all of the new and modified objects are committed as one transaction? Yes. There are legacy modes of the session that don't work this way, but they're deprecated and going away; when you use the session as per the current documentation, you're always in a transaction. Whether it flushes everything at once or flushes a little bit here and a little bit there, as long as you didn't call .commit() or .rollback(), everything stays pending inside the transaction on the database side. So yes, things are always committed as a transaction, unless you're using a mode of the session called autocommit, which is deprecated and won't be there in 2.0. Even deprecated autocommit mode still does each flush in a transaction; it just won't do multiple flushes in the same transaction, which I used to think was okay but don't think is okay anymore. So it's pretty hard not to be transactional with the ORM session; it's pretty much hitting you over the head with transactions from the get-go. And again, that was expiration: when the transaction is over, by default (and you can change this, and for some purposes you'll want to) the session expires everything, so when you re-access anything it refreshes itself.

Next we're going to roll back some data. Okay, I'm way behind, so let me move faster. Rolling back changes: let's make some changes that we're going to roll back. We change spongebob's username to some terrible name, and then we make another User object that will be pending, one that we don't actually want. Then we run a query for those new rows; the session needs to flush them first and then query, so it does the flush, you see the UPDATE and the INSERT occur, then the SELECT occurs and you get the rows back. But they're not committed; they're not physically in the database yet, only in the transaction, in a non-permanent state. We don't want them, so we do a rollback. Now the transaction has been rolled back, the session has again done its auto-expiration, everything that was loaded has been unloaded, any changes to spongebob have been erased, and any pending objects have been removed, because they were only ever pending in that transaction. So when we access spongebob again, it lazy loads like before, but he has his old username, because we rolled back; this is an example of how auto-expiration is kind of a handy thing, as long as it's doing what you need. Also, the fake user object that we didn't want has been evicted from the session; the session calls this "expunging," but it really should be called "evicted," to use Hibernate's term. That object is no longer in the session; it was only pending, so it's out. The rollback has basically totally cleaned up the state, both on the database side and on the Python side, which is why auto-expiration is a nice thing. The data is, of course, also gone from the database: we run the query, we don't get the fake user back, and we get spongebob's original username, even though he still has that "Spongebob Jones" full name we committed earlier. Hey, that's transactions.
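A rough sketch of that rollback sequence (the throwaway username and the fake user are just illustrative values):

    spongebob.username = "Spongy_G_1997"            # a change we'll throw away
    fake_user = User(username="fakeuser", fullname="Invalid")
    session.add(fake_user)

    # autoflush pushes the UPDATE and INSERT out so this query can see them,
    # but nothing has been committed
    session.query(User).filter(
        User.username.in_(["Spongy_G_1997", "fakeuser"])
    ).all()

    session.rollback()

    print(spongebob.username)        # back to "spongebob"
    print(fake_user in session)      # False: the pending object was expunged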
Because I want to get through things, I'm going to jump right into the querying part, since querying is really where most of the fun stuff happens. With querying we again have this idea of the column object and its overloaded expressions: User.username == "spongebob" gives us a kind of quasi column object. User.username is not actually a Column, but there is a Column inside of it; notice I'm getting it from the User class. User.username is this thing called InstrumentedAttribute, which you don't really need to know about; just know that it acts like a column. The actual Column that's mapped to the table is in there (there are more weird things about that column that I don't have time to get into). So User.username acts like a column on the database table, the way table.c.column did in Core. Why isn't there a .c attribute? A long time ago there was, but the reason we don't need it is that your User class is your class: your username attribute is your username attribute, and there's no reason to worry about "username" overwriting some other name on the class that we need, because that is the name. With a Table object, .c exists for naming reasons: if your column is called "insert", then table.c.insert has to be different from the table.insert() method. In the ORM there is no .c; it's just YourClass.your_attribute_name, and that acts as a column expression.

Again, like before, the Query object acts a lot like a select, except that in this case it gives us User objects, not database rows. That's how Query was first written: it didn't return rows at all, it returned objects. So when I say query(User), or query(User.id), it gives me a list of exactly what I asked for; for a single entity that's the actual instance of the User object, not a row and not a tuple. Later, Query was enhanced so that it does return tuple-like rows if you ask for multiple things, such as individual columns. Those tuples were historically different from the rows you get back in Core; in 1.4, however, they're the same kind of Row object, so you get the same Row back from Query in 1.4, right out of the gate, as you do in Core, and it acts like a tuple, which is pretty good. You can also query for combinations of multiple classes and columns if you want (we'll show more of that later): I can ask for rows where the first element is the User object and the second element is the string username, and I get the User objects back along with the string names.

Let me check the time, okay. You have filter_by(), which is kind of a shorthand for the WHERE clause: you just give it key/value pairs, and the keys come from the attribute names on the class; filter_by is convenient. If you want full-blown expressions with ORs and ANDs and whatnot, you use filter(), which is, oddly enough, named differently from .where(); that's another thing I hope 1.4 fixes, because why do we have two names for this? Anyway, in the current ORM everyone knows, loves, and uses .filter() for ORM WHERE clauses.
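A small sketch of those querying shapes, under the same assumptions about the model:

    # class-bound attributes act like columns and build SQL expressions
    print(User.username == "spongebob")     # renders roughly: user.username = :username_1

    # asking for the entity gives User instances back...
    for user in session.query(User).order_by(User.id):
        print(user.username)

    # ...asking for several things gives tuple-like rows
    for user, username in session.query(User, User.username):
        print(user, username)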
These act just like .where() on the select. For getting results back there's a variety of methods: .all() gives me a list; .first() gives me just the first row as a single object, or None if there's nothing there; and then there's the .one() method, which is pretty popular. one() gives you the same value as first(), but if there are zero rows or more than one row, it throws an exception. So here's one(): exactly one object, fine. If I run a query that matches no rows and call .one(), I get an exception, "No row was found when one was required." If I match more than one, say all the users, and call .one(), that's wrong too: "Multiple rows were found." That's what one() does. It's still around in SQLAlchemy 1.4, and it's now also present on the new Result object, so you can use .one() for Core results as well, which is kind of a new thing. So that's the basics of what the session does, how it handles transactions and expiration of objects (that's a big deal; I actually have a whole talk just about that), and what the Query looks like. That's the first part; the next section is the more advanced query stuff.

There was a written question about filter versus filter_by, good. filter() accepts arguments of a type called BinaryExpression: if I say expr = (User.username == "foo"), then expr is this object called a BinaryExpression, and q = session.query(User).filter(expr) works, because filter() takes Python positional args, and that expression is what it's going to use. If I passed expr twice, it would still work; it would just give me the same criterion twice: SELECT ... FROM user WHERE username = :param AND username = :param. So from a Python point of view, filter() accepts *args, if that makes sense for people who know Python well. filter_by() does not accept positional args; it accepts keyword arguments. With that expr object up here, we cannot pass it to filter_by(): filter_by does not want a BinaryExpression object, it wants us to give it the name of an attribute on User that we care about, like username="foo", and we can also say fullname="bar". Notice we're not using the double equals sign, because we're not building a binary expression ourselves; we're just passing Python keyword arguments. What filter_by() does is create that BinaryExpression for us behind the scenes, so when I print the two queries, you get the same thing. I'll assume that answers the question; you're not typing fast enough.

Okay, I'm a little late for the advanced section. Just like with the advanced SQL part, we're now going to talk about two tables, joins, and relationships, and in the SQLAlchemy ORM, when you're dealing with more than one table, a lot more is going on than in Core. One more question first: does == not return a bool? Trick question: it does not return a bool, but the expression object does have a __bool__ method, so you actually can call bool() on it, and it will effectively do an equality comparison of the two sides; it's not very performant, so you've got to be careful. The bool thing has been a big deal, but yes, it gives you a bool-compatible answer; I just don't recommend using it that way, because you don't have to.
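Summing up the result methods and the filter()/filter_by() equivalence, here's a sketch (the exception classes are the ones the ORM actually raises):

    from sqlalchemy.orm.exc import NoResultFound, MultipleResultsFound

    query = session.query(User).filter(User.username == "spongebob")
    query.all()      # a list of User objects
    query.first()    # the first User, or None

    try:
        session.query(User).filter(User.username == "nobody").one()
    except NoResultFound:
        print("no row was found when one was required")

    try:
        session.query(User).one()        # matches more than one row
    except MultipleResultsFound:
        print("multiple rows were found")

    # filter() takes expression objects positionally; filter_by() takes kwargs
    expr = User.username == "foo"
    q1 = session.query(User).filter(expr)
    q2 = session.query(User).filter_by(username="foo")   # builds the same expression for you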
So let's do the same model again: we're going to have User and Address, as we've hopefully seen many times by now, but now we'll use this new concept called relationship(). That means we're not only going to have User and Address related by a foreign key; the ORM is going to have additional directives which say: hey, I also need you to persist the related Address objects in this collection, and as you do that, I also want you to synchronize what I'm doing with an attribute on each Address object called .user. That's what the back_populates keyword does. People will know this better as the keyword called backref, which is much more widely known; backref is not going away, it's still there, it's fine, but I find back_populates to be clearer. You can still use backref, no problem; I just think back_populates is a little easier to reason about. Then we have an Address class which also has a relationship: it says, hey, I'm going to have a relationship back to User called .user, and when you populate or deal with my .user attribute, please also maintain the collection on that User object called .addresses.

Notice this has not freed us from the responsibility of declaring the foreign key. This is the part where SQLAlchemy keeps the ORM mapping and the Core DDL systems very separate; there's a big firewall between the Core table/DDL stuff and the ORM, so in some respects you have to tell the ORM twice. I like to say it's being explicit. There are ways to automate these foreign keys, but they're complicated, and I think in practice people like to deal with their database models in both ways: they want to see how the model lives as DDL in their database, and then separately how it acts in Python, and those are two separate concerns, so they're kept pretty separate.
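Here's a sketch of that two-table mapping in the declarative style the talk is using; the column names follow the running example, and declarative_base() is importable from sqlalchemy.orm as of 1.4 (it lives in sqlalchemy.ext.declarative in 1.3):

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    class User(Base):
        __tablename__ = "user"
        id = Column(Integer, primary_key=True)
        username = Column(String)
        fullname = Column(String)

        # ORM-level directive: maintain a collection of Address objects,
        # kept in sync with Address.user on the other side
        addresses = relationship("Address", back_populates="user")

    class Address(Base):
        __tablename__ = "address"
        id = Column(Integer, primary_key=True)
        email_address = Column(String)
        # the foreign key still has to be declared at the table / DDL level
        user_id = Column(Integer, ForeignKey("user.id"))

        user = relationship("User", back_populates="addresses")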
So let's again run metadata.create_all() like we've been doing, and we get the user and address tables like we had before, with the foreign key; nothing is different there. Then let's put some data into the tables the way we did before: make some objects, add_all() them, and commit; the unit of work commits those objects. The fact that you see three separate INSERT statements is a tell that we want the primary key value generated by SQLite handed back for each row; if we had given these objects primary key values up front, it would have used executemany in one step. Kind of an implementation detail.

Now let's make a new object for squidward and not add him yet. Just as spongebob had an implicit id, squidward has an implicit .addresses, which is a collection, and it's empty. That's because we made a relationship on User called addresses, so every User object now has a .addresses attribute that points to a list. It's a Python list; it has special instrumentation on it, but it is a Python list. You can also use a set, a dictionary with some extra steps, or a custom collection, but by default it's a list. That empty default list is not part of squidward's state yet, and it won't become part of his state until we modify it. Let's modify it, specifically by replacing it with another list (we could also append items to it, or add them if it were a set): we'll give squidward three email addresses, because squidward has a lot of email addresses.

When we do that, if we look at one of those Address objects, say squidward.addresses[1], it has .user pointing back to squidward. This is the one-to-many / many-to-one synchronization that back_populates has done for us: by appending an Address to squidward's .addresses collection, the back_populates directive also set .user on that Address back to the User being populated.

Also, when we add squidward (the User object) to the session, there's a behavior that comes from Hibernate called cascades: the add cascades to all the child collections and attributes on squidward, so all of those other objects get added to the session automatically for us, because that's usually what you want. There are ways to change the cascade settings, but in practice nobody ever does. Then we commit all that data, and we get squidward and the addresses. Another thing the unit of work does here is make sure it inserts squidward first, gets the new primary key for that row, which is number 4 here, and makes sure that 4 is populated into the user_id column of the address rows. That's one of the things ORMs do for you: they take care of all the messy business of whose primary key is what and which foreign key gets which primary key, and the unit of work makes sure the statements are invoked in the correct order so that you don't get an integrity violation, which is very neat and is the big advantage of the data mapper pattern over active record.

We've committed, so once again everything is expired. When we access the .addresses collection, the lazy load does its thing: first it refreshes the squidward row itself, because it might have been deleted, and then it says, "give me all of the email addresses associated with this squidward user." You can see in the WHERE clause where 4 equals address.user_id, and you get back all three of squidward's email addresses. And here's something SQLAlchemy does that not every ORM does: this is a plain Python list that has now been populated, so if you touch squidward.addresses again, it's just there; no SQL is emitted. Some ORMs will run SQL every time you access a related collection; SQLAlchemy does have an option to do that, but it's not usually what you need.

Now let's do something where we mess things up: we're going to take one of squidward's addresses and change its .user attribute to point to spongebob, so spongebob is going to steal one of squidward's email addresses. We'll load spongebob from the database and, just for the purpose of the example, touch spongebob.addresses, which will lazy load by default, so we can watch where the address is going. Then we take squidward's second address and say: change .user to be spongebob. From a relational database perspective, each address row can only point to one user at a time, so it has to change ownership, meaning it has to be removed from squidward's .addresses collection and moved into spongebob's.
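A sketch of that sequence, reusing the mapping above (the email addresses are the example data):

    squidward = User(username="squidward", fullname="Squidward Tentacles")
    squidward.addresses = [
        Address(email_address="squidward@gmail.com"),
        Address(email_address="s25@yahoo.com"),
        Address(email_address="squidward@hotmail.com"),
    ]

    # back_populates has already synchronized the other side, in Python only
    assert squidward.addresses[1].user is squidward

    # cascade: adding the parent also adds the three pending Address objects
    session.add(squidward)
    session.commit()     # INSERTs squidward first, then the addresses with his new id

    # spongebob steals an email address
    spongebob = session.query(User).filter_by(username="spongebob").one()
    spongebob.addresses                  # lazy load now, so we can watch it change
    stolen = squidward.addresses[1]
    stolen.user = spongebob              # moved between the two collections, in Python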
So what you see here in the SQL is the spongebob record being loaded, and then spongebob's addresses collection (currently one element, I think) being loaded. The move of squidward's address record, however, has not been persisted to the database yet: when we look at the Python side, spongebob.addresses now has squidward's email address in it, and squidward.addresses has lost it, but nothing has gone to the database. This is just the back_populates machinery in Python doing its little dance of moving things around in response to the other side. It's automatic, implicit, magic behavior; you don't have to use it if you don't like it, you can turn it off, and there are even more options now for turning off this syncing behavior, but in practice nobody ever does; everyone pretty much loves backrefs and uses them a hundred thousand times with no problems whatsoever. So we've changed the state of the Python collections, and when we do a flush or a commit, those changes get expressed in the database, and you can see it's really quite short: all it did was UPDATE the user_id column of that one address row to point to spongebob rather than squidward, setting user_id to 1 instead of 4.

People have some questions, so I'm going to keep going until I get to "querying with multiple tables," okay, slide 10. Someone is asking about the word "cached" that shows up in the SQL output. I haven't talked much about that yet, but in the log you'll see either the word "generated" or the word "cached" next to each statement. "Generated" tells you how long it took to generate the query; "cached" tells you how long the statement has been sitting in the cache, because it wasn't generated this time, so there's no generation time to report. Yes, I know, I've seen the docs; I'm going to document this better, it's all new stuff, and if it's too confusing I may change it. The reason I want the caching log to show how long something has been cached is so you can see if your cache is swapping too much: if things are constantly cached for a very short time, your cache is too small and is swapping out too often.

The difference between flush and commit is that flush pushes data out to the database by emitting INSERT, UPDATE, and DELETE statements, while commit just calls COMMIT. They are two separate things; however, when you commit the session, it automatically does a flush for you, so the session's commit includes a flush for free. First comes the flush step, which is the rows going to the database, then the commit, which commits the data and expires the current objects.

This is an example of querying with multiple objects; this is the part where I start talking really fast, because we have, you know, twenty more slides. You can query multiple entities and get tuples back. This is what I talked about as the implicit join, user.id == address.user_id, and in the FROM clause you see the commas. You can also use join(), as we showed before in the SQL tutorial: I can join from user to address and I've got my JOIN. I can also join with an explicit ON clause; the select version does the same thing, join to address and make sure you use this specific ON clause, user.id == address.user_id.
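A sketch of those join shapes with Query:

    # two entities, with the join criterion in the WHERE clause
    session.query(User, Address).filter(User.id == Address.user_id).all()

    # join(), deriving the ON clause from the ForeignKey
    session.query(User).join(Address).all()

    # join() with an explicit ON clause
    session.query(User).join(Address, User.id == Address.user_id).all()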
The best way to join with the ORM, though, is to join on the relationship. Another reason we made this User.addresses thing is that when we refer to User.addresses at the class level, we can pass it to join(). Suppose User and Address had multiple foreign keys, multiple relationships, and therefore multiple ways to join: it would be ambiguous how to join the two, so we use the relationship to spell it out. And at this point we're at the ten minute warning, which means I didn't talk so fast that I killed my throat. So that's joining on a relationship.

I want to show you eager loading pretty quickly, so let me run through a few things. Here's aliased() for the ORM: it works a lot like the alias we saw for the Core select, the same kind of thing, except we can alias an entity, and we can join to an alias using this thing called of_type(), which is in the docs. We can do subqueries like we did before: the exact same subquery you saw with Core can be done with the ORM query, so here's the example of counting the addresses for each user and getting that back. Again, using the ORM is in many ways just like using Core, with the exception of eager loading.

So here's the thing: when you read about why you should not use an ORM, they'll bring up the N+1 problem, which means that because collections are lazy loaded, if you load a lot of objects and then access all of their collections, you get a lot of queries; if I load all the users and touch all the addresses, that's lots of queries and it's slow. Eager loading means I can tell the ORM: load all of these collections in one shot for me, because I'm going to want them. The best eager loading you can use for collections in SQLAlchemy is called selectin loading. This is the options() version that you apply at query time (you can also set it up at mapping time): query(User).options(selectinload(User.addresses)) does a second query, an IN query. You see it select the users, and then it runs a second query against the address table, taking all the primary keys it loaded for the users and putting them in the IN clause. The reason selectin loading is the best eager loading is that in the simple case it doesn't use any joins, no outer joins; it hits one table and uses a primary key index lookup, so it's the fastest eager load you can use for most purposes. It's not always appropriate, for reasons we don't have time to get into.

The oldest eager loading we have is called joined loading, where the ORM runs one query and does a LEFT OUTER JOIN or an inner join. Joined loading is what everyone knows; it's the oldest one, the one I thought was the only one we needed a long time ago, and it's very good for many-to-one loads, because then the query is just your parent row plus the many-to-one. So if we want to load Address objects and also eagerly load the .user object for each, we can do a joinedload, and we can also say innerjoin=True, meaning we don't need an outer join for this one. If you do an explicit join(), that's kept separate from what joinedload does; people who work with SQLAlchemy know this pretty well, so if this makes sense you get it pretty quickly: join(Address.user) plus joinedload(Address.user) gives you two different joins. That seems weird; I don't want that, I actually want to join and also load from that join. Okay: for that there's an option called contains_eager.
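A sketch of the loader options just described; selectinload, joinedload, and contains_eager are the actual option names:

    from sqlalchemy.orm import contains_eager, joinedload, selectinload

    # collection eager load: a second SELECT with the parent primary keys IN (...)
    session.query(User).options(selectinload(User.addresses)).all()

    # many-to-one eager load: one query with a JOIN
    session.query(Address).options(joinedload(Address.user, innerjoin=True)).all()

    # an explicit JOIN that also populates the relationship it joined along
    session.query(Address).join(Address.user).options(
        contains_eager(Address.user)
    ).filter(User.username == "spongebob").all()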
contains_eager() both joins explicitly to that table and applies the joined rows to the Address.user relationship. And that is eager loading; if eager loading is new to you, that was way too fast to take in, so just go to the docs. Everyone uses eager loading all the time, everyone can learn it, and it's easier than other things.

In the final section, a little bit of the 2.0 stuff. I have seven minutes for the 2.0 stuff, is that good? Yes? I see nothing happening, good. I have no idea how many people are on this session; can someone type into the chat how many people are here, because I only see three people on my screen. Ten people? So in the Zoom there are 22, but on the actual cloud stream there are probably another 20 or so. Okay, and how does that compare to the usual 100 or 200 people? Good.

So, what's new in 1.4 and 2.0. The future select that I mentioned will actually work like the Query, and it does: you can build your future selects, and notice there are no more brackets; you pass the columns and entities that you want to query positionally, and then you can do a join right off the end of the select. The way this stays consistent with the fact that this is Core, and Core knows nothing about the ORM, is that in 1.4 the statement just gathers these arguments and does very little with them; it's not really going to know much about them. It does a thing called coercion, which introspects the object it's given, and the object says, "I'm good for a join; when I get compiled, the ORM will handle me later and it'll all be fine." So even though it's Core and knows nothing about ORM relationships, don't worry, they get used later. I can't remember the name I gave that architecture right now.

So: a future select of User and Address.email_address, joined along User.addresses, and then we execute it with session.execute(), which is different. If you notice, the ORM Query is session.query(User).whatever().all() or .one(); that's what Query does. But people wanted more flexibility in how to get results, so by using execute() we get a separate result object, which is the same kind of result you get with Core (it's actually a subtype), so you can work with Core results and ORM results pretty interchangeably; you won't even really know which one you're dealing with, you'll just have database results, and you don't have to care how you got there. This is a select statement that, when you run it through the session, will load User objects in the result; once you have the result you can say, hey, I just want the first element of each row as a scalar, and please uniquify them for me, and give me all of them. This is method chaining on the result, which is totally new in 1.4 and not in 1.3 at all, and I get my User objects back. It's a more longhand API, more explicit, but I think it's much clearer and much less ambiguous; we'll have to see how people feel about the verbosity, but I think it's easier to understand, and I think it'll be popular, though we're waiting to see how people comment on it.

Also, select() can now just replace Query. There's no longer filter() versus where(); there's just where(). However, select() does have filter_by() now, so you still have your filter_by(), which is handy because people like that too.
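A sketch of that 1.4 / 2.0-style usage; select(), Session.execute(), and the scalars()/unique()/all() chain are the 1.4 APIs being described, while the query itself is just the running example:

    from sqlalchemy import select

    stmt = (
        select(User)
        .join(User.addresses)
        .where(Address.email_address == "squidward@gmail.com")
    )

    result = session.execute(stmt)            # a Result, the same animal as Core's
    users = result.scalars().unique().all()   # first element of each row, deduplicated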
So yeah: filter_by(), where(), and join() work the same across both the ORM and Core; there's no more need to learn two APIs separately. They're the same API now, except that with the ORM you get the ORM behaviors on top.

And what are we looking at here? Yeah, here's the caching. Here's an example of a statement that we just ran with a Core connection, and it said "generated in 0.00056 seconds." Now that statement is cached, so when I run it in a totally different context, the ORM context in this particular case, it's still going to be cached, and the longer I talk, the bigger the number is going to get, because it shows how long the statement has been cached for. So if I've been talking for another ten or twenty seconds, I'd guess it'll say about 35 seconds, and when I run the code: wow, 34.48 seconds. I must be a drummer. I am a drummer. That statement was cached for 34.48 seconds, which means my brain works to an accuracy of about half a second.

"Cached" means that the whole lot of work the ORM especially has to do to build this join, all the stuff it does internally to set up eager loads and figure out what attributes are in the load, all kinds of Python computation, is now cached. SQLAlchemy 1.3 has a feature called baked queries, which does this caching really well but is hard to use; the new caching is not quite as fast as baked queries, but it's almost as fast, and it's transparent: it just happens for you, for every query. So people's applications should get a lot faster, because there's no more need to use baked queries; you'll have the caching everywhere it can be used. That's it.
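One last sketch showing filter_by() on select(), plus roughly what the caching annotations in the SQL echo look like (the timings are obviously illustrative):

    from sqlalchemy import select

    stmt = select(User).filter_by(username="spongebob")   # filter_by() works on select() in 1.4
    session.execute(stmt).scalars().first()

    # with echo=True, the SQL log annotates each statement, roughly like:
    #   SELECT "user".id, "user".username, "user".fullname FROM "user" ...
    #   [generated in 0.00056s]        -- first run: time spent building the statement
    #   [cached since 34.48s ago]      -- later runs: how long it has been in the cache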
Info
Channel: Six Feet Up Corp
Views: 24,606
Rating: 4.78 out of 5
Keywords: sixfeetup, Six Feet Up, python, SQL, SQLAlchemy, Tutorial
Id: sO7FFPNvX2s
Length: 176min 51sec (10611 seconds)
Published: Thu Sep 17 2020