Introduction to Memory Management in Linux

Captions
Welcome. We've got a lot of material, so we're going to blaze through this — hold on, it's going to be quick. My name is Matt Porter, I'm with Konsulko Group, and if you're looking for Alan, I'll explain on the next slide why I'm not he. You can probably tell, because if you know Alan, he's got hair down to here. This is an introduction to memory management, and I stress introduction: if you're an experienced kernel person, you might not get as much out of it. This is the presentation that, back when the early adopters of embedded Linux in 2000-2001 were coming in from working on their RTOSes, I wish I had sat down and put together, because these are the things everybody needed to understand to really grasp their system.

Real quick about the original author: Alan Ott couldn't be here, unfortunately. He's a good friend of mine and a veteran embedded Linux developer, a Linux architect at SoftIron — you may have heard of them, an ARM server company. He put together all this material; he's a fellow Linux Foundation Training instructor and also gives the kernel internals class, which contains a lot more than this material. He did a really nice job on the slide deck and trusted me to present it as well, so I just want to give him the kudos for this awesome material.

So here we go: we're going to talk about memory management from beginning to end, and it's going to be the intro, as I said. We start with physical memory. If you look at your low-end systems, you have a single address space: memory and peripherals share that same space, mapped into different parts of it. All the processes and the OS in this type of system share the same memory space; there's no memory protection like you would often hear about. Every process runs in that single address space, so processes can stomp on each other — they're all shared in there, and you have to separate them manually. Your quote-unquote "user space" application can stomp on, say, the real-time executive you're using as the scheduler. Examples would be an 8086, Cortex-M parts, AVRs — all the low-end microcontrollers — and the old pre-MMU processors.

So let's take a look. I know a lot of us are not working on x86, but it serves as a ubiquitous example. If we look at a 32-bit x86 system — lots of legacy, obviously, but it is common ground — we have all of these legacy areas: hardware mapped between RAM areas, and you can see the physical PCI area and memory-mapped I/O all in the high part. That gives you an idea of what x86 looks like physically.

Now, what's the limitation of the single address space? Portable C programs expect that they kind of own the whole thing. If you're trying to port several C programs into one space, you've got to go set the addresses — this one can live here, that one can live in that segment — so they don't stomp on each other. That's hard to do: you need special knowledge of your actual platform, you need to know what your total RAM is, and, as I said, you need to separate those processes yourself. You have to do all this work, and there's still no protection: rogue programs can stomp all over things.

In comes virtual memory, and this is where things get fun. What is it? It's a mapping — a virtual mapping, hence the name. You map a virtual address, a fake address, to a physical address. When we looked back at that x86 map, that was all the physical world; if we can think in virtual addresses instead, we can have any mapping we want. So we map virtual addresses to physical RAM, but we also map virtual addresses to hardware devices — PCI, GPU RAM, on-SoC IP blocks, everything.

What's the advantage? I described how, in that flat-memory model — the single address space — I've got to tell something to run at this address and this address and this address, up to N times, and actually keep a nice memory map of where everything lives. It's not portable. With virtual memory, one process's RAM is inaccessible to the other processes; it's also invisible. So you have built-in memory protection, and kernel RAM is not directly visible to user space. Another nice thing is that memory can be moved: memory can be visible to different processes, but you have to actually set up a mapping for that. And rather than a single address space where all the memory sits there and you have to manually share and segment it, you can now do things like swapping memory out to disk, because the addresses you're dealing with are just virtual. You can also map hardware into your process address space — we need help from the kernel to do that on behalf of user space. We can take RAM and map it into multiple processes; we'll get into that more, but a shared library would be a case where you're mapping the same memory into multiple processes. And finally, with virtual memory we get the ability to place read/write/execute permissions on those address accesses.

So we have two address spaces now. We've got the physical addresses we talked about — we saw that physical memory map, using x86 as the example — and that's DMA, peripherals, whatever it maps out to in your physical world. And we've got virtual addresses — the ones our actual software uses. When we get down to machine code, whatever the architecture, our load/store accesses out to memory are always using virtual addresses.
So, looking at virtual memory: we have to do a mapping, and this mapping is done in hardware — there's a piece of hardware that assists with these mappings. Once something is mapped, there's no penalty for accessing memory that way; the permissions are handled without a penalty, because it's all handled in hardware for us, and we'll talk about what that hardware is. And of course we use the same CPU instructions — the same loads and stores — whether the target is RAM or a peripheral's I/O. In normal operation, you're always using virtual addresses.

So what magic does this? The memory management unit. An MMU sits between the CPU core and the memory. On modern architectures it's usually part of the physical CPU itself; if you look at retro systems, you'll find MMUs used to be separate discrete parts interfaced to the CPU. One thing to keep in mind is that the RAM controller is a separate piece: you've got an MMU, and the DDR controller is a separate IP block — tightly coupled, though. And what does an MMU do? It does the magic of transparently handling the translation of those load/store instructions into physical addresses. We map memory accesses at virtual addresses to our system RAM — that physical address space we talked about — and the same with peripheral hardware; from the MMU's point of view it's no different. It handles permissions, which we said we get with virtual memory, and if there's an invalid access, it generates an exception — and with that exception we can go do some interesting things, which we'll talk about in a bit.

How does an MMU work? There's an important piece of the MMU called the TLB, the translation lookaside buffer. It's a hardware buffer holding a set of mappings — your virtual-to-physical mappings — along with the permissions for each range, and there's a granularity at which these mappings are kept, which we'll get to in a moment. The interesting thing is that TLB design is very architecture-specific, very part-specific, and performance-sensitive, so you'll see wide variance in how TLBs are designed: how mappings get placed in them, whether that's done in software or is hardware-assisted, and also their capabilities — how many slots they have. There's a quick little diagram of what a system looks like, if you're having trouble visualizing where it sits: the MMU between the memory controller and the CPU, with that TLB on the side holding some entries.

So, staying with that TLB: when the MMU sees an access to a virtual address, it looks in that buffer — is there already a mapping in there? If it doesn't find one, it generates the page fault interrupt to the CPU. Now, if the address is in the TLB but you're doing, say, a write access and the entry is only set for read permission, it also generates an exception — and that will come back into play as we get into how Linux uses these things.

So, a page fault in Linux: a CPU exception, generated when you access an invalid virtual address — one the TLB can't resolve. There are three cases. First, the virtual address just isn't mapped for the process requesting it. Second, you don't have the right permissions. And third, it's a valid virtual address, but it's currently swapped out — that one's a software condition. We're going to dive into each of those, but first we'll get into the kernel side of virtual memory. We use virtual addresses in both the kernel and user space, but the way they're used and the way things are mapped are quite different.
In the kernel, we obviously use virtual addresses, but we have a split in how we treat them: the upper part of the virtual memory map is for the kernel, and the lower part is for user space. Usually, when we teach people about this, it's harder to think with 64-bit addresses, so we go back to 32-bit, and we affectionately call the default split between user space and kernel space "C-bazillion" — 0xC0000000, the 3 GB mark. That's the default.

So this is what it looks like. You saw that hugely complex physical memory map of the 32-bit x86 architecture, and lo and behold, here's the virtual memory map: we've got 3 GB for user space, and CONFIG_PAGE_OFFSET controls where that split is set. Every process gets its own three gigabytes in that system; it has that whole view. Remember, going back to the single address space, if you had multiple processes you had to link them at all these different spots and manage your processes very manually. In this world, when we link applications, they all end up at the same place. And the kernel just has the one gigabyte, in our 32-bit case.

As we said, CONFIG_PAGE_OFFSET controls that split; on a lot of architectures, if you have specific needs, you might fiddle with it a bit — that sometimes happens in embedded work. On 64-bit, we essentially never have a need to do that: on arm64 we're up at "8-bazillion", and on x86-64 the split is at a different location, but given today's RAM sizes it's effectively not something to worry about. It's on 32-bit systems that the page offset affects how we deal with large-memory systems, which we'll talk about in a moment.

There are three kinds of virtual addresses in Linux, and LDD3 (Linux Device Drivers, third edition) defines them best — you can go look at that. The way we define them (some people historically use different terminology): on the kernel side we have kernel logical addresses and kernel virtual addresses, and then we have user-space virtual addresses. There's another special case — physical or bus addresses — but most people don't speak about those exactly that way either; see LDD3 for a little more information.

Kernel logical addresses are what people consider the normal kernel address space, the one they're normally dealing with: what you get back from kmalloc() is a kernel logical address. They have a fixed offset from physical addresses. You see a magic number on the slide — that's the CONFIG_PAGE_OFFSET value — and it maps to a physical address that's specific to one architecture: wherever your base of RAM is. It does get more complicated in various segmented memory systems, but this is an introduction, so we keep it simple. Because this is a very simple linear mapping, the conversion is really easy to do. Visually, it looks like this: your kernel logical addresses run from PAGE_OFFSET on up, and assuming your physical RAM starts at 0 in the physical memory map — boom, you've got this very simple linear relationship, and accordingly a very simple set of macros that can convert, whenever an address is a kernel linear (logical) address.

Now, the next interesting thing is what happens with a small-memory system — we'll call systems small or large, and this is really specific to our 32-bit example. Small means less than a gigabyte of RAM (technically a bit less than that, when you look at where the split sits). You have those kernel logical addresses starting at PAGE_OFFSET and running through the end of memory. So if you had 512 megabytes, then 0xC0000000 up through 0xDFFFFFFF would be the kernel logical area. It's very simple when you have that small an amount of memory: it all gets mapped into kernel logical space.
Kernel logical space includes, as we already said, allocations from kmalloc(), __get_free_pages() — all of those allocators — and kernel stacks. The key thing here — and we haven't talked about how swapping works yet — is that logical memory can never be swapped out. There is that fixed mapping, and we saw how simple the macros for it were; because of that, everything in the kernel logical area is physically contiguous. That's important, because we need that for DMA — it's why you'll see kmalloc() and those types of allocations used for DMA-able buffers.

Then it gets complicated on a large-memory system, nominally something more than a gigabyte of RAM: we run out of space. Above PAGE_OFFSET, there isn't room to map it all into the kernel — we can't. And on top of that, we have to keep space for use by vmalloc memory, our kernel virtual address range; we'll talk about that in a moment. Once we go above that gigabyte of RAM (nominally — it's actually a bit less), the rest gets mapped into the kernel virtual memory area, and that's the HIGHMEM support. Again, note that this is a problem of the 32-bit model; on 64-bit we don't effectively have it until we get ginormous amounts of RAM in a system, and that doesn't seem likely to happen tomorrow.

So, the kernel logical mapping starts at PAGE_OFFSET, and on a large-memory system we don't have enough room for everything, so the additional RAM gets mapped separately in the kernel virtual area. Those pages don't have the logical mapping, so you can't use any of those simple conversion macros on them at all. What we call low memory is the directly mapped set; high memory is not physically contiguous and gets mapped in on demand. We only have this situation on 32-bit. The key thing with low memory, as you can see visually in these sections, is that you have the one-to-one mapping; with the rest, you do not.

Let's talk a little more about kernel virtual addresses. The easy part was the logical set — very simple. Kernel virtual addresses — I usually call it vmalloc space, like a lot of people do — are the area above that logical range. It's managed dynamically, and those addresses are used for non-contiguous mappings. What's the practical case for that? insmod: to load a module, memory is allocated that needs to be virtually contiguous for that module, but doesn't need to be physically contiguous. So the module loader does a vmalloc(), gets its virtually contiguous area, and the actual physical backing RAM can be scattered anywhere. The other piece is memory-mapped I/O, using ioremap() and friends — we'll get into that — which also ends up in that space.

A quick look at that: we've got the logical addresses over physical RAM, and then your kernel virtual address space up above, where your modules get installed and your ioremap()s live. Keep in mind, as we said, that the key property is that it's non-contiguous — you can't rely on it for DMA at all; that's the main point here.

And this is reiterating, probably too much, the emphasis that on a 32-bit machine we have a very constrained space: if we have, say, 768 MB of RAM in the logical address space, there's less room left for kernel virtual addresses. Those boundaries are tunable — and you just don't deal with this problem on a 64-bit system.

Now let's jump into the meat of it — user virtual addresses — because this is where things get more complicated.
User virtual addresses are what our applications, our processes, are mapped into. They're all below PAGE_OFFSET — below the 3 GB mark in our model 32-bit system, if you remember that memory map — and each process has its own mapping. By "mapping" I mean its own view of virtual address space. Threads share mappings, and things get more complicated with clone(), because there are a lot of options and you can choose how much to share, but that's beyond intro level. One of the key differences: kernel logical addresses have that fixed mapping, but user processes fully use the MMU, and the only time you actually use RAM is when you actually touch it — we'll get into that. The memory isn't contiguous; it's a lot like the vmalloc space in the kernel — you can't rely on anything being contiguous just because it looks that way from the virtual addresses. And the nice part is it can be swapped out (remember, kernel logical memory is not swappable), and the memory can be moved around on you. That's the virtual world.

What does this fundamentally mean? Since things can be moved around on you and swapped out, you can't use it for DMA: you can't malloc() memory and then try to DMA to that virtual address — there's no stable backing behind it.

How does this work? Every process has its own memory map: go look at struct mm_struct — there are pointers to it in the task_struct for your process — and that's where the whole mapping of those pages is kept (we'll talk about pages in a moment). Every time you do a context switch, that memory map gets changed, and that's part of your context-switching overhead: you have to go change that mapping. So, back to our map: we've got this view of the 32-bit world, and every time we change processes, this whole set of mappings in the user space changes.

So, back to the MMU, which we use to manage those virtual address mappings — and now the page I hinted at. How is this done? The MMU works at the granularity of a unit called the page. People always hear 4 KB; on some architectures — most architectures — it's configurable, and there are advanced features with some very large page types we're not getting into today. Some common ones: 4K on x86, and 4K or 64K on arm64. Like I say, we're not going to talk about huge pages — that's a more advanced topic — so let's just assume 4K stock, since that's what most architectures default to. That's the unit of memory the MMU can work with, and we rely on that page size any time we do allocations or mappings.

Then we have the concrete concept of the page frame: a page-sized, page-aligned unit of physical memory. Any time we say "page frame", that means in the physical memory map; when we talk about a "page", that's the unit the MMU deals with for virtual addressing purposes. You'll see the abbreviation PFN throughout the memory management code: that's the page frame number, referring to that physical page frame unit.

The MMU operates on pages. The memory map for a process is going to have a huge list of mappings — a big space, a bunch of page frames scattered all over the place, ranges of multiple pages. So what does the TLB need to know when it actually gets loaded with a mapping? A virtual address (the page), a physical address (the page frame), and a set of permissions — read/write/execute — back to our earlier view, just as a reminder.

As we touched on earlier, if we access a region of memory that we don't have mapped, we get a page fault exception. And this is normal — these are good things; we want these page faults.
And I mentioned that TLBs vary in size: some of the embedded parts have only 16 entries, which isn't much when we know our page size is 4 KB, so there's a lot of churn in it. When we context switch, we take a lot of page faults as we start touching virtual addresses that aren't mapped yet: your process gets context switched in, starts executing code, touches memory, and you get page faults — that exception — because there's no mapping. We also have the concept of lazy allocation, which we'll talk about in detail shortly.

Here's what it looks like visually: in between, you've got your virtual address hitting the TLB, which lets it touch the physical page frames through the mappings that are set up. Mapped page ranges — say a contiguous set for the text of your application, plus some mapped data area — go through the TLB to the actual backing page frames for those areas, and then you'll have some unmapped space that maybe hasn't been executed yet; notice the unallocated frames on that side.

Just as I mentioned with kernel virtual addresses and the vmalloc space, user-space virtual memory is not guaranteed to be contiguous, so don't rely on that — we already said that's why you can't use it for DMA. One reason for this design is that it makes allocating memory much easier: if you get into how the internal memory allocators work, and think about how fragmented things get, this lets the kernel assemble a large allocation out of a lot of scattered page frames — and almost nothing you do requires physically contiguous page frames backing your code.

As we were saying when we looked at that virtual address space — the 3 GB area in our 32-bit example — one of the cool things is that each process gets its own address space. What does that mean? You hear it all the time. It means that when you look at the virtual address spaces — say you look at each task_struct and its mm — you'll see mappings with the same virtual address pointing to completely different physical addresses all over the place. If you have processes scheduled next to each other, they use the same virtual addresses: each gets scheduled in, and that address is mapped to a different page frame each time — and they don't have to know anything about that backing. Here's an example: process one has a set of virtual addresses mapped through all these different page frames, in blue, and process two has the same virtual addresses but touches completely different page frames — just representing that visually.

And now we get to shared memory. We need it for IPC purposes — shared memory is a common POSIX concept, a normal concept in most OSes — and it's built on the MMU. We just saw how we can have the same virtual address backed by different page frames; we can also have different virtual addresses pointing to the same page frame, and that's essentially how shared memory works. Simply map the same physical frame into different processes — the virtual addresses don't have to be the same — and now you have shared memory: two different processes, completely different virtual addresses, touching the same page frame as they get context switched in. How does that look? We've got the shared physical frame down there in green: one virtual address maps to it and touches that shared frame — this is a 4 KB shared memory space — and a completely different virtual mapping in the other process points to the same frame. Boom, we've got shared memory. That was the case with different virtual addresses.
Then there's the mmap() system call, which you may be familiar with: you can ask for a mapping at a specific address in order to share memory at a known location — that's a different case — and it can fail.

Let's talk about lazy allocation. One thing you'll notice when you work on a Linux system, or a classic UNIX system, is that the kernel does not allocate memory immediately. Yes, you saw your call come back successfully — you got virtual memory — but it didn't actually allocate the physical memory, the page frames that back it. That's what we call lazy allocation, and it's an optimization: the kernel waits until you actually need to use that memory. If you're allocating a four-megabyte chunk for your database and you haven't touched any of it yet, it hasn't really allocated anything for you; if you never use it, never touch it, it never allocates anything.

How does this work? When we request the memory, the kernel just creates a record of the request in the page tables (we'll talk about page tables in a moment) and returns to the process, so you've got that virtual memory set aside in your user-space process. Once we touch it, our old friend the page fault comes into play: we already learned we'll get an exception, because there's no mapping there — or it's only set to read permission — and we go into the page fault handler. The kernel uses the page tables to see that the mapping is valid: in the lazy-allocation case, you have allocated virtual address space, but it's not yet mapped in the TLB. At that point it allocates page frames — one, or a series, whatever is needed to satisfy the request — then updates the TLB with the mapping (it's architecture-specific how that happens, of course), returns from the exception handler, and the user-space program continues.

So your malloc() got you that virtual address space and returned quickly, but when you went to touch the memory — the first time you dereferenced that pointer and updated it with a value — all of this happened behind the scenes. You're not aware of it, and that's a key point, but you will see it if you're running benchmarks: the lag is appreciable, and you can use tracing tools to watch it happen. The other thing is, if you have time-sensitive code — a fast path — you can go pre-allocate: you may have used mlock() or the mlock family of calls, which pre-fault these things so you don't hit the lazy-allocation situation.

As we said, getting into page tables: TLB entries are a limited resource — we can't just map the whole world of our address space in there — so we have far more mappings in the struct mm_struct for a process than we have TLB entries. Something has to track all that, so the kernel keeps a set of data structures we call the page tables. You can look at struct mm_struct and struct vm_area_struct to see how they're done, but it's essentially a hierarchy that leads you down to the 4 KB page, its associated page frame number, and the permissions — everything lines up with what needs to get loaded into the TLB. It also holds metadata beyond that: whether the mapping is valid or not, and some other housekeeping flags as well.

So when we have a valid mapping in the page tables and you touch it, the hardware — since there's nothing in the TLB yet — generates that page fault. The CPU and the MMU don't have the knowledge; only the kernel does. Our page fault handler runs: it traverses the page tables, finds the mapping for that virtual address at page granularity, selects and evicts an existing TLB entry, creates a new one with our address and the correct permissions, and returns to the user-space process.
one with our address and the correct permissions and so forth and come back to the user space process okay all right swapping and good all right so swapping we're used to our systems we deal with our desktop systems our development systems where we have a lot of swapping out to our disk when we're doing heavy builds right I'm running low on RAM and you know how this works is the MMU is the thing that enables this okay and so you're going to run out of that 16 gig Ram you have under these heavy builds and you're going to contact switch and it needs more memory and it's going to take those page frames that were backed and it's going to take the contents of those and it's going to push them to your storage all right and then when you need that data back and you've been context switch back in it's going to read that back off that slow storage and bring it back in that's the big picture right so low level details right it's going to do that on a frame based basis right it's going to copy a frame to disk remove the TLB entry and then that frame is free to be back for another process right so when we need it again all right CPU generates page fault right common theme here right we flush that entry out of the TLB all right so now it's going to generate a page fault and then when we we hit that page fault process sleeps we copy that frame from the disk into an unused frame and we update that page table entry and then wake the process back up okay so it's going to be slow process right we got to go out to that block i/o we're throttled by that bandwidth now so when we restore the page to ram okay we're not necessarily getting the same page frame so again we have this virtual dance going on here right there is no persistence or affinity to that original physical page frame so you need to get rid of this notion that paid you know physical addresses matter okay you will use the same virtual address though right because those mappings stay the same in userspace all right so you 
don't know the difference. Your code is executing along, you yield the processor, you get swapped out, you context switch back in, and the kernel redoes that mapping: your code continues on at the same virtual address, but with a completely different physical backing, as the page frame contents get copied back in and mapped. Again, this is the low-level detail behind why we said we can't use user space virtual addresses for DMA: we have no persistence of the physical backing, but the DMA engines and the peripheral hardware need it.

So what does this look like visually? We've got a frame that was selected by the kernel to be swapped out, and this wonderful trashcan-looking cylindrical disk thing here. We copy that frame out to the swap media and invalidate the TLB entry; the page table entry is invalid now. With no entry there, the frame is freed up and goes back into the allocator pool, but the data is preserved out there on disk, in your swap partition. Now we get context switched back in. We're back and running the same process, and we try to access that same virtual address we were running at when we got so rudely taken off the CPU. We get the page fault — we've been through the page fault dance before, so we just rock on through it: the data gets copied from that cylindrical disk thing back into the page frame we got allocated, we create the TLB entry — add one more animation — and we return to user space. Now we can access that virtual address, and we've got the same data we had before we got swapped out. And I'm actually running ahead of the time I was behind, so I win. It's 95 slides.

All right, user space. We've been through that whole stack, all the major pieces of how everything happens in the background. Now let's see how this maps onto the APIs we have in user space.
We have several ways to allocate memory. We've got the whole malloc family of calls, which I've referenced a couple of times verbally. We know that we can mmap() to directly allocate and map pages; we often see that used to map some peripheral I/O if we're hacking around, not doing proper kernel drivers. And we have brk() and sbrk(), where we can modify the heap size.

So first off, mmap(): one way we allocate a bunch of memory from user space. If you live in the world of running strace on things, you see lots of mmap happening when files are getting opened and so forth. If you use MAP_ANONYMOUS, you get allocated normal memory; the MAP_SHARED flag allows us to share that memory with other processes. And brk — why is it called brk? That's the top of the program, the "break" — legacy terminology — and effectively you increase the heap size with it, as we were saying.

Now, lazy allocation, going back to our whole lazy allocation technique. If we look at mmap — and brk is implemented a lot like mmap — it goes in and modifies the page tables, like we talked about, and then waits for a page fault. The other thing you can do is pre-fault, as we discussed with mlock(), and not have the issue where accessing the memory has a relatively long lag while it actually allocates that big chunk of page frames for you. You can take that cost up front with mlock and then have relatively deterministic behavior once you're actually accessing the memory.

The implementations of malloc and calloc do the same thing: they're going to use brk or mmap depending on how big the allocation is, and that happens behind the scenes. And if you're astute, you can modify that behavior with mallopt(): you can set the threshold
parameter that says where one kicks in or the other; that's often used in system tuning. And then finally, the stack. If a process goes beyond its stack, the CPU is also going to trigger a page fault. One of the special things the page fault handler does in this case is detect that you faulted on an address just beyond the stack — it knows where that's at — and then it can allocate a new page. It allocates another PFN, goes into the page tables, maps it, and drops it in the TLB. And remember, that PFN could be anywhere; it's not physically contiguous, it's just virtually contiguous. So that gets folded in, execution continues, and the process is able to drop stuff on that new segment of the stack. You can see how that works in do_page_fault — that's the ARM version.

So, a quick summary. Like I said, this is an introduction, so if you're already a kernel expert you probably knew all that. We went through physical memory and looked at a stock, familiar x86 memory map. We talked about virtual memory and its three types — kernel logical, kernel virtual, user virtual — which ones are contiguous or not, and that we use kernel logical addresses for DMA. We went through user space addressing, how processes will not have contiguous physical memory, how swapping and page faults work to do lazy allocation and so forth — like I say, we covered swapping — and then how those user space APIs map onto all of that. So that's it for the intro; I've got one minute for questions. Yes, way in the back.

Okay, I'll only address the first part of the question. The question was: if the kernel always has the mappings — and you're talking about that kernel logical mapping — why do we have to wait for this expensive mapping to user space? To answer that, and hopefully I'm answering the right question: if we just used those kernel logical mappings, it would be just like that single-address-space system without an MMU. And I can tell you that there's
systems from the 90s that had MMUs and ran RTOSes like VxWorks; they would map a flat address space with the MMU — they had to add the MMU on for performance reasons — but without your own process address space, you would have to link everything into its own address range. Kernel logical addresses are nice and linear and easy to think about, but you have to do these remappings for user space to have that nice world we enjoy: a protected per-process address space, where you just write a program, link it, and it'll run in any context. If we had only the one mapping of the kernel logical space — just that single address space — you'd have to link your program at zero and one bazillion to two bazillion, and manage the programs not stomping on each other as you allocated memory. I hope that answers the first part of it. I'm out of time, but we can talk about the second one afterwards. Sorry — 95 slides.
Info
Channel: The Linux Foundation
Views: 92,230
Rating: 4.8971725 out of 5
Id: 7aONIVSXiJ8
Length: 51min 19sec (3079 seconds)
Published: Tue Apr 04 2017