Linux And Windows Kernel Comparison

Captions
Let me tell you a little background on myself and why I think I'm somewhat qualified to stand up here and talk about both the Windows kernel and the Linux kernel. I have a background in computer engineering: an undergraduate degree, a master's degree, and a PhD in computer engineering. My PhD is from Carnegie Mellon University in the United States, where during my doctoral work I was working on a flavor of Unix, BSD, running on top of the Mach microkernel operating system, so my background through research and through school was on Unix. I came out of school and started to work on the Windows platform. Several years after I graduated I went to IBM Research in the United States, where I was hired specifically for my knowledge of Windows internals, but while I was there I started to look at what was going on on the Linux side of things, becoming familiar with how the Linux kernel architecture is composed and what its capabilities are, and I became something of a Linux resource within IBM Research as well. In fact, several years ago there was an underground effort within IBM to port Linux to the S/390 mainframe, a project that started in Germany. IBM at that point had to decide whether they wanted to commercialize this thing, what they wanted to do with this port of Linux, and so they formed a committee to advise Lou Gerstner, the chairman of IBM at the time, on what to do with this version of Linux. I was the technical Linux expert on that committee, which was composed of about eight or nine people, including several people from marketing and other business groups within IBM. I also served as the Linux technical resource for the AIX division of IBM, AIX being IBM's version of Unix, and they of course were interested in what was going on with Linux. I'm no longer at IBM, but that gives you some background on how I started looking at Linux; I've been following the developments of Linux for a long time.

The session I'm going to present to you today is one I've asked for input on from all the major players it relates to. For example, David Cutler, who as I'll tell you is the chief architect of Windows NT and still works at Microsoft on the Windows kernel, has reviewed the slides; Linus Torvalds has reviewed the slides; and I've had several other developers from the Windows team and from the Linux team review the slides and provide feedback, all of which I've incorporated. At the end of the session I'm going to show you an email I got from Linus as we were discussing the session, where he points out the problems he has with what I'm going to tell you, and I'll address what he says then.

Now, this is the outline of the talk, and this is a very technical talk: as well as a comparison of the two operating systems, I'm going to give you a heavy dose of Windows internals and Linux internals. First I'll talk about the evolution of the operating systems as they progressed through time: when they were developed, where you can trace their roots back to, and where we are now with those operating systems. Then I'm going to take a very high-level look at the two architectures, and after that high-level look, dive in a little deeper and start to compare some of the kernel-mode subsystems: how scheduling works on Linux versus on Windows and what's different, and how the memory managers are different or the same between the two OSes. In the section following that one, I'm going to talk about a number of features, enhancements, and capabilities that have been added to Linux over time that actually make it look a lot more like Windows, in the sense that Windows has had those same capabilities for a long time and Linux is progressively getting more and more of them. The two operating systems started out looking kind of the same, which might surprise you, as you'll see, and have actually evolved to look more and more alike from a kernel perspective. Then I'll talk about where we are in terms of performance of the two kernels. Kernels, of course, are designed to manage the resources of the computer as efficiently as possible so that applications can perform as efficiently as possible, so to say what's a good kernel versus not a good kernel, you've got to talk about how applications can utilize the kernel services to do what they want to do, and one way people measure that is through benchmarks. So I'm going to present the current state of the world in terms of benchmarks of Linux versus Windows, and then I'll spend a few minutes just pontificating, if you'll indulge me, about where I think Linux is going, where I think the Windows kernel is going, and what the future holds.

Before I even get into the history, let's narrow the scope. In an hour and fifteen minutes I have a limited amount of time to cover even the material I'm covering, and I certainly can't cover all the things you might be interested in hearing about with respect to Linux versus Windows, including total cost of ownership, how supportable the two OSes are, what kinds of applications are available for them, how manageable they are, and their use as desktop systems. Those are all outside the scope of this talk; I'm focusing just on kernel-level technology, and I'll talk about where that fits into the big picture at the end.

If you look at Linux and try to figure out where it came from, you of course have to look back at the Unix operating system it's somewhat derived from. In that sense the real history of Linux starts in 1969, when Ken Thompson developed the first version of Unix at Bell Laboratories, part of AT&T. A few years later Dennis Ritchie, who designed the C programming language (he's the father of C), joined the project, and they debuted Unix to the academic community in a joint paper at a conference in 1974. A couple of years later Bell Labs released the first commercial version, which was called Unix Version 6. Bell Labs was pretty liberal about letting universities and companies license the Unix source code and use it within their organizations, and so it began to spread pretty fast. In 1978 they released the Unix Time-Sharing System, a version with the source configured so that porting would be easy for somebody who said, "hey, I want to take Unix and run it on a different kind of processor," and that enhanced the proliferation as well. Because the source code proliferated and people could license it, three major branches of the Unix tree developed as companies started to add their own features and capabilities. Those three branches were UNIX System III from Bell Labs' own Unix Support Group; UNIX BSD from the University of California at Berkeley; and finally, believe it or not, Microsoft's own version of Unix, which was called Xenix. If you're surprised that Microsoft had its own version of Unix, you might be even more surprised to know that if you look at the early 1980s and ask who was the biggest Unix vendor in terms of units deployed around the world, that vendor was Microsoft.

Now, what happened to Microsoft Xenix? I think it's worth a little sidebar because it brings us into current events. Microsoft Xenix was sold off to SCO, the Santa Cruz Operation, which essentially got its start with Microsoft Xenix, and because Microsoft gave them Xenix, Microsoft took partial ownership of SCO, which it eventually sold off in the early 1990s. And now, if you've been following what's been going on in the US, SCO is still challenging the legitimacy of the copyrights on various pieces of the Linux source code, and it turns out Microsoft is indirectly funding SCO's operation in challenging Linux. I kind of look at that as a father looking after a child that's gone off, trying to help them out after they've left the house.

The three branches continued to fragment even further; those were the three major branches, but there were lots of offshoots. A couple of efforts were started as the industry began to recognize that this diversity was going to cause a problem: vendors put special features into their versions of Unix, and then you had applications that only worked on that particular version of Unix. If a Unix user wanted to switch to a different version, they'd be stuck, because their applications were coded to use those special features. So two efforts were started to define what the definition of Unix would be, in the hope that application vendors would program to that definition, all the major Unix variants would support it, and you'd get cross-platform portability. Those two efforts were IEEE POSIX, which released a standard API interface for Unix, and the X/Open group's Portability Guide, which also defined the services a Unix operating system should make available to applications. But even though those portability guides were released, you still saw a lot of fragmentation in Unix, and you still see it today.

Let's pick up now where Linus comes into play. In 1991 Linus took a college computer science course that used a toy operating system called MINIX, which is modeled after Unix and is still available. It's a very stripped-down, bare-bones version of Unix that's simple enough for a university student to understand from top to bottom in one semester, and it fits on a floppy disk, it's that small. After taking the course, Linus was thrilled with having access to the source code for this thing, and he started thinking about how he could make it better, make it portable, make it run on a whole bunch of different platforms. He approached Andrew Tanenbaum, the author of the tool and of the textbook Linus was working from. (Andrew Tanenbaum, by the way, lives here in Amsterdam; he was invited to this class, but unfortunately he's on a trip in the US right now.) In any case, Linus contacted Andrew and said, "Andrew, I think your MINIX operating system is really pretty cool; how about I take it and make it better?" And Andrew flat out said no: "I'm not going to allow you to do that, because I want to keep this thing as simple as possible so that university students can understand it. If you start adding to it, it won't be teachable in a one-semester class." So Linus, discouraged, went off on his own and started to make his own version of a Unix-like operating system. In October 1991 he announced Linux version 0.02, and in March 1994 he released version 1.0, the first big version.

Let's turn our attention now to Windows NT and talk about its history. If you look at the history of Windows NT, you've got to go back even further than the original release, back to Digital. In the mid-1970s three people architected an operating system for Digital called VMS, and those three people were Dick Hustvedt, Peter Lipman, and David Cutler. The VMS operating system was targeted at Digital's 32-bit VAX processor, and Digital shipped the first version in 1978, right around the time of that Unix Time-Sharing System release. David Cutler moved to Seattle in the late 1980s and opened a development facility there called DECwest, which was going to work on a new operating system called Mica for a new CPU code-named PRISM. Part of the work that came out of PRISM actually ended up in the Alpha processor, but the project was cancelled after a couple of years. So David Cutler, somewhat discouraged, was looking around at other opportunities, and that's where Microsoft comes into play. Bill Gates, seeing that Dave was probably disenchanted with Digital, and with DECwest conveniently just down the street, invited him to a meeting and hired Cutler and twenty engineers from the DECwest facility. In fact, some of those were hardware engineers, and that's where Microsoft's hardware division actually sprang from: Microsoft had these hardware people from Digital and needed to find something for them to do. The new project Bill assigned Dave to work on was called NT OS/2, "new technology OS/2." It was new-technology OS/2 because Microsoft was good partners then with IBM and wanted to make a robust version of the OS/2 operating system that they would own. Well, in 1990 Microsoft released Windows 3.0, the first really successful version of the 16-bit Windows operating system; it was hugely popular as soon as it was released, and some at Microsoft, Bill Gates specifically, said, "hey, why don't we forget this partnership with IBM and go do this thing ourselves; let's focus on Windows." So the project was renamed Windows NT, and the primary operating system interface it would support became a new version of the Windows API, the one supported by Windows 3.1. Microsoft released Windows NT 3.1 (the version number was made to coincide with the then-current version of the 16-bit Windows operating system, which was 3.1) in August 1993.

So if you look at a timeline of the evolution of these two operating systems, starting with their predecessors, you see that they both had their roots back in the mid-1970s, they both had their birth in the early 1990s, and they've been evolving pretty quickly since then. My point in showing you this is that the two operating systems, as I'm going to show you in this presentation, are very similar from a kernel perspective, and I don't think that's an accident: it's because of their common evolution in time. As engineers work on problems, they look around, see what's going on elsewhere, take the good ideas, and put them into what they're doing, and so the fact that these evolved at the same time means you're going to see a lot of the same similarities between the two.

By the way, for those of you who haven't seen a picture of David Cutler: this is myself and David Solomon at David Cutler's house a few weeks ago; he's in the meditative colors in the middle. And for those of you who haven't seen a picture of Linus, this is Linus. I got dinged for this at TechEd US; people didn't like me showing it. But Linus did this at a dunking booth for charity at LinuxWorld in Australia earlier this year, and somebody posted this picture of him on the net, so I just thought it would be humorous. Now, when we were at David Cutler's house I told him I was presenting this at TechEd, and I said, "David, you know, it would only be fair if I show a picture of you in briefs at the conference; can I take a picture of you like that?" You can imagine his answer, so I don't have a picture of David Cutler like that to show you, but I would if I could.

So let's take a look at the architectures now. The broadest way you can categorize an operating system is by how it divides up its work, and if you look at Windows and Linux from that point of view, they're both considered monolithic operating systems. That means all of the core operating system services run in the same address space. The easiest way to understand what I mean by that is to consider the total opposite of a monolithic operating system, which is a microkernel operating system; Andrew Tanenbaum's MINIX is an example of a microkernel-based operating system. In that kind of operating system, the kernel itself is very tiny and provides a very core set of services, and the other subsystems you commonly think of as being part of the kernel, like the memory manager, the process manager, and scheduling, run in their own address spaces; they look a lot like user-mode programs. The idea behind a microkernel is that if one of those components fails, you can restart it, and they've got nice interfaces, so it's easy to replace one of these modules with a different version that has a different implementation. The problem with a microkernel-based operating system is performance: when the components want to communicate with each other, they've got to transmit messages, because they're in different address spaces and can't read each other's memory, and that transmission of messages causes a performance hit. So most commercial operating systems are monolithic, including Windows and Linux. If you've heard of OS X, Apple's operating system, and know that it's based on the Mach operating system from Carnegie Mellon: Mach is a microkernel-based operating system, but OS X takes the Mach kernel and turns it into a monolithic operating system, so OS X really isn't the example of a microkernel-based operating system you might think.

Now, at the highest level, when you compare the two operating systems, the only real difference you see is windowing, the way windowing is handled. Windows has a kernel-mode windowing subsystem, and Linux has a user-mode X windowing system. This is the high-level Windows architecture:
you've got the core set of drivers and services executing in their shared address space in the kernel, and you've got an application up here in user mode. When it wants to perform a graphics operation, like drawing to the screen or creating a window or a menu, it makes a call into the kernel, into the Win32 windowing subsystem. If you look at Linux, the architecture looks like this: you've got the application up here, and when it makes a windowing call, it sends a message to the X Window server process, which looks like any other user-mode process. Now, what are the advantages and disadvantages of these two approaches? With the Windows approach you don't have a message; you have a simple transition from user mode to kernel mode. Here you've got messages being transmitted, which can degrade performance, and there are a lot of tricks the X Window System uses to try to minimize that. An advantage of the X Windows approach is that it makes it very easy to remote an application, something you can't do on Windows: you can't run Word remotely from another machine and have just its window appear on a different machine. Terminal Server, where you get a whole desktop, is the only remote capability Windows has, whereas with X Windows you can remote just the windows of individual apps to a client, so that's an advantage there.
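To make that user-mode-server model concrete, here is a minimal X11 client using Xlib; this is my own illustrative sketch, not something from the talk. Every call below is marshaled into a request message sent over a socket to the X server process, in contrast to a Win32 windowing call, which traps directly into the kernel-mode subsystem.

```c
/* Minimal X11 client: each Xlib call becomes a request message
 * sent to the user-mode X server process. Build with -lX11. */
#include <X11/Xlib.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);   /* connect to the X server (a socket) */
    if (!dpy) {
        fprintf(stderr, "cannot connect to X server\n");
        return EXIT_FAILURE;
    }

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 200, 100, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));

    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);                /* another request message: "show it" */

    /* Events come back from the server as messages, too. */
    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == KeyPress)
            break;
    }

    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}
```

Because the display is just a connection, pointing it at another machine (for example via the DISPLAY environment variable) is all it takes to remote a single application's window, which is the advantage mentioned above.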
When we compare the architectures, I've broken the comparison down into various subsections: processes and scheduling, symmetric multiprocessing support, memory management, the I/O system, file caching, and security.

In process management, the definition of a process in Windows is a container, and that container has a few things associated with it, including an address space that holds the code and data of the program being executed in that process. It also has a list of resources opened by that process (resources can include files and registry keys), and it has at least one thread. A thread is what actually executes the code, not the process; the process has to have a thread within it to execute. The scheduler, when it decides who's going to run, will pick a thread to run, and that thread belongs to a process, so you can think of the process as running when it's actually a thread within that process. There's no inherent parent-child relationship between processes and the child processes they create in Windows. Threads are the basic scheduling unit: the scheduler looks at the priorities of threads to decide what to run next on the CPU. Windows also has what are called user-mode or lightweight threads, which are called fibers. Fibers are implemented without any support from the operating system itself; they live entirely in user space, where an application basically becomes its own scheduler.

If you look at process management in Linux (on this slide, the orange text is what's different from Windows: everything white is the same, and the things in orange are different), things get a little confusing because of a design decision Linus took a few years ago. A process in Linux is known as a task, and a task looks like a Windows process: it has an address space, it has a table of open resources, and it has statistics. However, there is a parent-child relationship between tasks: when one task creates another, the child task isn't going to terminate until somebody asks for its exit status, and by default that's supposed to be the parent task. If a parent doesn't ask, Linux has a background daemon task that goes and does it for the parent. A task is also the basic scheduling unit, though, so a task is the equivalent of a Windows process, and a task is also the equivalent of a Windows thread; it depends on how the task was created. Let me explain. In Windows you've got a process, and that process has multiple threads; if it's a multi-threaded process, those threads share the address space and the resources of the process they belong to. In Linux, you can have a task that's composed of multiple tasks that all share the same address space, handle table, and so on; but if a task creates another task and says it doesn't want to share anything with that other task, the new task ends up looking like a new Windows process with one thread in it. The system call used to create a new task is called clone, and the clone call tells Linux which aspects of the parent task the child task is going to share. So basically, a task creating another task that shares nothing is the equivalent of creating a new process, and a task creating another task that shares everything is the Windows equivalent of creating a new thread. There are no threads per se, even though I'm going to keep referring to threads as we go through this talk, just for clarity. Linux also has pthreads, user-mode POSIX threads that were added relatively recently and are POSIX compliant, something the Linux community is very happy about.
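Here is a small sketch of that clone semantics; the particular flag set and stack size below are my own choices for illustration, showing a typical "share everything" thread-style creation.

```c
/* Sketch: creating a "thread-like" task with clone(2).
 * With CLONE_VM | CLONE_FILES | CLONE_FS | CLONE_SIGHAND the child
 * shares the parent's address space and open-file table (like a
 * Windows thread); drop those flags and the child looks like a
 * separate process instead (like fork). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_counter = 0;      /* visible to the child because of CLONE_VM */

static int child_fn(void *arg)
{
    shared_counter++;               /* updates the parent's memory under CLONE_VM */
    printf("child task: counter = %d\n", shared_counter);
    return 0;
}

int main(void)
{
    const size_t STACK_SIZE = 64 * 1024;
    char *stack = malloc(STACK_SIZE);
    if (!stack) return EXIT_FAILURE;

    /* The stack grows down on x86, so pass the top of the buffer. */
    pid_t pid = clone(child_fn, stack + STACK_SIZE,
                      CLONE_VM | CLONE_FILES | CLONE_FS | CLONE_SIGHAND | SIGCHLD,
                      NULL);
    if (pid == -1) { perror("clone"); return EXIT_FAILURE; }

    waitpid(pid, NULL, 0);          /* collect the exit status, as a parent should */
    printf("parent task: counter = %d\n", shared_counter);
    free(stack);
    return EXIT_SUCCESS;
}
```

A task created with this full sharing set is what the talk calls the Windows equivalent of creating a new thread; the same call with just SIGCHLD and no sharing flags behaves like creating a new process.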
So let's talk about scheduling now: how does the system decide what to run on the CPU? I mentioned it has to do with priorities, and there are two scheduling classes on Windows. If you look at the priority spectrum, the top half, priority numbers 16 to 31, is called the real-time or fixed priority range, and the bottom half, priority numbers 1 to 15, is called the dynamic priority class. The reason the bottom part is called dynamic and the top part fixed, or real-time, is that the scheduler and the system can change the priorities of threads executing in the dynamic priority class. The scheduler, for instance, will give a thread a boost when it wakes up after waiting for an event: it's been sleeping for a while, so the boost tries to be fair to that thread, which is competing for the CPU with other threads that have just been sitting there hogging it. If you give the thread that was sleeping a boost, it has a better chance of running, because its priority will likely be higher than those other threads that were hogging the CPU. In the real-time range, of course, the scheduler is not going to mess with the priorities of the threads running there; they're left at whatever they were set to. Higher priorities are favored in Windows, as you've gathered, with 31 the highest, and thread priorities are never lowered below the level they started at: if a thread starts out at priority 8, which is the default priority for a thread in Windows, and gets a boost up to 10, it will eventually decay back down to 8, but it will never be lowered below 8.

Linux has three scheduling classes instead of two: the normal scheduling class, which is priority numbers 100 to 139, and two fixed classes, one called fixed round-robin and one called fixed FIFO, which internally are priorities 0 to 99. Normal is kind of like the Windows dynamic range, in that Linux threads executing in that range can have their priorities modified. What's different is that the lower priority number is favored: the highest priority on a Linux system is actually zero, and in the normal class 100 is the highest priority. And thread priorities can go up and down: unlike Windows, where they can only go up from their base, in Linux they can go below their starting priority if they start out in the normal class. The way Linux does this is that as a thread uses more and more CPU, its priority is lowered (the number goes up), so it has less chance of being scheduled, whereas threads that sleep for a long time and don't consume the CPU have their priorities raised. So it accomplishes the same goal as Windows, trying to give threads that aren't consuming CPU a better chance at the CPU, by modifying their priorities.

Now, a question you've probably got right now, and will have through the rest of this presentation, is which of these approaches is better. That's a very difficult question, and I can't actually answer it, because it depends on the workload and a whole bunch of other things. What I can say, just looking at these two implementations, is that the Windows implementation is a lot more rigid, and the Linux implementation is a lot more fluid, with priorities that can be modified up and down and that track more closely how a task uses the CPU.
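As an aside, these three Linux classes are exposed to programs through the POSIX scheduling API; this sketch (my illustration, not from the talk) places the calling process in each class in turn. Note that the user-visible API numbers the fixed-class priorities 1 to 99, while normal-class tasks sit at static priority 0 and are adjusted with nice values; the 0-to-139 numbers above are the kernel's internal view.

```c
/* Sketch: selecting the Linux scheduling class of the calling process. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { 0 };

    /* Fixed FIFO class: runs until it blocks or a higher-priority task arrives. */
    sp.sched_priority = 50;                      /* 1..99 in the fixed classes */
    if (sched_setscheduler(0, SCHED_FIFO, &sp))  /* needs root / CAP_SYS_NICE */
        perror("SCHED_FIFO");

    /* Fixed round-robin class: like FIFO, but with a time slice. */
    if (sched_setscheduler(0, SCHED_RR, &sp))
        perror("SCHED_RR");

    /* Back to the normal (dynamic) class; static priority must be 0 here,
     * and nice(2) moves the task within the normal range. */
    sp.sched_priority = 0;
    if (sched_setscheduler(0, SCHED_OTHER, &sp))
        perror("SCHED_OTHER");

    printf("current policy: %d\n", sched_getscheduler(0));
    return 0;
}
```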
When the scheduler decides to schedule a thread on a CPU, it gives that thread a turn on the CPU, and that turn is called a quantum or time slice. When the quantum expires, if the thread hasn't given up the CPU voluntarily, the scheduler will look and ask: is there another thread of the same priority I should run on this CPU instead of you? If a thread of higher priority becomes able to run, the scheduler will go run that thread right away; but if there's no higher-priority thread, the running thread gets to finish its quantum. By default these quanta, or turns, on a Windows system are anywhere between 10 and 120 milliseconds; there are several preset values in that range. The reason I can't tell you exactly which value Windows uses is that it depends on a number of factors: first, whether you're on a workstation system or a server system, and second, how you've configured the system to tune itself. On a server system, by default, all threads have 120-millisecond time slices. On a workstation system there are usually two quantum lengths involved: foreground threads, by which I mean the threads of the window that's visible to you on the desktop, have longer quanta than threads in the background. The foreground threads might have a 120- or 60-millisecond quantum, and the threads in the background might have shorter, 10- or 20-millisecond quanta.

The scheduler itself is re-entrant and preemptable. Re-entrant means that on a multiprocessor system the scheduler can be running on multiple CPUs at once, making scheduling decisions for the operating system. And the kernel is preemptable as well, which means that if a thread is executing in kernel mode, the scheduler can stop that thread, freeze it, and switch execution to another thread, which might even be a user-mode thread belonging to an application like Notepad; then that thread might come into kernel mode and the same thing might happen to it. So the scheduler doesn't care where a thread is executing when it makes a scheduling decision.

If you look at Linux time slices, you see something very similar: the quantum can vary between 10 and 200 milliseconds, and the default is 100 milliseconds, so that's pretty close to what you see on Windows. The difference is that the quantum of even a given thread can vary across that entire range of 10 to 200, because in Linux the quantum of a thread is tied to its priority: the higher the priority, meaning the more likely it is to be scheduled, the smaller the quantum. Since the higher-priority threads are likely to be running more frequently, Linux wants to give the low-priority threads, when they finally get a chance to execute, a long time to run. So if your priority goes up, your quantum gets smaller, and if your priority goes down, your quantum gets longer. Again, comparing the two: Windows is the rigid system with just a few options, and Linux is a lot more fluid, trying to take into account how threads are using the CPU.
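You can actually observe the time slice Linux has assigned a task from user space; this little sketch (mine, not from the talk) queries the round-robin interval for the calling process.

```c
/* Sketch: querying the round-robin time slice Linux has assigned us. */
#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    /* Pid 0 means "the calling process". */
    if (sched_rr_get_interval(0, &ts) == 0)
        printf("time slice: %ld ms\n",
               ts.tv_sec * 1000L + ts.tv_nsec / 1000000L);
    else
        perror("sched_rr_get_interval");
    return 0;
}
```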
In the multiprocessor support area, Windows of course is a multiprocessing OS: it supports up to 32 processors on 32-bit systems and up to 64 on 64-bit systems. Symmetric multiprocessing means that all the processors can access all of physical memory, as opposed to processors having their own private physical memory that isn't accessible from another CPU; all processors can also take interrupts from hardware devices. Windows also supports non-uniform memory access (NUMA) systems, which are a variant on a symmetric multiprocessor. In a NUMA system, an individual processor, or a group of processors, has its own local physical memory; other processors can still access physical memory that belongs to a different group, but it takes longer to access remote memory than local memory. So it's still SMP, there are just differences in memory access time depending on where you're coming from. The scheduler has to be aware of NUMA systems, because what you don't want is a thread executing on a particular node of the NUMA system, loading up that node's local physical memory with its own data, and then having the scheduler move it to a different node: now, when that thread wants to access its data, it has to reach across the system, which is something you want to avoid. So the scheduler tries to keep threads on the nodes they're running on. The memory manager is also aware of NUMA systems, and when a thread allocates memory, it tries to give it physical memory from the node it's running on when possible.

Windows also supports hyperthreading, also called SMT processing, where the processor is actually a single physical processor but appears to the system as two processors; Intel Pentium 4 Xeon systems come in this flavor. The scheduler has to be aware of this kind of configuration, because if you have two physical processors that look like four logical processors to the operating system, you can have a scenario where a thread is busy on one of the physical processors, say on logical processor zero, while the second physical processor is totally idle, and so is the second logical processor of that first physical processor. When the scheduler has another thread to execute, it has a choice of which of the three idle logical CPUs to schedule it on, and what you don't want it to do is schedule it on the second logical processor of the physical one that's already busy, because then that thread is going to compete with the thread already running there. You want the scheduler to put it on the second physical processor, which is totally idle. So the scheduler has some smarts about hyperthreading built into it.

Linux also supports SMP. Whereas Windows has these upper limits built in on the number of CPUs it can support (and by support I just mean run on, not necessarily use effectively), Linux has no upper limit: it's a compile-time option, so you can download the Linux source code and compile it for however many processors you want. There's actually a build configuration in the Itanium tree of the 2.6 kernel sources, the most recent kernel, for a particular SGI machine, that's predefined to run on a 512-CPU system, even though no such system really exists yet. All CPUs can take interrupts, and Linux supports non-uniform memory access. Where it differs from Windows' NUMA support is that the memory manager hasn't been made NUMA-aware; the scheduler has, but the memory manager doesn't try to allocate memory from the local node. Linux also supports hyperthreading, just like Windows does.
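The NUMA and hyperthreading smarts above are mostly automatic, but Linux also lets an application make these placement decisions itself; here's a small sketch (my illustration) using the affinity API to pin the calling thread to one logical CPU.

```c
/* Sketch: pinning the calling thread to one logical CPU, the kind of
 * placement decision the scheduler otherwise makes for NUMA and
 * hyperthreaded systems. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                   /* allow only logical CPU 0 */

    /* Pid 0 means the calling thread. */
    if (sched_setaffinity(0, sizeof(set), &set)) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");        /* scheduler will not migrate us away */
    return 0;
}
```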
If you look at virtual memory management, 32-bit Windows splits the 32-bit address space into a portion that's going to be used by the operating system and a portion that's going to be used by user-mode processes, the applications that run, like Notepad and Word. That split by default is right down the middle: two gigabytes of the four-gigabyte address space are usable by applications, and two gigabytes are used by the system, the operating system and drivers; there's a picture down here that shows that default split. So what you've got are different user-mode address spaces: whenever a thread from a different process runs, the Windows memory manager swaps whatever is mapped into those two gigabytes with the address space of the process being scheduled, like Notepad. Windows supports all the features you'd expect out of a modern memory-management subsystem, and more. It supports 32-bit and 64-bit (there's a 64-bit version of Windows). It supports copy-on-write, so applications can share memory, and when one of them modifies it, the modifications become private to that application. It supports shared memory, so applications can share data easily, and it supports memory-mapped files, which are used for image execution, for example: running a program maps the executable into the address space.

On the Linux side, the 32-bit version also splits the address space between the system and user-mode processes. The default split, though, instead of being two gigabytes and two gigabytes as in Windows, is one gigabyte for the kernel and three gigabytes for user mode, and you can actually move that line anywhere between those two points, giving the operating system anywhere from one to three gigabytes, with the remainder going to user-mode processes. By the way, I failed to mention that on Server 2003 you can also move that line anywhere between two and three gigabytes on Windows, so that's very similar usage of those address spaces. Where things are different is that there's a version of Linux, Red Hat Enterprise Linux version 3, that supports a different split, called the 4/4 option or 4/4 split, where user-mode processes each get an entire four-gigabyte address space and the system also gets an entire four-gigabyte address space. So they've got a lot more virtual memory to play with, which is good for memory-intensive applications like database servers or web servers. The disadvantage of that approach is that any time a user-mode process wants to call a service in the kernel, the memory manager has to switch address spaces, which is an expensive operation. So the 4/4 split is a performance advantage only as long as the application doesn't make many trips into the kernel; otherwise you're going to pay a heavy price for it. Linus is resistant to putting that into the mainstream kernel, by the way; it's a Red Hat-specific enhancement. The Linux memory manager also supports all the things you'd expect, just like the Windows memory manager: 32-bit and 64-bit, copy-on-write, shared memory, and memory-mapped files. So that's the way Windows and Linux manage the virtual memory of the processes running on the machine.
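Two of the features just listed, memory-mapped files and copy-on-write, combine in a single call on Linux. This sketch (my illustration; the file path is just an example) maps a file privately, so its pages are shared with the page cache until written, at which point the process gets its own copy.

```c
/* Sketch: a private (copy-on-write) file mapping. Pages are shared
 * until we write; the first write faults in a private copy, and the
 * file on disk is never modified. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);    /* any readable file works */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE, fd, 0);          /* MAP_PRIVATE => copy-on-write */
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("before: %.*s", (int)st.st_size, p);
    p[0] = 'X';                                  /* triggers the copy-on-write fault */
    printf("after:  %.*s", (int)st.st_size, p);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```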
Now let's talk about how they manage physical memory. Physical memory management means you've got a fixed amount of physical memory, and the question is which processes you let use it. Your goal is to give physical memory to the processes that need it most, the ones that are most memory hungry and actively using their memory, and to take memory away from processes that aren't using their physical memory or are idle, giving it to the ones that need it. What you do with the pages you've taken away is send the data stored there off to the paging file, and bring it back when the application wants to access it again. One of the policies a memory-manager designer has to decide on is this: when a process needs another page of memory and all the physical memory is in use, where should that physical page come from; who should it be stolen from? Windows has what's called a per-process working set. A working set is the list of physical memory assigned to a process, and the page-replacement algorithm, the algorithm the memory manager uses to decide whom to steal a page of physical memory from to give to the process needing another page, is called local page replacement. Here's a picture of local page replacement: this process here wants another page of physical memory, and when the memory manager decides the process has gotten big enough and there's no more free memory on the system, it looks at the memory that process has accessed, organized from most recently to least recently accessed, and steals the least recently accessed page, the one the process touched most distantly in the past, figuring it's not going to be needed again in the near future, and gives it back to the process to use for whatever it wants the new memory for. The other processes also have their working sets sorted that way on their own lists, but Windows will steal from the process that wants the new memory. That's in contrast to Linux memory management, which uses a global working set algorithm: it also sorts the pages of physical memory from most recently to least recently accessed, but across the whole system rather than per process.
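To pin down that distinction, here's a toy sketch, entirely my own and far simpler than either kernel, of the two victim-selection policies over a set of pages stamped with last-access times.

```c
/* Toy model of page replacement: each page records its owner process
 * and the logical "time" it was last accessed. Local replacement steals
 * the least recently used page of the *requesting* process; global
 * replacement steals the least recently used page system-wide. */
#include <stdio.h>

struct page {
    int owner;            /* process id that holds this page */
    int last_access;      /* logical timestamp of last access */
};

static struct page memory[] = {
    {1, 10}, {1, 50}, {2, 5}, {2, 90}, {3, 30},
};
#define NPAGES ((int)(sizeof(memory) / sizeof(memory[0])))

/* Windows-style: the victim comes from the faulting process's own working set. */
static int steal_local(int requester)
{
    int victim = -1;
    for (int i = 0; i < NPAGES; i++)
        if (memory[i].owner == requester &&
            (victim < 0 || memory[i].last_access < memory[victim].last_access))
            victim = i;
    return victim;
}

/* Linux-style: the victim is the coldest page regardless of owner. */
static int steal_global(void)
{
    int victim = 0;
    for (int i = 1; i < NPAGES; i++)
        if (memory[i].last_access < memory[victim].last_access)
            victim = i;
    return victim;
}

int main(void)
{
    printf("local victim for process 1: page %d\n", steal_local(1)); /* its own coldest */
    printf("global victim: page %d\n", steal_global());              /* coldest overall */
    return 0;
}
```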
Info
Channel: Joe Nice
Views: 86,738
Rating: 4.7554917 out of 5
Keywords: Mark, Russinovich, Linux, And, Windows, Kernel, Comparison, i/o, device, inode, hybrid, linus torvalds, memory, timesharing, process management, memory management
Id: Nz-vWYM-2Gw
Length: 38min 54sec (2334 seconds)
Published: Sun Jun 21 2015