Kernel Recipes 2019 - ftrace: Where modifying a running kernel all started

Video Statistics and Information

Captions
Welcome. We already had some of the introduction here, so let me get going. I actually gave this talk back in 2014 at Linux Plumbers in Düsseldorf, and that talk is where Brendan Gregg and I first met. He said it was the most technical talk he had ever seen. When I saw him recently, at LCA (or I think it was actually SCALE), he said, "You need to give this talk again, but this time get it recorded," because there were problems with the recording back in 2014 and it was lost. So I figured I'd do it at Kernel Recipes, because I know it's going to be recorded. Recycling talks is great: the work to put the talk together is already done, right? So on my way here, on my layover in Detroit, I pulled up the old deck, planning to just add the VMware slides, since I work for VMware now. They're the ones who sponsored me to come here, and they're sponsoring this event. But as I went through the slides I kept thinking, "Huh, we don't do it this way anymore." A lot changed in five years. I ended up rewriting the entire slide deck, and the one slide I did cut and paste turned out to be wrong, which I only noticed while reviewing it. Anyway, let's start. This is about modifying the Linux kernel while it's running, and where that all started.

When I say "ftrace", a lot of people think of the whole tracing utility, the ring buffer and all of that. That's not really what ftrace means. ftrace is the function tracer, or more accurately the function hook: it's what hooks into the kernel's functions. perf uses it, the function graph tracer uses it, the stack tracer, kprobes, SystemTap, pstore, and hopefully BPF will soon be using it as well. So when I say ftrace, I mean the hook into the kernel functions. People sometimes say ftrace to mean the whole tracing system, but that's not the same thing.

To activate ftrace you just go into /sys/kernel/tracing, which is the tracefs directory, echo "function" into current_tracer, cat trace, and you see all the functions being hit on your kernel, live. Pretty cool, and something you can do at home. The function graph tracer is built on top of the function tracer; it also records the return address, so you see when a function enters and when it exits, and you can watch something that looks like C code executing on your laptop. (Don't worry about this slide, it isn't part of the talk; it's something we'll get to later.)

One of the unique things about ftrace is that it keeps a table of every function it can trace, which I'll come back to, and you can say "I only want to trace these few functions." Not only was ftrace the first thing to really modify the running kernel live, it was also the first to put glob expressions inside the kernel. That has been extended since then, and there is a lot more glob matching in the kernel today, but ftrace is what started it all. So you can echo '*sched*' into the filter and get every single function that starts with "sched" or has "sched" somewhere in its name.
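To make the tracefs steps above concrete, here is a minimal userspace sketch of my own (not from the talk): it sets a '*sched*' filter, selects the function tracer, and dumps a snapshot of the live trace. It assumes tracefs is mounted at /sys/kernel/tracing and that it is run as root.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void write_file(const char *path, const char *val)
    {
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
            perror(path);
            return;
        }
        if (write(fd, val, strlen(val)) < 0)
            perror(path);
        close(fd);
    }

    int main(void)
    {
        char buf[8192];
        ssize_t n;
        int fd;

        /* trace only functions matching the glob, then enable the tracer */
        write_file("/sys/kernel/tracing/set_ftrace_filter", "*sched*");
        write_file("/sys/kernel/tracing/current_tracer", "function");

        /* read back a snapshot, the same as "cat trace" */
        fd = open("/sys/kernel/tracing/trace", O_RDONLY);
        if (fd >= 0) {
            n = read(fd, buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                fputs(buf, stdout);
            }
            close(fd);
        }

        /* turn tracing back off */
        write_file("/sys/kernel/tracing/current_tracer", "nop");
        return 0;
    }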
So how does this work? GCC has had a profiling option forever: -pg. The latency tracer from the real-time patch, which Ingo Molnar had, used it. It inserts a call to this magic function called mcount just after the stack frame is set up, so on entry the function builds its stack frame and then calls mcount, and it does that in every single non-inlined function. It was made as a way to profile functions; that's its whole purpose. We more or less abused that functionality: let's do more than profile, let's add tracing, let's record it, because from mcount you get not only the function but the parent that called that function. That's really nice.

It requires a trampoline, though. You can't just have it call some C function directly, because the call doesn't set up the registers properly for the C calling convention; you would crash your machine if you did that. So it has to call a trampoline that saves registers and then calls a C function. And mcount requires frame pointers, because it relies on the frame being set up, and frame pointers are overhead and cause issues. So Andi Kleen wrote an extension to GCC and added -mfentry. It's similar: you use -pg and -mfentry together, but instead of putting the call after the frame setup, it puts it at the very beginning of the function, before it does anything, which is nice because you don't need frame pointers to use it. Frame pointers do add overhead, and when we first added ftrace, Red Hat wouldn't put it on their production systems: the required frame pointers slowed things down, and they didn't want frame pointers in their kernels.

So here's a typical function. This is actually cut and paste from recent code, believe it or not; that's really how small schedule() looks. Go into kernel/sched/core.c and find the function schedule, and yes, that's it. But it's all inlined functions, so it's a much bigger function than what you see in the C code. Anyway, this is a typical function call; everybody knows this, very simple.

When I gave this talk in 2014 someone said, "Whoa, that was shocking," so now I have a warning for everyone: the following slides may not be suitable for all audiences, they contain assembly and machine code. I'm sorry I shocked some people. This is what you get when you disassemble that same function: I ran objdump and pasted the output in. It starts off pushing a register, then that load through the per-CPU segment is "current", the current task; it takes an index off of current, tests it, probably checking the preempt count to see whether preemption is enabled. That's all it's doing. Now, when you build with -mfentry you get this: every line is basically exactly the same. By the way, with the mcount scheme the generated code would change completely, but with fentry the code is pretty
much exactly the same. It doesn't modify the function body at all; it just adds a "call __fentry__" at the top.
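If you want to play with the same idea in user space, GCC's -finstrument-functions option gives you a hook very much like mcount/__fentry__: the compiler inserts a call at every function entry and exit and hands you the function address and the call site, the same two pieces of information the kernel hook gets. This toy is my own analogy, not kernel code; build it with "gcc -finstrument-functions demo.c".

    #include <stdio.h>

    /* The hooks themselves must not be instrumented, or they would recurse. */
    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void *fn, void *call_site)
    {
        fprintf(stderr, "enter %p (called from %p)\n", fn, call_site);
    }

    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void *fn, void *call_site)
    {
        fprintf(stderr, "exit  %p\n", fn);
    }

    static int add(int a, int b)
    {
        return a + b;
    }

    int main(void)
    {
        return add(1, 2) == 3 ? 0 : 1;
    }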
When I first did this, the question was: can we get this running on production systems? To run on production systems it has to have basically no overhead, or at least be in the noise, when it's disabled at boot. So the first thing I did (the slides say fentry; when I actually did this it was mcount, but it's the same story) was compile with and without it, and point the call at a function that just returned. Every call site would call a function that immediately returns, the shortest thing possible, and I measured the overhead of everybody calling that no-op function. I ran hackbench on it, and it was a 13% overhead. No distribution would ship a kernel with 13% overhead for a feature that very few people use, so that couldn't work, and Ingo asked, "Do you think we could convert these to nops or something?" I said sure, let me try.

OK, so how do we convert them to nops? There's no way to find them at runtime, so let's see if we can find them at compile time. I created this thing called recordmcount. It was originally a Perl script: it ran objdump on every object file, used Perl to read the output, built an array of the call sites, compiled that into an object, and relinked it into the original object file. When you enabled ftrace in your config, it did this for every single object file in the build, which slowed the build down quite a bit; in fact Thomas asked what the hell was going on. But at the same time I also created "make localmodconfig", which shrank compile times, so nobody noticed that I had extended them. That's something I learned from Thomas: how to troll people. Later recordmcount was rewritten in C, which made it much, much faster. It's an ELF parser that reads the object file, finds the fentry call locations, and creates a section called __mcount_loc. It's no longer mcount we're using, but we kept the section name, because why change it? And it puts all the fentry call sites into that section. By the way, Andi Kleen was still active in GCC and added -mrecord-mcount in GCC 5, which does this for us and removes the need for recordmcount in the kernel build, but I'm going to explain what recordmcount does anyway.

So basically, recordmcount goes through, say, kernel/sched/core.o, finds all the "call __fentry__" sites in it, and creates this __mcount_loc section holding a reference to each of those locations. It's a lot more complex than that, but I don't have time to go into the details. Then it relinks: it puts that section right back into the object file.
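Conceptually, what recordmcount (or GCC's -mrecord-mcount) adds to each object file is nothing more than an array of addresses placed in its own ELF section. Written by hand in C it would look roughly like the sketch below; the real tool patches the ELF file directly, the recorded addresses point at the call instruction inside each function rather than at the function itself, and the function names here are just stand-ins.

    /* One address per "call __fentry__" site found in this object file,
     * collected into the __mcount_loc section so the linker can gather
     * the arrays from every object into one big table. */
    extern void schedule(void);        /* stand-ins for this object's functions */
    extern void schedule_idle(void);

    static unsigned long this_objects_fentry_sites[]
        __attribute__((used, section("__mcount_loc"))) = {
        (unsigned long)schedule,
        (unsigned long)schedule_idle,
    };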
Then we use linker magic, which is very popular in the kernel; the kernel does lots of linker magic, and if you go look at the vmlinux linker script you'll see lots of fun things. What we do is create two variables, __start_mcount_loc and __stop_mcount_loc. You can search all the C files and the header files: you'll only ever see them declared, never defined, and if you don't know about linker magic you can rip your hair out asking where these variables are defined. They're not defined in C; the linker magically creates them and links them into the system. It's done in the linker script, where we say: put __start_mcount_loc here, then the __mcount_loc section, then __stop_mcount_loc. So when the linker links all the object files into the one vmlinux, it creates those two variables around wherever the gathered section ends up, and now I can find that section when the kernel runs. (By the way, the parisc architecture for some reason doesn't use the plain mcount name; it uses something strange. I'm not responsible for that.) So, a little linker magic: you have all your object files, the linker gathers their sections together, creates the two variables we want around them, and once the linker is finally done they are converted into actual addresses.

What does that look like? It's an array. On boot-up we just scan the array, which gives us all the locations we want, and convert them to nops. This is early boot, nothing else is running, and the first thing we do is get our nops in. Andi Kleen, still active, also added -mnop-mcount, which does this for us at compile time: you still get the __mcount_loc section, the call sites are just already nops.

But there's a problem. At boot-up that section is init data: we delete it and it goes away. We don't want to keep it as it is, but we do need the information, and not only do we need to know where these locations are, we also need state for each of them. So we create this thing, the dyn_ftrace structure, which records the address (and this is my cut-and-paste error, by the way: the slide says mcount where it should say fentry) plus the flags. Remember, we're making one of these for every single function in the kernel, and you can have 40,000 functions in a kernel, so you want this structure very, very small. It's two unsigned longs, so sixteen bytes total on 64-bit. So what's this third field? On x86 it's an empty structure, it doesn't really exist; you only have the two words. It's there because PowerPC does things differently for modules and needs the module information for each record that lives in a module. I might be able to rework that and get rid of it, because it's a lot of overhead, another pointer for every single entry; maybe I can restructure it so PowerPC doesn't add all that extra data. Anyway, we copy the data out of __mcount_loc and fill in these structures. We allocate a bunch of pages for them (I'm not going to go into the details of how the pages are grouped) and we sort them.
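A rough sketch of the pieces just described: the two linker-created symbols that bracket the gathered __mcount_loc array, and the small per-call-site record built from it at boot. The symbol and field names follow the upstream kernel, but the loop is only a placeholder for what kernel/trace/ftrace.c actually does.

    #include <linux/init.h>

    /* Created by the linker script, not defined in any C file. */
    extern unsigned long __start_mcount_loc[];
    extern unsigned long __stop_mcount_loc[];

    /* Empty on x86; powerpc adds a module pointer here for module records. */
    struct dyn_arch_ftrace { };

    /* One record per traceable function, about 16 bytes each on 64-bit. */
    struct dyn_ftrace {
        unsigned long ip;       /* address of the fentry call site      */
        unsigned long flags;    /* callback counter plus the state bits */
        struct dyn_arch_ftrace arch;
    };

    /* Boot-time pass, in spirit: walk the linker-built array, turn each
     * call into a nop, and save ip + flags into dyn_ftrace records before
     * the __mcount_loc section itself is freed with the rest of init data. */
    static void __init convert_fentry_calls(void)
    {
        unsigned long *p;

        for (p = __start_mcount_loc; p < __stop_mcount_loc; p++) {
            /* *p is the address of a "call __fentry__" instruction:
             * patch it to a nop and record it in the dyn_ftrace table */
        }
    }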
They're sorted by address so we have a quick lookup, because a lot of things in the kernel, kprobes for example, need to know "is this address an ftrace call site?" and they use this table to find out, so it has to be sorted. When you boot up, if you grep your dmesg you'll see how much memory ftrace is spending on your machine. On a normal Fedora 29 kernel I got about 39,000 entries, taking up roughly 150 4K pages, around 630 kilobytes of memory. Most people have gigabytes, so it's just a little over half a meg, but that covers 39,000 functions it can trace. So before we delete the __mcount_loc section we copy all of that data over into this table and set up the flags. That's what our array of every function ftrace can trace looks like.

If you go into /sys/kernel/tracing you'll see a file called available_filter_functions. By the way, I have to ask: how many people here have used ftrace? OK, good, good. If you cat available_filter_functions and wonder where those function names come from, it's just reading that table, and in exactly that order. So if you care about the order, remember it's not sorted by name, it's sorted by address; you can actually work out how the addresses relate to each other. Another thing: say I want to trace two particular functions, so I write them into set_ftrace_filter (and you can append another function with ">>" concatenation), but when I cat the file back, they're not in the order I wrote them. The only thing writing the filter does is set flags in the table; reading it walks the table and prints whatever has the flag set, so the order will always follow the ftrace pages. If you've ever wondered why they don't come back in your order, that's why.

Like I said, we have to track state, and these are the bits we use; I'm not going to go through all of them. The first part is a counter, because you can have more than one callback on a function. When you want to hook a function you register yourself to it, and somebody else can do the same, so we have to know how many callbacks are registered to a single function, and every one of these entries has a counter. The low bits, 0 through 24, are that counter. Why that many? Am I ever going to have that many callbacks? No, those bits were simply free. Actually, when I first gave this talk the counter went up to bit 29; bits 25 through 28 have been taken for new flags since then. Going backwards from the top: bit 31 says "we are tracing this". There are two distinct states: "this function wants to be traced" and "this function is actually being traced". If the counter is anything but zero, it wants to be traced; if the tracing bit isn't set, it isn't currently being traced. The enabling is done the same way on all architectures: we scan the table and check whether those states match.
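For reference, this is roughly how those flag bits were laid out around the time of the talk; the exact set has moved between kernel versions, so treat it as a snapshot rather than a definition.

    /* dyn_ftrace->flags, roughly as of the talk:
     *   bits  0-24  reference count: how many callbacks want this function
     *   bits 25-31  state flags
     */
    #define FTRACE_FL_ENABLED   (1UL << 31)  /* call site is patched and live          */
    #define FTRACE_FL_REGS      (1UL << 30)  /* a callback wants all registers saved   */
    #define FTRACE_FL_REGS_EN   (1UL << 29)  /* ...and the call site currently does so */
    #define FTRACE_FL_TRAMP     (1UL << 28)  /* could use its own dynamic trampoline   */
    #define FTRACE_FL_TRAMP_EN  (1UL << 27)  /* ...and is currently using one          */
    #define FTRACE_FL_IPMODIFY  (1UL << 26)  /* a callback may rewrite the saved IP    */
    #define FTRACE_FL_DISABLED  (1UL << 25)  /* never enable this record               */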
If they don't match, we need to make a modification. When there's a non-zero count and the tracing bit isn't set, we need to enable tracing for that function; when the count goes to zero and the bit is set, we need to stop tracing it. So bit 31 is the actual state of the tracing, and the counter is "I want to be traced" or "I don't". The same idea applies to bits 30 and 29, because some callers want a full register save. When kprobes uses ftrace it has to simulate an int3 breakpoint and its exception handler: on the exception we save all the registers and then go to kprobes, because kprobes is allowed to modify the registers before it returns. So we need a state that says "save all registers here", because we don't always want to; saving all registers is expensive, and function tracing can be quite impactful on a system, so if you're tracing everything you want to be as unintrusive as possible. Bit 30 says somebody wants registers saved at this function, and bit 29 says we are actually doing it, because we have to know whether the call site has been switched to the register-saving variant or not. When we do the iteration we check whether those bits match, and if they don't we update the call site to make them match.

Basically the way it works: here's our table; this entry points to schedule, this one to schedule_idle. Somebody registers a callback on schedule that wants registers: schedule's entry gets a count of one and bit 30 set, while schedule_idle gets a count of one and no register bit. When we do the scan and convert, bit 31 gets set in both, meaning we're tracing them, and the register-enabled bit gets set on schedule, saying that its call site now matches what was requested.

Now let's talk about modifying code at boot versus modifying code at runtime, which is not the same thing. On a uniprocessor machine you can just modify code and everything's fine: be naive, just overwrite the instruction that's being executed; it's one CPU, you can do it atomically, no problem. On an SMP machine things get strange, especially because x86 doesn't have uniform instruction sizes: you can have a one-byte instruction or a ten-byte instruction, which means there's no guarantee where an instruction lies. It can straddle cache-line boundaries and page boundaries. And it gets really interesting when the CPU is doing all its magic to be performant, prefetching and grabbing things ahead of time: if the code changes while it's doing that, the CPU gets confused and you get a general protection fault.

So we want to go from the call to a nop. Naively: here's the code, that's the nop, this is CPU 0, this is CPU 1, and this is what each of them sees; the red is the end of one cache line, blue is the next. When I modify the instruction, the other CPU doesn't see it yet, and then it starts prefetching: it picks up the second half of the new instruction, but it already fetched the first half of the old one. Those two halves don't belong together, and when it executes them: boom, crash, the computer has no idea what to do. When I first wrote ftrace I triggered this all the time. It was fun. You'd enable ftrace and the machine reboots. Lovely, what happened?

So how do we solve this, how do we get from the call to the nop safely? It took a lot of talking with Intel. We came up with an idea, and Intel wouldn't tell us whether it would work or not. We implemented it and ran it anyway, and I think after a year or two Intel finally said yes, it's OK, it will work, but they were very reluctant to commit to that. The answer is breakpoints. The first thing we do is replace the first byte with an int3 breakpoint, and then we synchronize all the CPUs. The way you synchronize all CPUs is to send an IPI to every one of them, which serializes everything, so every CPU now sees memory the way it should. If a CPU hits that breakpoint, it jumps to the interrupt routine and calls the ftrace int3 handler, which simply moves the saved instruction pointer five bytes forward and returns: it skips over the call site entirely. The CPUs never execute that piece of code, which means we can now modify it. So: put the int3 in, synchronize all CPUs, change the rest of the instruction, synchronize all CPUs, then remove the breakpoint by writing the first byte of the new instruction, and synchronize again. And this actually works; Intel finally, after several years, said yes, it will work.
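Here is that three-step dance as a hedged C sketch, assuming a kernel of roughly that era. The real x86 code lives in arch/x86/kernel/ftrace.c (and nowadays goes through text_poke_bp()); this version ignores error handling and the work of making kernel text writable, and the helper names are mine.

    #include <linux/smp.h>
    #include <linux/uaccess.h>
    #include <asm/processor.h>

    /* Run a serializing instruction on every CPU so all of them observe
     * the bytes written so far (the "synchronize all CPUs" step). */
    static void do_serialize(void *unused)
    {
        sync_core();
    }

    static void run_sync(void)
    {
        on_each_cpu(do_serialize, NULL, 1);
    }

    /* Replace one 5-byte call/nop with another while other CPUs may be
     * executing right through it. */
    static void patch_call_site(unsigned char *site, const unsigned char new_insn[5])
    {
        const unsigned char int3 = 0xcc;

        probe_kernel_write(site, &int3, 1);            /* 1. arm a breakpoint       */
        run_sync();                                    /*    everyone sees int3     */

        probe_kernel_write(site + 1, new_insn + 1, 4); /* 2. patch the tail bytes   */
        run_sync();

        probe_kernel_write(site, &new_insn[0], 1);     /* 3. drop in the new opcode */
        run_sync();                                    /*    the new instruction is live */
    }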
So how do you register something? register_ftrace_function() is how perf, kprobes, BPF, everybody hooks into a function: you just call it and pass it an ftrace_ops. There are two kinds of ftrace_ops. There are static ones, basically a structure allocated at build time: it's there at boot, it's there when you shut down, it never gets freed or modified. And there are dynamic ones: perf allocates one, kprobes allocates one, and when you create instances in ftrace (the multiple ring buffers) those allocate dynamic ones too. You have to be careful with the dynamic ones, which I'll get to.

This is what the structure looks like. You can ignore almost everything: func is the only field you really care about. When you create an ftrace_ops you set .func to your function, and when you enable the ops, that function gets called. The local hash you don't touch directly, but there are helper functions that modify it, and that's how you say "I only want to trace schedule" rather than tracing all functions. The hash gets put into a hash table that tells us what should be traced when there are multiple callbacks that want different things.
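A minimal sketch of what registering a callback from a module looks like, assuming a kernel of roughly the talk's vintage (the callback then took a struct pt_regs *; newer kernels pass a struct ftrace_regs * instead). The callback and filter choice here are placeholders.

    #include <linux/ftrace.h>
    #include <linux/module.h>
    #include <linux/string.h>

    /* Called at the entry of every function this ops is filtered to. */
    static void my_callback(unsigned long ip, unsigned long parent_ip,
                            struct ftrace_ops *ops, struct pt_regs *regs)
    {
        /* ip is the traced function, parent_ip is whoever called it */
    }

    static struct ftrace_ops my_ops = {
        .func = my_callback,
    };

    static char filter[] = "schedule";

    static int __init my_tracer_init(void)
    {
        /* hook only schedule(), not every function in the kernel */
        ftrace_set_filter(&my_ops, filter, strlen(filter), 0);
        return register_ftrace_function(&my_ops);
    }

    static void __exit my_tracer_exit(void)
    {
        unregister_ftrace_function(&my_ops);
    }

    module_init(my_tracer_init);
    module_exit(my_tracer_exit);
    MODULE_LICENSE("GPL");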
Now look at what actually gets called. There's this ftrace trampoline called ftrace_caller, which saves registers; as I said, you can't just call a C function, you have to save things first. Whether it saves all registers or just the partial set needed to call C code depends on whether anyone asked for regs. Then it loads the arguments for the call, calls a stub, restores, and returns. If you look in the C code at what ftrace_caller calls by default, you'll find ftrace_stub: very useful, right? It's basically just a return; it's almost worse than a nop.

Why do we do it this way? Because we're already in the business of modifying code, so we also dynamically modify the trampoline itself. When you register a function callback, we actually patch ftrace_caller to call your function, which happens to be your ftrace_ops' func pointer, directly. The call site calls the trampoline, the trampoline calls your callback, it returns, done. That's how tracing works in the kernel, and it's great for a single callback.

The problem is when we have more than one callback. In that case we have to iterate: instead of calling your function, ftrace_caller calls the list function, and this is decided by that counter in the table. When the count is one, we know to call the single callback directly; when the count is more than one, we have to call several, so we go through a list and iterate. The list function uses the hash tables: it takes the instruction pointer, say the address of schedule, and asks each registered ftrace_ops, "do you want this one?" Say you have a function tracer tracing every function, and at the same time perf is using the function hook but only tracing schedule. Both are running, so at every traced function the list function has to ask everyone: do you want to trace schedule? Yes. Do you want to trace schedule_idle? You get the point; it asks everyone every time, and that costs a little bit of overhead.

That's how we first did it: bump everything over to the list function and iterate. But because we keep the state in the table, we can be smarter. We create a dynamic trampoline specific to the function tracer: we allocate a new trampoline, put a direct call to the function tracer's callback into it, and have the call sites that only the function tracer cares about call that dynamic trampoline directly. That's really fast. But a function that still has two callbacks on it still has to go through the list iteration for both of them.
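As a purely illustrative userspace toy (nothing here is kernel API), this models the dispatch choice just described: with a single callback the call site can jump straight to it, but as soon as there are two, every hit has to go through a loop that asks each registered ops' filter whether it wants this function.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    struct toy_ops {
        bool (*wants)(const char *func);   /* stands in for the filter hash */
        void (*func)(const char *func);    /* the registered callback       */
    };

    static bool wants_everything(const char *f) { (void)f; return true; }
    static bool wants_schedule(const char *f)   { return strcmp(f, "schedule") == 0; }
    static void tracer_cb(const char *f)        { printf("tracer: %s\n", f); }
    static void perf_cb(const char *f)          { printf("perf:   %s\n", f); }

    static struct toy_ops tracer = { wants_everything, tracer_cb };
    static struct toy_ops perf   = { wants_schedule,   perf_cb   };
    static struct toy_ops *ops_list[] = { &tracer, &perf };

    /* What the list function has to do when a call site has more than one
     * callback registered: ask every ops whether it wants this function. */
    static void list_func(const char *func)
    {
        for (size_t i = 0; i < sizeof(ops_list) / sizeof(ops_list[0]); i++)
            if (ops_list[i]->wants(func))
                ops_list[i]->func(func);
    }

    int main(void)
    {
        list_func("schedule");        /* both callbacks fire        */
        list_func("schedule_idle");   /* only the full tracer fires */
        return 0;
    }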
So what's the problem with dynamic trampolines? Say a call site calls the dynamic trampoline, we jump to it, and at that point we get preempted. There's no way to stop that preemption from happening: preemption is enabled when we call, so the instant we jump to the trampoline, before we execute even the first instruction, we can be preempted. We can't put a preempt_disable in there, because we can be preempted before we ever get the chance to run it. Now say tracing gets disabled: the call site gets converted back to a nop, nobody is tracing anymore, but this poor task is still preempted, sitting on the trampoline. Maybe it was pinned to a CPU that happens to have an RT task that's going to run for minutes, so it just waits. Meanwhile I've disabled function tracing, and this is a dynamically allocated trampoline; we can't just leave it around, that would be a memory leak, so we have to free it. It gets freed, it disappears, and a minute later the RT task is done, the other task says "hey, I can run again", and it resumes right where it was: on memory that no longer exists. See the problem? Boom. Crash.

How do we handle that? Anyone have any ideas? Right, RCU, which you heard about earlier. How many people here know how RCU works? Good, a few hands went up, and most of you kept them down. Joel will explain it, so everyone's hands will go up after his talk. But if you just do a normal synchronize_rcu() here, it's not good enough. Classic RCU will say the grace period has ended even though that task is still preempted, because it doesn't care about preempted tasks that don't hold any RCU read locks, and this task doesn't hold any. So how do we make RCU work here? I had to go ask Paul: "Could you do me a favor? I need a new RCU flavor." That's what they call them, flavors. It's called RCU tasks: call_rcu_tasks() was added in 3.18 by Paul McKenney, thank you, because I begged him to add it; I wanted dynamic ftrace trampolines to work. When you call synchronize_rcu_tasks() it does something different from all the other synchronize calls: it waits for every task to voluntarily schedule. If a task is preempted, it waits until that task wakes up and either calls schedule itself or goes to user space; those are the two quiescent states for RCU tasks. That is far longer than any other flavor, one of the longest grace periods there is. In fact when Paul first ran it through rcutorture, I think one grace period took minutes, and he asked, "This really takes a long time, are you sure you want this?" Yes. I want this.

What's funny is that after he added it, I was lazy and never got around to actually using it. It wasn't until Linus was yelling at Paul McKenney about getting rid of all the extra RCU flavors, and Paul said, "Hey, that rcu_tasks I created for you has no users, I'm going to remove it," that I said, "I'll add the user now," put it at the top of the to-do list, and got it done. And I'm no longer the only user; there are several other users of RCU tasks now. Any time you have a dynamic trampoline, and more and more of them are being created, RCU tasks is how you can free it safely. We're getting more and more modification of code, and this works nicely. Here's how it works: remember, our task got preempted on the trampoline, and we can't just call free right then. Instead we call synchronize_rcu_tasks() on it, and that is going to wait until every task has scheduled.
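In code, the teardown described here boils down to a grace-period wait in front of the free. This is a hedged sketch with names of my own; the real logic is spread across kernel/trace/ftrace.c and the arch code, and x86 happens to allocate its trampolines with module_alloc(), hence module_memfree() here.

    #include <linux/moduleloader.h>
    #include <linux/rcupdate.h>

    /* Free a dynamic ftrace trampoline once no call site points to it. */
    static void release_trampoline(void *tramp)
    {
        /*
         * A task may have been preempted with its program counter still
         * inside the trampoline.  synchronize_rcu_tasks() only returns
         * once every task has voluntarily scheduled or gone to user
         * space, and the trampoline itself never schedules, so after
         * this no task can still be executing this memory.
         */
        synchronize_rcu_tasks();

        module_memfree(tramp);
    }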
Now, here's a rule: these trampolines are not allowed to schedule. They can be called from any context, including NMI context, so you really must not schedule from them, and that means we know that calling schedule is a quiescent state we can rely on. So we wait. Eventually that task finally gets scheduled: minutes later, it doesn't matter, it could be years if the RT task runs that long, and your "disable ftrace" operation will just sit there while you wonder why it's taking so long, because some RT task is hogging the CPU. The preempted task runs, schedules, jumps off the trampoline; now all tasks have scheduled, and we can free the trampoline without worrying about a crash.

Going back to registers: if the callback saves registers, there's a bit you can set in your ftrace_ops that allows you to modify one specific register, and only one callback per function is allowed to set this bit. That register is the instruction pointer. Think about that: your callback runs, it changes the instruction pointer, and when the trampoline returns, it doesn't return to the function that was called. Who do you think uses this? This is how live kernel patching works. You call schedule, schedule has a bug, it needs to be fixed. So how do we fix it? There's a lot of machinery around the switch, but basically: you load a module containing schedule_fix, which is the fixed version of the function; it registers an ftrace handler with that bit set; the handler, the live-patching code, changes the saved IP to point at schedule_fix; and when the trampoline returns, it jumps there instead of back into the original schedule. Now you're running fixed code.
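A sketch in the spirit of what the livepatch core does (the real implementation lives under kernel/livepatch/). schedule_fix and the ops are hypothetical, and rewriting regs->ip the way shown is the x86 form of the redirection; other architectures go through a helper.

    #include <linux/ftrace.h>
    #include <linux/ptrace.h>

    /* Hypothetical fixed replacement for the buggy function. */
    static void schedule_fix(void)
    {
        /* ... corrected code ... */
    }

    /* Runs from the ftrace trampoline at the entry of the patched function.
     * Rewriting regs->ip makes the trampoline resume in schedule_fix()
     * instead of returning into the original function. */
    static void patch_handler(unsigned long ip, unsigned long parent_ip,
                              struct ftrace_ops *ops, struct pt_regs *regs)
    {
        regs->ip = (unsigned long)schedule_fix;
    }

    static struct ftrace_ops patch_ops = {
        .func  = patch_handler,
        /* SAVE_REGS so the callback gets pt_regs; IPMODIFY so only this one
         * callback on the function may rewrite the instruction pointer. */
        .flags = FTRACE_OPS_FL_SAVE_REGS | FTRACE_OPS_FL_IPMODIFY,
    };

    /* Registering it would look like:
     *   ftrace_set_filter_ip(&patch_ops, (unsigned long)schedule, 0, 0);
     *   register_ftrace_function(&patch_ops);
     */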
I think that was something like 104 slides. Questions? Come on... should I throw this at someone? OK, seriously, let's see if it works.

[Q: Are you concerned by the size of the dyn_ftrace table?] Right, and I think you could cut it roughly in half. It's questionable with modules, because we have to deal with the module records, but here's the thing: there is a way to do it, because of the grouping of pages. A lot of code has gone into how those pages are grouped, and if I put attributes or flags on the pages themselves, I could say a whole page is of this type or that type. There's one big group of pages for vmlinux and each module has its own, so I could shrink the vmlinux entries and handle modules separately, or better, keep all the records small and put the base offset in the page descriptor. I probably should do that. Thank you, that's a good suggestion; send me an email to remind me, or put a Bugzilla in.

[Q: Why do it per object file instead of once at the end, at build time?] Wait, which part do you mean? Oh, why did I do it per object file and not just once at the very end, or even entirely at build time? I've actually had code to do it fully at build time too, and when you use GCC's nop option it is done at build time. Part of the reason was verification. I actually don't like having the nops already there, because a lot of times someone will change something in the code around the read/write protection of kernel text: we normally keep all that text read-only, and if something is screwed up, I detect it right at boot when I try to do the conversion. If the nops are already there, you don't detect it until you enable ftrace. I found a lot of bugs that way; bugs happen, people change things. But now I don't find those bugs at boot; I don't find them until I enable tracing later, or somebody else hits it and sends me a bug report saying "ftrace broke," and I have to figure out that it's because something changed in the write protection. That's why I like the boot-time conversion, but build time is probably the better way to go.

We'll talk about that one later. Actually, the reason I know it's coming is that Alexei said we need to sit down and talk tomorrow, so I know something is up, I just don't know what yet, so I'll answer that at the end of the conference, not now.

Anyone else? Going once, going twice... yes. [Q: Why not patch the call site to call the new function directly?] That was actually in my original talk, and I wasn't sure whether to even keep that slide. In my original talk I described exactly that as another solution: instead of going through the handler, convert the call site to call the replacement directly, which is what you're asking. We discussed it and were going to implement it, and there were two reasons we didn't. One, the performance of the current approach isn't that bad, not nearly. And two, the current approach lets you revert: they want to be able to revert patches, and if you patch the calls directly you can't really revert, you just keep adding code. It's also auditable the way we do it now; we can see what's going on, whereas once you patch calls directly it's hidden, it's harder to see what's happening. So for security it turned out that the original way, which is quote-unquote "not as efficient", is actually better: better for security, better for reverting, better for versioning and everything else. So they said don't do it; in fact they said don't implement it, we're keeping it the way it is. That was the other takeaway from my talk.

OK, thank you. [Applause]
Info
Channel: hupstream
Views: 7,750
Rating: 4.9780221 out of 5
Keywords: open source, linux kernel
Id: 93uE_kWWQjs
Length: 39min 56sec (2396 seconds)
Published: Tue Oct 08 2019