Secure Coding Best Practices

Captions
My name is Matt Butler, and we're here to talk about secure coding best practices, so if that's not what you're here for, you're in the wrong room. There are a couple over there, so if you wanted to go, now would be a good time; ten minutes from now might be slightly awkward.

A little bit about me: this is actually my first time at this conference. I've spent most of the last three decades in national defense, law enforcement, and network security, so either writing systems that protect against these kinds of vulnerabilities and penetrations, or actually having to defend against them. This talk came largely out of the experiences I've had over the last thirty years or so.

I hope everybody's awake, because we have two hundred and seventy-five slides and a demo. I'm just kidding, we only have about 50 slides. We do have a demo, but the talk is interactive: interrupt, ask questions. This is more of a conversation, because everybody here is affected by the material we go over. Just as a show of hands, how many of you have some sort of secure coding or security testing going on in the environments you work in? Okay, that's about normal. Most people don't, because we don't think about this, and I think that's our biggest problem.

I love the look on the eagle's face; it's like, really, Tinker Bell, that's how you want this to end? But it sort of shows the way we think about hackers. We have a tendency to think of ourselves as the cat, because we feel like they're the ones who control the engagement, they're the ones who decide what happens. What we forget is that we're the ones who own the software, we own the network, we own the environment; we control the engagement, and that's what this talk is really about. The only thing they get to choose is the timing.

So there are three lies that I've told myself over my career, all of which have to do with this. The first is "my code's been reviewed," and we've all told ourselves that right after a big bug is found: why didn't it get caught in the code review? The second is "my code runs behind a firewall, I don't have to worry about security." The problem is that the new law of the jungle is that being behind a firewall is not going to protect you. The last one is "we're too whatever to be a target": we're too small, we're too big, we're too remote, we don't have anything anybody wants. The reality is that none of that is true. Every company has money, so at a very minimum they'll be coming after the money.

What's really distressing about the way penetrations are happening is the motivation behind them, and the FBI tends to be concerned about this too, because we're going to a much darker place than we've been before. It used to be that hacking was just about the money, or maybe a nation-state was looking for technology, but what we're seeing is a lot more penetration into things that have to do with people's safety. Last year DHS managed to hack into a Boeing 757, and Boeing's response was "we're not afraid of flying," which is wonderful considering they were probably not flying a 757 at the time. But the fact that they could get into an aircraft and into its vital systems, in this case while it was sitting on the ground, doesn't give us much comfort when two years earlier a passenger had managed to hack into almost 20 aircraft that he was on: not necessarily getting into the flight controls, but getting close enough to the flight controls that if somebody did that, you could bring down the jetliner.

To give you another example from the FBI: it used to be that when you did a felony car stop, you'd get a car in front, a car in back, and a car on the side, and you basically boxed the vehicle in.
Now they just get close enough to you, send out a signal, turn off the engine, lock your doors, and deploy your airbags, and that's how they take you down today; no more having to do anything physical. So we're developing a whole host of vehicles and transportation that are connected to upstream servers almost daily, or 24 hours a day, to the point that if you can penetrate that upstream server, you can now get down into the machinery of the vehicle, and that's the part that is really scary.

So the most dangerous real estate on the planet is not in Syria, it's not in the Middle East, it's not in North Korea; it's the internet. At no time in our history has it been possible for a single individual, or a small group of individuals, to launch an attack all the way around the world against a country, against its critical infrastructure like its banking, its finance, its power systems, with little ability for that government to respond. They can literally do it from their living room with their pajamas on.

What I can tell you after 30 years in this business is that perimeter security is not going to protect you. You need it, because it raises the bar, the same reason you lock your doors, but what we find is that the bad guys have the best hardware and software that other people's money can buy. They take down a company, pull 20 grand out, go buy the hardware they need, and they have all the time in the world to practice penetrating that particular firewall or that particular network.

What I always hear when I talk about this with somebody higher up in management is this perception that your odds of being targeted are very low. That's just a perception, because the way we scale the software we write translates directly into the way you can scale penetrations into a system. Once I have my package, I can go put it against any company I want, or as many companies as I want, looking for those vulnerabilities, and it costs me nothing to move on to the next company. What I always tell them is that you may have a low chance of being penetrated, but it's like being shot in the head: when it happens, it's going to be a disaster.

What we as technologists have to understand, especially if we're writing stuff that runs on hardware, or runs inside of a system with nothing outward facing, is that there aren't any safe spaces anymore. They're going to get inside your perimeter. There is an entire laundry list of companies that had outstanding perimeter security, and they all got penetrated, and the attackers got into their inner workings. When they get inside, they're going to be there for a while before you even find out they're there, and the first thing they're going to do is build in back doors so they can get back in. When you do find them, they're going to keep building back doors as long as you haven't gotten them completely out, because then it becomes a race, and while they're in there they're going to do some pretty horrible things to you.

So let's talk about the adversaries. Nation-states: any nation-state that has geopolitical or geo-military ambitions has an offensive hacking program. It becomes one of those "keep your friends close, keep your enemies closer" things; they'll hack anybody, that's just the way they do it.
What I think is really most scary is simply that when they find these exploits, they will weaponize them so they can deploy them easily into the field. But then those weaponized exploits get into the hands of the second group, which is the crime syndicates: people who are looking for money, looking for technology, and every country that has internet access has them.

The third group, which we often don't think about, is business-to-business. Nortel Networks has the distinction of being the poster child for having been driven into bankruptcy by a competitor who was actually in their networks for almost 10 years. They were into everything: their executives' emails, their business plans. When Nortel had to go after a contract, the competitor knew exactly what they were going to propose and how much they were going to ask, and so this competitor was constantly underbidding them on their contracts. Eventually it led them into bankruptcy, and that's an example of what we call an advanced persistent threat: you've been penetrated, they're in there for a very long time, and they're doing lots of damage to your company.

I put the fourth one on here mainly for completeness. Strangely enough, we talk about all of these data breaches, but in reality it's the insiders who are responsible for most of them: people who are already in positions of trust who take that information and either go to another company, sell it to a competitor, or, in the cases of espionage, give it to another government. That's not one we'll talk a lot about, simply because there's not a lot we can do about it; it's not the systems we write, it's not the code we write, that really does anything to affect that.

So we have some terms we need to discuss. The first is: what is a critical system? There's the system itself, the system that allows us to do whatever we're doing: we're holding credit card data, bank data, something of value that somebody wants. But it also includes the other systems that can interact with that system, no matter how low priority those systems are. So unrelated processes, hardware like printers, external systems: anything that can talk to, or in any way interact with, that particular system has to be considered part of a critical system.

The next thing we have to define is an attack vector. Buffer overruns, as far as our code bases go, are still absolutely the number one and most devastating attack you can launch against somebody, along with code pointer exploits, because they allow you to run arbitrary code on somebody else's system. Ever fashionable, and moving up, is denial of service; that's usually undefined behavior. I send you garbage data, your system chokes on it, it goes down, it resets; I send you the same data again, and we just keep doing this over and over again, so your system doesn't work. There are others, like SQL injection attacks; that's number one on the OWASP Top 10. OWASP is an organization that deals with web-facing applications, so if you're writing JavaScript you probably already know about OWASP. Today I'll walk you through using a buffer overrun to run a privilege escalation attack against a Linux box, and then we'll do it live; I've got a VM we can do it on.

The important thing when we talk about attack vectors is that getting into a system is kind of like an air disaster: there's never one thing that brings down a jet, and there's never one thing that allows somebody into your system. It's usually a chain of events, and if you can break the chain, you avert the disaster.
So when we talk about code, one of the things we'll talk about today is that where the vulnerability is tells you how important it is. If it's something down at the kernel level, it's something you absolutely have to deal with; if it's in a low-priority system with very few privileges, maybe that's something you don't have to be as worried about. But you always have to look at what the system does and what its privileges are.

The next thing is attack surfaces. An attack surface is any external-facing interface, so if you're running any kind of web system, or you have any kind of input from somebody, the attack surface is going to be your network connections, user interfaces, authentication points. USB is an attack surface: somebody sends out a mailer, one of your employees gets it, "hey, it's a free USB drive, we're just trying this out." What's their first impulse? Tear it open, let's take this thing for a drive. The problem is that it has just now become an attack surface into your system, and since they're logged on with a reasonably privileged account, whatever privileges they have, that virus can now use as well.

But it's also our internal interfaces. How many people here write systems that have multiple processes that have to talk to each other? Yeah, almost everybody. But none of us really do anything to protect our IPC traffic; we don't do anything to authenticate it. Well, most of us don't; I'm sure there are exceptions, especially depending on the product lines you're in. The other thing is CLIs. I come from the hardware world, and one of the things we would always do is drop these nice little CLIs onto the hardware that ships so the field engineers can do the things they need to do, except we don't authenticate those at all. So somebody comes in, and what's the first thing that happens when you type a CLI command? It gives you a nice usage chart. Oh, I didn't know that would let me reset the system; I'll just write a little script that sits there and perpetually resets your system. (The question was, what is a CLI? A CLI is a command-line interface: if you have a process that may be listening on an IPC port, but you want to be able to do things outside of the UI, you drop another binary on the box, a CLI that talks to that IPC port, and it's got a usage chart that tells you the things you can do. Telnet, yeah.)

So the first place we have to go is that we build security in layers. In the same way that perimeter security is not your only line of security, when we talk about securing an environment, you build the security in layers, and for us, and for what we're talking about today, the last layer is the code itself.

So let's look at some code. I picked these out of the common vulnerability databases because I was really just looking for things that I thought were interesting and that would tell a story. I didn't come up with these, and they're not meant to be tricky; they are vulnerabilities that we know exist in current code. So what's the vulnerability with this? Exactly, yes: this is a buffer overflow, because we don't know, and we don't test to see, whether the length actually fits in that buffer.
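The slide itself isn't captured in the captions, but a minimal sketch of the kind of code being discussed, and of the remediation described just below, might look like this; the names and the buffer size are illustrative assumptions, not the actual slide code:

```cpp
#include <cstddef>
#include <cstring>

constexpr std::size_t kBufSize = 64;

// Vulnerable shape: the caller-supplied length is never checked against the buffer.
void copy_unchecked(char* dest, const char* src, std::size_t len) {
    std::memcpy(dest, src, len);   // len can exceed the destination: buffer overflow
    dest[len] = '\0';              // and this write can also land out of bounds
}

// Safer shape: validate the length, error out if it doesn't fit, then null-terminate.
bool copy_checked(char (&dest)[kBufSize], const char* src, std::size_t len) {
    if (src == nullptr || len >= kBufSize) {
        return false;              // valid string, maybe, but too long for this buffer
    }
    std::memcpy(dest, src, len);
    dest[len] = '\0';
    return true;
}
```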
Now, the box on the right comes from CERT. The top one is the severity: a buffer overflow, if you can make it work, is perhaps one of the most devastating things you can do to a system. The second one is the likelihood, and this is highly likely; we miss this a lot. And then the remediation is medium.

So how do we remediate the code? We could use std::string. I mean, it's heap-allocated, it's not on the stack unless it's a small string, but I think they got rid of the small string implementation when we put in move semantics. Good, passes code review, ship it, any problems? Right. But I'm also assuming... well, let's assume the string is valid, because most buffer overflow payloads are null-terminated; you don't want things to mess up while it's trying to do the buffer overflow, so you null-terminate it. The problem here is that we've put it into a heap-allocated string, but we're going right back to something that's going to sit on the stack again, because we're calling c_str(). We're passing this into something that's going to put it back onto a stack somewhere, potentially deeper down. So all you're doing is deferring the bug: you're pushing it farther down the stack, and you will lose sight of where that data actually came from. A better choice here is, if you're going to error out, check the lengths and error out, because it may be a valid string that's just too long; then go ahead and copy it into the buffer, null-terminate it, and be done with it. If you pass it into a string, you never know where it's going to wind up. Yeah, I'm just dealing with the buffer overflow at this point: I know it's null-terminated and it fits in my buffer, so I can't overflow the stack here. And yes, I'm going to walk you through a buffer overflow exploit, and then we'll run one live in a VM, if we have time, where we use a privilege escalation attack to go look at, say, the shadow file, which you shouldn't be able to look at.

How about this one? Right, exactly: you may not be in this enumeration. It's this one, which is undefined behavior. Now, this changed in C++17: this became undefined behavior, where before it was an unspecified value, so that makes it a little more dangerous than it used to be. We can remediate this by using something we already have in modern C++, which is a strongly typed enumeration, and telling it its underlying type is an integer. If the value is outside the enumeration, we don't care; it still fits in an integer, we're not overflowing anything. As long as this is an integer being passed in, we don't have a vulnerability. You're going to check the range at some other point, but it's not going to do anything bad, because you're pushing an integer into an integer. The problem in the original is that you're using an enumeration that predates modern C++, and because you're not doing the check until after the conversion, you have no way of knowing how the compiler is going to interpret that value, and it becomes undefined behavior. This is a medium vulnerability: it's going to cause undefined behavior, but it's not going to overflow anything unless you're passing in something much, much bigger, and it depends on your compiler and how badly you wrote the code.
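A hedged sketch of that enumeration problem (the slide isn't in the captions, so the enum and its values are illustrative): converting an out-of-range integer to an old-style enumeration is the undefined behavior being described, while keeping the value as an integer until it has been range-checked, and using a scoped enum with a fixed underlying type, avoids it.

```cpp
enum class Command : int { Start = 0, Stop = 1, Reset = 2 };

// Vulnerable shape: trusting the cast before any validation.
//   Command c = static_cast<Command>(raw);   // raw may be outside the enumeration

// Safer shape: it stays an int until it has been validated.
bool to_command(int raw, Command& out) {
    if (raw < 0 || raw > 2) {
        return false;                // reject and report an error to the caller
    }
    out = static_cast<Command>(raw);
    return true;
}
```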
Let's pick on the standard library for a bit. Sorry, I forgot to repeat the question. Yep, in fact, that's the problem: this is actually a buffer overflow on the heap, and it's a high severity, because dest is actually empty; you haven't allocated anything yet, but you're copying into what amounts to an empty vector. The remediation is to do a reserve first or, like you said, use a back_inserter, but it's an easy mistake to make, because unless you've read the documentation for copy, you won't know that copy is not doing an inserting copy; it's essentially doing a memory copy. Okay, so yeah, my screen code has a bug in it. The other thing you could do is just create the vector with the right size in the constructor, or use a back_inserter with the destination vector; I don't think I did a slide that had back_inserter on it, I just had it in my notes. The other thing is that std::fill and std::transform will do the same thing, so you have to make sure you've allocated the right amount of space for the destination. (This one is sketched after this passage.)

How about this one? This is what we'd see a lot in legacy code. That's entirely possible, but that's not the vulnerability I was thinking about. The vulnerability I was thinking about is that this while statement doesn't ever terminate if you don't pass a zero; it simply keeps going on and on. This is actually a pretty good vulnerability that's very hard for static analysis tools to catch, because they have no idea what data is being passed in here. You're passing in a range of values, right, but that while loop is waiting for a zero to terminate; if it never gets a zero, it just keeps going on and on and on. Okay, so the comment was that it is the responsibility of the caller to make sure that what they're passing is correct. I agree, but they very often don't. There are some facilities within modern C++ that allow us to get away from this: for example, you can use a variadic template, and you can also use brace initializer-list expansion. So this is a case where the newer facilities of the language actually help us get out of some of the problems we've seen in the older parts of the language, especially in some of the C-style code you see in legacy code.

I think this is the last one. Yep, so it's undefined behavior here, where we don't know when that lambda is going to be called. We're passing something by reference that probably won't exist at the time the lambda is called. It's easy to overlook, especially for people who are unfamiliar with lambdas and haven't spent a lot of time with them, but depending on how this is structured, this becomes a code pointer exploit where you can actually run arbitrary code off of it. So in this case you capture by value; we're not really using i anyway, we're just setting it to something else.
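Hedged sketches of two of those examples, the heap overflow with std::copy and the dangling-reference lambda; the names are illustrative stand-ins rather than the actual slide code:

```cpp
#include <algorithm>
#include <functional>
#include <iterator>
#include <vector>

// std::copy assumes the destination range already exists; copying into an
// empty vector writes through an invalid iterator (a heap buffer overflow).
std::vector<int> copy_broken(const std::vector<int>& src) {
    std::vector<int> dest;                             // size() == 0
    std::copy(src.begin(), src.end(), dest.begin());   // undefined behavior
    return dest;
}

std::vector<int> copy_fixed(const std::vector<int>& src) {
    std::vector<int> dest;
    dest.reserve(src.size());                          // or construct with the right size
    std::copy(src.begin(), src.end(), std::back_inserter(dest));
    return dest;
}

// Capturing a local by reference in a lambda that outlives its scope leaves a
// dangling reference; capturing by value gives the lambda its own copy.
std::function<int()> counter_broken() {
    int i = 0;
    return [&i] { return ++i; };           // i is gone by the time this runs
}

std::function<int()> counter_fixed() {
    int i = 0;
    return [i]() mutable { return ++i; };  // the lambda owns its own i
}
```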
So let's talk about a buffer overflow exploit. If you've paid attention to the five examples we went through, every one of them comes down to one thing: we've lost situational awareness of our data. We're acting on data without having done anything to verify that the data is valid. I'll go ahead and give you part of the end of the talk now: if you want to make your code as hardened against attack as possible, the one thing you can do is validate your data; know what you're working on. Eighty-five to ninety percent of all vulnerabilities come down to this: we've lost situational awareness of our data.

So let's look at a buffer overflow. We have some really badly written code, the same code we saw before, and what we want to do is find a way to leverage that vulnerability. We can do a couple of things. We can crash the system: we just put junk in and the system goes down, and that makes a pretty decent denial of service. We can also execute arbitrary code. This is the arbitrary code I would like to execute; this is for a Linux box. The part on the right is just a C representation of what we're trying to do: we're trying to run a shell. Once we can get out and execute a shell, we have access to whatever privileges that application has. It doesn't seem like a big vulnerability, but some of the biggest vulnerabilities in Linux were parts of the operating system that ran as root or as a high-privileged account; someone could run a buffer overflow attack against them and now they had root access to the box, and that's what we're going to do. The other part is just the assembly code that comes out, and what we deal with is EBX, ECX, EDX, and EAX, which are just the four registers you need to set up for the execute call you see right there; 11 is just the system call number, we're not sending any environment variables, and the last two have to do with where you go to find what you're going to execute.

So let's look at our memory. This is a traditional x86 layout, an Intel architecture; if you're on Sun workstations it's going to be a bit different. At the low addresses you've got text and data; that's the user space where programs live. The heap is toward the bottom and grows up. The kernel is at the top; you can't write there either. The stack is near the top, and it grows downward. The important thing to remember, though, is that even though the stack grows downward, it starts out with main and then you grow your stack down; all the memory is still accessed the traditional way, from low addresses to high.

I pulled out a stack frame just to show you what one looks like; I took out some of the elements because I wanted to keep the important things. The gold might be main, or whatever called you; what's below is whatever you called, or nothing if you're at the bottom of the stack. We have our nice big buffer; the EBP or RBP is just the frame pointer; and then the return address, that's what really interests us, because there's an address in there, and that address is the next code to be executed after you hit the return. As soon as your function exits, the address in that return slot is where the system goes to run the next piece of code. So the way a buffer overflow works is that we'd like to somehow change that address to point back to someplace where we now have code sitting that we want to execute, which is our shell code.

So what we're going to do, and I know this takes a little explaining, is use what's called a NOP slide. We could do this one of two ways: we have to hit this return address and point it somewhere else, but we really don't want to have to hit two very specific addresses. (Yes, the question was: I thought Intel, and all these architectures, had things to prevent you from doing exactly what we're doing. That is true, and in fact in my demo I had to disable all of that. I'll explain what they've put in and how you can make sure your code is actually being built with those protections, because it does make a difference. It doesn't eliminate the attack; you can still get around it, it's just a lot harder. We will get to that.)
So here we have it: we know where the bottom of this buffer is, but we're going to have to figure out where that return address is, and that's actually going to come into play a little later when we go through the demo. What we want to do is hit that return address and point it somewhere where our code is, but we don't have to be exact. It's hard enough to get one exact address; it's really hard to get two exact addresses out of this. So what we use is called a NOP slide. The NOP slide is going to be hex 90s in your buffer, because when the program counter hits that and starts executing, it just slides past all those no-ops until it gets to some code it can run, which is actually right about here. So here's the beginning of the exploit, and then there's a big block of what looks like the same address over and over again. The reason you do that is that we're not talking about pinpoint bombing here; we don't want to have to guess each little piece exactly, because then it becomes obvious: if you overflow this and blow up the stack frame, you crash the program, and we'd like to crash it as few times as possible. The goal is not to crash the program; the goal is to use our exploit. So we fill that with a big chunk of the same address, which in this case just points back into the buffer; it hits the NOP slide, slides down, executes our code. A lot of the infrastructure to protect against this did not come in until the early 2000s, into the BIOS and the operating system. Does everybody understand that part of it? Remember the 90s, because that becomes important when it comes to finding out if somebody's trying to penetrate your system, and I'll show you how that works in a little bit.

So, to your question, here's what has been put in since then. We've added ASLR, which is address space layout randomization; I went ahead and put all the command-line switches for GCC, Clang, and Visual Studio on the slide. Now, ASLR is actually turned on by the operating system; if you get root access somehow, you can go turn it off. That's one of the things that prevents this: each time you run the application, your address space is going to look slightly different, because, as you can imagine, the attacker starts at the top of the buffer: I start here, oh, that's not it, I go a little further, that's not it, a little further, and suddenly I've hit it. If address randomization is in play, they have a hard time doing that. We also have stack guards; you can do a stack guard yourself if you really want to. A stack guard is really just an area of memory right at the top of the stack that has a known bit pattern, and if that known bit pattern has changed, you know that somebody has tried to overrun your stack, or you've done it accidentally, but somebody has corrupted your stack. And we'll talk about SafeStack a little further down in the presentation. Those are all the command-line switches. Now, all of these are turned on by default, so make sure in your build environments that nobody has gone and turned any of them off, because you could be running with them on your desktop where you're doing development, but in your CI builds, or in your full build, you may wind up not having them.
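The slide with the exact switches isn't in the captions; as a hedged reference, these are the kinds of options being described. Verify against your own toolchain's documentation, since spellings and defaults vary by compiler version and platform:

```cpp
// Typical hardening switches of the sort described above (illustrative, not exhaustive):
//
//   GCC/Clang:  -fstack-protector-strong      stack guard / canary
//               -fPIE -pie                    position-independent executable, so ASLR
//                                             can relocate the main binary
//               -D_FORTIFY_SOURCE=2 -O2       checked variants of common libc calls
//   Clang:      -fsanitize=safe-stack         SafeStack
//   MSVC:       /GS  /DYNAMICBASE  /NXCOMPAT  stack guard, ASLR, DEP (compiler + linker)
```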
So what can we do about all this? Did everybody understand that example of how it works? (The question was: how do you end up with the buffer overrun in the first place? I have a buffer that is a fixed width; I give you a string that is much longer than that width, and it's null-terminated. I'm giving you the data, you're putting it into something on the stack, and you're not validating that the length fits your buffer, like in the first example we had, so it just writes and writes and writes until it hits a null terminator. So the question is how you wind up doing this, and the answer is that you wind up not handling the data correctly. And you're right, it's on the interface, and we'll talk about trust boundaries a little later. The next comment was, for example: a function is called, it allocates a local buffer and then tries to load an image into that buffer, and the image is received over the network; if the image is too large, the buffer gets overflowed, the return address is overwritten, and when the function returns, it returns into the corrupted stack. Okay, the comment was probably a lot longer than my repeat of it, but the point was that it doesn't necessarily have to be an interface coming from the outside that a user has access to; it could be an interface bringing a file over a network. For example, OpenSSL and its Heartbleed vulnerability was a buffer overflow, but it had nothing to do with a user; it was just two computers talking, and one of them happened to have a payload that overflowed that buffer. You can do the same thing by injecting something into a database; that's part and parcel of the SQL injection attack: I stick something into that SQL call that allows me to run arbitrary code.)

With all of these vulnerabilities, I'm still surprised to see SQL injection attacks at the top of the list, simply because it's been on OWASP's top ten for over a decade. It just becomes a case that we don't think, when we write code, about how it could be exploited. We're told, for example, with string copy: don't use strcpy. Okay, we don't use strcpy, but we use strncpy, and we use it incorrectly. What we have to understand is that we've been given a language that's actually fairly safe just in the way it's created; it's the way we use it that creates these vulnerabilities for us. Did that answer your question? Okay.

So let's talk about what we can do about it. Sun Tzu said that all battles are won or lost before they are ever fought, because it's all about the planning, the preparation, the intentions. So what we're going to talk about now is what we can do to protect our systems against people trying to penetrate them. I love the Core Guidelines, I use them all the time; I love clang-tidy, I use it all the time; Cppcheck, I use all of these all the time. But I wanted to see how many of these vulnerabilities they would pick up, and the answer was none. Well, this originally was a "none" until I realized that in clang-tidy almost all the security checks were turned off by default. So in order to catch that one, and that's not even really the bad one... that's sort of the problem with static analyzers: what happens when you have too many false positives in your static analyzer? You turn it off. So it's understandable that no one really wants to catch this by default, because with all the legacy code out there you would just be getting blizzards of false positives.
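As a concrete, hedged illustration of turning those checks on (check names and defaults vary by clang-tidy version, so treat this as a starting point rather than the definitive list):

```cpp
// clang-tidy ships CERT- and security-oriented checks, but not all of them are
// enabled by default; they can be requested explicitly, for example:
//
//   clang-tidy -checks='cert-*,clang-analyzer-security.*' myfile.cpp -- -std=c++17
//
// Expect noise on legacy code: the usual approach is to enable a small set,
// fix or suppress what it finds, then widen the set.
```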
What this tells us is that as good as the Core Guidelines are, as good as Cppcheck is, as good as clang-tidy and the static analyzers are, we have to have something more. So let's look at secure-coding static analysis tools. There are commercial products, Coverity and Sonar, and these are based on known vulnerability databases from CERT or OWASP, or the database maintained by MITRE for the government, and you can go look and see which vulnerabilities they'll catch. In this case I also went and looked at their vulnerability databases to find out if they would have caught any of these bugs, and they don't cover any of them. Then we have thread safety analysis, which we'll talk about a little later; the ROSE checkers, which are specifically designed for the CERT C and C++ vulnerabilities. The problem there is that you have to build with the ROSE compiler, and what they've provided is a VM that you run their checkers in, which is a little hard to use, so I'm pretty sure it's not going to get a lot of traction except in the environments where people are already using it. You have the AIR integer model, which looks for integer overflows, or potential integer overflows, in your code. Then you have compiler-enforced buffer overflow elimination, which is really still in the research stages.

So then the question becomes: why don't we have a lot of static analysis tools, since catching this at compile time would seem to be exactly what we want? And this is exactly it: the comment was "it's hard," and that's right, it is a very, very hard problem. The reason we don't think it's hard, the reason we have people who can look at a snippet and say "oh, this is the vulnerability I'm seeing here," is that the very best pattern-recognition system you will ever have is your own brain. If you go back and look at what little kids that are six months old can do, and the patterns they can recognize, it's almost stunning the way our brains work, especially the way they process information. So the question is: is anybody doing machine learning on this? I don't know, but I think there are enough tools out there that even if we can't catch this at compile time, there are tools we can put into our CI builds and our build environments to catch a lot of these vulnerabilities for us; I don't know that anybody has really taken the time to do that. The biggest thing to remember about static checkers is that they're based on rules, and the rules are sometimes very hard to write. Grepping for strcpy, that's easy, there's your rule; but there's a lot of behavior they can't handle, things like architectural vulnerabilities, and they don't know what the data coming in looks like. Static checkers really don't have the ability to catch everything; they can catch some of the easy stuff, but not everything you build, which means we need something more.

So what has been developed in the last few years is a series of sanitizers. I don't know about Windows, I'm sure they have their own things, but I do more Linux development than anything else, so these are the ones I'm familiar with. You have AddressSanitizer, which handles things like buffer overflows, and ThreadSanitizer, which is for concurrency. There are these four sanitizers that, when you turn them on and run your code, will instrument it and watch as it executes to look for these kinds of patterns.
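A hedged illustration of what that looks like in practice: a small heap overflow that AddressSanitizer flags at runtime when the program is built with the sanitizer enabled. The build line is one typical way to do it; the exact report format varies by toolchain.

```cpp
// Build (GCC or Clang):  g++ -g -fsanitize=address,undefined overflow.cpp -o overflow
// ThreadSanitizer (-fsanitize=thread) and MemorySanitizer (-fsanitize=memory, Clang
// only) are enabled the same way, generally in separate builds.
#include <vector>

int main() {
    std::vector<int> v(8);
    int* p = v.data();
    return p[8];   // one past the end: AddressSanitizer reports a heap-buffer-overflow here
}
```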
But these actually need to be used with fuzz testing, and if anybody was in Marshall's talk earlier today, it was an excellent talk on fuzz testing. If you weren't: OSS-Fuzz is Google's continuous fuzzing service, so if you have an open source project you can submit it to them. You have to set up your own fuzz targets, but what they do is run the fuzz tests against your project; if they find a vulnerability, they give you 90 days to fix it before they publish it, and if you do another commit to your repository, they automatically run on the new commit, which is a great service, especially if anybody in here is doing open source. libFuzzer is usually what they use, and that's coverage-guided fuzzing; what's interesting is that it will actually learn based on the reflexes of your code, so it tries one thing and, depending on whether it fails or passes, adjusts itself slightly. American Fuzzy Lop uses genetic algorithms, which might be more what we were talking about here: using genetic algorithms, things that can learn, machine learning, to learn how your system works. I've heard American Fuzzy Lop was run against the Heartbleed vulnerability and actually found it fairly quickly. (A minimal fuzz target is sketched after this passage.)

So we have dynamic analysis tools that we can run after the fact, but what you find is that they're only going to be as good as your test cases, or as good as the test harnesses you're using. If you're not using anything, I would try either one of these. You will have to write test fixtures for them, so you'll have to provide the entry points, and they will handle the testing; it does take some work on our part, the value added is not free, but they are really great tools. The combination of the two gives you "oh, you overran your buffer right here": the sanitizer will tell you where you have undefined behavior, where you have a buffer overflow, where you have resource contention.

But we need to go a little bit further, because all of these things test the code and the code's reflexes depending on what you put into it; what they don't test is your architecture. We talked about CLIs and we talked about IPC mechanisms; once someone is inside the wire, if your IPC mechanisms aren't doing anything to authenticate, or anything to verify that the data coming in is valid, you're open to a DoS attack from within your own environment. So penetration testing is actively looking for vulnerabilities, and it really is a different way of thinking. If you need to be frightened, or scared, or mortally terrified, go to DEF CON. One of the pieces of advice you get when you go to DEF CON is to leave your electronic devices at home. Oh yeah, take a burner phone, that was the comment: take a burner phone. The people who go to DEF CON are absolute masters at penetrating a system. I had someone on my staff a few years ago who was outstanding; I've never known anybody who could pull a system apart and know what was going on inside it the way he could. A lot of the people who go there are people who do white-hat testing, so they'll go in, penetrate your system, and show you where the vulnerabilities are, because what we find is that most of the vulnerabilities actually start at the architecture stage, because we're not really architecting for security.
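Tying the libFuzzer discussion above to code, a minimal fuzz target might look like the sketch below. The parse_message function is a hypothetical stand-in for whatever entry point you want to exercise; link it against your real implementation.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

// Hypothetical function under test: anything that consumes untrusted bytes.
bool parse_message(const std::string& input);

// libFuzzer entry point. One typical build line:
//   clang++ -g -fsanitize=fuzzer,address fuzz_target.cpp parser.cpp -o fuzz_target
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    parse_message(std::string(reinterpret_cast<const char*>(data), size));
    return 0;   // non-crashing inputs just return; the sanitizers catch the rest
}
```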
We're architecting for performance: we want functionality, we want scalability, and we don't think about security as an aspect at all, or only after we've built the system. Now, the third bullet point I thought long and hard about before I put it in there: let the hackers help. This is not my company speaking, this is not their advice, this is my personal advice. If you want to find out how quickly your system could be penetrated, you can set up a honeypot. Don't make it a segment of your network, don't put it inside your firewalls (that's happened); it's a physically separated network, it's got your system in it, but it has garbage data in it. You'll be surprised, when you publish it and it's out there, how fast people come and try to penetrate it. It's a game, and a game a lot of people are very good at, but you're going to find out just how quickly your system gets penetrated and where they go. Just make sure it's a physically separated entity, and obviously get your company's position on it before you do it; again, this is my personal suggestion, I'm not speaking on behalf of any company I work for, have worked for in the past, or will work for in the future. So the comment was: you don't need to subject your company to this, just put it outside the wire, put it on the network somewhere. The person who did this for an article got hacked in five minutes. Awesome. What's really interesting is going to find out where all the IPs the attackers are coming from; it's sort of interesting.

Security TDD. Anybody here do test-driven development? Well, just a few. For those of you who don't do test-driven development, or don't like it, it's basically a way of taking your class, writing all the test cases so that they fail, failing them, and then coming back and writing the code that makes all the test cases pass. You can do the same thing with security TDD; it's really an offshoot of fuzzing, but you're doing it at a much earlier stage, at the unit-test level, when you're doing your build and running your unit tests before you commit your code. Go beyond fuzz testing: fuzz testing is very good at finding patterns, but there are things it doesn't handle well; for example, none of those fuzzers really handles traffic that has a particular shape, like JSON or XML. So if you're writing something that takes those, you're going to have to do this at the TDD level and write your own test cases that send malformed, or more malformed, data into your class to make sure it doesn't choke (a sketch follows below).

Now, the one problem we all have as engineers, and we've all been there, is that we write this pretty piece of code, we have this wonderful solution, we're very proud of it, we've tested it, it's held up, it's done all the things it's supposed to do; but when it comes to testing, we're really bad about hammering on our own code. We call it confirmation bias: I write the tests that make sure my code passes. I'm not doing it intentionally; that's just the way human beings are wired. So don't test your own code: give your code to somebody else and let them go and do mean things to it.
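A hedged sketch of what one of those hostile unit tests might look like, written against a hypothetical parse_config function and plain assert rather than any particular test framework; the idea is simply to feed the kind of structurally wrong input a generic fuzzer may not produce in a useful shape:

```cpp
#include <cassert>
#include <string>

// Hypothetical function under test: should return false (rather than crash or
// misbehave) on malformed input. Link against the real implementation.
bool parse_config(const std::string& text);

int main() {
    assert(!parse_config(""));                               // empty input
    assert(!parse_config("{\"name\": \"x\""));               // truncated JSON-ish text
    assert(!parse_config(std::string(10'000'000, '{')));     // absurdly large input
    assert(!parse_config(std::string("key=\0value", 10)));   // embedded NUL byte
    return 0;
}
```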
And this one is much harder; I put it on here and we'll talk a little more about it later: don't allow release dates to control your testing. It's very easy, if you work in an agile world: okay, I've got my two-week sprints, but somebody has made a commitment to the market out there, and what we tend to do, even in an agile world, is sacrifice quality up front and then try to get all the testing in at the end, and then of course QA gets hammered. We all know that happens in the waterfall world, but when you've made a commitment to the market, you're really putting yourself into a waterfall model even if you're doing two-week iterations. There has to be something in your testing that says we have to get to this point, and some bugs are worse than others. Vulnerabilities: I don't know what your severity rankings are; in the places where I've worked, sev 1 is the worst. You have to make sure that those sev 1 bugs are not the things that get deferred. Any vulnerability is a sev 1 until somebody comes back and says, yes, it's actually a low probability that they can exploit this, and it will have a low impact on the system.

So here we get into some of the things we can actually do as far as writing our code. How many of us use third-party libraries? Use open source libraries? Guess what: the bad guys use open source libraries too. In fact, they do code reviews and run fuzz testing against them, because they want to find out where the vulnerabilities are, so when they find out you're using one, they know how to exploit your code. So one of the things you can do is write security wrappers for your third-party libraries. Say I have a hardware-dependent library; I can't do anything without using this library, but I can write a security wrapper around it that validates the data going in and coming out, and also handles exceptions. Now, I know, especially in my domain, hardware, we have a tendency to turn off exceptions because we think they're bad. The problem is that what we're turning off is our ability to do something about the exception, not the exception itself. There are some things in the standard C++ library, and I use them in the embedded world... for example, if you pop off of an empty vector, is it going to throw? Oh, it doesn't throw an exception, it's undefined behavior? Well, that's interesting, okay, because I have had one throw an exception on me. Okay, so what I'm hearing from somebody who's on the standard committee is that it's undefined behavior; okay, sorry, they're not on the standard committee. What you want to make sure of is that if something in that library has the potential to throw, like being out of memory, then you can catch that exception without it bubbling to the top of your call stack, where you pay for their crimes. (A sketch of such a wrapper follows below.)
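A hedged sketch of that kind of security wrapper. vendor::read_sensor is a hypothetical stand-in for the real third-party call (with a stub body so the sketch compiles); the wrapper is the single point where arguments are validated going in, exceptions are caught coming out, and results are validated before the rest of the code sees them.

```cpp
#include <optional>
#include <stdexcept>
#include <string>

namespace vendor {
    // Stand-in stub for the real third-party call we don't control or fully trust.
    std::string read_sensor(int channel) { return "sensor-" + std::to_string(channel); }
}

std::optional<std::string> safe_read_sensor(int channel) {
    if (channel < 0 || channel > 15) {               // validate inputs before they reach the library
        return std::nullopt;
    }
    try {
        std::string value = vendor::read_sensor(channel);
        if (value.empty() || value.size() > 256) {   // validate what the library handed back
            return std::nullopt;
        }
        return value;
    } catch (const std::exception&) {
        return std::nullopt;                         // their failure stops here, at the boundary
    }
}
```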
Trust management. We're talking about validating data, and the reality is that we can't validate data at every possible step as we handle it, because then we'd spend most of our time validating the same piece of data over and over again. What we can do is create the concept of a trust boundary. In the world I work in, I've got a JavaScript front end and a C++ back end, and the JavaScript is sending me information. Now, the JavaScript folks are great, and sometimes I'm the one writing the JavaScript, but I don't always sanitize my data. So if I build a trust boundary between my C++ code and the JavaScript, that means I don't really trust anything it sends me, and I validate everything that comes in through those interfaces. What that means is, for example, if they're sending something that's an enumeration, I check to make sure it's valid, and I have an error state to tell the JavaScript side that what they sent me is invalid. I validate lengths: they're passing in a first name and a last name, someone puts a massive string in there, the JavaScript side probably doesn't care, and then it sends it to me; I need to validate it at the point of entry. So I treat everything that comes across that boundary as foreign data, even if it's my own code on the other side; that way I'm sure the data has been validated the minute it comes in. What you want to avoid is duplicate validation steps: you're not going to build these at every layer; every interface where data comes in validates it once, and then inside your code you're not constantly revalidating the same piece of data, because you're sure that what you have is valid.

Then there's the principle of least privilege; has everybody heard of it? Okay. The principle of least privilege says that you run with the minimum amount of privileges necessary to do what you need to do. But there are times when, for example, you need an additional privilege, so you grant yourself that privilege, and then what happens a lot of the time is we forget to revoke it when we have an exception or an error. It becomes the same problem we had with locking mutexes: suddenly you error out and you never unlocked it. And now we have RAII. This is a place where, if you're going to grant yourself additional privileges, use RAII to make sure that once you've exited that context, the privilege gets revoked, because a lot of vulnerabilities wind up where somebody granted a privilege, didn't revoke it, and now, once they're penetrated, the attacker has those escalated privileges. But that's only one layer of the protection; that's just another layer.
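A hedged sketch of that RAII idea. acquire_privilege and release_privilege are hypothetical stand-ins (with stub bodies so the sketch compiles) for whatever platform call actually raises and drops the privilege; the point is that the destructor revokes it no matter how the scope is left.

```cpp
#include <stdexcept>

// Stand-in stubs for the real platform calls, which depend on the OS and privilege model.
bool acquire_privilege(const char*) { return true; }
void release_privilege(const char*) {}

class PrivilegeGuard {
public:
    explicit PrivilegeGuard(const char* name) : name_(name) {
        if (!acquire_privilege(name_)) {
            throw std::runtime_error("could not acquire privilege");
        }
    }
    ~PrivilegeGuard() { release_privilege(name_); }   // runs on return, early return, or unwind

    PrivilegeGuard(const PrivilegeGuard&) = delete;
    PrivilegeGuard& operator=(const PrivilegeGuard&) = delete;

private:
    const char* name_;
};

void update_firmware() {
    PrivilegeGuard guard("raw_device_access");   // granted here...
    // ... do the privileged work; an exception still runs ~PrivilegeGuard ...
}                                                // ...and revoked here, always
```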
The problem we have, and I see this in almost every codebase, is that complexity is the enemy. We're really worried about performance, we're really worried about elegant structures, we really want elegant code, but we have this instinct to write complex things, and complex things create emergent behavior. Emergent behavior is behavior that surprises you: some input comes into your system and you say, oh, I didn't know it was going to work that way. Our customers, and the people who use our systems, are very good at giving us emergent behavior, because they use our systems in ways we never thought they would be used. Complexity also makes it hard for us to reason about our code; if you've ever listened to any of Sean Parent's "no raw loops, no raw synchronization primitives" talks, whenever code is very complicated in the way it's put together, it's very hard to reason about how it's going to behave under certain circumstances. And then understandability: the more complex something is, the more difficult it is for the next person who comes in behind you, who maybe has to fix it, to figure out. I stole this from Bob, because he talked about it the other day; in fact I stole the word directly from him. We need to think about the way we write code. It's great to take advantage of all the new tools we have in modern C++, but when you write code so complicated that there's only one human being in the entire company who understands it, we lose situational awareness of how that code behaves, we get emergent behavior, and then we get surprised.

The next thing is logging. We don't ever think about logging; I don't know about you, but I don't think about logging. One of the problems, and it's the other side of the pattern, is that when we have an exception, a lot of us just let it bubble up to the top of the call stack, and then read the core file later, or try to handle it later. One of the things we'll look at in just a minute is that the pattern of corrupted memory tells us something, and where it's corrupted tells us something. So if you can catch exceptions and log your state into something that's easier to consume than digging through a core file, it makes it much easier to get to the answer quicker and find out where the vulnerability is. The other thing we have to talk about is what we put into our logs. We're not going to encrypt our logs; nobody does that, there's way too much overhead. But if you've ever gone through your logs, ask yourself: how much could someone determine about my system by reading the logs that come out of it? Do I put user names in? Do I put information that would let someone track how things are built within my system? You obviously need to know it, but these logs are usually unprotected, which means that when someone's inside the wire, the first place they go is: I'm going to look at their logs and see what's in them, how the system runs. And then audit trails: I come from a law enforcement background and a defense background, where we needed audit trails because I was either dealing with evidence or with sensitive information, so I needed to know who touched that information and whether it was changed as it passed through the system. If you treat logging as part of your security model, which means that when I see certain exceptions or certain things happening, I log them big and bold so they're easy to grab and easy to find, then I'm not losing those exceptions in the noise.

Any questions so far? Trust boundaries: so the question is, can I build an insecure trust boundary? Absolutely; you can build an insecure anything. Fuzz testing, security testing, penetration testing, all the things we've been covering up until now: trust boundaries are not exempt from testing. All a trust boundary really is, is that data comes in from some foreign source (and by the way, everything that comes into it is considered a foreign source) and I validate the data: I make sure my strings are null-terminated and the right size for the context, those kinds of things. So yes, you can build an insecure anything, absolutely. Yes, the comment was that it's very hard to build trust boundaries, because today's private interface becomes tomorrow's public interface, and that's absolutely true. I'm talking more about third-party libraries than the standard template library, because the STL is part of the standard, and one of the things that's nice about the standard is that CERT is actively involved in the standards committee, so they're helping craft the language so that the language is secure. Now, the STL is a library, but it's also a library that's been extremely well tested; it's pretty well known, I mean, millions of engineers use it.
I'm talking more about open-source libraries that are really specific to some domain you're in, that maybe don't have as big a user base, that haven't been through testing, that nobody's fuzzed; something very specific to your domain that you should not trust just because you're using it. And the way you can handle that is to build a secure wrapper around it, so that you're catching exceptions that come out, validating data going in, and validating the data that comes out. So where do you catch... the comment was that you do catch exceptions anyway, so where do you catch those exceptions? If you catch your exceptions at the top of the call stack, you just let it bubble up and restart the app, right? In this case, a security wrapper around a third-party library catches it as it comes out of that library, so you know exactly where it came from, and you can protect against it, because their code problems become your code problems. See, I trust the code that I write, and I trust the code that my colleagues write, but I don't necessarily trust the third-party libraries that come in; so in this case you're just insulating yourself against somebody else's mistake. Anything else?

So, code reviews: everybody's favorite topic. I don't know about your experiences, but in my experience we tend to take it easy in code reviews. We don't want to offend a colleague, we don't want to be overly harsh, especially when they've got something that really isn't a bug but maybe isn't done the best way. And we do tend to focus on form and function: I don't like the name of your variable, your line is too long. What we really need to focus on is correctness, because correctness is part of security. Performance, yes, we all care about performance, but also the security of what you're writing: are you validating, or do you know that the data has been validated upstream from you, so that you're not working on data where you've lost situational awareness? The important thing, though, is that ruthlessness is a virtue here, because if you don't find the bug in a code review, some hacker is going to find it six months later, when you've forgotten it was there, and you're going to have that moment: oh, you know, I saw that bug and I didn't say anything about it.

The other thing, and this last one I've actually caught myself doing a few times: if you use git and Gerrit, you check in, everybody does a review, you've got some changes, you put out a second patch, then maybe a third. By the time you get three or four patches after the initial commit, you've sort of forgotten what the commit was really doing to the rest of the code it was being checked into. So one of the things I've stopped doing is giving a +1 off an incremental patch. I always go and look at the entire commit in the context of the code it's modifying, instead of reviewing just the last patch that was checked in: oh, I see they had this comment, and they fixed that comment. I don't know if anybody else here does that, but I've caught myself letting things slip through a couple of times, and it's something I've stopped doing. So the comment was that that's a shortcoming of the git-plus-Gerrit review process, and other code review tools prevent it; do you have a suggestion?
So, code reviews, everybody's favorite topic. I don't know about your experiences, but my experience in the past has been that we tend to take it easy in code reviews. We don't want to offend a colleague, we don't want to be overly harsh with a colleague, especially when they've got something that really isn't a bug but maybe isn't done the best way. And we do tend to focus on form and function: I don't like the name of your variable, your line is too long. What we really need to focus on is correctness, because correctness is part of security. Performance, yes, we all care about performance, but also the security of what you're writing: are you validating, or do you know that the data has been validated upstream from you, so that you're not working on data where you've lost situational awareness? The important thing, though, is that ruthlessness is a virtue here, because if you don't find the bug in a code review, some hacker is going to find it six months later, when you've forgotten it was even there, and you're going to have that moment of "oh, I saw that bug and I didn't say anything about it." The other thing, and this last one I've actually caught myself doing a few times: if you use git and Gerrit, you check in, everybody does a review, you've got some changes, you put out a second patch, then maybe a third patch. By the time you get to three or four patches after the initial commit, you've sort of forgotten what the commit was really doing to the rest of the code it was being checked into. So one of the things I've stopped doing is giving a +1 off an incremental patch. I always go and look at the entire commit in the context of the code it's modifying, instead of reviewing only the last patch that was checked in: "oh, I see they had this comment and they fixed that comment." I don't know if anybody else here does that, but I've caught myself letting things slip through a couple of times, and it's something I stopped doing. So the comment was that that's a shortcoming of the git and Gerrit review process, and other code review tools ensure you see the whole change. Do you have a suggestion? Code Collaborator, okay, same idea. So we have two other options for code review tools. I like git and Gerrit, but it does have that vulnerability. The other thing we need to look at as far as code reviews go is legacy code. I don't like working in legacy code; I don't know about the rest of you, I avoid it when I can, but it often has the worst vulnerabilities, and the older that code has gotten, the worse the vulnerabilities in it are going to be. Fuzz testing and all the other testing we do does catch a lot of these; you find a lot of them in legacy code, and it becomes one of those "I didn't know that was there" moments. But legacy code is really not reviewed, because nobody likes to touch it; it's almost radioactive, so nobody's really going to go back and review it, and yet it hides a lot of vulnerabilities. So one of the things I would implement is actually taking time to go back and review your legacy code from a security aspect. You don't always see these things yourself; if you have a static checker run on it, it will obviously go through the same process, but a lot of times a second set of eyes will catch things. I've checked things in, had a second set of eyes on them, and found I had just missed it. Yeah, throw it away. So the question was, any suggestion for a multiple-decades-old codebase? Throw it away and start over; I'm glad that's your problem and not the one I've got. But the code we consider legacy today: back then, what did they do that gave it this reputation of being full of vulnerabilities? Right, so the question was that the code you write today will be legacy code tomorrow, and what keeps us from writing the same vulnerabilities into the code, tell me if I'm getting this wrong, that becomes legacy code at some point in time? Knowledge. That's why I'm doing this talk. When I asked who's got a security program in place, there were only about three or four hands in here. The more we get the knowledge out, the more we think about this, the better the code we write today will be, so that when it becomes legacy code in the future we don't have those vulnerabilities hiding behind us where we're not expecting them. I know we don't think about it, and I'll talk about this in just a minute: we don't tend to think about security because it's really not the most glamorous part of our job. I mean, we get kudos for writing a really great algorithm or a really fast data structure, but we really don't think about security until we get burned. Then the last thing is libraries you didn't write. We talked a little bit about open source. Everybody here uses open source? So do the bad guys. So one of the places we need to go in code reviews is making sure that we're code reviewing our third-party libraries. Has anybody in here ever gone in and code reviewed a third-party library they're using? One, two. Did you do the entire library? Always? Awesome. Most people will do a piece but not the whole thing; you did, always, so you're the "trust but verify." You have to verify these things with code reviews. You have to go and look at the code, you need to know how it works under the hood, because if the static analyzer doesn't hit it, and the fuzz testing hasn't generated a test case that exactly matches what's going to kick that vulnerability out, use the best pattern-matching system you've got, which is your own brain, to go code review that code. And then favor libraries that have had security audits. I don't know if anybody keeps up with this, well, this is BoostCon, so everybody keeps up with it, but there's a new library called Beast.
It got put into Boost by Vinnie Falco, and one of the things Vinnie did was actually go out and have a security audit run on the code. I'm going to venture a guess that most of Boost has not had that. Do you know? You don't know, or no, it hasn't? No, it hasn't. And yet Boost is ubiquitous; I mean, I use it. But if we begin favoring third-party or open-source libraries that have been security audited, that brings the pressure back onto the people who are putting those libraries out there: we're not going to use a library until it's gone through a security audit, and ideally on every commit, because just because it passed now doesn't mean it stays that way. Beast is pretty good now; I think they found two vulnerabilities, one minor and one medium. They decided the minor one could be left, and the medium one he fixed. But what happens a year from now, after he's had a whole bunch of commits? So, best practices. I said this before: validate your data. If you want to cut out the overwhelming majority of exploits that people will run against your code, make sure that you maintain situational awareness of your data. If you don't, that's where you're going to get burned. The architecture can be fantastic and have all the security in the world, but it will be the data that you're working on that becomes your vulnerability. I know some people already do this: treat warnings as errors. In the same way that pain in our bodies tells us something is broken, warnings in our code tell us something is inconsistent; there's a reason why those warnings are there. The problem is that those warnings often get ignored. I've seen code roll out with 1,500 warnings, and these are products that are used every day in the field. You're not going to be able to go from 1,500 to zero in one commit. What you're going to have to do, and what we've had good success with in the past, is a rule that you're not allowed to add more warnings to the code than you fix, and the ones you do add get scrutinized even harder, to make sure there's nothing more you could do to remove that warning. Some of them you live with, but if you treat them all as errors, then those inconsistencies in your product will not become the vulnerability someone can exploit later.
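Turning warnings into errors is usually a one-line change to the build. A minimal sketch, assuming GCC or Clang; the exact warning set is a policy choice, not something prescribed in the talk.

# Turn on a broad warning set and promote every warning to an error.
g++ -Wall -Wextra -Wpedantic -Werror -O2 -o server server.cpp

The ratchet policy of never adding more warnings than you fix is organizational rather than a compiler switch, but -Werror is what keeps the count from silently growing back once you get there.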
Design: it always starts with the design. If you're designing a new system, the first thing you have to think about is your security model: permissions, authentication, who's going to be talking to our IPC mechanisms, am I putting CLIs on my hardware? The best thing you can do is not put CLIs on your hardware. If you have field technicians who need one, give it to them and let them run it in place off of USB or some other capability, so they can use it but it's not sitting on the box where it can be used to compromise your system. Establish a coding standard. Does everybody here have a coding standard that's more than just line length and names? Really, no? It has to include security. Becoming familiar with these kinds of vulnerabilities, having internal training, will save you from putting them in your code, and codifying them in a coding standard, so that you're avoiding certain classes of vulnerabilities, will make a huge difference in your ability to get the code out without there being a vulnerability for someone to take advantage of. We talked about static analysis tools, penetration testing, fuzz testing, and unit testing. The big thing is to get buy-in from above to make sure all of this is strictly enforced, and this is probably one of the hardest things, because you have a deadline. Any suggestions? Every time I have to have this conversation with somebody, it usually comes down to this: if you don't secure your company and your software, three things are going to happen. One, they're going to get inside; I guarantee it. Two, they're going to figure out how to get into your banking system, which means they're going to go after your money. And after they've gone after your money, you're going to miss payments to creditors, miss your payroll, and declare bankruptcy. You can't think about security in the same context as, say, the odds that somebody dies on an airplane, where we have actuarial tables for it. If your company is using an actuarial approach to deciding whether or not they want to implement security in their systems, they are just begging to get hit. Show them this talk, take the material from this talk, put your own talk together, and go inside your company, because you need buy-in from above and below. And that's really the last two things: educate yourself and train your team. You don't want to be "the security guy." It's absolutely the worst position you can be in, because you're always the person pointing out problems, and what winds up happening is that when you don't get hit, because you have a security program, people say, "well, what does he do all day? We're good, nothing's happened." And when you do get hit and someone gets inside, it's "what does he do all day? I mean, we just got hit." It's the worst place you can ever be. What you want is an organizational shift, which is what you want to put together. The question was, I don't know who asked it, how do you get this change in your organization? A lot of it depends on your organization. I've worked with companies before that have been through this; they've seen just how bad it is, so it's an easy conversation to have. I mean, nothing makes my job of getting a company to implement a security plan easier than a hacker who got inside the wire on them and just wrecked things. So the comment was, if you can find a really good security person, grab them and get them into your company, because this gentleman has somebody in his company who put in all these best practices: penetration testing, fuzz testing, all of these other things. And that's true. We would never leave our house open, we would never leave the car unlocked in a major metro, we'd never leave our money sitting on the living room floor, yet it's amazing how stupid we can be at the corporate level, where we don't think about securing what is really the most important thing you have, which is your job and the company you're responsible for guiding. And I'm sure the CEO of Equifax thought they had a really good program, and then some hacker wound up showing them just how bad they were. The other thing to remember is you have to be committed to doing this at an organizational level. Look, whether you like it or not, whether I like it or not, we've just gotten drafted onto the front lines of a war between us and them, for some definition of "them," and that's the way it is. It's shirts and skins, it's us and them. We can look at it as one more thing we have to do on top of all our other work, or we can look at it as a challenge, the same challenge we'd see if we were going to go build some brand-new system. What we have to see is that when they're in our
software, on our network, in our environment, we're the eagle, not the kitty. If we look at it from that viewpoint and see it as the challenge that it is, we will stop looking at it as an additional duty and it becomes just something we do in order for our companies to survive. The thing you have to understand, and this is where the commitment comes in, is that you will never know if you've gotten it right; you will only know when you get it wrong. So the fact that you haven't been hacked in two years is absolutely irrelevant, because your code base is changing, you're adding new stuff, new features. You will always know when you've gotten it wrong, but you'll never know when you've gotten it right. I have 15 minutes left. Do I have any more questions? Because I wanted to show you this demo. The demo, okay. So I have a VM here. Yes, Charlie, I have a password on my VM, which by the way is "password," p-a-s-s-w-o-r-d, all lowercase, so if anybody's thinking "I'm going to go hack his VM," yeah, you've got the password. Alright, okay. Sorry? No; well, yes, because it's sitting on my box here and that's connected to Wi-Fi. Okay, so I've created a few programs here. The vulnerability: if you notice, there's a file called "vulnerable" that's actually running as root, which is not atypical for something in an operating system; you can think of this as something that would run with higher escalation, higher privileges. And then I have my very bad code, which all it does is read "badfile." Okay, awesome. Okay, I'm going to sit down; sorry, cameraman, but it's hard to see otherwise. So I have my very bad code. The bad copy is actually the same code we saw before that had the vulnerability in it: we're coming in with a length, we're not checking it, we're just slamming it right into the buffer. The bottom part, in main, just reads the file and then calls the bad copy. Now I'm going to do one thing, because I actually have a better graphic that will show you; let's see if I can make this bigger; no, we go all the way to the end of the slide deck, sorry. Okay, so this is going to be a bit of a different strategy. We don't have a nice big buffer in our function that we can operate in; it's only 24 bytes. So what we're going to do is actually run it against the buffer that's above, in main, so it's a bit different. I've added a little more information in here: main has its parameters and its buffer, and then when we call the bad copy we have the arguments, the return address, the frame, and then our buffer. What we're going to do is start from our buffer and write all the way up, but we're going to put the payload at the end, up in main. Let me go back down here, so keep that in mind, and we don't want to update now, that would be unfortunate. So let's look at badfile; okay, this will just hex-dump badfile. You notice up here we have what looks like our address, and that's true: we're overwriting the return address. We're filling up the buffer, plus the return address, plus EBP, plus all the parameters, and we're covering all of it, and this is why we have that big block, so we don't have to have pinpoint accuracy. Then you have our NOP sled, and at the very end you can see the exploit itself. So if the stars align, when I run "vulnerable" I get a different prompt, and I say whoami: I'm root.
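The source isn't visible in the captions, but the pattern being described, a caller-supplied length copied into a fixed 24-byte stack buffer with no check, looks roughly like the sketch below; the function and file names are placeholders, not the demo's actual code.

#include <cstddef>
#include <cstdio>
#include <cstring>

// The vulnerable copy: trusts the caller-supplied length and slams the data
// straight into a 24-byte stack buffer, so a long enough input walks right
// over the saved frame pointer and return address above it.
void bad_copy(const char* data, std::size_t len) {
    char buffer[24];
    std::memcpy(buffer, data, len);  // no check of len against sizeof(buffer)
}

int main() {
    // main's larger buffer is where the demo parks the NOP sled and payload.
    char filebuf[4096];
    std::FILE* f = std::fopen("badfile", "rb");
    if (!f) return 1;
    std::size_t len = std::fread(filebuf, 1, sizeof(filebuf), f);
    std::fclose(f);
    bad_copy(filebuf, len);          // the overflow happens here
    return 0;
}

Compiled the way the speaker describes a little later, 32-bit with ASLR and stack protection disabled, the overwritten return address lands execution in the NOP sled and slides into the payload.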
And if I want to look at the shadow file, which I can't normally, I now can. So I've used a buffer overflow to run an exploit. The shadow file is basically where all the password information is stored; it's the one file you want if you really want to break all the passwords on a Linux system, and it's something that only root, or something with those particular privileges, can access. This is why it becomes so devastating: the fact that I now have root access means I can do anything I want on this box, wipe the hard drive, anything. And once I exit out of it, if I try to go and cat the shadow file, I get permission denied, because I'm back to being me. I ran a 32-bit VM because it was easier. The question was, what are the things I had to turn off? Besides running 32-bit, I had to turn off ASLR, and I had to compile this so that it wasn't doing any of the stack-protection stuff in the background. I did that just to make it easier on myself, because it's more important that you see how these work than that I get past all of those mitigations, which I'm sure I could do if I put my mind to it; it was more important to put the talk together than to figure out how to bypass them. But let's look at one other thing here. Let's say I have another one, "exploit fail," and if you look at the sizes, all it does is create a badfile that's now 200 bytes long, a much smaller file. What you'll find is that when someone is trying to penetrate your systems, they're going to fail a lot, because it takes time to figure out where that buffer is, to find the sweet spot. So let's run "vulnerable" now, and we get a seg fault. Why? Because I'm overwriting memory that it really needs, but I'm not overwriting it with the right contents. So if we go into gdb with "vulnerable", and we should have a core file here, yep, now we want to go in and look at the backtrace. We have four frames: there's main, the bad copy, and then two others. So what do you look for when you want to find out why this is crashing, and whether it's part of a vulnerability? Let's look at our bad-copy frame first; now we're actually in that frame. Then we want to, probably not, why do you want me to go there? To see the source? Oh, I didn't compile the symbols in, so we're not going to see the source, and that's actually not what you want to look at anyway. What you want to look at is the stack. So let's look at the stack and see what's there; helps if I type the dollar sign. The data on this stack looks pretty unremarkable; there really isn't anything in there that points in one direction or another. So let's try going up a level, because remember, I ran it against main, up into main's stack. If I run the same thing again and look at main's stack, notice all the hex 90s. What's interesting here is all these hex 90s, and then you've got the repeated addresses up top. Exactly, that's the sled. You're going to have hex 90s in your code here and there; if you put something into Compiler Explorer you will see hex 90s, but they'll be onesies and twosies. If you ever get into the situation where you're looking at the stack of something that has crashed and you're seeing hex 90s in blocks like this, that's a dead giveaway that somebody's actually trying to probe your system to see if they can run a buffer overflow exploit.
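A rough sketch of that gdb workflow, assuming the binary is called vulnerable and the core file is in the current directory; the frame numbers and word counts are illustrative.

$ gdb ./vulnerable core
(gdb) bt              # backtrace: main, the bad copy, and the crashing frames
(gdb) frame 1         # select the bad-copy frame
(gdb) x/32xw $esp     # dump the stack there (32-bit, hence $esp): unremarkable
(gdb) up              # move up into main's frame
(gdb) x/64xw $esp     # blocks of 0x90909090 plus a repeated address: a NOP sled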
Now, there are other buffer overflow exploits, don't get me wrong; this isn't the only way to do it, it was just the easiest way to do it. So the comment was that there are many ways to encode a NOP. What are they? Okay, typically, though, if you're using a NOP sled in a buffer like this, you're going to see these hex 90s. Not now? Well, okay, so the comment was that I won't see them now because I've just told everyone what to look for. This is actually old-school stuff; this is like back in the early 2000s. What you'll see if you've got those protections turned on is that they'll actually tell you: you're running gdb, you dump your core file, and it's going to show you that your stack was corrupted. The point I'm trying to make here is that when you deal with corrupted memory, how that memory is corrupted tells you something, and we need to listen to it. Okay, other questions? Back here. What you're looking for in this, and this is where sort of the open-source part of it comes in, is that if you're in a stack frame, you already know where the stack frame begins. What you're looking for is an address that's close, because that's the pointer you want: you want to point back into a buffer that's actually fairly close to you. I played with it, and it was trial and error. Yeah, that's how they used to do it anyway; it's really how they do it now too, it's just more complicated to pull off this kind of exploit, simply because we've added ASLR, we've added stack guards, and all those kinds of things. Sorry, I didn't either. So the question is whether most of the overruns are not on the stack; I don't necessarily think that's true. Yes, you can do heap overruns, and you can also do pointer exploits: somebody creates a structure, there's some data above it, you overflow that buffer, there happens to be a pointer right below it that's actually a function pointer, and now you're executing arbitrary code. These kinds of exploits, I mean, it takes a certain mind to think these things through and to have a really in-depth understanding of the machine. It took me a while to learn how to do something like this, just because I don't deal in this on any given day; I've got other domains I have to deal in. And you had your hand up, did you have a question? Okay, you're just scratching something, okay. Any other questions? Okay, great, thank you for coming. [Applause]
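The function-pointer variant mentioned in that last answer can be sketched as below. The layout is deliberately contrived and the names are hypothetical; it exists only to show why a pointer sitting next to an unchecked buffer is dangerous.

#include <cstdio>
#include <cstring>

// A contrived layout: a fixed-size buffer sitting directly in front of a
// function pointer. Overflowing the buffer rewrites the pointer, so the
// next call through it runs whatever address the overflow supplied.
struct Handler {
    char name[16];
    void (*on_event)();   // lives in memory right after the buffer
};

void normal_event()  { std::puts("normal handler"); }
void attacker_code() { std::puts("arbitrary code runs here"); }

int main() {
    Handler h{};
    h.on_event = &normal_event;

    // Build an oversized "input": 16 filler bytes, then the address we want
    // to plant. (Formally undefined behavior, which is exactly the point.)
    char payload[16 + sizeof(void (*)())];
    std::memset(payload, 'A', 16);
    void (*planted)() = &attacker_code;
    std::memcpy(payload + 16, &planted, sizeof(planted));

    std::memcpy(h.name, payload, sizeof(payload));  // no bounds check
    h.on_event();   // now calls attacker_code() instead of normal_event()
    return 0;
}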
Info
Channel: Coding Tech
Views: 34,661
Rating: 4.857585 out of 5
Keywords: security, web security, hacking, program security, secure coding, secure code, app vulnerabilities
Id: zlEdzJccdps
Length: 87min 25sec (5245 seconds)
Published: Sat Jun 09 2018