Free Hacking API courses (And how to use AI to help you hack)

Captions
So Corey, that's amazing. What you've just done, if I understand correctly, and correct me if I'm wrong, is you have pasted another value in, just give us the correct term, but that allows you to see the data of someone else, right? That's right. So we found this resource ID, we've pasted it into this API request, and we've found the GPS location of their car. The focus of testing needs to start there. When APIs are exposed to the Internet, they're meant for public consumption. Those often contain the most valuable resource that an organization is trying to keep safe. They also contain what attackers are most interested in. And that's the data that's there. Now, I have to ask the question again, Corey, this is real-world stuff, right? You see this all the time. Yeah, that's right. We see excessive data exposure. That's fairly common. In addition to that, authorization vulnerabilities plague APIs. There's something about being authenticated to an API where the security controls stop there and it's often forgotten about. Maybe it's long, complex tokens that are being used or something like that. In the fast-paced world of cybersecurity, whether you're into red teaming or blue teaming or even purple teaming, one skill that stands out as a game changer is coding. I see it with so many hackers and cybersecurity experts: those who can code excel in cybersecurity. If you're a cybersecurity analyst or you're interested in red teaming, enhancing your scripting and programming skills isn't just an asset. It's almost a necessity if you want to do really, really well. And here's the good news. You don't have to enroll in expensive courses or spend countless hours on confusing tutorials to level up your coding skills. Brilliant is here to help you think like a coder with comprehensive and engaging programming courses. Brilliant offers a suite of nine programming courses that cover everything from the fundamentals of programming to the most cutting-edge topics. 
You can learn to think and code, explore creative coding, master Python and get a solid grasp on computer science essentials like algorithms and data structures. You can even delve into the hot topics of today, like neural networks and quantum computing. What I really appreciate about Brilliant is their approach to teaching programming. It's very hands-on. It's very intuitive. Perfect for those of us in cybersecurity who need to apply these skills practically. It doesn't matter whether you're a beginner or looking to sharpen your programming skills. Brilliant's courses are designed to boost your coding skills in the most effective way. So if you're ready to take your cybersecurity expertise to the next level through coding, Brilliant is a perfect starting point. Sign up today using my link below: brilliant.org/davidbombal. And for you, my viewers, the first 200 people that sign up get an exclusive 20% discount off an annual plan. Don't miss this chance to enhance your cybersecurity skills with the power of coding with Brilliant. You're not just learning, you're evolving into a more skilled, confident and versatile cybersecurity professional. Can't wait to see what you do with these skills and see how they transform your career. I really want to thank Brilliant for their continued sponsorship of my channel. I believe that you can change your life through education. And I want to thank Brilliant for helping make the world a better place through low-cost education. Everyone, it's David Bombal back with the amazing Corey Ball. Corey, welcome. Hey, David. Great to have you here. Just for people who haven't seen our previous video, I've linked it below. Corey is the author of this book, which, from what I've read and from what a lot of people have said, is probably the best API hacking book out there. Highly recommended by so many people. Corey has not only written this book, but he's also created a free course, which you can get below. So I've linked that below. 
Corey, tell us about the course that you've created. It's not just one course these days. It's actually a lot more than what we spoke about previously when we had our previous interview. So tell us about your course and tell us about what we're going to be covering today, because I believe it's a fantastic demo about hacking APIs, but also with a twist of AI. Yeah. So the last time we spoke, we were just launching the APISec University. And the first course on there was my API pen test course, which guides you through everything from using APIs, testing for vulnerabilities, the most common vulnerabilities that you'll find with APIs and then exploiting those vulnerabilities. That was the first course we launched, but since then, we have so many more courses. I have another course out on the OWASP API Security Top 10, talking about the 2019 version and the more recent 2023 version and all the vulnerabilities captured there. In addition to that, we have a lot of great other courses like the API Fundamentals course. And we have courses on documentation and gateways. We're launching an LLM course for APIs. Yeah, there's just a lot of great free information over there. I mean, it's amazing. And I want to emphasize that you guys are giving it away for free. So there's no cost to attend any of these courses. Yeah, that's right. The courses are completely free. You sign up with your email. You can go through, complete the course, watch all the videos, read the content, take the quizzes and walk away with a free certificate of completion on those courses. And then we also just launched certifications recently. Those are paid, but they do test your knowledge. The first is the CASA, which is the Certified API Security Analyst certification. And that one tests your knowledge about the OWASP API Security Top 10, the most common vulnerabilities and how to remediate and prevent those from taking place. In addition to that, we launched the ASCP, which is the API Security Certified Professional. 
That one is special to me. We spent a lot of time crafting that. And the way that certification works is it's a hands-on hacking lab where you have to test out two API-driven applications, find and exploit the vulnerabilities there, and then report your findings back in order to earn that certification. And right now there's a 12-hour limit. So you have to be on top of it: get through your testing, be organized, find those vulnerabilities and exploit them within the timeframe. I love that. So for everyone who's watching, you've got no excuse. Go and grab the free content. Lots of courses. There's also a car hacking course, I believe, as well. There's a whole bunch of courses. So that's fantastic. Go and get the free content. And if you need a certificate, Corey, that certificate is free, right? That's a certificate of completion. But you can actually take the exams and then get proper certs to verify your knowledge of APIs, right? Yep, that's correct. And what I love about what you said last time when I spoke to you is you had this piece, I can't remember exactly what you said, but something along the lines of: why do people bother about network pen testing and stuff when APIs are this huge problem out there? Yeah. So APIs represent close to 90 percent of all web traffic that's going across the Internet. And so when you consider just that fact alone, the focus of testing needs to start there. When APIs are exposed to the Internet, they're meant for public consumption. Those often contain the most valuable resource that an organization is trying to keep safe. They also contain what attackers are most interested in. And that's the data that's there. Yeah. I mean, you get past all the firewalls and everything to the API. So if you don't secure your APIs, it's a massive risk. Yeah. So what are you covering today? 
Because not only are you giving the free course, but here you're doing a proper demo and there's an AI twist to it, right? Yeah, there's a couple of AI twists to it. So today I'm going to be guiding you through testing crAPI, which is the OWASP completely ridiculous API, a free lab that you can launch. You can find it on GitHub. And I also use it in my course. And so if you need instructions, you can find them in the course or you can just go to the GitHub, download the repo and test away. So with that, we're going to be using a few tools. We're going to be reverse engineering the API. So this is a process that you have to go through if the API provider does not have documentation for you to use, if it's perhaps an API that's just running in the background of your web requests. Then reverse engineering that API would give you all of the requests that are involved that represent that API, the endpoints, the request parameters. And if that documentation isn't there, you have to build that out manually. And so we're going to use a more automated process in order to create our own documentation. And then I'll guide you through using Postman and Burp Suite in order to exploit a couple of vulnerabilities found in crAPI. That's fantastic. I mean, we were talking offline. There are some misconceptions people have. So maybe you're going to highlight those as well and show us the right way to do things. So I'm going to keep quiet, Corey. You take it away. Show us. Show us the demo. I'm really looking forward to this. OK, sounds good. OK. So first, I just want to make sure that we have our lab running. And we have the tools launched over here. So you're running Kali in a VM or something on Windows, right? Yep. So this is Kali Linux running in VMware. And here we have the completely ridiculous API. In addition, let's see, maybe I should just go through this logically. 
So first thing for me when I'm hacking an API or a web app: typically you're likely starting with the web app and then you're going to pivot over to focusing on API testing. So the first thing that you're going to want to do is test the web app as it was designed. So you really want to understand what is this web app about? What are the requests involved? How are resources created? How are they identified? And what requests can you use to access those resources? One of the most popular and prominent vulnerabilities for APIs is broken object level authorization. And these authorization vulnerabilities actually represent a lot of the OWASP API Security Top 10. And the problem is when there are no security controls in place that prevent user A from accessing user B's resources. It can get more complex than that. It could be group A and group B. It could be group A plus a user. It's just any time one user is accessing the resources of another and they shouldn't be able to. So what we're going to be looking for are those requests that handle those resources. And we want to find out what sort of names and IDs are given to those resources. That way we can analyze the endpoints that are involved in the web app and the API and then test those out from there. With crAPI launched and ready to go, what we're going to do before we start interacting with that and using it as it was intended, we're going to make sure that we proxy our traffic. And in order to reverse engineer the API documentation, I'm going to be using a combination of two tools. One is called mitmweb, which is just the web interface of the mitmproxy tool. And the other is called mitmproxy2swagger. Okay. So here we're going to start up mitmweb. This is just going to proxy all of our traffic and... Over here, we can see mitmweb is running and we'll make sure that our traffic is being captured. I'm setting my proxy to go through port 8080, which is described right here. 
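The user-A-versus-user-B test Corey describes can be sketched as a small script: request a resource ID you collected from user B while authenticated as user A, and treat a success response as a candidate finding. This is a minimal illustration of the idea, not part of Corey's toolkit; the helper names are assumptions.

```python
# Sketch of a broken object level authorization (BOLA) probe: request a
# resource owned by user B while authenticated as user A. A 2xx response
# where we expected 401/403/404 is a candidate finding to verify by hand.
import urllib.error
import urllib.request


def fetch_status(url: str, token: str) -> int:
    """GET the URL with a Bearer token and return the HTTP status code."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code


def bola_suspected(status: int) -> bool:
    """A success status for someone else's resource suggests missing
    object-level authorization; 401/403/404 suggest the control works."""
    return 200 <= status < 300
```

Running fetch_status with your own token against every resource ID you saw belonging to another account automates the comparison; any True from bola_suspected deserves a manual look before calling it a vulnerability.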
And then we'll load crAPI. You don't need FoxyProxy. I saw that pop up there. That's for something else, right? FoxyProxy is just a browser add-on that makes it easy to switch between different proxies. Right. So with that on, with the page refreshed, we can see that mitmweb is capturing our traffic. So with that ready to go, we'll go through, use the web app as it was designed. And I will create the user account that we're going to be working through this with. Okay. So once we've created our account, we'll want to log into the web app. And although this part may be boring for some people, it is so important to go through the web app. Do everything that you can within the functionality of the app. So click on all of the buttons, create posts where you can post, update your profile where you can do that. Is that just so that you get an understanding of how it works? Or are you capturing something in the background? Why would I need to go through it? Yeah, it's a good combination of both. So one, you're getting an understanding of how the web app works, the functionality that's involved, and you're also capturing all of the requests. And so we can already see that an API is being used in the background. So it's being used for login. It's being used for the signup. There's a version two that's being used for the dashboard. And we just want to capture as much of that as possible. Of course, there are other ways of generating the traffic. So you could do a directory brute force where you're using a tool like GoBuster or DirBuster to just guess all of the endpoints that are out there. But if it's not getting the exact response, then it wouldn't populate the results there. But also an interesting thing, specifically with crAPI and other APIs: the way that the paths are made and the way that they respond are not typical for what those tools are expecting to find in web apps. 
So if we look at the structure of the path for some of these APIs, we're seeing the type of endpoint actually comes before the API and the version in the path. Not very standard. So what those tools are going to do is start with localhost or whatever path you provide, and then go on to the next. So maybe it'll find identity, but I would even guess identity, being a part of the API, might just respond with a 400 or 502. And when those tools see the 502, they're going to stop and they're going to move on to something that's responding with a 200 or a 300. And so going through and using it this way is going to capture what currently exists within the web app. Once we've captured some of the requests for crAPI, then you want to start thinking about what is this app? And so really you can see there's a shop we could buy some auto parts from. There's a community that we could talk to. We can create our own profile. I'll update this. So you can upload a video, you can upload a picture, you can have some general information about yourself. But what is this app all about? And it's here on the dashboard that we find that we should be able to add our vehicle to this. And with that, we'll be able to see where our vehicle is located, request some repairs, buy some parts out of the shop, and then talk to the community. So we have a pretty good understanding of what crAPI is all about. When you launch the lab in your own environment, crAPI does also come with a mail server. And so this simulates an email that would be sent from this application over to your email. So your vehicle details have been sent over to your email address. Go ahead and check the MailHog web portal for that. Come over here and you'll see your VIN as well as a PIN code to use. And then you'll want to add your vehicle to the portal using your VIN and PIN code. 
So crAPI is just like a VM that you download, that you can use, correct? That's right. You can download the GitHub repo and use Docker Compose to stand up the environment. So once you have your vehicle added to the application, you can see new functionality comes up. So here you can contact a mechanic with an issue. And the final thing we'll do before we switch over to creating our documentation is just to refresh the GPS location of our car. Okay. Once that is done, we can go over to the mitmweb proxy and save our flows. And now we're going to go through the process of creating our own documentation from that intercepted traffic. Okay. So we're going to be using mitmproxy2swagger. And I'll just break this down. So once we have our flows file from mitmproxy, we're going to use mitmproxy2swagger. And in this command, you're going to use the -i flag to specify the flows file, which is typically found in your downloads. And then you're going to use -o to specify the output, which we're going to call spec.yaml. And then the -p flag is going to be the path to the API. In this case, with crAPI, the paths are named somewhat in a strange way, and so you have to get to the base of where you start to see the endpoints coming out of that. But in most cases you would put something like google.com/api/, and then typically following that they'd have a version, say version three, and then the rest of the endpoint. In this case, we're just going to do that base. And then the -f flag is going to be for the type of input, which is a flow. And from here we can hit enter. And this just goes through and creates our YAML. So once we've run this command, we can use a text editor like Sublime Text and open up our YAML. So once we've opened up spec.yaml in Sublime, what we want to do is find the API endpoints and get rid of this ignore tag, everything from the colon over, wherever we see an API path. 
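For reference, the command Corey builds up flag by flag can be assembled as an argv list like the sketch below. The flows filename, output name, and crAPI base URL are illustrative assumptions (adjust them to your own capture and target); the flags themselves are the ones described above.

```python
# The mitmproxy2swagger invocation described above, assembled as an argv
# list. Paths and the base URL here are illustrative placeholders.
# The tool installs with: pip install mitmproxy2swagger
import subprocess


def build_cmd(flows_file="flows", spec_out="spec.yaml",
              api_prefix="http://localhost:8888", input_format="flow"):
    """-i saved flows, -o output spec, -p API base path, -f input type."""
    return ["mitmproxy2swagger",
            "-i", flows_file,
            "-o", spec_out,
            "-p", api_prefix,
            "-f", input_format]


# subprocess.run(build_cmd(), check=True)  # uncomment to actually run it
```

After editing the resulting spec.yaml, the same command is run a second time, exactly as Corey does, so the tool fills in the request parameters for the endpoints you kept.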
And so here you can see this dashboard. That's going to be a web request, not an API request. So we don't want to do it there. This could be a time-consuming thing. But with a tool like Sublime, what you can do is select the exact request that you want to alter and hold Ctrl. And then once we have all of our API requests, we do a quick keyboard shortcut to edit all of them simultaneously. And that's going to be Ctrl+Shift+L. Once we have that, we can go back to the base and just hit delete until we have our API endpoints all spelled out. And we're just going to save this. Once this is saved, we can go back and we'll need to run this script one more time. OK, to test out API documentation, you can use the Swagger Editor. And then here we can import our API documentation, which should be in this spec.yaml file still. And you can see here all of the endpoints and the request methods that are used across the board. And this is going to be great for importing into other tools like Postman. Once you have it in Postman, it'll be organized. You can create variables. You can get your hands on interacting with the API. And then once you want to dive into testing an individual endpoint, then you can proxy that traffic over to Burp Suite to really conduct some heavy testing on individual endpoints. If you didn't do this, you'd have to download the API documentation from somewhere, right? Yeah, so everything we've done here is for instances where an API provider doesn't provide documentation. In most cases with larger APIs or APIs that are meant to be consumed by the public or by end users, then documentation would be provided. Perhaps a Postman collection or OpenAPI specs, something easily consumable. And all of this reverse engineering of an API is really for those instances when none of that is available. This is great for instances like crAPI where it's not clearly available, but also for private APIs or partner APIs. 
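The multi-cursor edit in Sublime can also be scripted. mitmproxy2swagger lists the endpoints it detected with an "ignore:" prefix, and deleting that prefix is what promotes a path into the spec on the next run; a minimal sketch of doing the same in Python follows. The "/api" keep filter is an assumption you would tune per target, and the exact prefix convention should be checked against your version of the tool.

```python
# Sketch: promote detected endpoints in a mitmproxy2swagger spec instead
# of hand-editing. Lines carrying the tool's "ignore:" marker and an API
# path get the marker stripped; everything else is left untouched.
def promote_endpoints(spec_lines, keep_if_contains="/api"):
    """Strip the leading 'ignore:' marker from lines mentioning API paths."""
    promoted = []
    for line in spec_lines:
        if "ignore:" in line and keep_if_contains in line:
            line = line.replace("ignore:", "", 1)
        promoted.append(line)
    return promoted
```

Run it over the lines of spec.yaml, write the result back, and rerun mitmproxy2swagger just as Corey does after saving in Sublime.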
Sometimes they don't include the clearest and most consumable documentation. Private APIs could just be for developers within their own network who either know how to use them or have some documentation in a knowledge store somewhere. In any of those situations that you find yourself in, you can use this process to create these OpenAPI specs and then you can leverage those in Postman or other places. So in this case, there are not too many endpoints here, but I do want to show off some of the stuff that you can do with leveraging ChatGPT. I'll be showing off the Hacking APIs GPT, which is available with a premium ChatGPT membership, but I'll show off some of the things that you can do leveraging the chatbot to test APIs. So this is a GPT that you've created, right? Or is it just normal ChatGPT that anyone can access, or do I need to do something special? Yeah, so this is a GPT that I created. I worked on it for a while to get everything working in a pretty ideal way for API security testing, and anyone with a premium membership can access this. If you don't have a premium membership, I have a prompt that I share that you could leverage on the free tier and you can combine that and get pretty good results as well. Okay, so some of these prompts down here, they just make it easy to take advantage of the functionality that I built into the Hacking APIs GPT. And to start out, we can say something like: based on the following endpoints, which are the most concerning? And what we can do while that's starting up is just take this. Let's just try out the YAML file itself: we can add an attachment, go to our downloads, grab that spec and see if this works out. Corey, I'm a bit slow. So let's say a company does have API documentation. How do I get a YAML file like you've done here? Because you've shown us the manual way of doing it. 
But like, do companies just give you a file that you can download? Or, you know, how do I get to the point where I can upload something like this? Yeah. So on web apps that have APIs that are meant to be consumed by end users, typically there's a documentation page, and there you can find either a Postman collection or, more typically, an OpenAPI specification that you can leverage. And yeah, they just make those easy for you to download. A lot of those APIs are meant to be self-service. You should be able to find it, download it, import it into Postman or some other tool and then begin testing it, at least for our purposes. OK, so it doesn't like the YAML file. It does work with .json, so we can put in files that way. But I actually like to do an even more dumbed-down version, which is we can run this. We can go back over to our flows file, actually. So you just take a list of endpoints like this. Maybe while you're creating your documentation, before it's fully over in that spec, you can just copy these and paste them in. And so just to talk about what's going on here: we say, based on the following endpoints, which are the most concerning? And the Hacking APIs GPT responds with: you need to provide me some endpoints in order to get those answers. We send these over, and when you have just a few endpoints like this, it's pretty easy to go through and see where the resources are and what's interesting. But when you're working with a larger API, you can send hundreds of endpoints here. And this is going to break it down. Let's take the community API v2 posts endpoint. So this is a vulnerability that you're going to be on the lookout for: broken object level authorization. Maybe in these posts, there's something you can do to access the posts of other users. Here it's tagging a business logic flaw for validating coupons. And again, it's not actually interacting with the API and testing it. 
It's just providing suggestions based on naming schemes and maybe other information it's scraping out of the path, such as the version number. And so here it's saying you should test to validate that this coupon field only accepts expected data and see what else you can send to it. For the login, it's going to go for authentication-type vulnerabilities. And then down here at the bottom, we have this vehicle ID. So if we remember, we were refreshing the location in the web app. And so it's actually pulling that out and saying: you should check this out for insecure direct object reference, which in the OWASP API Security Top 10 is also known as broken object level authorization. And so here you can see there's this very large token; if you attempt to brute force this, it's going to take forever. And so you could think through: maybe a developer thinks this token is complex enough to where it can't be brute forced, it can't be guessed, so as long as it's not exposed in some other way, that alone is enough of a security control to prevent anyone else from finding the vehicles of other users. And here the Hacking APIs GPT spells it out and gives the rationale that the presence of this vehicle ID in the path means it may be improperly handled and you could find vehicles that don't belong to this user. And the cool thing is, that's what we're about to explore. This is our end goal: to get here. But we need to figure out, you know, is there another way for us to find the complex tokens of other users? So that's where we're going to need to dig into the API, start making requests and see what data we're getting back. And so the next step, once we've done all of that, is to jump into Postman. I just want to say, Corey, that's amazing, because what you've done there, you know, it's easy to say, OK, here's an API. And then for a lot of people who are starting, it's like, OK, now what? 
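Since the GPT's triage is driven by naming schemes rather than live testing, the spirit of it can be approximated with a couple of regexes: auth-looking paths hint at authentication flaws, and IDs or long tokens in the path hint at BOLA/IDOR. This is a hedged sketch of the idea, not what the GPT actually runs; the patterns and labels are assumptions.

```python
# Rough endpoint triage by naming scheme, similar in spirit to what the
# Hacking APIs GPT suggests. Patterns are heuristics, not a real scanner.
import re

AUTH_HINT = re.compile(r"(login|signup|token|password|otp)", re.IGNORECASE)
OBJECT_ID_HINT = re.compile(r"/(\d+|[0-9a-fA-F-]{8,})(/|$)")


def triage(endpoints):
    """Map each endpoint to a list of vulnerability classes worth testing."""
    findings = {}
    for ep in endpoints:
        hints = []
        if AUTH_HINT.search(ep):
            hints.append("authentication")
        if OBJECT_ID_HINT.search(ep):
            hints.append("BOLA/IDOR")
        if hints:
            findings[ep] = hints
    return findings
```

Feeding in a few hundred paths scraped from a flows file gives you the same kind of shortlist Corey describes, which you then confirm by actually making requests.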
And you're giving a bunch of suggestions there, which is fantastic. Yeah, let's see, I definitely leverage that on larger public-facing APIs. Obviously, when you're testing private environments, you do want to be aware, you know, that that data is going to ChatGPT or OpenAI's servers. Yeah, that's a good point. And so you want to be careful with sensitive data, of course, but public-facing API paths, those are fair game. And so sticking those in there and getting some highlighted results really helps with endpoint analysis and gives you a head start on your testing. Typically, I'll take the Reddit API, which just has hundreds and hundreds of endpoints. And you can sort of do it that way, where you just select the endpoints, stick them into the Hacking APIs GPT, and it'll give you back some great results. And there have been plenty of times when, looking at the review that it's done, I think those are the ones that I would have picked out anyway, without all the time it would take me to go through all of those. So that's pretty helpful. So Corey, just before you move ahead, is it possible to show something else on this API? Because, you know, what you showed us already is fantastic. But do you have any other crazy, cool stuff that you can show us? Yeah, there are a few other cool things that you can do. So on top of just copying and pasting endpoints, you can provide a .json specification of an API, and it'll review that file in a very similar manner. In addition to that, I do have functionality built in to interact with JWTs, which is a very common form of API authentication token that's passed back and forth between the server and end user for authorization and authentication. And then I've also built in some payloads. So if you needed some ideas for starting out on fuzzing the API, you could also use this action down here in order to generate some of those. But personally, I do like the JWT one a lot. 
Let's see... well, once we begin interacting with the API in Postman, we'll get a JWT out of that. And we can come back to this, stick it in here, and it'll decode the JWT and perform some analysis if it finds some risky information or information disclosure in it. That would be great to see. Okay, so let's hop over into Postman and begin interacting with our API. Okay, back over in the Swagger Editor, once you save your API documentation, you can import that over into Postman. Okay, and the great part about Postman is it really helps you organize and visualize the APIs that you're interacting with. Before mitmproxy2swagger, reverse engineering your own API documentation would involve proxying a lot of traffic into Postman and then renaming and reorganizing it. So it's really nice to go through that process and be able to have all of your API requests in one place. It's great. I mean, all you were doing is interacting with the website and you grabbed all this data. That's great. Yep. So the first thing we're going to do is log in, and we can test out some other requests before we do that, but eventually everything is going to point you back to logging in. When I import new documentation, there are a few first things that I do. One, in this case: I'm going to rename the documentation to crAPI docs. And when you're working with a collection inside of Postman, you often want to check, you know, is authorization in place? Currently, there's nothing set up here. How about the variables? So the variable for this already has the value that we're using. If you're interacting with a web app, this could be blank for you to fill in and you would just put in the base URL, so everything leading up to what you find in the endpoints. And then you can test out a request, and this'll require authorization, so we shouldn't get back a good response. 
So we see we get back a 404 here, which points us to logging in and becoming an authenticated user. So we can see here that we're going to need the email address that we used during setup. If you don't remember it, there's this request right above for signing up, so if this didn't work out, we could use that. Then our bad password that we set up earlier, which passed the requirements, and we can send that over to see if that works out. And you can see that we get back this token. Just by looking at this, starting with ey, we can see that it's broken up into three parts by periods. That gives us the indication that we're working with a JWT. And so before we continue on, we could check this out using the Hacking APIs GPT to see what it can tell us about this. So we should be able to paste in our JWT and the Hacking APIs GPT should be able to decode that, show us what's inside of the payload and let us know if there's anything concerning, either about the header and how the JWT is configured or any data contained in the payload. So we see an email address and our role. Not necessarily a big, surprising information disclosure, but you can see anything that a developer puts in there. And so we've seen passwords, we've seen admin emails and other juicy information that could be leveraged in other attacks. So based off of that, we can go back over and interact with the API some more. Some other things you could do with a JWT: you could edit information. So you could simply say something like, change the user role to admin and re-encode the JWT back to Base64. And then you could take that and paste it back in in a request and see how it handles that. The problem with doing that is it's not going to have a valid signature. And so if the API doesn't have controls in place to check for that, which a lot of them do, then it could just accept that new user role that you provide. Just thinking through a possibility there. 
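What the GPT does with the token here, decoding the header and payload without verifying the signature, is easy to reproduce locally. The sketch below builds a sample HS256 token with a made-up secret (everything in it is illustrative, not a real crAPI token), reads the claims, then shows why the role-tampering trick Corey describes fails against a server that actually checks signatures.

```python
# Decode and tamper with a JWT locally. A JWT is three Base64url parts
# joined by periods: header.payload.signature. Reading the payload needs
# no secret; forging a valid signature does. All values here are made up.
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustrative; a real server never reveals this


def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).decode().rstrip("=")


def b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))


def sign(signing_input: str) -> str:
    return b64url(hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest())


def make_token(claims: dict) -> str:
    head = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(claims).encode())
    signing_input = head + "." + body
    return signing_input + "." + sign(signing_input)


token = make_token({"sub": "user@example.com", "role": "user"})

# Anyone can read the payload without knowing the secret:
claims = json.loads(b64url_decode(token.split(".")[1]))

# Tampering: change the role, re-encode, keep the old signature.
head, body, sig = token.split(".")
claims["role"] = "admin"
forged_body = b64url(json.dumps(claims).encode())

# A server that actually verifies signatures rejects the forgery:
forged_is_valid = hmac.compare_digest(sign(head + "." + forged_body), sig)
```

This also shows why every JWT you see starts with ey: that is just the Base64url encoding of a JSON object's opening brace and quote. The tampered token only works against APIs that skip signature validation, which is exactly the control Corey says most of them do have.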
But back in here, we have our token. We will take this and then we can add it at the top level to our authorization. And if you just wanted to quickly interact with the API, this would be one way to do it. You could just come in here, paste the token and then go from there. But we want to leverage some of the cool functionality that's built into Postman. So we can actually change this into a variable, and we can do that at the collection level. You can make it an environment variable if it's going to carry over between collections. But here we'll just set it up. Yeah, typically with sensitive data, you're going to want to handle that differently. You'd want to do that at the global level. You can put it in and make it a secret. That way it's not exposed as it's about to be. But we could just save this variable as token and then the current value is going to be that. And we'll save. And now we should be able to make some of those other requests that are going to be using our token. And we just want to check that authorization is set to inherit from parent, which it should be across the board. Sometimes if you import documentation from a web app, you may find that that's not as expected. But the last thing that you want to do now that you have this variable set is go over to authorization. And here you're going to use the brackets to find your token and then save that. Go back over to a request that's going to inherit from parent and use that variable to get back a good response here. So this is showing that we've made a successful request with our token. You're interacting with a live API now, which is obviously that dodgy site on your computer, but that would be like interacting with a real site on the Internet, right? Correct. Yeah, so here we can see the response code 200, the time it took and the amount of data. And then here we're seeing the response body and everything we got back there. 
This base URL, we can hover over it to see that we're still interacting with crAPI. And now comes the arduous part of working with the API: making sure that all of these requests are working. Again, this is just another form of using the app as it was designed; now we're using the web API as it was designed. So we can go in here. This is one of the requests that the Hacking APIs GPT called out, so we can interact with it, and we can see that when it has a successful response, it comes back with our car ID, the GPS location of the car, and our full name. That's a juicy request, but we need to find where this token is also used and see if it's tied to other users or something like that. If there were a way to search for the vehicles of other users, or something along those lines, that's the sort of request that would come to mind to begin experimenting with. But on your way there, you just want to go through your requests, use the API as it's meant to be used, check out the responses, and see what happens when a response is not successful. Pay attention to your response codes so you get an idea of how this API has been designed to respond in certain situations, like if you provide a bad token. So let's go back to this request really quick. You can see that if, hypothetically, we were going to try to brute force this, we're going to get back 400 Bad Request responses. So now we have a little more information about how this API works. All of that is going to be very useful when you're interacting with an API. I mean, if you're a bug bounty hunter and this is your focus, you're working on one big API and trying to find weaknesses that you can provide a proof of concept for, so learning the API and using it as it was designed is just going to be super important.
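Going through every request and recording its response code, as described above, can be scripted as a small baseline check. A sketch with a pluggable `fetch` function so it runs offline here; the endpoint names and stubbed status codes are made up, not crAPI's real routes:

```python
def baseline(endpoints, fetch):
    """Call fetch(path) for every endpoint and bucket the results,
    so anything that is not a 2xx stands out for a closer look."""
    report = {"ok": [], "client_error": [], "server_error": []}
    for path in endpoints:
        code = fetch(path)
        if 200 <= code < 300:
            report["ok"].append((path, code))
        elif 400 <= code < 500:
            report["client_error"].append((path, code))
        else:
            report["server_error"].append((path, code))
    return report

# Stubbed responses standing in for real HTTP calls during the walkthrough.
fake_api = {
    "/user/dashboard": 200,
    "/user/change-email": 400,
    "/mechanic/report": 500,
}
report = baseline(fake_api, fake_api.get)
print(report["client_error"])  # [('/user/change-email', 400)]
```

In practice `fetch` would make a real authenticated request; the point is that the 400s and 500s are exactly the requests worth fixing before you start testing deep.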
So now we're in the community endpoint, API v2 community posts, and we're looking at the forum here. This shows everyone that posted back over in the crAPI forum. Here's a post that we made: it has the ID of the post itself, the title, and the content. But once you start digging in, you can see that for anyone making a request to this community page, the response also includes their nickname and their email. Maybe those are fairly standard, but it's also including their vehicle ID along with their profile picture. This is now known as broken object property level authorization; in the 2019 version of the OWASP API Security Top 10, it was known as excessive data exposure. What's happened here is that the developers programmed this so that the data would be filtered out by the front end: the backend just sends the full data object, and it's filtered by the time it reaches the webpage. We can dig through this and see plenty of other users in here, and with other users, we see other vehicle IDs. Just to demonstrate, this is what it looks like when you interact with the API. If we go back to crAPI and over to the forum page, this is what it looks like on the front end. You're not seeing the vehicle IDs; all you're seeing is the data you'd expect to see: the title of the post, the user, their picture, and the comment they left. So there's a pretty big difference once you start interacting with the API and have an understanding of how it works. Then you can find the vehicle ID and go back to our location request. I just want to say, in the real world, you're finding stuff like this all the time, right? Yeah, excessive data exposure is a fairly common finding, and often it's what helps you provide a better proof of concept.
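The gap between what the backend returns and what the UI renders can be made concrete. A sketch with a made-up post object modeled on the response described above; the field names are illustrative, not crAPI's exact schema:

```python
# Full object as the backend returns it (illustrative field names).
api_post = {
    "id": "a1b2c3",
    "title": "Title of the post",
    "content": "Hello world",
    "author": {
        "nickname": "Hapi Hacker",
        "email": "hapi@example.com",        # leaked: never rendered by the UI
        "vehicleid": "649acfac-10ea-43b3",  # leaked: the key to the location API
        "profile_pic_url": "/pics/1.png",
    },
}

# Fields the front end actually renders on the forum page.
UI_FIELDS = {"title", "content"}
UI_AUTHOR_FIELDS = {"nickname", "profile_pic_url"}

def leaked_fields(post: dict) -> set:
    """Return every property the API exposes but the UI never shows."""
    extra = {k for k in post if k not in UI_FIELDS | {"author", "id"}}
    extra |= {f"author.{k}" for k in post["author"] if k not in UI_AUTHOR_FIELDS}
    return extra

print(sorted(leaked_fields(api_post)))  # ['author.email', 'author.vehicleid']
```

Diffing the raw response against the rendered page like this is a quick way to spot broken object property level authorization: anything in the "leaked" set is a candidate for a proof of concept.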
So, capturing that data: we do see cases where a complex ID is used, and at some point in the development cycle that's taken as a security measure in and of itself. That's a flaw, security by obscurity: "we have a complex token, therefore no one's going to be able to find it." So our goal as attackers becomes: how do we find it? Oftentimes an excessive data exposure is hiding somewhere in the API where just too much data has been exposed. And if not, maybe there's a business purpose, which gets into business logic flaws. Maybe they thought it would be useful if the mechanics could have this vehicle ID and use it in their own way. Well, okay, maybe that mechanic API has a weakness I can go attack and find the vehicle ID through that route. It's the combination of finding the requests that interact with sensitive resources and finding the requests that expose those resources. And here we can see another happy hacker, not our happy hacker, but another one that's out there. We can see the vehicle ID and the GPS location of their car: likely something that, if they're using a mechanic or this web app, they don't want exposed to every person out there using it. So, Corey, that's amazing, what you've just done. If I understand correctly, and correct me if I'm wrong, and give us the correct term: you have pasted another value in, and that allows you to see the data of someone else, right? That's right. We found this resource ID, we've pasted it into this API request, and we've found the GPS location of their car. Corey, you mentioned earlier perhaps using Burp. Is it possible to demonstrate that as well? Yeah, definitely. We could set up a proxy, send this request over to Burp Suite, and then interact with it using Repeater.
For me, when I'm deciding where I want to work between Postman and Burp: in the book, I actually refer to it as testing wide versus testing deep. Postman is a great tool to test wide. We can see our whole API laid out over here on the side and set up all of the requests properly. Postman has some cool functionality built in, like the Collection Runner. If I wanted to test every single request in one go, the Collection Runner will go through and make every request and give you back the response code; I could do a quick demo of that too and show how it works. If there's a broken request or something that needs more attention, you can see that. In addition, when you're thinking of testing wide across an entire API versus testing deep into a single request: when you're working with APIs, one of the top vulnerabilities is improper inventory management, which means that versioning hasn't been handled correctly, and the API is letting you access an unsupported version. This would be similar to an unsupported-software vulnerability. It doesn't mean that something's currently wrong, but it does mean you can dig into it more. I like to call it the gateway into additional findings for APIs, because once you find an unsupported version, you can start to check things like GitHub issues. For example, maybe version one had a SQL injection vulnerability that was fixed in versions two and three, but by testing wide you've found that a few requests still use that legacy version one of the API. Now you can go back in and test those for SQL injection. Just that one small change of testing for the version across the entire API is going to save you a lot of time: testing wide makes those unsupported versions stand out a lot more for focused testing.
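Checking for improper inventory management often starts with swapping the version segment in each known path and seeing which variants still respond. A minimal sketch; the path layout and candidate version list are assumptions for illustration, not crAPI specifics:

```python
import re

def version_candidates(path: str, versions=("v1", "v2", "v3")) -> list:
    """Given one known API path, generate the same path under other
    version segments, so each can be requested and compared."""
    return [re.sub(r"/v\d+/", f"/{v}/", path) for v in versions
            if f"/{v}/" not in path]

known = "/identity/api/v2/user/dashboard"
print(version_candidates(known))
# ['/identity/api/v1/user/dashboard', '/identity/api/v3/user/dashboard']
```

Any candidate that answers with a 200 instead of a 404 is a legacy or unreleased version worth digging into, which is exactly the "gateway finding" described above.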
So I'll just demo the Collection Runner really quick. You can go to the top level of your collection and go to Run collection. This will show you every single request that it's going to make. We have this set to run manually, and down here is the button to run the crAPI docs collection. It will go through, make every single request, and show you in a neat form how they're responding. You can see we have some 400s and some 200s. When I'm starting out using an API and I want to test it as it was designed, I want to go through and fix a lot of these bad responses. If we're getting a 500, we want to explore that and see why it's happening. For a 400, a 500, any of those, you want to dive deep into that individual request and look through the parameters. For some of these, like the reset email, you can imagine this change-email request is probably getting a 404 because there isn't an email set up in the parameters already. Testing that would mean selecting it, getting into the exact request, setting up those parameters, and then saving the request so that it succeeds with a 200. You want to see that across the board, so you know you've tested the API properly at a baseline and can dig much deeper into those requests. But once you've done that and now you want to test deep, you're going to want to transition over to Burp Suite. I can show this off in Burp Suite Community Edition, the free tier. What we need to do is set up our proxy in Postman and have it forward that traffic over to Burp Suite. Once you've set that, you can close it out and it'll save. We can move over to Burp Suite and go to our Proxy tab; this is set up on port 8080, and we can see there's nothing in the HTTP history yet. So once you've built out your API and you're getting successful requests across the board, I like to take that and run the collection with the proxy
on. That will send over your full scope of testing to Burp Suite, and we can see those requests populate. Then, once you have the request you want to dig deeper into, you can send it over to Intruder if you're going to try to brute force a payload. If this had been a simple numerical value representing the resource ID, maybe we'd send it to Intruder and iterate through those numbers: if the value was 85, we'd test 75 through 90. I'm only using a small range of numbers because Burp Suite's free-tier Intruder is throttled, so you stop seeing decent results after a few dozen requests. Here we can see that same vulnerable request as before. This is looking for the vehicle location of an individual user, and we have the ID here for a happy hacker. We can actually also see our community requests, with all of the vulnerable IDs captured in this excessive data exposure. So let's move that over to Repeater so we can have a better view of it. We'll send our vulnerable request over here; this is the excessive data exposure vulnerability. Okay, and now we can see our response. It comes through a little clearer over here than in the Proxy window or HTTP history. We can go down and actually search by vehicle ID to make this part of the process faster. Here we're seeing several users: happy hacker one, Robot. So let's find Robot's car's location so that we can steal it. Just joking, of course; don't go and do anything that you're not supposed to do. Of course, proof of concept. That's right. So here we can just paste the value, and now we know the exact location of Robot's vehicle. Now I have to ask the question again, Corey. This is real-world stuff, right? You see this all the time. Yeah, that's right. We see excessive data exposure; that's fairly common. In addition to that, authorization vulnerabilities plague APIs.
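The numeric brute force described above (a known ID of 85, testing 75 through 90) is what Intruder's number payload automates. The same idea in a few lines, with a stubbed check standing in for real requests so it runs offline:

```python
def id_window(seed: int, below: int = 10, above: int = 5) -> list:
    """Candidate resource IDs around a known value, like an Intruder
    number payload running from seed-below to seed+above."""
    return [i for i in range(seed - below, seed + above + 1) if i != seed]

# Stub standing in for "does GET /location/<id> return 200?" Only use
# against targets you're authorized to test.
valid_ids = {77, 82, 88}
hits = [i for i in id_window(85) if i in valid_ids]
print(hits)  # [77, 82, 88]
```

With long random IDs like crAPI's vehicle UUIDs, this window approach is hopeless, which is exactly why the excessive data exposure in the community endpoint matters: it hands you the valid IDs directly instead of making you guess.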
There's something about being authenticated to an API where the security controls stop there, and it's often forgotten about. Maybe it's the long, complex tokens that are being used, or something like that. But oftentimes the tests for whether user A can interact with resources that don't belong to them show that the controls are missing, and that can lead to pretty serious findings: passports, drivers' licenses, medical IDs. All of these things could be exposed over a single request just like this. Corey, I really want to thank you so much for sharing. You've written this fantastic book that shares your knowledge. Lots and lots of hard work goes into writing a book like this, I know. But you've also created free training and done the YouTube videos. I really want to thank you for sharing and giving back to the community. Yeah, of course. Thanks so much for having me on. I really enjoy doing this, so it's great. So everyone who's watching, put your comments below. What else would you like Corey to demonstrate? Is there a specific topic you're struggling with that you'd like him to come and demonstrate or talk about? Otherwise, go and sign up for the free training link below. Once again, you have no excuse now: you have free training, and it's all available to you. So go make the most of it. Corey, thanks. Thanks so much.
Info
Channel: David Bombal
Views: 81,055
Keywords: hacking api, api, api hacking, api hacking tutorial, api hacking bug bounty, api hacking 101, api hacking full course, api hacking tools, api hacking with postman, api hacking for beginners, api hacker, api hacking demo, api hacking kali linux, api hacking course, hacking an api, hack api, owasp api top 10, bug bounty, hacking apis no starch press, hacking api no starch, hacking api book, hacking apis corey ball, corey ball hacking apis, rest api, hacking api with postman
Id: TcMP_-clMAo
Length: 53min 46sec (3226 seconds)
Published: Sun Feb 25 2024