How to scan a website for vulnerabilities using Burp Scanner

Captions
In this tutorial we'll cover how to scan a website for vulnerabilities using Burp Scanner, how to monitor what scans are doing while they're running, and how to export results.

There are two basic ways of performing vulnerability scans with Burp. First, you can let Burp do an end-to-end managed scan, meaning it crawls to discover content and functionality and then audits for vulnerabilities. Second, you can select individual items to be audited.

Let's look at the first way: performing an end-to-end managed scan. To launch one, go to the Dashboard and click "New scan". This opens the scan launcher. At the top you have an option for the scan type: you can choose a crawl and audit, which is the default, or just a crawl to discover content. We'll leave it on crawl and audit. Then you need to enter a URL, or multiple URLs, to scan, so I'll enter one here. Under protocol settings you can choose whether to scan using both HTTP and HTTPS, which is the default, or only the specified protocols you've given. In general, unless you're absolutely sure whether an application uses both protocols and has content and functionality on each, it's best to leave the default setting.

That's all you actually need to do to perform a default scan: give it the URL and click OK. But let's walk through some other options you can use to fine-tune the scanner's behavior and optimize it for different purposes. Down at the bottom we have the detailed scope configuration, where you can control in more detail exactly what is in and out of scope. Burp will always begin crawling from the start URLs you've given it, but here you can specify additional URL prefixes that should be included, or URL prefixes that should be excluded. So if there are other domains or folders you want to include in the scan, you add them under the included prefixes, and any parts of the application you don't want scanned go under the excluded prefixes.
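As an aside for readers who want to script this rather than use the UI: Burp Suite Professional also exposes a REST API whose scan endpoint mirrors this launcher. The sketch below is a rough, hypothetical illustration of the request body for a crawl-and-audit with scope rules, assuming the API is enabled at its default address (http://localhost:1337) and that the field names match your Burp version; it only builds the JSON rather than sending it.

```python
import json

# Hypothetical sketch of the JSON body for Burp's REST API scan endpoint
# (POST http://localhost:1337/<api-key>/v0.1/scan). The field names are
# assumptions based on Burp Suite Professional's API and may differ
# between versions -- check your own instance's API reference.
payload = {
    # The start URLs the crawler will begin from
    "urls": ["https://example.com/"],
    # Detailed scope: extra prefixes to include, and prefixes to exclude
    "scope": {
        "include": [{"rule": "https://example.com/app/"}],
        "exclude": [{"rule": "https://example.com/app/logout"}],
    },
}

body = json.dumps(payload)
print(body)
```

The URL and scope rules here are placeholders; in practice you would POST this body with your API key in the path and read the task id back from the response's Location header.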
The scan configuration section is where you can configure a mass of other options to control how the scanner works. One way to do this is to add a new configuration, specifying whether it's for crawling or auditing. Let's start with crawling. Here we have sections for various areas of configuration, and you can expand each of these to define it. Under crawl optimization you can control the link depth and configure the crawl strategy, choosing whether you prefer speed or deeper coverage. Under crawl limits you can cap how long the crawl should take, the maximum number of locations, or the maximum number of requests. You can control how the crawler handles application login functions: by default, if Burp discovers a user registration function it will attempt to self-register a user, populate the form, and use those credentials to log in with during the scan. You can also have it trigger login failures, meaning it submits invalid credentials to reach functionality or behavior that sits behind a failed login, such as a forgotten-password feature. You can control how Burp handles application errors during the crawl, for example whether to pause or stop the crawl if there are too many timeouts because the application isn't responding properly, and there are some other miscellaneous options as well.

If we create a configuration for auditing, it works in the same way but with a different set of options. Under audit optimization you can configure whether the scanner should optimize for a faster scan or a more thorough one, which determines the kinds of payloads that are sent. You can also configure the audit accuracy, telling Burp whether to favor fewer false negatives or fewer false positives. There are some scan checks, certain types of techniques that Burp can use to find vulnerabilities, which
are inherently prone to false positives because of the kind of evidence they gather. One example is a SQL injection payload which, if successful, causes a time delay. Time delays sometimes happen anyway because of varying latency in the application's responses or other delays, and they can occasionally line up with Burp's payloads, so Burp will repeat those payloads several times to establish a correlation. This is where you can configure how strong a correlation to look for, and whether to favor false positives or false negatives.

You can configure the kinds of issues Burp will report, selecting them by the type of scan technique being used. At one extreme we have passive techniques: issues that can be reported just by examining the application's normal traffic, sending only normal requests. At the other extreme we have intrusive active checks, which involve much heavier payloads being sent to the application; in some applications these can cause damage if the application is vulnerable. In between you have light and medium active checks. Burp Scanner also does JavaScript analysis, using static and dynamic analysis techniques on the rendered page, on the DOM, to discover vulnerabilities, and you can configure whether to do that. If you prefer, you can configure the individual vulnerabilities to be checked for, and the specific detection methods used to find them; as you can see, Burp can scan for masses and masses of different vulnerabilities. You can configure how application errors are handled during the audit, similarly to during the crawl, so you can skip insertion points or skip audit items if too many errors occur. You can configure the types of insertion points that are used, so you can control whether Burp places payloads into locations like URL parameters, body parameters, and other types. You can optionally tell Burp to move some parameters to different locations and
audit them again there. This is occasionally useful: some applications have a parameter that normally appears in the URL, with something like a web application firewall filtering payloads that appear there; if you switch that parameter into the body, it still reaches the application, which is agnostic as to where it appears, but it bypasses the WAF, so you may be able to reach a vulnerability. This makes the scan take a fair bit longer, because all the work is done multiple times for different parameters, but sometimes it's really useful. You can configure whether to ignore certain types of insertion points: there are some well-known insertion points, recognized by their name or type, that belong to platform frameworks and features we don't normally expect to see vulnerabilities in, and you can configure those here. You can configure how Burp handles frequently occurring insertion points. Some insertion points, like cookies, involve the same parameter appearing in every request, and while Burp could do a full audit of it every time it appears, that's generally not very efficient. So by default, for all kinds of insertion points, if Burp sees them occurring frequently it begins with a normal, thorough audit, and if they prove to be boring, meaning the same insertion point always leads to the same vulnerabilities or none at all, it gradually scales back to a faster scan to be more efficient on those insertion points. There are some other options for insertion points, such as whether to use nested insertion points to test one type of data inside another, and you can configure some details of how Burp does JavaScript analysis: whether to use dynamic or static techniques, and whether to request missing JavaScript dependencies that are needed to render the page.

One thing worth pointing out about how the scan configuration options work is that
within each of these dialogs, each section is collapsed by default, and that means it's not defined; if you expand a section, you're defining just the options for that section. So if you save a configuration with one section expanded, you're defining only the settings for that section. What that means is you can add multiple scan configurations that each configure settings in specific areas: you might start with the general base configuration you like to use, then add further configurations after it to fine-tune different areas of the options. Burp also has a configuration library, which contains various built-in predefined configurations for different purposes, and you can easily create your own custom configurations, save them to the library, and quickly select them later.

Under application login you can provide Burp with credentials for known accounts, which will be used during the crawl and audit. If you provide these, then when Burp finds a login form it will log in using those credentials, and any new content or functionality that is discovered will be crawled and audited within that user's context. You can add multiple sets of credentials: you might have one for an ordinary user, one for a manager role within the application, and one for an administrator, and Burp will use each of them separately to discover content.

Finally, under resource pools you can configure how Burp makes use of your network resources. You can set the maximum number of concurrent requests the scan can make, or configure a delay between requests if you want to impose some throttling. This is useful if you have a very fragile application that can't be scanned quickly and you need Burp to slow down, or if you're running a lot of scans in parallel and don't want to overwhelm your own machine or network connection, since you can limit the total amount of traffic generated by all the scans together.
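Continuing the scripting aside, this layering of named configurations from the library, plus application login credentials, can also be expressed in the REST API's scan request. This is a hedged sketch: the field names (scan_configurations, application_logins) and the built-in configuration names are taken from Burp Suite Professional's API as I understand it, the credentials are obviously placeholders, and you should verify everything against your own version's API documentation.

```python
import json

# Hypothetical sketch: layering named scan configurations from the
# configuration library and supplying application login credentials in
# the body of Burp's REST API scan request. Field and configuration
# names are assumptions -- check your Burp version's API reference.
payload = {
    "urls": ["https://example.com/"],
    # Configurations apply in order; later entries fine-tune earlier ones
    "scan_configurations": [
        {"type": "NamedConfiguration", "name": "Crawl strategy - fastest"},
        {"type": "NamedConfiguration", "name": "Audit checks - light active"},
    ],
    # Known accounts: Burp logs in with each and crawls/audits the
    # content discovered within that user's context
    "application_logins": [
        {"username": "user", "password": "placeholder-1"},
        {"username": "admin", "password": "placeholder-2"},
    ],
}

print(json.dumps(payload, indent=2))
```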
Anyway, to launch a scan all you really need is the URL, so that's what we'll provide, and then we click OK to launch the scan. When we do that, you can see a new entry in the task list on the dashboard representing the scan, described as a crawl and audit of the URL I provided. In the status line the scan progresses through various phases; it tells you what it's doing and gives a progress bar for each phase, so you can see how the scan is progressing. You also have the request count, the number of errors encountered, and the number of locations discovered during the crawl; when the scan moves on to the audit phase, you can see the number of issues that have been discovered. In the top right is the issue activity log, where vulnerabilities appear in real time as they're discovered, so you can monitor the scan's progress. There's also an event log, where various information, errors, and any other details you might need to be aware of will appear.

As well as the issue activity log, the other main place issues get reported is the Target tab. This gives you a site map of all the content that has been discovered, lets you see the individual requests and responses found during the crawl, and also reports all of the vulnerabilities that were found. You can drill into individual folders and see the vulnerabilities that exist just within those paths, or look at everything together. The way Burp reports issues is to give you all of the information, all of the evidence, you need to understand the issue. Here we have a cross-site scripting vulnerability: we can see the full request with the highlighted payload that was sent, and in the response we can see the payload came back, so the site is vulnerable. Different types of vulnerability will report different evidence.
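Incidentally, the progress figures shown on the dashboard (phase, request counts, errors, locations, issues) are also what a status query returns if you launched the scan via the REST API. The JSON below is a trimmed, hand-written example of roughly what GET /v0.1/scan/&lt;task_id&gt; might return; the field names are assumptions that may differ between Burp versions, so treat this as a shape sketch, not a schema.

```python
import json

# Hand-written, hypothetical sample of a Burp REST API scan status
# response -- the real response carries more fields, and names may vary
# by version. It mirrors what the dashboard's task entry shows.
status_json = """
{
  "scan_status": "auditing",
  "scan_metrics": {
    "crawl_requests_made": 1310,
    "crawl_network_errors": 2,
    "crawl_unique_locations_visited": 42,
    "audit_requests_made": 8500,
    "issue_events": 7
  }
}
"""

status = json.loads(status_json)
metrics = status["scan_metrics"]
summary = (f"{status['scan_status']}: "
           f"{metrics['crawl_unique_locations_visited']} locations, "
           f"{metrics['issue_events']} issues so far")
print(summary)
```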
Here we have one that uses the Collaborator to detect an external service interaction: we sent a payload containing a URL on the Collaborator, and that caused a DNS interaction for that domain name.

The second way of performing vulnerability scans is to send individual items to be audited. Anywhere in Burp, you can select one or more requests, choose Scan from the context menu, and send them for scanning. I'll go to Proxy, open Burp's browser, and visit a URL, forwarding all the traffic through the proxy; then in the history we can see the requests that happened, and I can select one of these and choose Scan. If we do that, the scan launcher opens in the normal way, but we now have a third option, which is to audit the items I selected, and we can see that the one URL I selected is in the list. All the other options are the same, except for application login: because we're not going to do a crawl, there are no login settings. To audit that single item we just click OK and the audit begins; if we go to the dashboard, we can see this new task doing an audit of that single item. If we do this again, selecting a different item and choosing Scan, the scan launcher opens for this different item. The first time we launched a scan this way, Burp created a new scan task to do the work; we now have the option of adding this new item to that same task, or creating a new task for it. What that means is we can select different scan configurations to use for different items. I'll go to the configuration library and select the light active audit checks; if I launch this scan and go to the dashboard, you can now see there are two audit tasks, the first using the default configuration and the second using the "Audit checks - light active" configuration. Now, having done that, we can if we want select a different item, and when we look at
the scan context menu, we can see we can open the scan launcher, or send this item directly to one of our existing tasks. This is a really nice way of working: you can have different tasks set up and running with different configurations and, as you work manually, direct individual requests to a specific task with a specific configuration.

That's all the ways of launching scans, so now let's look at how you can monitor scans as they progress and view the results. On the dashboard entry for a task, you can click "View details", or this little icon, to open more details about the scan in progress. On the Details tab we have pretty much the same information shown on the face of the dashboard: requests, errors, locations crawled, and the number of issues discovered during the audit. You can also open the scan's configuration and change some of its settings while it's in progress. If you go to the Audit items tab, it shows all of the items discovered during the crawl, and you can view their progress through the different phases of the audit: the passive phases, the active phases, the JavaScript work. You can see the full URL, the number of issues reported for each item, and the numbers of requests and errors, and if you double-click an item you can see its base request and response. This really gives you complete visibility into what Burp is doing during the scan. There's the Issue activity tab, which contains the same detail as the dashboard but just for this one task, this one scan, so you can select individual issues and drill into the evidence to examine them as before; and you also have the Event log, which contains the event log entries for just this task.

Finally, if you want to generate a scan report containing the issues that were discovered, you can do that anywhere you see issues: select one or many and choose "Report selected issues" from the context menu. You can also
in the site map select a folder or a host, go to Issues, and choose "Report issues for this host". If we do that, the reporting wizard opens. We can choose HTML or XML format; select which types of details to include in the report; choose whether to include full HTTP requests and responses, or just the relevant extracts, to avoid the report getting too big; and select which issue types to include, if you want to exclude some of them. Then you can save the report. If we give the report a name and save it, the report is generated, and we can then open it and view it. Here we can see the report: at the top is a summary showing the number of issues reported, by severity and confidence, and we can drill down and see all of the individual issues. By default, each one includes all of the background and remediation detail and, where applicable, the requests and responses with the interesting parts highlighted.

So that's how to scan a website for vulnerabilities using Burp Scanner, how to monitor what scans are doing while they're running, and how to report results.
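If you export the report as XML, it's straightforward to post-process. The fragment below is a minimal, hand-written sample in the general shape of Burp's XML issue report (the real export carries far more detail per issue, and the exact schema may vary), just to show how a saved report might be summarized by severity.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Minimal hand-written fragment shaped like Burp's XML issue export;
# the real report has many more fields per <issue>. Used here only to
# demonstrate post-processing a saved report.
sample_report = """
<issues>
  <issue><name>Cross-site scripting (reflected)</name>
         <severity>High</severity><confidence>Certain</confidence></issue>
  <issue><name>External service interaction (DNS)</name>
         <severity>High</severity><confidence>Firm</confidence></issue>
  <issue><name>Strict transport security not enforced</name>
         <severity>Low</severity><confidence>Certain</confidence></issue>
</issues>
"""

root = ET.fromstring(sample_report)
# Tally issues by severity, like the summary at the top of the HTML report
by_severity = Counter(issue.findtext("severity") for issue in root.iter("issue"))
print(dict(by_severity))
```

In practice you would call ET.parse() on the saved report file instead of an inline string; the same tally could be extended over the confidence values to reproduce the full summary table.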
Info
Channel: PortSwigger
Views: 82,043
Keywords: burp suite, burp scanner, dafydd stuttard, portswigger, burp suite pro, burp suite professional, penetration testing, bug bounty, application security testing, web security
Id: VP9eQhUASYQ
Length: 19min 17sec (1157 seconds)
Published: Fri Jul 31 2020