CompTIA Security+ SY0-601 Exam Cram DOMAIN 3

welcome to domain three of the security plus exam cram series which focuses on implementation that is a really broad topic and in this session we're going to touch on every topic in domain 3 detailed in the official exam syllabus now the series is designed to help you prepare quickly and inexpensively really focusing on the key aspects of topics to help you reach in there and pick out the right answer more effectively on exam day in fact if you're new to the series you should probably go back and watch the intro because there's really some important strategy information there that will help you optimize your preparation efforts it's really intended to be the first resource you use in your prep to identify your weak spots and the last resource you review as a refresher before walking into the exam i'm glad you're here we have a lot to cover so let's get started [Music] welcome to domain three of this comptia security plus exam cram series now domain three focuses on implementation which is not only a very large domain in terms of content but also the most heavily weighted of all five now in this series we have six core videos the first is the intro video that includes my detailed exam preparation strategy that's sure to reduce your preparation time and effort and improve your results on exam day i expect to release five to ten shorter supplemental lessons around what proved to be more difficult topics for learners based on your questions over time and in this domain as with all remember that i am focusing line by line on the topics that are called out in the official exam objectives so what you see in the official exam objective document in terms of topics they're all going to be covered here the guide i suggest you use is the official security plus exam study guide and practice test bundle that includes a thousand flash cards and practice questions including online access to those practice exams so this really eliminates one more resource you need to buy namely 
practice exams to prepare for this exam you can get it on amazon.com i have a link to the least expensive option in the video description and one more thing worth pointing out is that the bundle includes a 10% exam discount coupon which effectively pays more than half your cost for the test bundle itself so great great value in that respect and as with all videos in the series a pdf copy of the presentation is available in the video description folks tell me it's helpful in reviewing for exam day so let's start with 3.1 given a scenario implement secure protocols so we're talking about protocols and use cases here and implement really means choose the right protocol for a use case but what we're going to do is tie the protocols to their use cases which should make choosing the right answer on exam day an easier task so i've put these into tables that will help you review these quickly you have the protocol on the left the use case on the right you see port and tcp udp protocol information where applicable listed as well and you'll see some protocols here that are probably familiar to you if you've been in it for long for example secure shell port 22 used for secure remote access now ipsec in particular i think you want to know the protocols and modes for ipsec because they're called out specifically in the exam objective so we'll touch on those briefly here in just a moment and i think it also makes sense for the exam to group by use case as you're memorizing so for example if we look at secure smtp imap4 pop3 s/mime those are all related to email right so we could maybe group those together in our memorization activity we could look at sip and srtp which are going to be focused on voip and internet telephony i mentioned ipsec protocols and modes we want to be familiar with so in ipsec we have authentication header and encapsulating security payload commonly abbreviated as ah and esp so ah provides a mechanism for authentication only and because authentication
header doesn't perform encryption it's going to be faster than encapsulating security payload now esp provides confidentiality encryption and data integrity and it can be used with confidentiality only authentication only or both of those together so it is configurable in that respect now ipsec modes are also important and the first is transport mode in which the ip addresses in the outer header are used to determine the ipsec policy that will be applied to the packet this is good for host to host traffic and in tunnel mode two ip headers are sent the inner packet determines the ipsec policy that protects its content this is going to be good for vpns and gateway to gateway security ipsec does come up in a couple of different areas in domain three i believe these are the most important elements you should understand and commit to memory to be ready for the exam so let's move into 3.2 given a scenario implement host or application security controls so we'll touch on endpoint protection database and application security os hardening and boot integrity so let's start with endpoint protection so we have antivirus which at the root of it is designed to detect and destroy viruses and what it does with the virus is configurable of course right so we may quarantine for example and then we have anti-malware which similar to antivirus stops threats but anti-malware focuses on all kinds of malware viruses trojans worms and potentially unwanted programs and then we have endpoint detection and response so this is integrated endpoint security that combines real-time continuous monitoring and collection of endpoint data as well as rules-based automated response and analysis capabilities which you'll sometimes hear termed investigation even these capabilities are generally delivered together in a single solution today and they usually go beyond antivirus signature based protection to identify potentially malicious behaviors what we'd call zero day behaviors or emerging threats and
they're going to use machine learning and artificial intelligence to do that how that happens will vary by vendor but ai and machine learning are going to be common elements of that edr solution all right data loss prevention so this is a way to protect sensitive information and prevent its inadvertent think unintentional disclosure and dlp solutions can identify monitor and automatically protect sensitive information in documents and we're really talking about protecting personally identifiable information protected health information customer information any sort of sensitive business data and with dlp software typically you can create policies that you can apply to email to your sharepoint portals to cloud storage and in some cases even to databases and these policies will typically have canned formulas to identify different types of data like pii like protected health information like credit card data or they'll provide you the ability to write your own regular expressions to identify what you consider sensitive so let's talk firewalls for a moment so when we think about modern firewalls there's the web application firewall which protects web applications by filtering and monitoring traffic between a web application and the internet it typically protects web apps from common attacks like cross-site scripting cross site request forgery and sql injection now many of your web app firewalls will come with pre-configured owasp rule sets that protect against the owasp top 10 application vulnerabilities and then there are next generation firewalls these are deep packet inspection firewalls that move beyond port protocol inspection and blocking they add application level intrusion prevention and they bring intelligence from outside the firewall often in the form of consuming threat intelligence feeds for the exam i'd know the abbreviations as well so you'll hear a web app firewall referred to as a waf and then next-generation firewalls are abbreviated ngfw
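the custom regular expressions mentioned above for dlp policies can be sketched in a few lines of python — here's a minimal, hypothetical pattern for payment card numbers paired with a luhn checksum to cut down false positives (the pattern, function names, and sample text are illustrative assumptions, not any vendor's actual policy syntax):

```python
import re

# Hypothetical DLP-style pattern: 16 digits in groups of 4, with
# optional spaces or dashes between groups (illustration only).
CARD_PATTERN = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, commonly used to reduce false positives on card matches."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return substrings that look like card numbers and pass the Luhn check."""
    return [m for m in CARD_PATTERN.findall(text) if luhn_valid(m)]

print(find_card_numbers("order ref 1234, card 4111 1111 1111 1111, thanks"))
# → ['4111 1111 1111 1111']
```

real dlp products layer context rules (nearby keywords like "card" or "exp"), document classification, and policy actions on top of the raw pattern match, but this is the core of what a custom sensitive-data regex does.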
so let's talk intrusion detection and intrusion prevention and knowing the difference i believe will be important for the exam so with intrusion detection or ids it will analyze the header and the payload so think packet content and when a known event is detected a log message is generated so think alerting or notification now intrusion prevention on the other hand will analyze those whole packets both the header and the payload looking for known events and when a known event is detected the packet would typically be rejected so the difference between ids and ips is intrusion prevention takes action where intrusion detection is focused on notification now 3.2 calls out host based ids and ips so this really refers to intrusion detection or prevention in software form installed on a host generally on a server although you'll see host-based intrusion detection and prevention for client operating systems as well like windows 10 and also in the endpoint protection category is the host based firewall so this is an application firewall built into desktop operating systems like windows or linux now because it's an application it's going to be more vulnerable to attacks in some respects versus a hardware firewall and it's fairly important that we restrict service or process access to ensure that malicious parties are not able to simply stop the service or kill the process thus disabling the firewall and generally speaking you're going to see host-based and network-based firewalls used together in a layered defense we'll have network-based firewalls filtering traffic between network segments for example or between the trusted network and the internet and we might use host-based firewalls on the client operating systems to restrict port access to restrict lateral movement should an endpoint be compromised for example so let's talk boot integrity which ensures that hosts are protected during the boot process so all protections are in place once the operating system is fully
operational now unified extensible firmware interface or uefi is the modern version of bios that is more secure and it's necessary for a secure boot of the os that is to say the older bios cannot provide secure boot then there's measured boot where all the components from the firmware applications and software are measured and the information is stored in a log file and that log file is on the trusted platform module the tpm chip that's on the motherboard now trusted secure boot and boot attestation is available in operating systems like windows 10 that can perform a secure boot at startup where the os checks that all of the drivers have been signed and if they haven't the boot sequence is going to fail as system integrity has been compromised and this can be coupled with attestation where the software integrity has been confirmed for example bitlocker implements attestation and its keys are stored on the tpm chip on the motherboard so in the database category we have tokenization which is deemed more secure than encryption because unlike encryption it cannot be reversed so it takes sensitive data like a credit card number and it replaces it with random data that is the token that non-sensitive data is the token as an example many of your payment gateway providers store the credit card details securely and generate a random token but tokenization can help companies meet pci dss and hipaa compliance requirements because it's strong protection in that case now in hashing a database may contain a massive amount of data and hashing is used to index and fetch items from the database and this makes search faster as the hash key is shorter than the data and the hash function maps data to where the actual records are held incidentally like tokens hashes cannot be reversed it's a one-way operation and salting so salting passwords in a database is like salting passwords in an identity provider it adds random text to the password before hashing to increase the
compute time for a brute force attack incidentally this also renders rainbow tables ineffective a rainbow table is a table of hashes for common passwords that basically reduces compute time in a brute force scenario and because the attacker is unaware of what the salt value was that was injected into those passwords before hashing the values for the same passwords in their rainbow table won't match what would be present in a salted database of passwords now let's talk about application security and common controls to prevent attacks so input validation is typically number one on the owasp top 10 list of vulnerabilities and validating input ensures that buffer overflow integer overflow and sql injection attacks will not succeed against applications and their backend databases we want to use this anywhere data is entered using a web page or a wizard and ensure that the form only accepts data in the correct format within a range of minimum and maximum values incorrect format should be rejected forcing the user to re-enter the data for that field now cookies are used by web browsers and they contain information about your session and the problem with cookies is they can be stolen by attackers to carry out a session hijacking attack to minimize this probability setting the secure flag in website code ensures that cookies are only downloaded when there's a secure https session http headers are essentially designed to transfer information between the host and the web server with headers an attacker can carry out cross-site scripting attacks as it's mainly delivered through injecting http response headers we can prevent these by adding the http strict transport security header hsts this ensures that the browser will ignore all unsecure http sessions code signing uses a certificate to digitally sign scripts and executables to verify their authenticity and confirm they're genuine most of your commercial software companies today implement code signing and some intrusion
prevention systems will allow us to submit that code signing certificate as an entity that indicates any script or executable signed with it is safe and should be allowed to execute an allow list enables us to specify applications that should be allowed to run this can be set up using an application whitelist as it's typically called firewalls intrusion detection and prevention systems and endpoint detection and response systems will often give us an allow list feature where we can specify what should be able to run and on the flip side they may offer a block list or deny list feature that enables us to prevent specified applications from being installed or run by submitting that deny list to the specified solution and here again firewalls ids ips and your edr systems will typically have some sort of block list feature now let's talk about secure coding practices a developer who creates software writes code hopefully in a manner that ensures there are no bugs or flaws that's what secure coding means the intent is to prevent attacks like buffer overflow or integer overflow and we can take that a step further by conducting static code analysis this is where the code is not executed locally but it's analyzed by a static code analyzer an analysis tool the source code is loaded into the tool which then reports any flaws or weaknesses bear in mind that static analysis requires source code access on the flip side dynamic code analysis is where the code is executed and a technique called fuzzing is used to inject random input into the application and the output is reviewed to ensure appropriate handling of unexpected input this can expose flaws in an application before it's rolled out to production now this doesn't require source code access because it's conducted by actually running you know executing the code this can be a little confusing if this is your first time looking at static and dynamic code analysis so i'm going to present it to you another way and it's the way we
see it presented in the cissp exam where static code analysis is called static application security testing different name for the same thing it's analysis of computer software performed without actually executing the programs the tester has access to the underlying framework design and implementation and essentially requires that source code so they can analyze that source code with appropriate tooling or even manually dynamic application security testing that's dynamic code analysis that's where a program communicates with the web application you know executing the app the tester has no knowledge of technologies or frameworks that the application is built on and no source code is required you'll also see these referred to as in the case of static testing testing inside out because we're looking at the source code inside the app where dynamic testing is testing outside in so just another way to look at it hopefully that helps so manual code review is just what it sounds like it's reviewing code line by line to ensure the code is well written and error free this takes a fair amount of expertise it takes somebody who knows how to code and it tends to be tedious and time-consuming as one might imagine fuzzing is a process i mentioned earlier whereby random information is input into an application to see if the application crashes or memory leaks result or if it handles the unexpected input gracefully and returns an error we want to use this to remedy any potential problems within application code before a new app is released and when we do this before release this is a white box testing scenario but fuzzing can also be used in a black box scenario so after release and before production deployment we can check for improper input validation so we can use fuzzing to inject unexpected values into a form or a wizard and see if the application handles that unexpected input gracefully this is going to be a black box testing scenario because there's no need to
have knowledge of the framework or access to the source code so moving on to hardening we'll start with open ports and services we should really have only listening ports running that are absolutely necessary we should filter traffic coming inbound to those ports restricting access to only the networks that we expect traffic to be coming from and disable unnecessary ports and services entirely so we can handle our filtering through firewalls whether that's at the network level or a host-based firewall and we can disable running services or processes to kill unnecessary listening ports entirely and the registry so this is a windows construct and we want to restrict access to the registry and control updates through policy wherever possible a bad actor who gets unexpected access to the registry can add configurations that persist across reboots enabling them to establish persistence and to begin moving laterally in your environment and you always want to take a backup of the registry before you start making changes bad changes to the registry or overzealous cleanup so deleting too much is not an action you can come back from easily unless you have a backup of the registry and disk encryption so drive encryption can prevent unwanted access in a variety of circumstances and using full disk encryption or self encrypting drives will be a part of this strategy we're going to talk about both of these later in this module and then at the operating system level os hardening can be implemented through security baselines we want to establish a baseline of what is normal and expected at the operating system level and the good news about taking an action like that where we standardize security across an operating system say based on role like a client or an application server is that we can apply those through group policies or management tools mdm like intune and those baselines can implement everything i mentioned above here we can incorporate all those into one baseline configuration
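the open-ports guidance above — allow only the listening ports that are necessary and verify nothing else is exposed — can be sketched as a small audit script. this is a minimal illustration assuming a hypothetical approved baseline and port list; real hardening audits would lean on a vulnerability scanner or configuration management rather than a quick connect test:

```python
import socket

# Hypothetical baseline: ports the hardening standard says may listen.
APPROVED_PORTS = {22, 443}
# Ports to probe during the audit (illustrative list).
PORTS_TO_CHECK = [22, 23, 80, 443, 3389]

def is_listening(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(host: str) -> list[int]:
    """Ports that are open on the host but not in the approved baseline."""
    return [p for p in PORTS_TO_CHECK
            if is_listening(host, p) and p not in APPROVED_PORTS]
```

running audit("127.0.0.1") on a host would flag, say, telnet (23) or rdp (3389) listening where the baseline doesn't allow them — exactly the unnecessary-service condition the hardening step is meant to catch.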
there's patch management sometimes called update management this is about ensuring that our systems are kept up to date with current patches in the world of windows this means patching on patch tuesday once a month and a few out-of-band patches over the course of the year your patch experience is going to differ a bit in linux and vendor by vendor with your network devices but we want to evaluate test approve and deploy patches we definitely don't want to take all of our patches to production immediately to avoid unexpected impact there's definitely a bad patch that gets released every now and again and we need system audits to verify the deployment of approved patches to our systems this would include vulnerability scanning once a month at least and some sort of reporting system so we can ensure these patches are actually getting out there and we want to patch both native operating system and third-party apps i find that third-party apps are often overlooked oftentimes because there's not as much automation available to ensure that process is consistent and manageable in terms of effort it's not made as convenient for third parties and you want to apply out-of-band updates promptly especially when we see updates coming out of band from microsoft there's definitely a reason for that often times it's addressing zero day threats so you want to roll those out quickly we do find organizations without patch management will experience outages from known issues that could have been prevented simply by patching their systems drive encryption we have full disk encryption which is built into the windows operating system it's called bitlocker and bitlocker keys are stored on the tpm chip on the motherboard in the world of linux there's an fde implementation called dm-crypt although i don't think you'll hear about that one on the exam and then we have self-encrypting devices or self-encrypting drives where encryption is built into the hardware of the drive itself and
anything that's written to that drive is automatically stored in an encrypted form so a good sed should follow the opal storage specification which is a specification for self-encrypting drives and then there's the hardware root of trust so when we implement full disk encryption it uses a hardware root of trust for key storage it verifies that the keys match before the secure boot process takes place a tpm is often used as the basis for a hardware root of trust we'll talk about tpm in just a moment here so the tpm is the trusted platform module it's a chip that resides on the motherboard of the device it's multi-purpose like for storage and management of keys used for full disk encryption but that's not its only purpose it provides the operating system with access to keys for a number of functions and it prevents data access if the drive is removed and finally sandboxing an application is installed in a virtual machine environment isolated from our network in a sandbox we call it this enables patching you know testing ensuring that it's secure before putting it into a production environment we ensure that system is hardened before we expose it to other systems on our network and it also facilitates investigating dangerous malware so if we have a security breach if we have a compromised system we can take it offline we can isolate it in a sandbox and allow forensic investigation to continue in a linux environment this is known as a chroot jail moving on to 3.3 we're going to talk about implementing secure network designs we'll touch on load balancing network segmentation virtual private network and port security to name a few so let's start with load balancing and a network load balancer an nlb is a device used to direct traffic to an array of web servers application servers or really any other service endpoint we usually think of these in the context of web servers but it's really bigger than that and there are several ways we can set up a load balancer the two you
want to remember for the exam are the active active configuration where we have multiple load balancers acting like an array all dealing with traffic together as both are active and you can have an active active scenario with two load balancers but you can certainly have a larger array and i've worked with larger arrays myself but it's at least two and then there's the reality here that if we have say two load balancers and they're both running at eighty percent of capacity a single load balancer failure will degrade performance so we need to make sure that we have enough capacity in the active active configuration to sustain that loss now the active passive configuration is where the active node is fulfilling all of the load balancing duties and the passive node is simply listening and monitoring the active device and should that active node fail then the passive node will take over providing redundancy it's not going to scale as well but it's a perfectly okay strategy at lower scale if one device is enough we have that second device in passive mode just to allow us to deal with potential hardware failure it gives us redundancy if i mention nlb from this point forward i'm talking about network load balancer or if i say load balancer so nlb really means either of these things consider these three terms interchangeable so the virtual ip address on a load balancer eliminates a host dependency upon an individual network interface so when web traffic comes into the network load balancer from the virtual ip address on the front end the request is then sent to one of the web servers in our example in the server farm on the back end so if i just draw a simple picture here we've got our nlb we assign a virtual ip address or a vip as it's sometimes called when that request comes in the nlb
is going to spread those requests over our back end server farm so think of the front end as the vip and the back end as our server farm in this case and in terms of scheduling options this determines how the load is distributed by the load balancer and it's going to depend on the device you're working with but they typically have multiple options like least utilized host because the nlb will have some measure of the status of the servers in the farm based on that number it will be able to make a decision as to which is least utilized now it could be looking at pure connections it could be looking at cpu utilization there are typically a number of measurements that can be leveraged there there's round robin which is where when the request comes in the load balancer simply contacts the next server in the list so it's really just going across the servers one by one server one server two server three that's not a very intelligent option but it's functional to a degree and then we have affinity that's when a load balancer is set to ensure that a request is sent back to the same web server based on the requester's ip address or the requester's ip address and port or even their session id so it's pinning that user's request their session to the same server affinity configuration can often be referred to in tuples so we can configure affinity based on ip and port or ip and port and session id we can go all the way you'll see load balancers with five tuple capability but this is also known as persistence or a sticky session where the load balancer uses the same server for the session for that user so moving on to network segmentation let's start with intranet which is a private network designed to host information internal to the organization so collaboration here is going to be limited to employees users within the org and then we have an extranet which is something of a cross between the intranet and the internet it's a section of an
organization's network that's been segmented to act as an intranet for the private network but also serves information perhaps to external business partners which i find is the most common scenario or even the public internet and then finally we have a screened subnet this is an extranet for public consumption it's typically labeled as a perimeter network or a dmz but network segmentation if i could sum it up in one sentence is a way to control traffic and isolate static or sensitive environments zero trust so zero trust security addresses the limitations of the legacy network perimeter-based security model it used to be that we'd simply put firewalls up at the perimeter we'd have proxy servers to proxy internet access and everything inside that boundary was trusted so zero trust security throws that out the window and really moves the security boundaries closer to the entities we're managing closer to the identity in fact you'll frequently hear that zero trust treats user identity as the control plane it is the gateway to accessing resources and zero trust assumes compromise or breach in verifying every request bottom line no entity is trusted by default so this kind of supersedes trust but verify so with zero trust we're bringing those security boundaries from the traditional corporate network perimeter down to the identity to the device to the apps to our information protecting those entities anywhere they should reside in our environment or outside of our environment getting work done so reasons for segmentation and there are a few so boosting performance so we can improve performance through a scheme in which we have systems that often communicate located on the same network segment while systems that rarely or never communicate or should not communicate are located on other segments and reducing communication problems so we can reduce congestion by reducing the need for crossing into other domains in our network or having too many talkers in a particular network
segment so we can just take the unnecessary traffic out and reduce congestion it's kind of like the network equivalent of blowing your nose providing security so this can definitely improve security by isolating traffic and user access to those segments where they're authorized so if we have network segments hosting sensitive data for example customer databases we can greatly restrict access to only the systems and users who need that access and that's going to boost our security overall so east-west traffic may come up on the exam this is where traffic moves laterally between servers within the data center north south traffic incidentally moves outside of the data center then we have the virtual local area network or vlan so this is a collection of devices that communicate with one another as if they made up a single physical lan so on a switch a layer two device we can create a virtual local area network a collection of switch ports that you know could consist of users in multiple different locations different floors etc servers in different areas but they behave as though they're a single physical network this creates a distinct broadcast domain then again we have that screened subnet so that's where we place a subnet between two routers or firewalls to say it another way versus how i said it earlier and you'll have bastion hosts within that subnet you'll see web servers frequently hosted in a screened subnet in a perimeter network let's move into vpn so a virtual private network extends a private network across a public network basically enabling users and devices to send and receive data across shared or public networks as if their devices were directly connected to the private network so one example is a user connecting from home to the corporate vpn so they can access corporate resources but that's not the only scenario so let's dig in a bit so in vpn we have always on mode which is a low latency point-to-point connection between two sites so this is
typically a tunnel between two gateways that is always connected so we'll see this connecting branch offices to say the main corporate office or to a data center you know l2tp and ipsec this is the most secure tunneling protocol that can use certificates kerberos authentication or a pre-shared key but ipsec vpns are going to be the top of the mountain when you have this discussion generally speaking so an ipsec vpn provides both a secure tunnel and authentication and then we have an ssl vpn which works with legacy systems and uses ssl certificates for authentication this is going to be less common by virtue of the fact that it's really a legacy construct html5 vpns are similar to ssl vpns in that they use certificates for authentication they're easy to set up you just need an html5 compatible browser opera edge firefox safari just about any of them these days but ipsec i expect will be the one of most focus on the exam so let's talk through some more ipsec vpn scenarios so we have split tunnel versus full tunnel so in the full tunnel scenario it means all traffic goes across the vpn whether it's destined for work resources on private subnets or it's just internet browsing and then we have split tunnel which sends traffic destined for the corporate network over the vpn while internet traffic goes directly to its normal destination the normal route out the access point you'll see this split tunnel configuration very commonly in work from home scenarios sometimes organizations will opt for full tunnel instead and it just depends on whether they put a greater premium on saving bandwidth over that vpn saving capacity or if they're more interested in filtering and monitoring internet traffic for that user during work hours and remote access versus site to site the site-to-site scenario tends to use an always-on mode where both packet header and payload are encrypted and it's essentially always running but in the site-to-site scenario it's ipsec tunnel mode and
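as an aside, the split tunnel decision described above is easy to picture in code. here is a minimal python sketch, assuming made-up corporate subnets (the 10.0.0.0/8 and 172.16.0.0/12 ranges below are illustrative examples, not taken from the video):

```python
# Illustrative split-tunnel routing decision: traffic bound for corporate
# subnets goes over the VPN, everything else takes the normal default route.
# The subnet list is a hypothetical example.
import ipaddress

CORPORATE_SUBNETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
]

def route_for(destination: str) -> str:
    """Return 'vpn' for corporate destinations, 'direct' otherwise."""
    addr = ipaddress.ip_address(destination)
    if any(addr in net for net in CORPORATE_SUBNETS):
        return "vpn"      # split tunnel: only corporate traffic is tunneled
    return "direct"       # internet traffic exits the local gateway

print(route_for("10.1.2.3"))   # vpn
print(route_for("8.8.8.8"))    # direct
```

in a full tunnel configuration the function would simply return "vpn" for everything, which is the bandwidth versus monitoring trade-off discussed above.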
in the remote access scenario it's the user initiating the connection that's what we call ipsec transport mode so moving on to dns domain name system this is a hierarchical naming system that resolves a host name to an ip address so we're going to talk about dns in the security context i may step slightly outside that boundary just for purposes of helping you better understand dns so dns allows us to match a host name or a fully qualified domain name to an ip address so a fully qualified domain name is the host name plus the domain such as in this example server1.contoso.com or another example www.microsoft.com www is the host name microsoft.com the domain name together those create an fqdn so dns supports many record types a few that you'll see commonly are the a record which is a host record just simply a name mapped to an ipv4 address we have a cname which is an alias that's how we can map multiple names to the same ip address quite easily simply referencing that original host we have the srv record which helps clients find services like a domain controller in fact your active directory domain controllers will register srv records with dns when they boot up then there's the mx record which indicates a mail server now these next two are going to step firmly into the world of security so we have sender policy framework or an spf record so this is a text record that's used by dns to prevent spam and confirm email has come from the domain it appears to come from essentially by allowing us to create a list of allowed senders then there's domain-based message authentication reporting and conformance or dmarc this is another dns text record that's used by internet service providers to prevent malicious emails like phishing or spear phishing attacks and to secure our email better you will find spf and dmarc are used together as portions of a more complete solution i would say these records are used by just about everybody today then we have the
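to make the spf record idea above concrete, here is a rough python sketch that splits an spf txt record into its mechanisms and final policy. the record text is a fabricated example, and a real validator would also resolve include: mechanisms and check the connecting ip, which this sketch ignores:

```python
# Illustrative-only parser for SPF TXT record syntax. Real SPF evaluation
# (RFC 7208) resolves include:/redirect= lookups and matches the sender IP;
# this sketch just shows the record's structure.
def parse_spf(record: str):
    """Split an SPF record into its mechanisms and the final 'all'
    qualifier (e.g. '-all' means hard fail for unlisted senders)."""
    tokens = record.split()
    if tokens[0] != "v=spf1":
        raise ValueError("not an SPF record")
    mechanisms = [t for t in tokens[1:] if not t.endswith("all")]
    policy = tokens[-1]  # e.g. '-all' (fail), '~all' (soft fail)
    return mechanisms, policy

mechs, policy = parse_spf("v=spf1 ip4:192.0.2.0/24 include:example.net -all")
print(mechs)   # ['ip4:192.0.2.0/24', 'include:example.net']
print(policy)  # -all
```

the allowed-senders list the video mentions is exactly that mechanisms list, and the trailing qualifier tells receivers what to do with everyone else.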
dns cache which stores recently resolved dns requests for later reuse reducing calls to the dns server and that cache is a function of the dns client and that cache is present on windows clients and servers and linux clients and servers and how long that lookup can be cached is determined by the ttl the time to live that's specified on the record at the dns server then we have the hosts file which is a flat file where name and ip pairs are stored on a client for lookup and it's often checked before the request is sent to the dns server although i'd say the hosts file is rarely used these days but you'll find a hosts file on both windows and linux operating systems both client and server the dns server normally maintains only the host names for domains it is configured to serve the domains for which it is considered to be authoritative then there are the root servers dns name servers that operate in the root zone they can also refer requests to the appropriate top level domain server so at the top of the dns hierarchy we have the dot the root and then you have the tld servers .com .net .org and then you have those second level domains microsoft ibm right and that second level plus the top level makes up the domain microsoft.com ibm.com now in the world of security for dns we have dnssec which is dns security and it prevents unauthorized changes to dns records on the server so each record is digitally signed creating an rrsig a resource record that holds a dnssec signature for a record set to protect against attack so know that the rrsig resource record is a digitally signed record now we're not talking specifically about attacks in this domain but i want to bring up dns attacks here in the context of dns while we're talking about it and it's top of mind so we have dns poisoning where an attacker alters the domain name to ip address mappings in a dns system to basically redirect traffic to a rogue system or perform a denial of service and then there's dns
spoofing where an attacker sends false replies to a requesting system basically beating the real reply from a valid dns server then we have dns hijacking which is also known as a dns redirection attack so there are many ways to perform hijacking the most common way we see this done is through a captive portal like a pay-for-use wi-fi hotspot in a public space and then finally there's a homograph attack which leverages similarities in character sets to register phony international domain names that appear legitimate to the naked eye for example they'll replace the latin character a with the cyrillic character a in example.com sometimes you'll see phony domains where the letter i or the letter l is replaced by the number one to bottom line it for you the end goal of just about any dns attack is to send the user who makes a dns request to a fake resource whether it's a fake malicious website or a fake application endpoint usually it's web-based that right there is a pretty good picture of the end goal for any dns attack all right let's talk network access control so a desktop or a laptop off the network for an extended period of time may need multiple updates upon return to get back up to our corporate standards now after a remote client has authenticated what network access control does is check that the device being used is patched and compliant with corporate security policies and it might do that through some mechanism internally what i see more commonly these days is the network access control feature on a network stack like cisco will check in with your mdm platform to see if the mobile device management platform says that system is compliant and then it will allow a compliant device to access the lan whereas a non-compliant device may be redirected to a boundary network where a remediation service addresses the issues that boundary network is sometimes called a quarantine network and network access control can
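circling back to the homograph attack described above, the tell is characters outside the plain ascii range hiding inside an apparently latin domain. a minimal python sketch of that check, using the standard library's unicodedata module (the flagged domain is a constructed example):

```python
# Rough sketch of spotting a homograph domain: flag any character that is
# not plain ASCII, e.g. a Cyrillic 'а' standing in for a Latin 'a'.
import unicodedata

def suspicious_chars(domain: str):
    """Return (char, unicode name) for every non-ASCII character."""
    return [(ch, unicodedata.name(ch, "UNKNOWN"))
            for ch in domain if ord(ch) > 127]

# the domain below uses CYRILLIC SMALL LETTER A in place of Latin 'a'
print(suspicious_chars("ex\u0430mple.com"))
# [('а', 'CYRILLIC SMALL LETTER A')]
print(suspicious_chars("example.com"))  # [] -- all Latin, nothing flagged
```

real browsers apply more nuanced mixed-script rules than this, but the principle is the same: the phony name is visually identical and byte-wise different.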
be agent based or agentless some operating systems actually include network access control as part of the operating system itself so essentially no additional agent is required and these generally perform checks when the system logs into the network and logs out of the network making them less configurable which may be undesirable in some environments so if you need additional control or flexibility or functionality you might need a persistent or a dissolvable agent so in the persistent scenario that's a permanent agent that's installed on the host be that a windows client or a linux client the dissolvable agent is known as temporary it's installed for a single use i worked with a scenario recently where we used a cisco integration with microsoft intune so cisco network access control simply checks with intune to see if the device is compliant before granting access so we don't need an agent at all out-of-band management so this enables it teams to work around problems that might be occurring on the network like the network itself being down in some respects so out-of-band management on devices may include cellular modems or serial interfaces in larger environments this out-of-band management functionality may even be centralized but it gives it teams a way to respond and to manage when the network is not at its best so in terms of port security there are two types there's 802.1x and switch port security so without port security anyone authorized or not can plug their ethernet cable into the wall jack and the switch allows all traffic with basic switch port security the port is turned off so this is considered undesirable as it limits functionality of the switch we're disabling a switch port with 802.1x the user or device is authenticated by a certificate before the connection is made this prevents an unauthorized device from connecting while still allowing an authorized device to connect so this is preferable because we're not disabling the switch in any way we're not limiting functionality and there
are some other protections that can be configured so loop protection when two or more switches are joined together they can essentially create loops that result in broadcast storms and spanning tree protocol prevents that from happening by forwarding listening or blocking on some ports now bridge protocol data units bpdu these are frames that contain information about the spanning tree a bpdu attack will try and spoof the root bridge so that stp is recalculated basically throwing your switch into a state of recalculation and relative uncertainty and bpdu guard enables spanning tree protocol to stop such attempts and then there's dhcp snooping which is layer 2 security that prevents a rogue dhcp server from allocating ip addresses to hosts on your network and then there's mac filtering so this is creating an authorized list of wireless client interface mac addresses so the layer 2 hardware addresses within our wireless access point and blocking access to all non-authorized devices now certainly this can come up in some ethernet wired network scenarios where you have mac filtering taking place for authorized nodes now mac spoofing is a way that some attackers get around this faking a mac address to appear as a legitimate authorized node on the network mac spoofing is a pretty easy thing for attackers to do 3.3 so given a scenario implement secure network designs we'll talk about network appliances and the wide range of firewalls that are available so let's start with network appliances and the jump server so this is a remote admin workstation typically placed on a screened subnet aka the perimeter network or dmz that allows admins to connect remotely to the network to perform some admin activities then there's a forward proxy this is a server that controls requests from clients seeking resources on the internet or an external network and then there's a reverse proxy which works in the reverse direction so this is placed on a screened subnet and performs
authentication and decryption of a secure session to enable it to filter the incoming traffic so forward proxy is outbound requests reverse proxy is inbound so flavors of intrusion detection system so we talked about host based intrusion detection and prevention earlier which allows us to monitor activity on a single system only it's typically an agent on the host the drawback there being that attackers can discover and disable them now network-based intrusion detection can monitor activity on a network and it's not as visible to attackers generally speaking so network-based intrusion detection and prevention we haven't talked about yet in detail but it's really ids or ips at the network level generally in hardware form so network-based intrusion detection analyzes whole packets header and payload looking for known events and when an event is detected a log message is generated and then of course optionally maybe an email notification for example and intrusion prevention in the network-based form like host ips the packet header and payload is analyzed looking for known events and when a known event is detected the packet is rejected so really the difference between ids and ips is that one logs an event the other takes action and we have a couple of types of ids systems that you'll be expected to know on the exam so there's behavior based which creates a baseline of activity to identify normal behavior and then measures the system performance against the baseline to detect abnormal behavior you'll also hear the behavior-based variety referred to as anomaly-based or heuristic-based and then there's signature based sometimes called knowledge-based this uses signatures similar to the signature definitions you'd see in anti-virus or anti-malware software with behavior based it can detect previously unknown attack methods so it can detect emerging threats that maybe haven't yet been defined in a signature where signature based is only effective against known
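the behavior-based baseline idea described above can be sketched in a few lines. this toy example treats packets per second as the monitored metric and flags anything far from the observed baseline, all of the sample numbers and the three-sigma threshold are invented for illustration:

```python
# Toy illustration of behavior-based (anomaly) detection: build a baseline
# from observed values, then flag anything too far from it. Sample data and
# threshold are hypothetical.
from statistics import mean, stdev

baseline = [100, 110, 95, 105, 102, 98, 107, 101]  # e.g. packets/sec observed
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(value: float, k: float = 3.0) -> bool:
    """Flag values more than k standard deviations from the baseline mean."""
    return abs(value - mu) > k * sigma

print(is_anomalous(104))   # False -- within normal variation
print(is_anomalous(500))   # True  -- the kind of event an IDS would log
```

a signature-based engine by contrast would only match traffic against a list of known patterns, which is why the transcript notes it misses emerging threats until a signature exists.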
attacks for which a signature is available both host based and network-based systems can be knowledge-based behavior-based or a combination of both of these so also around network-based ids and ips you want to be familiar with modes of operation so we have inline mode or in band where the network intrusion detection or prevention system is placed on or near the firewall as an additional layer of security in other words traffic would be traveling through that device in passive mode out of band as it's called traffic doesn't go through the device there are sensors and collectors that forward alerts to the network based intrusion detection system and on that topic of sensors and collectors these are two words for the same thing essentially you can place a sensor collector on a network to alert the intrusion detection system of any changes in traffic patterns on the network and to that end if you place a sensor on the internet side of the network it can potentially scan all of the traffic from the internet which could be valuable in detecting anomalous behavior so the hardware security module or hsm this is a physical computing device that safeguards and manages digital keys it performs encryption and decryption functions for digital signatures authentication and other crypto functions it's not unlike a tpm except it's often removable or an external device where a tpm is quite typically a chip on a motherboard so let's talk types of firewalls we'll start with the web application firewall abbreviated waf so these protect web applications by filtering and monitoring http or https traffic between a web app and the internet looking for common attacks like cross-site scripting sql injection looking for those owasp top 10 type attacks and some of these web app firewalls come pre-configured with owasp rule sets that you can apply to them to quickly get them up and running and protecting against these common attacks and then there's the next generation firewall abbreviated
ngfw which is a deep packet inspection firewall that moves beyond the typical port protocol inspection and blocking i think the two defining characteristics you want to remember for the exam are that a next-gen firewall performs application level inspection and brings intelligence from outside the firewall typically in the form of threat intelligence feeds so continuing with types of firewalls we have deep packet inspection so this is packet inspection that both inspects and filters header and payload of a packet so basically the header and the body and it can detect protocol non-compliance spam viruses intrusions it's going to be very capable because of the depth of inspection unified threat management which you'll see abbreviated utm so this is a multi-function device composed of several security features in addition to a firewall it can include a variety of functionality ids ips a tls proxy web filtering quality of service management bandwidth throttling vpn anchoring antivirus these are going to be a lot more common in small to medium business scenarios because all of that functionality in a single box is only going to scale so far so let's talk firewalls and state so we have stateless which is where the firewall is watching network traffic and restricting or blocking packets based on source and destination addresses or other static values it's not aware of traffic patterns or data flows it's typically faster and it performs better under heavier traffic loads frankly because it's doing less and then we have stateful packet inspection which you'll hear more commonly these days which can watch traffic streams from end to end it's going to be aware of communication paths and can implement various security functions like tunnels and encryption it's going to be better at identifying unauthorized and forged communications then there's the network address translation or nat gateway which allows private subnets to communicate with other cloud
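the stateless versus stateful distinction above comes down to whether the device remembers flows. a minimal sketch of the stateful side, with invented addresses, where only replies matching a tracked outbound connection are admitted:

```python
# Toy sketch of stateful inspection: remember outbound flows and only admit
# inbound packets that match an existing connection. A stateless filter has
# no such table and judges every packet on static header fields alone.
# All addresses are invented examples.
connections = set()  # (local_ip, remote_ip) pairs the inside host initiated

def outbound(local_ip: str, remote_ip: str) -> None:
    """Record state for a flow the inside host started."""
    connections.add((local_ip, remote_ip))

def inbound_allowed(remote_ip: str, local_ip: str) -> bool:
    """Stateful check: admit only replies to tracked flows."""
    return (local_ip, remote_ip) in connections

outbound("10.0.0.5", "198.51.100.7")
print(inbound_allowed("198.51.100.7", "10.0.0.5"))  # True  -- reply to a known flow
print(inbound_allowed("203.0.113.9", "10.0.0.5"))   # False -- unsolicited, dropped
```

keeping that table is the extra work that makes stateful inspection slower than stateless filtering but much better at rejecting forged or unsolicited traffic.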
services and with the internet but it hides the internal network from internet users and the nat gateway will typically have the network access control list for the private subnet so it's going to restrict visibility and access for those inbound nodes now a content or url filter is going to look at the content on the requested web page and it's going to allow or block the request depending on the filters and it's going to block inappropriate content in the context of the situation so what you might call inappropriate is going to differ by your audience it might be different in a school scenario versus adults in a work scenario so open source versus proprietary so open source is where the vendor makes the license freely available and it allows access to the source code though they may ask for an optional donation now there's no vendor support out of the box with open source so you might have to pay a third party to support you in a production environment one of the more popular open source firewalls is pfsense and there's a link if you want to go see a scenario for what that open source option looks like and then there are proprietary firewalls these are more expensive because you have to pay for them but they tend to provide better and more configurable protection and many vendors work in the space you've got cisco checkpoint palo alto barracuda i'm just naming a few what you won't get is source code access because it's proprietary and commercial hardware versus software firewall so hardware firewalls will be a piece of purpose-built network hardware it's going to offer more configurable support for lan and wan connections it often has superior throughput versus software because it is hardware designed for the speeds and the connections common in an enterprise network a software firewall is going to be something you might install on your own hardware it's going to give you the flexibility to place firewalls just about anywhere you'd like in your
organization on servers and workstations you know you can basically run a host-based firewall on just about any computer the software firewalls are going to be more vulnerable in some respects versus hardware as we discussed earlier most importantly because the attacker could potentially disable the process or service that represents that running software firewall so next let's compare application versus host based versus virtual firewalls so an application firewall is typically catering specifically to application communications often that's http or web traffic we did talk about one example of an application focused firewall in the next generation firewall which has layer 7 capability then there's host based so this is an application installed on the host os where the application is the firewall it's a software component like for windows or linux and you'll find firewalls for both the client and server operating systems and then virtual comes in the cloud where firewalls are quite often implemented as virtual network appliances and they're available both from the cloud service provider they'll have native solutions but you'll also see third party partners often commercial firewall vendors that will offer some cloud specific virtual network appliance for that csp so let's talk network device types so firewalls are essential tools in managing and controlling network traffic and a firewall is used to filter traffic and it varies by type but it may filter anywhere from layer 3 all the way up through those application focused layer 7 devices now a switch repeats traffic only out of the port on which the destination is known to exist switches are known for their efficiency in traffic delivery creating separate collision domains and improving the overall throughput of data switches are layer 2 devices there is a layer 3 switch you'll hear about occasionally but when we're talking switch unless otherwise specified it's a layer 2 device routers
are used to control traffic on networks to connect similar networks and control the flow between the two they can employ static routing tables or dynamic routing functions these are layer 3 devices and then there are network gateways that connect networks using different protocols these are known as protocol translators they can be standalone hardware devices or a software service you'll see these built into windows operating systems and sometimes linux as well network gateways work at layer 3. so route security so routers are not designed to be security devices but they do include some built-in capabilities that provide some measure of security function one of these being an access control list which is used to allow or deny traffic and if no allow rule matches the traffic the last rule is a deny rule it's called implicit deny and we can configure an access control list on the ingress the inbound traffic and on the egress the outbound traffic of an interface so for the exam make sure you know the difference between ingress and egress inbound and outbound and that acl will evaluate traffic on multiple criteria similar to a firewall in some respects quality of service ensures that applications have the bandwidth they need to operate by prioritizing traffic based on importance and function so traffic of real-time functions like voice and video streaming might be given greater priority and that priority is often going to be human configurable it will be adjustable based on your organization's specific needs so let's talk about the implications of ipv6 so network security focus changes somewhat when we move to ipv6 one change is that there are many more addresses in v6 versus version 4 which means it's more difficult to perform a complete port scan or interface scan when we're working with ipv6 addresses now the security tools like the port scanners and vulnerability scanners have long since updated their tooling to take advantage of ipv6 ipv6 creation
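the acl evaluation described above, first match wins and anything unmatched hits the implicit deny, can be sketched in a few lines of python. the rule set here is a made-up example, not vendor syntax:

```python
# Sketch of access-control-list evaluation: rules are checked in order, the
# first match wins, and if nothing matches the implicit final rule denies.
# The rule set is a hypothetical example.
import ipaddress

ACL = [
    {"action": "allow", "src": "10.0.1.0/24", "port": 443},
    {"action": "deny",  "src": "10.0.0.0/8",  "port": 23},
]

def evaluate(src_ip: str, port: int) -> str:
    addr = ipaddress.ip_address(src_ip)
    for rule in ACL:
        if addr in ipaddress.ip_network(rule["src"]) and port == rule["port"]:
            return rule["action"]   # first matching rule wins
    return "deny"                   # implicit deny: nothing matched

print(evaluate("10.0.1.7", 443))  # allow -- explicit rule
print(evaluate("10.0.2.7", 23))   # deny  -- explicit rule
print(evaluate("10.0.1.7", 80))   # deny  -- implicit deny, no rule matched
```

on a real router the same logic would be applied per interface and per direction, which is the ingress versus egress distinction the transcript flags for the exam.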
long preceded its adoption and ipv4 is still much more common on your private networks today but because there are so many addresses available in v6 there's less need to perform port address translation or outbound network address translation on the network which can simplify the communications process but if we think about nat in business use case scenarios it is itself a security feature as it removes direct access to the source user's system and in some cases their internet browsing with ipv6 we remove the address resolution protocol or arp and without arp there can't be any arp spoofing so that's a bonus it doesn't imply that v6 is any more or less secure than ipv4 but it changes the attack vectors for example a neighbor cache exhaustion attack can use ipv6 protocols to fill up the neighbor cache interrupting network communication so when we move to a new protocol the attack vectors change based on that protocol's weak spots port mirroring also known as port spanning sends a copy of all data that arrives at a port on a network device over to another device or sensor for investigation later or in near real time on the switch we have a reserved port that will mirror all the traffic that passes through the switch now it works across multiple switches because it's a logical configuration whereas a physical device like a port tap requires installation connected to every switch and we might leverage spanning to inform the network intrusion detection system of changes in traffic patterns for example this will increase the load on the switch so it should be configured with knowledge of the traffic type and volume that we're dealing with we don't want to overwhelm the resources of the switch in terms of processing memory etc then we have monitoring services which provide additional security on the network usually in the form of qualified eyes helping us monitor network security and activity this is very common with security information and event management siem and soar functions that we
talked about back in domain one and section 1.7 it's often an outsourced security operations center a soc function to provide qualified eyes on our network 24x7 monitoring and potentially alerting or remediating on issues after hours and that might be helpful in maintaining compliance really any sort of compliance it might be hipaa gdpr pci obligations anything where we have a need for high security and responsiveness related to our sensitive data and then there are file integrity monitors that detect changes to files usually files that shouldn't be modified automating notification and potentially remediation this commonly monitors files that would never change things like your operating system files where changes indicate some type of malicious activity where this can also be helpful is in detecting changes to your baseline configurations if you have an unwanted change in your baseline that can then proliferate to all newly deployed systems or configurations moving on to 3.4 given a scenario install and configure wireless security settings so we'll talk about cryptographic protocols related to wireless functions we'll talk about authentication in the wireless context we'll talk about authentication methods and setup methods related to wireless and installation considerations are going to be some physical some human activities in that section so i doubt you'll be tested on it but i wanted to drop a chart of the 802.11 standards here just in case in part so i could mention that in 802.11 we saw the definition of wired equivalent privacy which was really the first attempt at wireless protection so tkip temporal key integrity protocol was designed as the replacement for wep to improve security without the need for wholesale replacement of legacy hardware and it was implemented into 802.11 wireless networking under the name wpa wi-fi protected access ccmp which is counter mode with cipher block chaining
message authentication code protocol so you can see why folks just say ccmp so this was created to replace wep and tkip the wpa standard we just talked about it uses advanced encryption standard with a 128-bit key we see aes used quite a bit today although a 256-bit key is much more common in most uses today ccmp is used with wpa2 which replaced wep and wpa so we have wpa2 that implemented ccmp and that's again using aes encryption with a 128-bit key and then in 2018 we see wpa3 wi-fi protected access 3 which addressed the weaknesses in wpa2 it uses a much stronger 256-bit scheme for encryption gcmp-256 it was called there are two versions of wpa3 there's wpa3 personal for home users and wpa3 enterprise for corporate users so sae is a relatively new 802.11 authentication method it stands for simultaneous authentication of equals and it's used with wpa3 personal and it replaces the pre-shared key option in wpa2 which we'll talk about more in a moment it protects against brute force attacks and it uses a secure diffie-hellman handshake called dragonfly and it uses perfect forward secrecy so it's immune to offline attacks because you have a unique session key negotiated for each user session so an attacker getting online interacting with the network and compromising a key from a past session does nothing for them in accessing future sessions so wpa3 personal versus enterprise let's break that down a bit so wpa3 personal uses sae which means users can use passwords that are easier to remember and it uses perfect forward secrecy then we have wpa3 enterprise which supports 256-bit aes whereas wpa2 only supported 128-bit and 256-bit is actually required in several scenarios by the us government so that's significant it uses elliptic curve diffie-hellman ephemeral for its initial handshake so wireless authentication protocols so we have eap which is extensible authentication protocol this is an authentication framework it allows for new
authentication technologies to be compatible with existing wireless or point-to-point connection technologies and then there's peap which is protected extensible authentication protocol which encapsulates eap within a tls tunnel that provides authentication and potentially encryption and then we have leap which is lightweight extensible authentication protocol that's a cisco proprietary alternative to tkip or wpa this was developed to address deficiencies in tkip before 802.11i or the wpa2 system was ratified as a standard so continuing with wireless authentication protocols we have eap-fast which was developed by cisco and it's used in wireless networks and point-to-point connections to perform authentication it actually replaced leap which was considered insecure after a time we have eap-tls which is a secure version of wireless authentication that requires x.509 certificates it involves three parties a supplicant which is the user's device the authenticator which would be a switch or a controller and then the authentication server which would be a radius server then we have eap-ttls which was a wpa2 enterprise scheme it uses two phases the first is to set up a secure session with the server by creating a tunnel utilizing certificates that are going to be seamless to the client so you don't have to do anything special on the client the second phase uses a protocol like ms-chap to complete the session this was designed to connect older legacy systems now 802.1x is transparent to users because it uses certificate authentication it can be used in conjunction with a radius server for enterprise networks and then we have radius federation which enables members of one organization to authenticate to another using their normal credentials so it's federating trust across multiple radius servers across differing organizations a federation service where network access is gained using wireless access points so the concept of federation doesn't
only occur here in the wireless authentication world you may have heard of federation in active directory scenarios on premises active directory federation services was pretty common to allow additional authentication methods to active directory on-prem and in this case the wap forwards the wireless device's credentials to the radius server for authentication so we're tying that authentication to the back end it commonly uses 802.1x as the authentication method which relies on extensible authentication protocol eap that we talked about earlier all right so going back in time a bit there's wpa2 pre-shared key which was introduced for the home user who doesn't have an enterprise setup of course so the home user in wpa2-psk would enter the password of the wireless router to gain access to the home network so psk in wpa2 was replaced by sae in wpa3 and that's the main reason i wanted to mention it was just to show you which newer solution supplanted this one and then there was wi-fi protected setup and with wps the password is already stored in the device all you need to do is press the button to get connected to the network the password is basically stored locally so it could be brute forced that was one of the weaknesses there this was strictly a home use scenario and the enterprise flavor of wpa2 or wpa3 is used in a centralized domain environment where you have a corporate scenario and this often involved a radius server combined with 802.1x using certificates for authentication you would only see that in a work scenario so captive portals so these are common in airports and public spaces where the wi-fi redirects users to a web page when they connect to the ssid it provides additional validation of identity normally through an email address or a social identity it might include accepting an acceptable use policy and they may offer some sort of premium upgrade for faster service that's really common in airports actually that you get the slow
boat for free and an offer of faster service for a price now a site survey in a wireless scenario is the process of investigating the presence strength and reach of wireless access points deployed in an environment to optimize the configuration it usually involves walking around with a portable wireless device taking note of wireless signal strength and mapping this on a plot or a schematic of the building now wap placement if you're installing a new access point you want to make sure you place it in the right location you want minimal overlap with other access points and you maximize the coverage that's being used in your environment it should basically minimize the number of physical access points which optimizes cost right we're making the most of our hardware you want to avoid placement near electronic devices that could create interference and in areas where signals can be absorbed metal objects and bodies like elevators and concrete walls absorb signal you're not going to get good wireless signal through an elevator door or through a concrete wall and you want to ensure that the access point is in a place that doesn't send signal too far outside of your existing work areas enabling unwanted access attempts potentially from bad actors channel overlap so in addition to minimizing coverage overlap you want to choose different channels so there are no conflicts between your access points and then we need to think about controller and access point security so at home you typically have a small number of devices but in a large office you're going to have potentially many access points and each one of those has a separate configuration now a wireless controller can enable central management of configuration as well as central management of security patches and firmware updates of the access points so you're going to have to shop for a solution that's designed for that larger scale corporate wireless scenario and you want to use https to encrypt traffic to controller and
web interfaces so ensuring that if you're connecting you know to a management interface on a device for example that it's a secure connection and on the access points themselves we want to use strong authentication methods never leaving default passwords in place in 3.5 given a scenario implement secure mobile solutions so in 3.5 we're talking about mobile connectivity mobile device management and mobile device security overall so let's start with communication considerations so we have 5g so 5th generation cellular faster speeds and lower latency than 4g now unlike 4g 5g also doesn't identify each user exclusively through their sim card you can assign identities to each device and some of the air interface threats like session hijacking that were present in 4g are dealt with in 5g now 5g comes in two flavors there's the standalone version and the non-standalone version and the standalone version of 5g will be more secure than the non-standalone because the nsa version anchors the control signaling of 5g networks to the 4g core it was kind of a hop in the process of rolling out 5g and the non-standalone version supports that rollout process until the providers could get entirely to a standalone version so why do i bring that up well it ties control signaling to the 4g core which brings with it some legacy security concerns and your major providers at least in some locations may still be running a non-standalone version there are a lot of factors involved in getting up to standalone nationwide in all of their equipment now in 5g the diameter protocol which provides authentication authorization and accounting will be a target because 5g has to work alongside older tech 3g and 4g are still a thing right we have many phones out there that aren't 5g capable old vulnerabilities may be targeted and because the scale of iot endpoint counts on 5g is exponentially greater distributed denial of service is going to be a concern you know taking over a large number of a
particular type of device on 5g we could have a very large botnet on our hands now some carriers originally launched on that non-standalone version of 5g which again continues to rely on availability of the 4g core how much of this is going to come up on the exam i think probably not a great deal i want to arm you with where the security concerns come in with 5g just so you're educated there and you may be prepared for anything that could come up so let's talk about sim cards subscriber identity module cards so a sim card is essentially a small computer chip that contains information about your mobile subscription it allows users to connect to telecom providers to make calls to send text messages to use the internet we know that sms messages are used as a second factor in authentication and that's going to be tied back to your device to your sim card and one of the auth factors most prone to attack is sms because sim hijacking is an attack that has been executed in the real world in fact we've read real reports of a bad actor walking into a mobile store you know down on the corner and convincing the person behind the counter that they were somebody they weren't and getting a new sim card so sms as a second factor strongly discouraged for exactly that reason bluetooth so that's 802.15
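as an aside the wps brute force weakness mentioned earlier really comes down to keyspace a short all-numeric code just doesn't have many possibilities here's a rough python sketch of that math note the guesses-per-second rate is a purely hypothetical illustration not a measured figure

```python
# why short numeric codes (wps pins, bluetooth pairing codes, device pins)
# are weak: the keyspace is just 10^digits, so brute force is cheap.
# GUESSES_PER_SECOND below is a hypothetical illustrative rate only.

def keyspace(digits: int) -> int:
    """total number of possible all-numeric codes of the given length."""
    return 10 ** digits

GUESSES_PER_SECOND = 10  # hypothetical online guessing rate

for digits in (4, 6, 8):
    total = keyspace(digits)
    hours = total / GUESSES_PER_SECOND / 3600
    print(f"{digits}-digit code: {total:,} possibilities, "
          f"~{hours:,.1f} h to exhaust at {GUESSES_PER_SECOND}/s")
```

in practice wps was even weaker than the raw math suggests because the protocol validated the 8-digit pin in two halves which collapses the search space to roughly 11,000 attempts anyway back to bluetooth the 802.15 personal area networking standard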
it's a personal area network and it's definitely another area of wireless security we should be concerned about it connects your headsets for cell phones mice keyboards gps and numerous other devices connections are set up using pairing where the primary device scans the 2.4 gigahertz radio frequencies for available devices so the primary device is your mobile phone in this scenario or any bluetooth capable device right the primary device can be your smartphone your tablet your pc scanning for available devices pairing typically uses a four digit code if it uses a code at all that code is often simply four zeros to reduce accidental pairings but that's not terribly secure is it then we have rfid radio frequency identification so this uses electromagnetic fields to identify and track the tags attached to assets this is pretty commonly used in shops as the tags are attached to high value assets to prevent theft i'd be familiar with the use cases for the exam it's really about access badge systems and in retail scenarios the anti-theft use case we have near field communication which is built on rfid it's often used with payment systems and it's subject to many of the same vulnerabilities as rfid so you see nfc used in the touch pay system you might use at the grocery store to tap your card in a touchless payment then gps which uses satellites in earth orbit to measure the distance between two points it's used in map and find my phone use cases usb universal serial bus so some of your mobile devices can be tethered to a usb dongle to gain access to the internet a flash usb device can be used to transfer data between devices so certainly with usb it is a data exfiltration concern and in corporate environments it's often blocked through policy often through a mobile device management solution then we have infrared which is a purely line of sight medium it has a maximum range of about 1 meter we'll see this with infrared printers which are more common in home use cases or
work from home use cases infrared is not encrypted but an attack requires close physical proximity then point to point connections so this is a one-to-one connection between two devices on a network often describing a wireless scenario like a directional antenna connecting two wireless networks or a wireless repeater connecting lans and then we have point to multipoint which is very common in 802.11 networks where we have a wireless access point connecting to multiple wireless devices you know the clients all right mdm mobile device management let's talk about common features in secure mobile device management solutions so passwords and pins so with your mobile devices like smartphones they're very easy to steal and you can conceal them by putting them in a pocket right so strong passwords and pins with six or more characters are going to be very important in personal scenarios i see many people with a four digit pin so in a business scenario you definitely want to move up to six characters which you should really do in a personal scenario as well this also allows for devices to be disabled upon x number of failed attempts then we have geofencing which uses gps or rfid to define geographical boundaries and once the device is taken past the defined boundary the security team can be alerted so for the exam i would remember that geofencing prevents mobile devices from being removed from the company's premises if desired application management uses whitelists to control which applications are allowed to be installed on the mobile device and related is content management which stores business data in a secure area of the device in an encrypted format to protect it against attacks and to prevent confidential or business data from being shared with external users this will come up again when we talk about mobile application management which is very common with byod devices remote wipe so when a mobile device has been lost or stolen it can be remotely wiped and the device at that point
will revert to its factory settings and the data will no longer be available except in the case of what we'd call a selective wipe or a partial wipe so these wipe options in mdm solutions allow removing business data only so in a scenario where a user brings their own device byod we call that it wouldn't be appropriate for the company to wipe the entire device back to factory settings because it would remove the user's personal data so you'll find the remote wipe options nowadays have a partial wipe option screen locks screen locks are activated once the mobile device has not been accessed for a period of time after it's locked the user gets a fixed number of attempts to correctly enter the pin before the device is disabled geolocation uses gps to give the actual location of a mobile device it can be very useful if you lose or drop a device for sure for the exam i would remember that geo tracking will tell you the location of a stolen device push notifications so these are messages that appear on your screen even when your system is locked now this information is usually pushed to your device without intervention from the end user and it may include sensitive information right we could potentially see a confidential email shown on a lock screen and some mdm platforms provide policy based control as to whether app notifications can appear with their content on the lock screen so micro sd hsm so a hardware security module is a physical device that provides cryptographic features for your computer but in the micro sd hsm scenario it's in a smaller mobile form factor it basically enables associating a smaller piece of hardware with the cryptographic functions for encryption key generation digital signatures or authentication that we get with an hsm unified endpoint management so this provides management of hardware like desktops tablets smartphones iot devices ensuring that all of our endpoints are secure and compliant and it can manage the security and
applications running on devices but you notice i mentioned desktops tablets smartphones and iot devices so a defining characteristic of uem is that fact now it can identify and block devices that have been jailbroken if we're talking about ios devices or rooted in the android case but again multi-platform support is a key characteristic here a really common example is microsoft intune which manages windows ios android and mac os mobile application management so mam allows a security team to manage application and data security even on unmanaged devices essentially it controls access to company applications and data and it can restrict the exfiltration of data from company applications so we can prevent saving data to unapproved locations or unapproved apps we can prevent copy and paste into unmanaged apps this is going to be very useful in byod scenarios bring your own device which enables business data access securely on a personal mobile device because we can put security controls around just the applications and data without interfering with the personal data and functions of that unmanaged device se android includes selinux functionality as part of the android operating system it provides additional access controls and security policies and it includes policies for configuring the security of these devices it prevents any direct access to the kernel of the android operating system which of course goes out the window if the device is rooted which is why through mdm we want to make sure we don't allow rooted devices it does provide centralized management for policy configuration and device management as well so finishing out 3.5 we have enforcement and monitoring and deployment models around mobile solutions so let's start with third-party application stores so there's a danger of downloading apps from third-party app stores as there's no guarantee of the security of the app being installed so this could pose a security risk because the vetting process for
mobile apps in third-party stores may be less rigorous than what we see in the official app stores like the apple app store the google play store or the microsoft store side loading so this enables installing an application package typically in an apk format in the android scenario on a mobile device this is useful for developers to run trial or third-party apps they might be developing but it also allows unauthorized software to be run on a mobile device so rooting or jailbreaking so custom firmware downloads are used to root an android mobile device it basically gives the user a higher level of permissions on that device it removes some important elements of vendor security jailbreaking is what we call that in the apple ios world that's the equivalent of rooting on android so make sure you're familiar with those two terms and it allows you to run unauthorized software and remove device security restrictions just like rooting on android now you can still access the apple app store even though jailbreaking has been carried out for the exam rooting and jailbreaking remove the vendor restrictions on a mobile device to allow unsupported software to be installed so custom firmware is called out specifically in the exam objectives we just talked about that in the context of rooting and jailbreaking so i'll just throw it up on the screen again here as a placeholder carrier unlocking which is when a mobile device is no longer tied to the original carrier it allows you to use your mobile device with any provider and also to install third-party apps it's very common that if you're making payments on the device to a mobile provider they will lock that device to their carrier until you have made all your payments at which point you can have that device unlocked so you can move carriers firmware over the air updates so firmware is software installed on a read-only memory chip on a hardware device and used to control the hardware and firmware ota updates are pushed out
periodically by the vendor ensuring that a mobile device is up to date and secure a really common example is when the mobile device vendor sends a notification that there's a software update if you're an apple user you know exactly what i'm talking about short message service or sms this is text messaging it's become a very common form of communication it can be sent between two people in a room without other people in the room knowing about their communication however text messages can also be used to launch an attack sometimes without user intervention in any way then there's multimedia messaging service or mms which is a way to send pictures as attachments it's similar to sms messages but with media support and then there's rich communication services or rcs which is an enhancement to sms and it's used in services like facebook and whatsapp to send messages so you can see the read receipts so you can see that the person on the other end has read the message you can also send pictures and video so rich media support and the image capability makes mms and rcs paths for data theft for sure external media so an sd card or other external storage media that you might plug into a mobile device may enable unauthorized transfer of corporate data so data exfiltration usb on the go allows usb devices to be plugged into smartphones and tablets to act as a host for other usb devices so attaching usb devices in this case can obviously pose security problems as it makes it easy to steal information apple doesn't allow usb on the go incidentally recording microphone so smartphones and tablets can record conversations with their built-in microphones that could be used to take notes but they could also be used to tape conversations or record the proceedings of a confidential meeting also an element of the device we can control through mobile device management if it is a managed device and not a byod scenario gps tagging so when you take a photograph gps tagging adds the location where
the photograph was taken this could certainly be a privacy concern most modern smartphones do this by default gps tagging is on out of the box so if you don't want gps tagging enabled it's generally a step you will have to take to disable it next up we have wi-fi direct and ad hoc so wi-fi direct allows two wireless devices to connect to each other without requiring a wap and wi-fi direct is single path so it can't be used for internet sharing on the other hand ad hoc allows two devices to connect without a wap but it's multi-path and they can share an internet connection with someone else tethering which is when a cellular enabled smartphone can be attached to a laptop or another device to provide internet access now if a user uses a laptop and they connect to the company's network and then tethers to the internet it may result in split tunneling which presents a security risk if the device is compromised now mobile devices can often also function as a wi-fi hotspot over usb or bluetooth next payment methods so smartphones allow credit card details to be stored locally so that the phone can be used to make contactless payments using near field communications which we talked about previously so for byod this needs to be carefully monitored someone could leave the company with a company credit card in their phone and continue to use it mdm may prevent the payment function by disabling this tool in the mobile device management policies camera use smartphone cameras pose a security risk as trade secrets could be stolen by taking photos so research and development departments for that reason will often ban the use of personal smartphones in the workplace because they need the capability to control the camera function controlling the camera function prevents the theft of intellectual property mdm policies can disable cameras on company smartphones mdm can also disable screen capture so taking screenshots of documents on mobile phones deployment models so first up we have
bring your own device or byod very popular today where an employee is encouraged to bring their own device so they can use it for work it's cost effective for the company they don't have to buy the user a phone it's more convenient for the user they don't need to carry two phones it does need a couple of written policies to be in place to be effective one of those is acceptable use and the other is an onboarding off-boarding policy just to establish expectations for both parties so the acceptable use policy outlines what an employee can do with that device during the working day then we have the onboarding policy where we set parameters for device configuration like requirements to access corporate data such as a minimum operating system or the device not being rooted or jailbroken and even in a byod scenario using mobile application management where we're managing only the applications and data and not the device we can still specify generally speaking the os and the rooted or jailbroken restriction then the off-boarding policy covers how corporate data will be wiped from the device and as i mentioned earlier most mdm platforms support a selective wipe removing only company data rather than taking the device back to factory defaults but mdm solutions with mobile app management functionality can manage corporate data on byod devices then we have the corporate-owned model which is pretty straightforward it's a device fully owned and managed by the company it has full control over mam and mdm options then we have choose your own device cyod this is where a new employee chooses from a list of approved devices it avoids problems of ownership because the company has a limited number of tablets phones and laptops they offer simplifying management compared to the byod scenario and when the user leaves the company the devices are taken from them as they belong to the company they're corporate owned so that's the difference between byod and cyod and then there's corporate
owned personally enabled or cope this is when the company purchases a device like a tablet or a phone or a laptop and they allow the employee to also use it for personal use so they use it for work but they can use it outside of work this is often a better solution for the company than byod from a management perspective because it can then limit what applications run on the devices and it also frees the company to perform a full device wipe if it's lost or stolen in that byod scenario if a device is lost or stolen you know technically we're only authorized for a selective wipe and also in deployment models vdi or virtual desktop infrastructure these are hosted desktop environments on a central server in a cloud environment typically it provides a high degree of control and management automation and in the event of security issues the endpoint can easily be isolated for forensic investigation if desired provisioning a new desktop is also generally just a push button operation so it's highly automated and brings a high degree of control vdi is a very common deployment solution for contractors and offshore teams which brings us to 3.6 given a scenario apply cyber security solutions to the cloud so we'll be talking about cloud security controls across storage network and compute and we will be talking about security solutions in the cloud and in line with the security plus exam we'll talk through these in a vendor neutral fashion so the concepts we touch on here should apply more or less equally to the major cloud platforms like microsoft azure amazon's aws google cloud platform to name a few they may have slightly different names for functions sometimes branding related but these will apply more or less equally so you have geographies and a geography is a discrete market that generally consists of two or more regions that preserve data residency and compliance boundaries so you'll see geographies carved out per continent generally speaking you'll see something a 
little more granular in asia with regards to the china carve out due to laws for that country and regions offer protection against both localized disasters as well as regional or large geography disasters so within a region you will typically see multiple data centers so for example i'm looking at an azure map here but the maps for amazon and google would be similar so for example in east u.s you'll see multiple physical data centers in that region and for large geography disasters the csp will automatically create region pairs where they pair a primary and a failover region to allow automation of service failover in the case of a large geography disaster say we have a tornado or a hurricane for example the region pairs are generally chosen by the csp and that's in part due to the fact that they're configuring some automated failover of services that you need to do nothing about to allow to happen and it also ensures that a good decision is made for that backup data center they'll generally put 300 plus miles between the primary and the secondary they choose to match in that data center pair and availability zones offer protection at the physical location level within a region with independent power network and cooling a zone is generally comprised of two or more data centers it's going to be tolerant to data center failures via redundancy and isolation and even with features like a load balancer you'll typically see that a load balancer can be zone redundant even with a single address then resource policies state what level of access a user has to a particular resource and ensuring the principle of least privilege is followed is going to be very important for security and audit compliance even in the cloud so the rules don't change when we move to the cloud and the csp will provide details on how their cloud platform can help organizations meet a variety of compliance standards like hipaa like gdpr like pci dss secret management so our csps offer cloud services for
centralized secure storage and access for application secrets so consider a secret anything that you want to control access to it could be passwords certificates tokens cryptographic keys api keys this will generally support programmatic access via an api to support devops and cicd continuous integration and deployment operations and you'll see granular access control at the vault instance level as well as to the secrets within then we have integration and auditing so integration speaks to the process of how data is being handled from input to output and a cloud auditor is someone responsible for ensuring that the policies processes and security controls defined have been implemented that auditor will typically be a third party from outside the company and they test to verify that process and security controls and the system integration are working as expected now a few of those controls they may be testing might include encryption levels access control lists privileged account use password policies anti-phishing protection data loss prevention controls anti-ransomware the process will typically be repeated periodically on at least an annual basis and self-audits ahead of external audits are very common so if we have an external audit annually it's very common that we'll self-audit more frequently and certainly self-audit ahead of that external audit so now we'll talk about cloud security controls around storage including permissions encryption replication and high availability so your cloud service providers will assign each customer a storage identity and put them into different storage groups that have appropriate rights to restrict access at the tenant or subscription level so we're really talking about the csp controls at a service level right now they will also put some default service level encryption in place as well and they will typically restrict permissions from the public internet out of the box and this is in part due to some hard-fought
battles in the past where we saw providers like amazon have some very high profile breaches with their s3 storage buckets due to weak defaults for relational databases you'll see transparent data encryption available for data at rest encryption for data in transit is typically tls or ssl which is really just industry standard and replication so a method wherein data can be copied from one location to another immediately to ensure recovery in case of an outage is available out of the box and in the cloud generally speaking multiple copies of your data will always be held for redundancy even locally typically what you have are locally redundant options zone redundant options and geo-redundant options so you can choose your level of redundancy and of course understanding that in the cloud we're paying as we go we're paying for what we use so when we get into those higher level geo-redundant options we get greater protection against regional or larger geographic failures but we're paying more for that privilege high availability ensures that copies of your data are held in different locations and automatic failover between a region pair in the event of an outage is very common so you'll have to establish redundancy for your application to use that data that fails over between region pairs but you'll find that for storage and some of the other underlying services that there'll be some failover by default that happens with no need for action on your part it's built into the platform so let's switch gears and talk cloud security controls for the network so we'll touch on virtual networks public and private subnets segmentation we'll revisit api integration briefly and we'll talk about connecting our public cloud to our on-premises data center the hybrid cloud we'd call that so let's start with virtual private cloud so this is a virtual network that consists of cloud resources where the vms for one company are isolated from the resources of another company and separate vpcs can
be isolated using public and private networks and within your networks you can have multiple subnets so at a subnet level we'll have public subnets that can access the internet directly generally through a firewall and protected private subnets now virtual networks can be connected to one another through a vpn gateway scenario so a site-to-site vpn so to speak or through network peering and again the precise verbiage will vary from vendor to vendor from microsoft to amazon to google so we're really speaking about this in a vendor agnostic way to the degree we can and for vdi client scenarios a nat gateway for internet access would make sense so clients can browse the internet safely so private subnets let's dig into this just a bit further so i mentioned private subnets cannot connect directly to the internet so they can be configured to go through a nat gateway for outbound internet connectivity as desired client vms and database servers will often be hosted in a private subnet this will be common for the vdi scenario for the virtual desktops but this is not for public services like websites it's going to be a different configuration for sure a private subnet will use one of these address ranges 10.0.0.0/8 172.16.0.0/12 or 192.168.0.0/16 and these address ranges were not defined by the cloud providers these were actually defined in rfc 1918 which specifies these address ranges as private meaning they're not routable over the internet and all other address ranges except for 169.254 are going to be public addresses and that 169.254 range belongs to a self-addressing scenario that doesn't factor into the security plus exam so we won't waste time on that now moving on to public subnets resources on a public subnet can connect directly to the internet so public-facing web servers are often going to be placed within that subnet and the public subnet may have a nat gateway or a firewall for communicating back to private subnets and an internet gateway public services like websites are typically going to
be published through a firewall so you might use a web application firewall for example in the case of a website the firewall you need in the cloud will depend on your use case so vpc connectivity we can connect to that virtual private cloud through a vpn using ipsec with a vpn gateway it's called a transit gateway sometimes it depends on the cloud provider you're working with but we can set up a vpn for that connectivity network peering is another method so we can connect virtual networks in the cloud through a peering function that most of your providers will offer peering is actually the more common option between cloud networks it's generally simple to set up and it's going to be faster than a site-to-site or network to network vpn connection the site-to-site vpn option is common for on-premises to cloud connectivity that would be a hybrid cloud scenario so segmentation now securing which of your services are permitted to access or be accessible from other zones involves setting up a set of rules to control that traffic and the rules are going to be enforced by the ip ranges of each of the subnets and within a private subnet segmentation can be used to achieve departmental isolation or any manner of role-based isolation for example we might put our sql servers onto a specific subnet and then restrict ingress traffic to that subnet to just the application servers that need to talk to the database servers so it really just depends on your use case in that respect so for apis we're talking about rest apis generally speaking which is what you'll encounter with apis today for the most part this enables multi-language support it can handle multiple types of calls return different data formats and we need to make sure that with apis we've implemented encryption authentication and hopefully rate limiting throttling and quotas so these are going to help protect us from unwanted access and rate limiting throttling and quotas should prevent
disruptive attacks, denial-of-service attacks. We talk about this in domain two, so if you haven't watched domain two, go back and have a look at that section. So let's talk cloud security controls for compute. Security groups: a cloud provider has to secure multiple customers, and they do use firewalls behind the scenes, but they can't grant individual customers direct access to the firewalls that are used to keep the customers separate from one another. Instead, they'll use something of a security group to define permissible network traffic, consisting of rules similar to what you'd see in a firewall rule set. Dynamic resource allocation: this uses virtualization technology to scale cloud resources up and down as demand grows or falls. How this is implemented, and whether it's even available, varies widely by the service and the configuration you're working with. For example, if you're working with infrastructure as a service with just standard VMs in the cloud, you're generally not going to have any sort of automatic scale-up/scale-down functionality, but when you get into platform as a service or serverless, there are going to be some options, and when we get into containers, dynamic resource allocation has some options available to you. So, instance awareness: VM instances need to be monitored to prevent VM sprawl and unmanaged VMs, because those are going to increase our attack surface, and those will have security consequences. They'll also add costs in the cloud, so it's not only a security issue but a cost issue when we go to the cloud, because we're paying for what we use. We can use intrusion detection and prevention to help detect new instances, and process controls like privileged identity management, as well as change and configuration management, are going to help us prevent those unwanted deployments from happening in the first place. And a lot of your cloud service providers will offer some sort of policy tooling to help tenants enforce governance policies; in other
words, automating management and restriction of who and what and where resources can be deployed in your cloud subscriptions. So, you may see mention of a virtual private cloud endpoint. This allows you to create a private connection between your VPC and another cloud service without crossing over the internet, and again, I struggle with these because they're a bit generic; your network connectivity options really do vary by cloud service provider, so we're talking very generically here in terms of these connectivity types. But you'll see CSPs offer site-to-site connectivity options for hybrid cloud as well, so you can connect your on-premises data center into the cloud, and a site-to-site VPN is the most common way we see that. Most of those providers will also offer some sort of premium option to connect your on-premises data centers and locations to the cloud without the need to traverse the internet; that connection will be faster, more secure, and also more expensive. Most enterprises, large organizations, today have implemented a hybrid cloud model, so generally speaking they'll have on-premises resources and cloud resources (public cloud resources, as we're talking about here), and they'll connect those with some sort of site-to-site connection. So, if your organization works with containers, or you encounter containers in your work life, it's almost certainly going to be Kubernetes. Containers offer a more granular option for application and process isolation. If you're not familiar, containers run within a VM, so it's really multiple containers sharing a single operating system. Most cloud service providers offer hosted Kubernetes services (certainly Azure, Amazon, and Google all do), and in that hosted offering they handle some of the critical tasks, like health monitoring of the service and maintenance, for you. It's really a platform-as-a-service offering; you basically pay for what we'd call the agent nodes, which is where your containers will run, and the
management cluster that handles some of the scheduling and management functions comes as part of the service. Kubernetes has really become the de facto standard. So containerization of our apps enables more efficient utilization of hardware resources, which is great for the cloud, but more importantly, it offers a more granular level of isolation for resources. We can control CPU and memory utilization at the container level; we're isolating processes in that container, restricting access to files and other system resources. So let's talk solutions. The first on the list in the exam objectives is cloud access security broker, or CASB. We use a CASB to enforce the company's policies between on-premises and the cloud with regard to apps and data. So a CASB can detect, and optionally prevent, data access with unauthorized apps as well as data storage in unauthorized locations. Perhaps your organization is an Office 365 shop and they use SharePoint and OneDrive; we could use a CASB to make sure that nobody's trying to store data in their own personal Box or Dropbox or Google Drive account. We see the phrase "shadow IT" associated with CASBs; with the CASB we can identify unauthorized or unapproved tools being used by our employees, and we can then potentially implement controls to remediate those issues or block those actions. Application security: we can use web app firewalls and next-gen firewalls, which incorporate additional intelligence, and we can use intrusion detection and prevention systems. The right solution will depend on the application. For a web-based application, a web app firewall is often the perfect solution; for compiled n-tier applications that are maybe a little more sophisticated, perhaps a next-generation firewall will be more appropriate, so we can get some of that external intelligence and some protocol smarts. So let's talk next-generation secure web gateway, which was called out specifically in the exam objectives. So, firewalls function at the packet level,
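As an illustration of the CASB idea described above, here's a toy sketch of a policy check flagging shadow IT; the sanctioned-app list and function names are mine, not any vendor's API:

```python
# Sanctioned storage destinations per company policy (illustrative list)
SANCTIONED = {"sharepoint.com", "onedrive.live.com"}

def check_upload(user: str, destination_domain: str) -> str:
    """CASB-style policy check: allow sanctioned destinations,
    flag everything else as shadow IT."""
    if destination_domain in SANCTIONED:
        return "allow"
    # A real CASB could block, alert, or quarantine here
    return f"block: {destination_domain} is unsanctioned for {user}"

print(check_upload("alice", "onedrive.live.com"))  # sanctioned -> allow
print(check_upload("bob", "dropbox.com"))          # flagged as shadow IT
```

A production CASB does this by proxying traffic or reading cloud-app audit logs, but the policy decision itself reduces to a lookup like this.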
using rules to allow or deny each packet, inbound or outbound. The next-gen secure web gateway works at the application layer, at layer 7, looking at the actual traffic over the protocol to detect malicious intent. So functions in the next-gen secure web gateway can include web proxy, policy enforcement, malware detection, traffic inspection, data loss prevention, and URL filtering; quite a number of options in the box. You may see questions on the exam related to firewall considerations in a cloud environment. One reason we need a good firewall is simply to filter incoming traffic to protect our cloud-hosted infrastructure and applications from attackers or malware. I'd mentioned that the web application firewall tends to be the most common we see in the cloud, and there are a couple of reasons for that: one is cost, and another is that the web app firewall meets a common need. Protecting web apps is a pretty common ask, and they tend to be easy to configure; some that I've worked with include OWASP rule sets that I can apply to the web app firewall right out of the gate, so I have it configured very quickly. So it's cheap, it's easy, and it's purpose-built. Because it's less expensive than the more function-rich next-generation firewall and secure web gateway options, it's not surprising we see it a little more frequently, but you'll want to make sure you know some of those feature differentiations that we called out earlier around the firewalls. Network segmentation is important, and it should be supported with appropriate traffic filtering and restriction with the firewall type that's most appropriate for the use case. The firewall can filter traffic between virtual networks that we've segmented, as well as between virtual networks and the internet. And I've mentioned layers repeatedly; if you're not familiar, I'm talking about the layers of the OSI model, the Open Systems Interconnection model. This is really a foundational concept of networking, and if you're not familiar, I want
to just bring you up to speed quickly. So a network firewall works at layer 3, a stateful packet inspection firewall at layers 3 and 4, and many of your cloud firewalls, like web app firewalls, work at layer 7. So, just in case you don't know what these layers are, I'm going to give you two charts that you can use to prepare your foundational knowledge for the exam. Here's the OSI model: the seven layers, starting with the physical layer up to layer 7, which is the application layer. And here are some protocol examples; this will show you where the protocols live in the model. Not every protocol maps cleanly to a single layer (for example, TLS shows characteristics of layers 4 and 5), but never mind that; this is going to give you a foundation. And in case you're unfamiliar, here's the OSI model by function, starting with the physical layer, layer 1. The physical layer contains the device drivers that tell the protocol how to use the hardware for transmitting data. Data link is where frame formation happens; the protocol data unit at the data link layer is the frame. At the network layer, we're adding routing and addressing information, source and destination addresses; the protocol data unit at the network layer is the packet, so if you hear any discussion of packets, they're talking about the network layer. The transport layer manages the integrity of a connection and controls the session, so you'll have some protocols that will retransmit lost packets (they'll use TCP); others will not worry about sessions and retransmission (those will typically use UDP). At the session layer, layer 5, we're establishing, maintaining, and terminating connection sessions between computers. Layer 6 is transforming data received from layer 7, from the application layer, into a format that any system, any protocol following the model, can understand. And layer 7 is about interfacing user applications, services, or the operating system with the protocol stack. So that's a quick study; it'll give you something to look at if you're
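Since the on-screen charts aren't reproduced in this transcript, this small Python sketch captures the same mapping of layers to PDUs and example protocols (the protocol placements are common textbook associations, and the helper function is mine):

```python
# OSI layers with their common protocol data units (PDUs) and example protocols
OSI = {
    1: ("physical",     "bits",    ["Ethernet PHY", "DSL"]),
    2: ("data link",    "frame",   ["Ethernet", "PPP"]),
    3: ("network",      "packet",  ["IP", "ICMP", "IPsec"]),
    4: ("transport",    "segment", ["TCP", "UDP"]),
    5: ("session",      "data",    ["NetBIOS", "RPC"]),
    6: ("presentation", "data",    ["MIME", "character encodings"]),
    7: ("application",  "data",    ["HTTP", "DNS", "SMTP"]),
}

def layer_of_pdu(pdu: str) -> int:
    """Return the lowest OSI layer whose PDU matches the given name."""
    for number in sorted(OSI):
        _name, unit, _examples = OSI[number]
        if unit == pdu:
            return number
    raise ValueError(f"unknown PDU: {pdu}")

print(layer_of_pdu("packet"))  # -> 3: talk of packets means the network layer
print(layer_of_pdu("frame"))   # -> 2: frames live at the data link layer
```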
not already familiar with the OSI model. So let's talk cloud-native versus third-party solutions. Platforms like Azure, AWS, and Google Cloud Platform have their own tools: in the Microsoft world we have Azure Resource Manager, and in AWS there's CloudFormation. What I'm talking about here are technologies that support automating and standardizing deployment through a process called infrastructure as code, where we define our infrastructure (our networks, our VMs, our services) in code so it can be deployed in an automatic way, and it can be monitored and assessed on a recurring basis to ensure it hasn't drifted from the intended configuration set. But they're separate tools for separate platforms, requiring separate skill sets; every platform has its own tooling, and that's where third-party solutions can come in, because they add flexibility, functionality, and, perhaps most importantly, multi-platform support. Everyone wants to get to a "write once, use multiple times" state when they're in a multi-cloud scenario. Easier said than done, but if you have a single language that supports multiple clouds, it can certainly ease operations for the team. So organizations will typically move to third-party solutions when the native cloud solutions don't meet their functionality needs, or when it's just operationally too great a burden. For example, some organizations move to Terraform for infrastructure as code because it supports the major cloud service providers using a single language. So if you understand how to write code with Terraform, you can define your infrastructure, you can describe it, and you can deploy to Azure or AWS or Google Cloud Platform based on a description of that infrastructure you've written in Terraform. And your CSPs offer a marketplace where third parties can publish offers, so whether it's Azure or Amazon or Google Cloud, they'll typically have a marketplace where, say, firewall vendors can offer their virtual network appliances (Cisco and Palo Alto as a
couple of examples). All right, moving on to 3.7: given a scenario, implement identity and account management controls. We're going to talk identity concepts, account types, and account policies, and we'll start with identity providers. The identity provider is what creates, maintains, and manages identity information while providing authentication services, and almost always authorization services, to applications. For example, Azure Active Directory is the identity provider for Office 365, and a few other examples would include Active Directory, which we see on-prem (and which is also a directory service); Okta and Duo would be considered identity providers. So in the world of identity there are a few concepts called out in the exam objectives. An attribute, in the context of an identity, is a unique property in a user's account details, like an employee ID, something that definitively identifies that user. A smart card: a credit-card-like token with a certificate embedded on a chip; it's used in conjunction with a PIN, and this is a physical card. Then we have certificates. A digital certificate includes two keys, a private key and a public key, and the private key can be used for identity; any information encrypted with a user's public key can only be decrypted with the private key. Then a token: a digital token, like a SAML token used for federation services, or a token used by open authorization (OAuth 2, we'd call it). SSH keys: when we're using SSH, SSH keys allow us to connect to a Linux server using secure authentication without the need for a username and password. It's much like certificate authentication: the public key is stored on the server, with the private key remaining on the administrator's desktop; it's just a key pair. Account types: this language will vary a bit by cloud provider, but we're going to be talking in terms here that are universal for the Security+ exam. For a user account, we're referring to a standard user account with limited privileges; it typically cannot install software
and has limited access to systems. There are two types of user accounts: those that are local to a machine, so they only exist on a particular device, and those that access a domain, like an Active Directory domain, or perhaps an Azure Active Directory account, where it's part of a larger database. Then a guest account: this is really a legacy account that was designed to give limited access to a single computer without the need to create a user account. This account is normally disabled, as it's no longer used, and some administrators (I would say all administrators these days) view it for the security risk it is, and it's disabled by default. I haven't seen it used in a couple of decades, so I don't expect you're going to see much about that, but that should cover everything you need to know. A privileged account: privileged accounts have greater access to a system, and they tend to be used by members of the IT team, so administrators are an example of privileged accounts. Administrative accounts can install software, manage configuration on servers and clients, and perform admin operations, and they also have the privilege to create, delete, and manage user accounts, generally speaking. Now, administrators have been told they should have two accounts: one for routine tasks and another for administrative tasks, and they only use that administrative account, that privileged account, when they need it. Historically that has been true, and you should remember this for the exam. What I would tell you in practice, if we come to the cutting edge, is that this is becoming less necessary, because some cloud providers now eliminate this need and instead enable admins to activate privilege just in time using a single account. Because the fact of the matter is, if you're administering your environment from a Windows machine, the moment you log on with that administrative account, that account's credentials are in memory, and we have to worry about credential theft should that endpoint, that workstation, be
compromised. But for the exam, assume that the two-account strategy is still something folks do, and I do see customers and colleagues that I work with using that model, but more and more I'm seeing a move to systems that support just-in-time elevation of a single account, revoking those privileges when they are no longer needed. A service account: when software is installed on a computer or a server, it may require privileged access to run with nobody at the keyboard, and a service account, really a lower-level administrative account, kind of fits the bill here. It is a type of administrator account used to run an application, generally unattended, so it's going to have purpose-specific permissions. An example would be an account to run an antivirus application, to run that service on an endpoint. You'll also hear this sometimes referred to as a service principal. A shared account: when a group of people perform the same duties, such as members of customer services at an organization, they may use a shared account. Now, when user-level monitoring, auditing, or non-repudiation is required (non-repudiation being the ability to prove that an action was performed, or a message was sent, by a specific individual), you have to eliminate the use of shared accounts. And I'll take that a step further to say that most cloud identity providers have options to eliminate the need for shared accounts, and that would hold true for hybrid environments as well. Most enterprises are going to have access to a cloud identity provider to supplement Active Directory on premises, and the need for shared accounts today, I find, is minimal. But when you use a shared account, you can no longer definitively prove, down to a single individual, who performed an action. A generic account may come up on the exam: a default administrative account created by manufacturers for a wide range of smart and internet-connected devices, all the way down to a camera or your wireless access point. Most
of these are going to have a default username and password, and of course, default passwords should always be changed. Identifying the presence of these accounts should be part of the onboarding process for that device and addressed through configuration management in a corporate environment, because that default username and password is a common attack vector; we talk about that a bit further in domain 1 of this series. Account policies: complex passwords, sometimes known as strong passwords, are formed by drawing from at least three of the following four character groups: lowercase, uppercase, numbers, and special characters. Identity providers sometimes have limitations around what characters are allowed (you'll see this with websites where you might sign up for a subscription as well), but virtually all are going to support password complexity to a reasonable level. Now, password history prevents someone from reusing the same password; for example, if the number of remembered passwords is 12, only on the 13th change could that password be reused. And password reuse is a term used in the exam that means the same thing as password history; both of these prevent someone from reusing the same password. So, to be crystal clear for the exam: password reuse and password history are two ways of saying the same thing. Account audits: an auditor will review accounts periodically to ensure old accounts are not being used after an employee changes departments or leaves the company, and an auditor will also ensure that all employees have only the necessary permissions and privileges to carry out their jobs; that's what we call the principle of least privilege. Location-based authentication can be implemented in policy as an additional factor in the conditions of authentication. Geofencing can be used to establish a region and pinpoint whether you're in that region, and if you're not in your expected location, you're not able to log in. So context-aware location can be used to block any attempt to
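The "three of four character groups" rule and the password-history rule described above can be sketched in a few lines of Python (the minimum length and the 12-password history depth are illustrative defaults):

```python
import string

def is_complex(password: str, min_length: int = 8) -> bool:
    """Require minimum length plus at least three of the four character groups."""
    groups = [
        any(c.islower() for c in password),              # lowercase
        any(c.isupper() for c in password),              # uppercase
        any(c.isdigit() for c in password),              # numbers
        any(c in string.punctuation for c in password),  # special characters
    ]
    return len(password) >= min_length and sum(groups) >= 3

def not_in_history(new_password: str, history: list, remembered: int = 12) -> bool:
    """Password history: reject reuse of any of the last `remembered` passwords."""
    return new_password not in history[-remembered:]

print(is_complex("Winter2024!"))  # upper + lower + digits + special -> True
print(is_complex("password"))     # only lowercase -> False
```

A real identity provider compares salted hashes rather than plaintext history, but the policy logic is the same.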
log in outside of the locations that have been determined as allowed regions. Geolocation can track your location by your IP address and ISP, and smartphones have location services that use GPS, so identifying your phone can be helpful in that respect. Now, when it comes to location, I do want to mention that many of your identity providers nowadays enable admins to pre-define trusted locations. What that means is that you can configure your identity provider so that if a user is logging into the network on a trusted device in a trusted location, like a corporate office, you can forego that second factor of authentication, or you can maybe prompt them once in the morning and then allow them to go without responding to secondary factors over and over again. In fact, we find users are quite dissatisfied if they have to keep supplying that second factor when they're in a known location on a fully compliant, managed device. Impossible travel time: I mentioned that's a security feature you see used by cloud providers all the time, like Microsoft, to prevent fraud. If a person is in Houston and then 15 minutes later they're determined to be in New York, their attempt to log in will be blocked. And obviously we can see some situations where you get false positives there; if somebody is logging on to a system that is located in New York, but they're logging on from Houston, we can sometimes see some challenges with that. But with the providers out there today, generally there is enough user and entity behavior analytics to establish regular patterns for a user, which will eliminate these false positives. A risky logon is a security feature used by cloud providers, most often leveraging a record of devices used by each user, and the response is going to vary by provider, but may include a confirmation email to the user to validate their identity, or responding to a prompt in an authenticator app. How user and sign-in risk are used varies by provider. And then there's account disablement. So, account
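Before that, the impossible-travel check described above reduces to simple geometry: compute the distance between two login locations and the speed that travel would imply. A minimal sketch (the 900 km/h airliner-speed threshold is an illustrative assumption, not any provider's documented value):

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points via the haversine formula."""
    r = 6371.0  # mean earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(loc1, loc2, minutes_between, max_kmh=900):
    """Flag a login pair if the implied travel speed exceeds a plausible
    airliner speed (max_kmh is an illustrative threshold)."""
    km = distance_km(*loc1, *loc2)
    hours = minutes_between / 60
    return hours > 0 and km / hours > max_kmh

houston = (29.76, -95.37)
new_york = (40.71, -74.01)
print(impossible_travel(houston, new_york, minutes_between=15))   # True: blocked
print(impossible_travel(houston, new_york, minutes_between=240))  # False: plausible flight
```

Real providers layer behavior analytics on top of this to suppress the false positives mentioned above (such as VPN egress points).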
management (the identity life cycle, we'd call it) ranges from account creation and onboarding to disablement when the user leaves the company, and of course, disabling an account in a timely manner when a user leaves is going to be very important in preventing data exfiltration. Time-based logins: we may establish that some user roles, because they're shift workers, may only log in during working hours, so we prevent them from logging in outside of working hours. For example, employees might be restricted to accessing the network between 7 a.m. and 6 p.m. Now, this prevents data theft by preventing users from coming in at 3 a.m., when nobody's watching, and stealing corporate data. It can also be effective in preventing individual fraud, but also collusion, because the time restriction will enforce your schedule rotation, which is another element of reducing unwanted data loss in our organization; if we allow people to log in 24 hours a day, then schedule rotations don't matter as much. This is going to be very common in some industries; I see it commonly in financial services, where certain job roles are only allowed to log in during the business day. Moving on to 3.8: given a scenario, implement authentication and authorization solutions. We'll be talking authentication management, authentication and authorization, as well as access control schemes. So let's start with authentication management. We have a password key, which looks like a USB device, and it works in conjunction with your password to provide multi-factor authentication, so it's a physical device. One example is YubiKey, which is FIPS 140-2 validated and provides code storage within a tamper-proof container. Then a password vault: a password vault is stored locally on a device, and it stores passwords so the user doesn't need to remember them. These are very common for PCs, and they use strong encryption (AES-256 is very common) for secure storage. They're only as secure as the owner password that's used to protect the
vault itself, and they typically use multi-factor authentication. A type of password vault exists in the cloud for DevOps scenarios, which we'll talk about later in this module, but for that password vault, I increasingly see the authenticator apps that are available, like Microsoft's authenticator app and Google's authenticator app, offering to function as that centralized storage for all your passwords. Now, here are a couple of concepts brought up in the authentication and authorization context. TPM, which we talked about much earlier in this module: the trusted platform module, again, is typically a chip on the motherboard, and it's used to store key pairs when we're using full disk encryption. And then the HSM, the hardware security module, is used to store encryption keys; like a key escrow, it holds private keys for third parties, and the HSM may be a separate device or removable. Knowledge-based authentication: this is normally used by banks and financial institutions, or email providers, to identify someone when they want a password reset, and there are two different types of knowledge-based authentication: dynamic and static. I think one of these will stand out to you as being much stronger than the other. Let's start with static: these are questions that are common to the user, questions that the user has provided answers to beforehand. An example would be "what is the name of your first school?"; they may ask the name of your first pet, your kindergarten teacher, etc. These are going to be questions that could potentially be researched by a bad actor, from your social media profiles for example. Now, dynamic knowledge-based authentication is a bit different; it's deemed to be more secure because it doesn't consist of questions provided beforehand. So, for example, a bank, to confirm your identity, will ask you about three direct debit mandates; they'll ask for the date and the amount paid, something that you can't predict beforehand and that would
only be accessible to you, as it requires knowledge of your banking transactions. Authentication protocols: there's PAP, password authentication protocol. This is a password-based protocol used by the point-to-point protocol to validate users. It's supported by almost all network OS remote access servers, but it's considered weak at this point; it's really legacy. Then there's CHAP, challenge handshake authentication protocol, which authenticates a user or a network host to an authenticating entity (that entity may be, for example, an internet service provider); CHAP requires that both the client and the server know the plaintext of the secret, although it's never sent over the network. And then we have extensible authentication protocol, which we talked about previously. This is an authentication framework; it allows new authentication technologies to be compatible with existing wireless or point-to-point connection technologies. Extensible authentication protocol is really your most common go-to today. And then there's 802.1x authentication, which is an authentication mechanism for devices wishing to attach to a LAN or a wireless LAN. It really describes encapsulation of the EAP protocol, and it involves three parties: a supplicant, an authenticator, and an authentication server. The supplicant is a client, and the authenticator forwards the request, the credentials, over to the authentication server, which decides whether or not the request will be authenticated. So 802.1x authentication defines encapsulation of EAP over IEEE 802.11; it's also known as EAP over LAN (EAPOL). Now, we have multiple protocols in the area of authentication, authorization, and accounting services that show up in the exam objectives, and these come up in remote access scenarios frequently. A network access server is a client to a RADIUS server, and the RADIUS server provides the authentication, authorization, and accounting services (the AAA services, I like to call them). So we have RADIUS, which uses UDP and encrypts the password only. We see RADIUS
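Stepping back to CHAP for a moment: the claim that the secret is never sent over the network can be sketched concretely. Per RFC 1994, the CHAP response is an MD5 hash over the message identifier, the shared secret, and the server's random challenge; the variable names below are illustrative:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5 over identifier + shared secret + challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Server side: issue a random challenge
secret = b"shared-secret"   # known to both ends beforehand, never transmitted
challenge = os.urandom(16)
identifier = 1

# Client side: compute the response from the same secret
response = chap_response(identifier, secret, challenge)

# Server side: recompute locally and compare to verify the client knows the secret
server_check = chap_response(identifier, secret, challenge)
print(response == server_check)  # True: authenticated without sending the secret
```

Only the challenge and the hash cross the wire, which is exactly why CHAP is stronger than PAP's cleartext password exchange.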
commonly in remote access scenarios like VPN. TACACS+ uses TCP and encrypts the entire session; we see that pretty commonly in admin access to network devices, pretty common in the Cisco world, for example. And then we have Diameter, which is based on RADIUS and improves on many of the weaknesses of RADIUS, but Diameter is not actually backward compatible with RADIUS; we see Diameter used with 4G. So the way you could remember these: RADIUS is UDP and encrypts the password only; TACACS+ is TCP and encrypts the entire session; and then just park Diameter in your head as that mobile scenario, and it's 4G (though if you have a 5G network that's non-standalone, it's still tied back to the 4G core, so Diameter factors into that 5G scenario too). But network access or remote access is the use case, at the highest level, I'd remember for the exam. So let's talk single sign-on. Single sign-on means a user doesn't have to sign in to every application they use; the user logs in once, and that credential is used for multiple apps. Single sign-on-based authentication systems are sometimes called modern authentication. To say it another way, single sign-on is a mechanism that allows subjects (think users) to authenticate once and access multiple objects (resources) without authenticating again. Some common single sign-on methods and standards include SAML, SESAME, KryptoKnight, OAuth (typically the OAuth 2 standard), and OpenID. For the exam, the three to know, I would believe, would be SAML, OAuth 2, and OpenID; I would know enough to differentiate these three should they show up as answers on the exam. So let's touch on all three of these right now at a little greater depth. Security assertion markup language, or SAML, is an XML-based open standard data format for exchanging authentication and authorization data between parties, in particular between an identity provider and a service provider. This is really common in on-premises federation scenarios with Active Directory, where we see
Active Directory Federation Services. Then there's OAuth 2.0, which is an open standard for authorization that's commonly used as a way for internet users to log into third-party websites using their Microsoft, Google, Facebook, or Twitter accounts without exposing their password. Azure AD, the identity provider for Office 365, supports OAuth 2 flows. OpenID: this is another open standard. It provides decentralized authentication, allowing users to log into multiple unrelated websites with one set of credentials maintained by a third-party service referred to as an OpenID provider. An example here would be logging into Spotify with your Facebook account. Then Kerberos: Kerberos is an authentication protocol in Microsoft's Active Directory, which we find on premises in enterprise organizations pretty commonly, and Kerberos is preferred to NTLM, which is really something of a legacy authentication protocol in the Windows world. Kerberos provides stronger encryption, interoperability, and mutual authentication (basically, both client and server are verified). It runs as a third-party trusted server known as the key distribution center, or KDC, which includes an authentication server, a ticket-granting service, and a database of secret keys for users and services. So it's using tickets as opposed to passing around password hashes, and it helps prevent replay attacks through timestamps. With NTLM you'll hear about pass-the-hash attacks, and with Kerberos you'll hear about pass-the-ticket attacks, but timestamps will help reduce Kerberos' vulnerability to attack. So let's talk about access control schemes. We'll start with non-discretionary access control, which enables the enforcement of system-wide restrictions that override object-specific access control. RBAC, role-based access control, is considered non-discretionary, and I said that as though it's important because role-based access control is what we see in the world of Windows, in Active Directory, in Azure Active Directory, so it's exceedingly
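The timestamp-based replay protection just mentioned can be sketched in a few lines. The five-minute window mirrors the commonly cited default clock-skew tolerance in Kerberos deployments; the replay-cache structure here is a simplification:

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)  # commonly cited default clock-skew tolerance
seen = set()                     # simplified replay cache of (client, timestamp)

def accept_authenticator(client: str, ts: datetime, now: datetime) -> bool:
    """Reject authenticators that are stale (outside the skew window)
    or exact replays of one already seen."""
    if abs(now - ts) > MAX_SKEW:
        return False             # too far from current time: stale or forged
    if (client, ts) in seen:
        return False             # same authenticator presented again: replay
    seen.add((client, ts))
    return True

now = datetime(2021, 1, 1, 12, 0, tzinfo=timezone.utc)
ts = now - timedelta(minutes=1)
print(accept_authenticator("alice", ts, now))                      # True: fresh
print(accept_authenticator("alice", ts, now))                      # False: replayed
print(accept_authenticator("bob", now - timedelta(hours=2), now))  # False: stale
```

This is why synchronized clocks matter so much in a Kerberos realm: drift beyond the skew window causes legitimate authentications to fail.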
common as an access control scheme today, but we'll talk about RBAC in just a moment. Okay, next up we have discretionary access control, or DAC. A key characteristic of the discretionary access control model is that every object has an owner, and the owner can grant or deny access to any other subject. When I talk about subjects and objects in the context of authentication, authorization, and access control, a subject would be your user, the entity that wants to be authorized to access a resource, and the object is the resource. So the subject is the entity that wants access, and the object is the resource they are accessing. Discretionary access control is user-based and user-centric in that respect. So again: object is resource, subject is user (I'll just pencil it in the upper right there, so you have it). An example of a discretionary access control system would be NTFS. Role-based access control: a key characteristic of RBAC is the use of roles or groups. Instead of assigning permissions directly to users, user accounts are placed in roles, and administrators assign privileges to the roles or to the groups. In fact, when you get into best practices with role-based access control, in Active Directory on-premises the best practice is to use groups, and in Azure Active Directory we're typically using groups and roles. And then there's rule-based access control; a key characteristic of rule-based access control is that it applies global rules to all subjects, and rules within this model are sometimes referred to as restrictions or filters. An example here: a firewall uses rules that allow or block traffic for all users equally. A key point about the mandatory access control model is that every object and every subject has one or more labels; these labels are predefined, and the system determines access based on the assigned labels. Attribute-based access control: in this case, access is restricted based on an attribute on the account, such as
department, location, or functional designation. For example, an admin may require user accounts to have the legal department attribute to view contracts. Privileged access management: this is a solution that helps protect the privileged accounts within a domain, preventing attacks such as pass-the-hash and privilege escalation. It also provides visibility into who is using privileged accounts and what tasks they are being used for, because these privileged accounts really need additional layers of protection. You'll find privileged access management native to some cloud identity providers today, and it may include a just-in-time elevation feature where a privileged user can request permission to activate that privilege for a limited period of time. So let's talk file system permissions. We'll start with NTFS, which is a Windows construct, and NTFS permissions are applied to every file and folder stored on a volume formatted with the NTFS file system. If you've worked with Windows, you've probably seen this interface before. You'll see the groups or usernames, and then the permissions that they are granted down in the window below, and you can get into the advanced area, into special permissions, and configure inheritance, which determines whether you're applying permissions to just a specific object or to that object and all child objects. On Linux we have a permissions model that has two special access modes called set user ID and set group ID. It recognizes three types of permissions at three levels. It recognizes read, write, and execute, and there's a numeric value assigned to each of those: read is 4, write is 2, execute is 1.
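Those per-permission values simply add up within each level (owner, group, other) to give the familiar three-digit mode. Here's a minimal Python sketch of that arithmetic; the helper names below are my own for illustration and not part of any standard library or tool:

```python
# Sketch of how Linux numeric (octal) permissions combine:
# read = 4, write = 2, execute = 1, summed per level (owner, group, other).

def mode_to_symbolic(octal_digit: int) -> str:
    """Convert one octal digit (0-7) to its rwx string, e.g. 5 -> 'r-x'."""
    return (
        ("r" if octal_digit & 4 else "-")
        + ("w" if octal_digit & 2 else "-")
        + ("x" if octal_digit & 1 else "-")
    )

def describe_mode(mode: str) -> str:
    """Expand a three-digit mode like '754' to 'rwxr-xr--'."""
    return "".join(mode_to_symbolic(int(digit)) for digit in mode)

print(describe_mode("754"))  # rwxr-xr--  (owner 7=rwx, group 5=r-x, other 4=r--)
print(describe_mode("640"))  # rw-r-----
```

On a real system this is the same arithmetic you apply with a command like `chmod 754 file`.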
So, for example, 7 would be read, write, and execute access; 6 would be read and write; 5 would be read and execute. That's what it would look like, and you'll see that at the three levels we have owner, group, and other. Owner is the user who created the file, group is a collection of users granted access, and other is any user who is neither the owner nor a member of the group. Moving on to 3.9: given a scenario, implement public key infrastructure. We're going to be talking about certificate services, and we'll touch on public key infrastructure concepts, types of certificates, and certificate formats. This is a topic where I find many folks struggle. I have a lot of job experience with PKI, so I'm going to call out key associations that will help you lock into your brain what you need to get through PKI on the exam, and understand how PKI works a little better even if you've not used it before. Let's start with some concepts. Key management really refers to management of cryptographic keys in a cryptosystem. Operational considerations when we think about PKI include generating keys, exchanging keys, key storage, and using, destroying, and replacing keys: all of the issuance and revocation, the life cycle management, really. From a design perspective, we need to think about the protocols we're choosing, because PKI is going to give us some options when it comes to cryptographic protocols, and we need to pick strong ones. We need to design our PKI system in such a way that if a server in our infrastructure is compromised, it doesn't compromise all of the certificates that have been issued. We'll need to look at user procedures, meaning who has access; in a PKI system we don't want one user to have access to all functions, so role separation will be very important, and there are going to be some supporting protocols that we'll talk about. At the base of a PKI infrastructure is your certificate authority, or certification
authority. This is the server that creates digital certificates and owns the policies. Now, a PKI hierarchy can include a single CA that serves as both the root certificate authority and the issuing authority, but this isn't recommended. A single-level PKI infrastructure is something we'd only see in a small environment, and it's not going to be the most secure, because if that server is compromised the whole solution is shot: all of the certificates you've issued are compromised. What we'll see in larger environments and more secure designs is a second tier, a middle tier we call a subordinate CA, sometimes called an intermediate certificate authority or a policy certificate authority. (You'll also hear about a registration authority, which verifies certificate requests on behalf of the CA.) It sits below the root CA in the CA hierarchy, and it issues certificates regularly, making it difficult for these servers to stay offline. What do I mean by that? Your root certificate authority is typically maintained in an offline state, and it's only brought online for renewing its root certificate or issuing certificates to subordinate CAs or issuing CAs, which we'll talk about in a moment. Now, a subordinate CA does have the ability to revoke certificates, making it easier to recover from any security breach that does happen. The certificate revocation list, or CRL, contains information about any certificates that have been revoked by a subordinate CA due to compromise of the certificate itself or of the PKI hierarchy. CAs are required to publish certificate revocation lists, but it's up to the certificate consumer whether they check these lists and how they respond if a certificate has been revoked. You'll see that browsers today, Chrome, Firefox, etc., will throw up a big warning if they hit a website whose certificate has expired, telling you it's no longer safe. Online Certificate Status Protocol offers a faster way to check a certificate's status as opposed to a CRL. With OCSP, the consumer of a
certificate can submit a request to the issuing certification authority to obtain the status of a specific certificate, rather than checking the certificate revocation list, which is a file; in a large environment with lots of traffic, that CRL file can get quite large. A certificate signing request, or CSR, records identifying information for a person or a device that owns a private key, as well as information on the corresponding public key. It's the message that's sent to the CA in order to get a digital certificate created; we create a certificate signing request in order to request that a new certificate be issued. The common name, or CN as it's commonly written, is the fully qualified domain name of the entity, like the web server, for example. Subject alternative name: this is an extension to the X.509 certificate specification that allows users to specify additional host names for a single certificate. This is pretty commonly used today in advanced scenarios. It's standard practice for SSL certificates, and it's on its way to replacing the use of the common name. It enables support for FQDNs from multiple domains in a single certificate, and it's commonly abbreviated as SAN. You'll see requests for a SAN certificate, and providers in the public space, like DigiCert for example, will commonly abbreviate it as SAN, but multiple domain support is the key. Certificate expiration: your certificates are valid for a limited period of time from the date of issuance, as specified on the certificate itself, and the industry specifies guidance here. Current guidance says the maximum certificate lifetime from widely trusted issuing authorities like DigiCert is a little over one year: 398 days. This number has gotten shorter and shorter over time. So let's talk types of certificates for a moment. We have a wildcard certificate that can be used for a domain and its subdomains. For example, in the contoso.com domain there are two
servers called web and mail. The wildcard certificate is essentially the asterisk, so it's just *.contoso.com, and when it's installed it would work for the FQDNs of both of these: web.contoso.com and mail.contoso.com. A wildcard can be used for multiple servers in the same domain, so it supports multiple FQDNs with the same domain suffix, which can save us money, especially if we're buying a certificate from a trusted authority online. When we're publishing a website to the internet, for example, we need a certificate from an issuing authority outside of our organization that's trusted by the world at large, and a wildcard certificate is going to save us money. Let's compare that again to the SAN certificate I mentioned a moment ago. The subject alternative name can be used for multiple domain names on a single certificate. For example, we could attach abc.com and xyz.com to the same certificate. In fact, you can also insert other information into a SAN certificate, such as an IP address, but the key here is multiple domains in a single certificate. I've seen a SAN certificate with 20 entries on it, with many domains and multiple IP addresses. We see organizations leaning on this more and more often, and you pay for these SAN certificates with external issuing authorities like DigiCert typically based on the number of entries you want on the certificate: a SAN cert with five entries is going to be less expensive than a SAN cert with 20 entries on it. A code signing certificate: when code is distributed over the internet, from a software company for example, it's important that users can trust that it was actually produced by the claimed sender. An attacker would love to produce a fake device driver or a web component that claims to be from a software vendor but isn't really, so using a code signing certificate to digitally sign the code mitigates this danger. Microsoft, Dell, any software or hardware company that is releasing software or little
utilities will almost always sign those nowadays, but code signing provides proof of content integrity. Self-signed certificates: these are issued by the same entity that's using them. A self-signed certificate doesn't have a CRL, and it can't be validated or trusted by outside parties, but it's the cheapest option for internal certificates. It's really commonly used in a lab, and it can be placed on multiple servers, but it's only going to work within our organization. Machine or computer certificates are used to identify a computer within a domain, so a device can be authenticated. An email certificate allows users to digitally sign their emails to verify their identity through the attestation of a trusted third party known as a certificate authority. It also allows users to encrypt the entire contents of email messages: attachments, all of it. A user certificate represents a user's digital identity. In most cases, a user certificate is mapped back to a user account, such as in Active Directory, and access control can then be based on that user account. I know this is a lot; stick with me. I'm going to tie these PKI concepts together in just a moment. I'll walk you through a scenario and show you a little picture, and we'll tie all of this information together for you in your head. I know I'm hitting you with a lot of information here, but PKI is very important, and if you understand these concepts you are in the minority, the minority you want to be a member of: the people who know PKI. A root certificate is a trust anchor in a PKI environment; it's the root certificate from which the entire chain of trust is derived. We'll look at this in a picture in a minute. This is the certificate of the root certification authority at the top of the hierarchy; it's where the chain of trust begins. Domain validation: a domain-validated certificate is an X.509 certificate that proves ownership of a
domain name. Then you'll occasionally see extended validation certificates. These provide a higher level of trust in identifying the entity that's using the certificate. They aren't something you see every day, but they're commonly used in the financial services sector; when there's money involved and important transactions, it's more likely you're going to see an extended validation certificate. I mentioned the chain of trust, so let's touch on the certificate authorities we've talked about in the last few minutes. The issuing CA, or issuing certificate authority, is the server that's issuing certificates to users and devices. I mentioned a subordinate CA; this is sometimes called an intermediate CA or a policy CA. The subordinate CA will issue certificates to issuing CAs. It's really there just for policy and for those issuing CA assignments in the middle of the hierarchy, and what that allows us to do is maintain our root certificate authority in an offline state. So that right there is our chain of trust: a certificate that is issued by the issuing CA at the bottom of the chain will be validated all the way up to that root certificate, but our root server does not have to be online, and it is always recommended that the root CA be maintained in an offline state. You only bring it online for the occasional operation, such as issuing a certificate for a new subordinate CA. If we have, for example, a subordinate CA that is compromised, then we'd need to bring that root online to revoke a certificate and issue a new one for a replacement subordinate, and you'll bring your root CA online to patch it every now and again. But that's what the hierarchy looks like, and I hope that makes it a little more plain if you're new to PKI. Now, you may see questions about certificate file formats. X.509 certificate formats and descriptions are in this table here, and I've outlined the important information for you so you can remember these more easily for
the exam. When you export a certificate to a file, or you are granted a certificate in a file, you can tell a lot about that file based on the file extension, which I have listed there in the ext column, and I've also listed whether or not that file format contains the private key, because remember, a certificate is not whole without its key pair: a public key and a private key. You'll see, for example, that the DER format does not include the private key, and the description column describes the common use for each format. You'll see the DER, PEM, PFX, CER, P12, and P7B formats and whether or not each includes the private key. This should give you everything you need for remembering these formats, but just chunk them together: you'll notice that of those six formats, three include the private key and three do not. So how do the private key and public key work together? This is confusing to a lot of folks, so I'm going to take you through a scenario that will clear it right up. We have here Franco and Maria. Franco sends a message to Maria requesting her public key. Maria has a public key and a private key in her certificate; the public key she can share with anyone, so she sends it back to Franco, or more likely her application sends it back to Franco. Franco uses Maria's public key to encrypt a message, and he sends it to her. Maria uses her private key to decrypt the message. A message that is encrypted with Maria's public key can only be decrypted using Maria's private key, which only Maria holds, and she will not share that with anyone; it's a private key. That key exchange I just described applies to a wide variety of use cases that you might encounter on the exam. So I want to talk through just a few additional PKI concepts that are called out in the exam objectives. Online versus offline certificate authority: an online CA is always running, meaning the computer is on; an
offline CA is kept offline except for specific issuance and renewal operations. Remember, offline is best practice for your root CA. Stapling: stapling is a method used with Online Certificate Status Protocol which allows a web server to provide information on the validity of its own certificate. This is done by the web server essentially downloading the OCSP response from the certificate vendor in advance and providing it to browsers, so the browsers don't have to go back to the OCSP endpoint. Pinning: this is a method designed to mitigate the use of fraudulent certificates. Once a public key or certificate has been seen for a specific host, that key or certificate is pinned to the host; should a different key or certificate be seen for that host, that might indicate an issue with a fraudulent certificate. Trust model: a model of how different certificate authorities trust each other and how their clients will trust certificates from other certification authorities. There are four main trust models in PKI: bridge, hierarchical, hybrid, and mesh. That picture I drew for you of the issuing CA, subordinate CA, and root CA is, as you might guess, the hierarchical model, and that is far and away the most common. Key escrow: key escrow addresses the possibility that a cryptographic key may be lost, and the concern is usually with symmetric keys or with the private key in asymmetric cryptography. Remember, with symmetric cryptography there's only one key, so if you lose that key, that's a problem, right? With asymmetric cryptography, more often than not we're dealing with the loss of the private key, which doesn't exist in many places because it's not shared, and if that loss occurs then there's no way to get the key back and the user can't decrypt messages. So organizations will establish key escrows to enable recovery of lost keys. And certificate chaining, which refers to the fact that certificates are handled by a chain of trust: you purchase a digital certificate from an
issuing authority, so you trust that CA's certificate; in turn, that CA trusts a root certificate, probably on an offline root CA, and there you have it. Congratulations, you've reached the end of domain three! I appreciate you sticking with me; this is a big domain, and I hope you're getting value out of the series. Reach out to me with questions anytime in the comments below this video or directly on LinkedIn; I'm always happy to talk if I can help you as you prepare for the exam. Expect modules four and five, covering domains four and five, in just the next few days. Until next time, take care and stay safe.
Info
Channel: Inside Cloud and Security
Views: 93,706
Id: CdBD5aFLUEc
Length: 172min 29sec (10349 seconds)
Published: Tue Feb 08 2022