Cryptography Full Course | Cryptography And Network Security | Cryptography | Simplilearn

Captions
Hey everyone, welcome to this video on the full course for cryptography. In this course we will learn everything there is to know about cryptography and how it helps keep our data secure on the internet and in local storage, with encryption and decryption forming the heart of this lesson. Cryptography has become the backbone of modern internet security: from secure website browsing to exchanging messages on WhatsApp, cryptography has its roots planted firmly among multiple avenues, all of which will be covered in this video. So strap up and get ready for a comprehensive course on cryptography that can be completed in just under 3 hours.

Let's take a look at the topics to be covered today. We start our lesson by understanding the basic need for cryptography and why it has become an integral part of our daily lives. Moving on, we learn how cryptography works, its worldwide applications, and its historical significance as far as securing data is concerned. Next, we look at the different ways to employ cryptography, such as symmetric encryption, asymmetric encryption, and hashing. Once the basic concepts are clearly understood, we can move on to more complex topics, such as the individual encryption and decryption algorithms used in cryptography, the first one being the Data Encryption Standard, or the DES algorithm. Next we learn about the Advanced Encryption Standard, that is, the AES algorithm; the Digital Signature Algorithm, or the DSA algorithm; and the Rivest-Shamir-Adleman algorithm, also known as the RSA encryption algorithm. We then take a look at two hashing algorithms, specifically the Message Digest 5, or MD5, hash algorithm, and the Secure Hash Algorithm family, also known as the SHA hash algorithms. Once we understand the algorithms at play, we can learn about some unique applications of cryptography, like the Secure Socket Layer handshake, or SSL handshake, which helps keep our web browsers safe against malicious attacks. Similarly, we learn about the Diffie-Hellman key exchange, which was a revolutionary protocol that enabled the exchange of cryptographic keys over an insecure channel. Finally, we go through some popular interview questions that candidates face when applying for a position that deals with cryptography, or just cybersecurity in general. But before we start our lesson, make sure you are subscribed to the Simplilearn YouTube channel and the bell icon is enabled, so that you never miss an update from us.

So here's a story to help you understand cryptography. Meet Ann. Ann wanted to look for a decent discount on the latest iPhone. She started searching on the internet and found a rather shady website that offered a 50% discount on the first purchase. Once Ann submitted her payment details, a huge chunk of money was withdrawn from her bank account just moments after. Devastated, Ann quickly realized she had failed to notice that the website was an HTTP web page instead of an HTTPS one: the payment information submitted was not encrypted, and it was visible to anyone keeping an eye on it, including the website owner and hackers. Had she used a reputed website, which has encrypted transactions and employs cryptography, our iPhone enthusiast could have avoided this particular incident. This is why it's never recommended to visit unknown websites or share any personal information on them.

Now that we understand why cryptography is so important, let's understand what cryptography is. Cryptography is the science of encrypting or decrypting information to prevent unauthorized access. We transform our data and personal information so that only the correct recipient can understand the message. As an essential aspect of modern data security, using cryptography allows the secure storage and transmission of data between willing parties. Encryption is the primary route for employing cryptography, by applying certain algorithms to jumble up the data. Decryption is the process of reversing the work done by encryption, so that the data becomes readable again. Both of these
methods form the basis of cryptography. For example, when "simplilearn" is jumbled up or changed in any format, not many people can guess the original word by looking at the encrypted text. The only ones who can are the people who know how to decrypt the coded word, thereby reversing the process of encryption.

Any data pre-encryption is called plaintext or cleartext. To encrypt the message, we use certain algorithms that serve a single purpose: scrambling the data to make it unreadable without the necessary tools. These algorithms are called ciphers. They are a set of detailed steps to be carried out one after the other to make sure the data becomes as unreadable as possible until it reaches the receiver. We take the plaintext, pass it to the cipher algorithm, and get the encrypted data. This encrypted text is called the ciphertext, and this is the message that is transferred between the two parties. The key that is being used to scramble the data is known as the encryption key. These steps, that is, the cipher and the encryption key, are made known to the receiver, who can then reverse the encryption on receiving the message. Unless a third party manages to find out both the algorithm and the secret key being used, they cannot decrypt the messages, since both are necessary to unlock the hidden content.

Wonder what else we would lose if not for cryptography? Any website where you have an account could read your passwords. Important emails could be intercepted and their contents read during transit without encryption. More than 65 billion messages are sent on WhatsApp every day, all of which are secured thanks to end-to-end encryption. There is a huge market opening up for cryptocurrency, which is possible due to blockchain technology that uses encryption algorithms and hashing functions to ensure that the data is secure.

Cryptography has been in practice for centuries. Julius Caesar used a substitution shift to move letters a certain number of places beyond their position in the alphabet, so a spy couldn't decipher the original message at first glance. For example, if he wanted to pass confidential information to his armies and decided to use a substitution shift of plus 2, A becomes C, B becomes D, and so on. The word "attack", when passed through a substitution shift of plus 3, becomes "DWWDFN". This cipher has been appropriately named the Caesar cipher, and it is one of the most widely known algorithms.

The Enigma is probably the most famous cryptographic cipher device in history. It was used by the German military in the world wars to protect confidential political, military, and administrative information, and it consisted of three or more rotors that scrambled the original message depending on the machine state at the time. Decryption is similar, but it needs both machines to be in the same state before the ciphertext is passed, so that the receiver recovers the same plaintext message.

There are two types of encryption in cryptography: symmetric key cryptography and asymmetric key cryptography. Both of these categories have their pros and cons and differ only in implementation. For now, we are going to focus exclusively on symmetric key cryptography. Let us have a look at its applications in order to understand its importance better. This variant of cryptography is primarily used in banking applications, where personally identifiable information needs to be encrypted. With so many aspects of banking moving onto the internet, having a reliable safety net is crucial: symmetric cryptography helps in detecting bank fraud and boosts the security index of these payment gateways in general. It is also helpful in protecting data that is not in transit but rests on servers and in data centers. These centers house a massive amount of data that needs to be encrypted with a fast and efficient algorithm, so that when the data needs to be recalled by the respective service, there is the assurance of minimal to no delay.
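The Caesar-style substitution shift described earlier can be sketched in a few lines of Python. This is a minimal illustration of the idea, not production code, and the cipher itself is trivially breakable today:

```python
# Toy Caesar (substitution-shift) cipher: shift each letter a fixed number
# of places, wrapping around the end of the alphabet.

def caesar_shift(text: str, shift: int) -> str:
    result = []
    for ch in text.lower():
        if ch.isalpha():
            # Map a..z to 0..25, shift with wrap-around, map back to a letter.
            result.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return ''.join(result)

ciphertext = caesar_shift("attack", 3)
print(ciphertext)                    # dwwdfn
print(caesar_shift(ciphertext, -3))  # attack -- shifting back decrypts
```

Decryption is just the same shift applied in the opposite direction, which is why both parties only need to agree on the shift value.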
While browsing the internet, we need symmetric encryption to browse secure HTTPS websites, so that we get all-around protection. It plays a significant role in verifying website server authenticity, exchanging the necessary encryption keys, and generating a session using those keys to ensure maximum security. This helps us avoid the rather insecure HTTP website format.

So let us understand how symmetric key cryptography works before moving on to the specific algorithms. Symmetric key cryptography relies on a single key for the encryption and decryption of information. Both the sender and receiver of the message need to have a pre-shared secret key that they will use to convert the plaintext into ciphertext and vice versa. As you can see in the image, the key used for encryption is the same key needed for decrypting the message at the other end. The secret key shouldn't be sent along with the ciphertext to the receiver, because that would defeat the entire purpose of using cryptography. Key exchange can be done beforehand using other algorithms, like the Diffie-Hellman key exchange protocol, for example.

For example, if Paul wants to send a simple message to Jane, they need to have a single encryption key that both of them must keep secret to prevent snooping by malicious actors. It can be generated by either one of them, but it must belong to both of them before the messages start flowing. Suppose the message "I am ready" is converted into ciphertext using a specific substitution cipher by Paul; in that case, Jane must also be aware of the substitution shift to decrypt the ciphertext once it reaches her. In the scenario where someone manages to grab the ciphertext mid-transit to try and read the message, not having the secret key renders anyone looking to snoop helpless.

Symmetric key algorithms like the Data Encryption Standard have been in use since the 1970s, while popular ones like AES have become the industry standard today. With the entire architecture of symmetric cryptography depending on the single key being used, you can understand why it's of paramount importance to keep it secret on all occasions. The side effect of having a single key for encryption and decryption is that it becomes a single point of failure: anyone who gets their hands on it can read all the encrypted messages, and do so mostly without the knowledge of the sender and the receiver. So it is the priority to keep the encryption key private at all times. Should it fall into the wrong hands, the third party can send messages to either the sender or the receiver using the same key to encrypt the message, and upon receiving such a message and decrypting it with the key, it is impossible to guess its origin. If the sender somehow transmits the secret key along with the ciphertext, anyone can intercept the package and access the information. Consequently, this encryption category is termed private key cryptography, since a big part of the data's integrity is riding on the promise that the users can keep the key secret. This terminology contrasts with asymmetric key cryptography, which is called public key cryptography because it has two different keys at play, one of which is public.

Provided we manage to keep the key secret, we still have to choose what kind of ciphers we want to use to encrypt this information. In symmetric key cryptography, there are broadly two categories of ciphers that we can employ. Let us have a look. Stream ciphers are algorithms that encode the basic information one bit at a time. This can change depending on the algorithm being used, but usually it relies on a single bit or byte to do the encryption. This is the relatively quicker alternative, considering the algorithm doesn't have to deal with blocks of data at a single time. Every piece of data that goes into the encryption can, and needs to, be converted into binary format. In stream ciphers, each binary digit is encrypted one after the other. The most popular ones are RC4, Salsa, and
Panama. The binary data is passed through an encryption key, which is a randomly generated bit stream; upon passing it through, we receive the ciphertext, which can be transferred to the receiver without fear of man-in-the-middle attacks. The binary data is passed through an algorithmic function; it can use XOR operations, as is the case most of the time, or any other mathematical calculations that have the singular purpose of scrambling the data. The encryption key is generated using a random bitstream generator, and it acts as a supplement in the algorithmic function. The output is in binary form, which is then converted into decimal or hexadecimal format to give our final ciphertext.

On the other hand, block ciphers dissect the raw information into chunks of data of fixed size. The size of these blocks depends on the exact cipher being used: a 128-bit block cipher will break the plaintext into blocks of 128 bits each and encrypt those blocks instead of a single digit. Once these blocks are encrypted individually, they are chained together to form the final ciphertext. Block ciphers are much slower, but they are more tamper-proof and are used in some of the most widely used algorithms employed today. Just like with stream ciphers, the original plaintext is converted into binary format before beginning the process. Once the conversion is complete, the blocks are passed through the encryption algorithm along with the encryption key. This provides us with the encrypted blocks of binary data. Once these blocks are combined, we get a final binary string; this string is then converted into hexadecimal format to get our ciphertext. Today, the most popular symmetric key algorithms, like AES, DES, and 3DES, are all subsets of the block cipher methodology.

With so many factors coming into play, there are quite a few things symmetric key cryptography excels at, while falling short in some others. Symmetric key cryptography is the much faster variant when compared to asymmetric key cryptography.
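The XOR-based stream encryption described above can be sketched as follows. This is an illustration only: a real stream cipher like RC4 or Salsa expands a short secret key into the keystream, whereas here `os.urandom` simply stands in for that keystream generator.

```python
import os

# Toy stream cipher: XOR each plaintext byte with one keystream byte.
def xor_stream(data: bytes, keystream: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream))

plaintext = b"HELLO"
keystream = os.urandom(len(plaintext))  # randomly generated bit stream (the key)

ciphertext = xor_stream(plaintext, keystream)  # encrypt
recovered = xor_stream(ciphertext, keystream)  # XOR with the same stream decrypts
assert recovered == plaintext
```

Because XOR is its own inverse, applying the same keystream twice restores the original data, which is exactly why sender and receiver must share the keystream secret.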
There is only one key in play, unlike asymmetric encryption, and this drastically improves calculation speed in encryption and decryption. Similarly, the performance of symmetric encryption is much more efficient under similar computational limitations: fewer calculations help in better memory management for the whole system. Bulk amounts of data that need to be encrypted are very well suited for symmetric algorithms, since they are much quicker. Handling large amounts of data is simple and easy on servers and in data farms; this helps in better latency during data recall and fewer mixed-up packets. Thanks to its simple single-key structure, symmetric key cryptography algorithms are much easier to set up a communication channel with, and they offer much more straightforward maintenance. Once the secret key is transmitted to both the sender and receiver without any prior mishandling, the rest of the system aligns easily, and everyday communication becomes easy and secure. If the algorithm is applied as per the documentation, symmetric algorithms are very robust and can encrypt vast amounts of data with very little overhead.

We took a look at symmetric key cryptography, where we used a single private key for both the encryption and decryption of data, and it works very well in theory. Let's take a look at a more realistic scenario now. Meet Joe. Joe is a journalist who needs to communicate with Ryan via long-distance messaging. Due to the critical nature of the information, people are waiting for any message to leave Joe's house so that they can intercept it. Now, Joe can easily use symmetric cryptography to send the encrypted data, so that even if someone intercepts the message, they cannot understand what it says. But here's the tricky part: how will Joe send the required decryption key to Ryan? The sender of the message as well as the receiver need to have the same decryption key so that they can exchange messages; otherwise, Ryan cannot decrypt the information even when he receives the ciphertext. If someone intercepts the key while it is being transmitted, there is no use in employing cryptography, since the third party can now decode all the information easily. Key sharing is a risk that will always exist when symmetric key cryptography is being used. Thankfully, asymmetric key encryption has managed to fix this problem.

Let's understand what asymmetric key cryptography is. Asymmetric encryption uses a double layer of protection: there are two different keys at play here, a private key and a public key. The public key is used to encrypt the information pre-transit, and the private key is used to decrypt the data post-transit. This pair of keys must belong to the receiver of the message. The public key can be shared via messaging, blog posts, or key servers, and there are no restrictions. As you can see in the image, the two keys are working in the system. The sender first encrypts the message using the receiver's public key, after which we receive the ciphertext. The ciphertext is then transmitted to the receiver without any other key. On getting the ciphertext, the receiver uses his private key to decrypt it and get the plaintext back. There has been no requirement for any key exchange throughout this process, therefore solving the most glaring flaw of symmetric key cryptography. The public key, known to everyone, cannot be used to decrypt the message, and the private key, which can decrypt the message, need not be shared with anyone. The sender and receiver can exchange personal data using the same set of keys as often as needed.

To understand this better, take the analogy of your mailbox. Anyone who wants to send you a letter has access to the box and can easily share information with you; in a way, you can say the mailbox is publicly available to all. But only you have access to the key that can open the mailbox and read the letters in it, and this is where the private key comes into play. No one can intercept the message and read its contents, since it's encrypted. Once the receiver gets the message, he
can use his private key to decrypt the information. Both the public key and the private key are generated so that they are mathematically interlinked, and you cannot substitute other private keys to decrypt the data.

In another example, if Alice wants to send a message to Bob, let's say it reads "call me today", she must use Bob's public key while encrypting the message. Upon receiving the cipher message, Bob can proceed to use his private key in order to decrypt the message, and hence complete security is attained during transmission without any need for sharing the key.

Since this type of encryption is highly secure, it has many uses in areas that require high confidentiality. It is used to manage digital signatures, so there is valid proof of a document's authenticity. With so many aspects of business transitioning to the digital sphere, critical documents need to be verified before being considered authentic and acted upon. Thanks to asymmetric cryptography, senders can now sign documents with their private keys; anyone who needs to verify the authenticity of such signatures can use the sender's public key to decrypt the signature. Since the public and the private keys are linked to each other mathematically, it's impossible to repeat this verification with duplicate keys. Document encryption has been made very simple by today's standards, but the background implementation follows a similar approach.

In blockchain architecture, asymmetric key cryptography is used to authorize transactions and maintain the system. Thanks to its two-key structure, changes are reflected across the blockchain's peer-to-peer network only if they are approved from both ends. Along with asymmetric key cryptography's tamper-proof architecture, its non-repudiation characteristic also helps in keeping the network stable.

We can also use asymmetric key cryptography, combined with symmetric key cryptography, to secure SSL- or TLS-encrypted browsing sessions, to make sure nobody can steal our personal information when accessing banking websites.
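The public/private key relationship just described can be made concrete with a toy RSA example. The numbers below are the classic tiny textbook values (p = 61, q = 53): real RSA uses 2048-bit or larger keys, so this sketch only illustrates the mathematics, never security.

```python
# Toy RSA with tiny primes -- illustration of the key pair, not a secure scheme.
p, q = 61, 53
n = p * q                # 3233: the public modulus
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (2753)

message = 65                     # a message encoded as a number
ciphertext = pow(message, e, n)  # anyone can encrypt with the PUBLIC key (e, n)
recovered = pow(ciphertext, d, n)  # only the PRIVATE key (d, n) decrypts
assert recovered == message
```

Note that knowing `e` and `n` does not let an eavesdropper compute `d` without factoring `n`; with tiny primes that is trivial, with 2048-bit moduli it is infeasible, which is the whole point of the scheme.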
More generally, on the internet it plays a significant role in verifying website server authenticity, exchanging the necessary encryption keys, and generating a session using those keys to ensure maximum security, instead of the rather insecure HTTP website format. Security parameters differ on a session-by-session basis, so the verification process is consistent and utterly essential to modern data security.

Another great use of the asymmetric key cryptography structure is transmitting keys for symmetric key cryptography. With the most significant difficulty in symmetric encryption being key exchange, asymmetric keys can help clear this shortcoming. The original message is first encrypted using a symmetric key. The key used for encrypting the data is then converted into ciphertext using the receiver's public key. Now we have two ciphertexts to transmit to the receiver. On receiving both of them, the receiver uses his private key to decrypt the symmetric key; he can then use it to decrypt the original information. While this may seem more complicated than asymmetric cryptography alone, symmetric encryption algorithms are much more optimized for vast amounts of data, so on some occasions encrypting just the key using asymmetric algorithms will definitely be more memory-efficient and secure.

You might remember us discussing why symmetric encryption was called private key cryptography. Let us understand why asymmetric encryption falls under public key cryptography. We have two keys at our disposal: the encryption key is available to everyone, while the decryption key is supposed to be private. Unlike symmetric key cryptography, there is no need to share anything privately to have an encrypted messaging system. To put that into perspective, we share an email address with anyone looking to communicate with us; it is supposed to be public by design, while our email login credentials are private, and they help in preventing any data mishandling.
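The hybrid key-transport scheme described above can be sketched by combining the two toy ciphers: bulk data is encrypted with a fast symmetric cipher (a toy XOR here), and only the short symmetric key is wrapped with the receiver's public key. The RSA numbers are the same tiny textbook values as before, chosen purely for illustration.

```python
import os

# Receiver's toy RSA key pair (textbook values; real keys are 2048+ bits).
n, e = 3233, 17  # public key
d = 2753         # private key

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: repeat the key over the data and XOR.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Sender: encrypt the message with a random one-byte symmetric key...
sym_key = os.urandom(1)
ciphertext = xor_bytes(b"secret report", sym_key)
# ...then wrap that key with the receiver's PUBLIC key. Two ciphertexts travel.
wrapped_key = pow(sym_key[0], e, n)

# Receiver: unwrap the symmetric key with the PRIVATE key, then decrypt the data.
recovered_key = bytes([pow(wrapped_key, d, n)])
assert xor_bytes(ciphertext, recovered_key) == b"secret report"
```

This mirrors how TLS-style sessions work in spirit: the slow asymmetric operation touches only a small key, while the fast symmetric cipher handles the actual payload.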
Since there is nothing hidden from the world if someone wants to send us encrypted information, this category is called public key cryptography.

With so many variables in play, there must be some advantages that give asymmetric key cryptography an edge over the traditional symmetric encryption methodologies. Let's go through some of them. There is no need for any reliable key-sharing channel in asymmetric encryption: what was an added risk in private key cryptography has been completely eliminated in the public key architecture. The key which is made public cannot decrypt any confidential information, and the only key that can decrypt doesn't need to be shared publicly under any circumstance. We have much more extensive key lengths in RSA encryption and other asymmetric algorithms, like 2048-bit and 4096-bit keys; larger keys are much harder to break via brute force and are much more secure. Asymmetric key cryptography can be used as proof of authenticity, since only the rightful owner of the keys can generate the messages. The situation can also be reversed: encryption is done using the private key and decryption is done with the public key, which would not work if the correct private key had not been used to generate the message, hence proving the authenticity of the owner. It also has a tamper-protection feature: the message cannot be intercepted and changed without invalidating the private key used to encrypt the data, in which case the public key cannot decrypt the message and it is easy to realize the information is not 100% legitimate.

Before getting too deep into the topic, let's get a brief overview of how hashing works. Hashing is the process of scrambling a piece of information or data beyond recognition. We can achieve this by using hash functions, which are essentially algorithms that perform mathematical operations on the plaintext. The value generated after passing the plaintext information through the hash function is called the hash value, the digest, or in general just the hash of the original data. While this may sound similar to encryption, the major difference is that hashes are meant to be irreversible: no decryption key can convert a digest back to its original value. However, a few hashing algorithms have been broken due to the increase in the computational power of today's new generation of computers and processors, while newer algorithms stand the test of time and are still in use in multiple areas, for password storage, identity verification, and so on.

Like we discussed earlier, websites use hashing to store users' passwords. So how do they make use of these hashed passwords? When a user signs up to create a new account, the password is run through the hash function, and the resulting hash value is stored on the servers. The next time the user comes to log in to the account, the password he enters is passed to the same hash function and compared to the hash stored on the main server. If the newly calculated hash is the same as the one stored on the website server, the password must have been correct, because in hash function terminology the same inputs will always produce the same outputs. If the hashes do not match, then the password entered during login is not the same as the password entered during sign-up, and hence the login will be denied. This way, no plaintext passwords get stored, preventing the owner from snooping on user data and protecting users' privacy in the unfortunate event of a data breach or a hack.

Apart from password storage, hashing can also be used to perform integrity checks. When a file is uploaded on the internet, the file's hash value is generated, and it is uploaded along with the original information. When a new user downloads the file, he can calculate the digest of the downloaded file using the same hash function. When the hash values are compared, if they match, then file integrity has been maintained and there has been no data corruption.
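The password-storage flow just described can be sketched with SHA-256 from Python's standard library. This is a simplified illustration of the compare-the-digests idea; real systems should use a deliberately slow, salted KDF such as bcrypt or PBKDF2 rather than a bare hash.

```python
import hashlib

stored_hashes = {}  # stands in for the server-side database

def sign_up(user: str, password: str) -> None:
    # Only the digest is stored -- never the plaintext password.
    stored_hashes[user] = hashlib.sha256(password.encode()).hexdigest()

def log_in(user: str, password: str) -> bool:
    # Hash the attempt with the SAME function and compare digests:
    # identical inputs always produce identical outputs.
    attempt = hashlib.sha256(password.encode()).hexdigest()
    return stored_hashes.get(user) == attempt

sign_up("ann", "correct horse battery staple")
print(log_in("ann", "correct horse battery staple"))  # True
print(log_in("ann", "wrong guess"))                   # False
```

The same compare-the-digest pattern is what the file-integrity check uses: publish the file's digest, and anyone who downloads the file recomputes and compares it.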
Since so much important information is being passed to the hash function, we need to understand how it works. A hash function is a set of mathematical calculations operated on blocks of data: the main input is broken down into blocks of equal size, where the block size depends on the algorithm being used. Hash functions are designed to be one-way; they shouldn't be reversible, at least by design. Some algorithms, like the previously mentioned MD5, have been compromised, but mostly secure algorithms are in use today, like the SHA family of algorithms. The digest size also depends on the respective algorithm being used: MD5 has a digest of 128 bits, while SHA-256 has a digest of 256 bits. This digest must always be the same for the same input, irrespective of how many times the calculations are carried out. This is a very crucial feature, since comparing the hash values is the only way to check if the data is untouched, as the functions are not reversible.

There are certain requirements of a hash function that need to be met before it is accepted. While some of them are easy to guess, others are placed in order to preserve security in the long run. The hash function must be quick enough to digest large amounts of data at a relatively fast pace, but it also shouldn't be very fast: running the algorithm on all cylinders makes the function easy to brute force and a security liability. There must be a balance that allows the hash function to handle large amounts of data without making it ridiculously easy to brute force by running through all the possible combinations. The hash function must be dependent on each bit of the input. The input can be text, audio, video, or any other file format, and if a single character is changed, no matter how small that change may be, the resulting digest must be distinctly different. This is essential to create a unique digest for every password that is being stored.

But what if two different users are using the same password? Since the hash function is the same for all users, both digests will be the same. This is called a hash collision. You may think this must be a rare occasion where two users have exactly the same password, but that is not the case, which is why we have techniques like salting that can be used to reduce these hash collisions, as we will discuss later in this video. You would be shocked to see the most used passwords of 2020: all of these passwords are laughably insecure, and since many people use the same passwords repeatedly on different websites, hash collision risks are more common than one would expect.

Let's say the hash function finds two users having the same password. How can it store both hashes without messing up the original data? This is where salting and peppering come into play. Salting is the process of adding a random keyword to the end of the input before it is passed on to the hash function. This random keyword is unique for each user on the system, and it is called the salt value, or just the salt. So even if two passwords are exactly the same, the salt values will differ, and so will their digests. There is a small problem in this process, though: since the salt is unique for each user, the salts need to be stored in the database along with the passwords, sometimes even in plain text, to speed up the process of continuous verification. If the server is hacked, then the hashes will need to be brute forced, which takes a lot of time; but if the attackers obtain the salts as well, the entire process becomes very fast. This is something that peppering aims to solve. Peppering is the process of adding a random string of data to the input before passing it to the hash function, but this time the random string is not unique for each user; it is supposed to be common for all users in the database, and the extra bit added is called the pepper. In this case, the pepper isn't stored on the servers; it is mostly hard-coded into the website source code, since it's going to be the same for all credentials.
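Salting and peppering as described above can be sketched as follows. The pepper value here is a made-up placeholder standing in for a hard-coded site-wide secret, and a bare SHA-256 is used only to keep the illustration short:

```python
import hashlib
import os

PEPPER = b"site-wide-secret"  # hypothetical pepper: same for all users, kept out of the DB

def hash_password(password: str, salt: bytes) -> str:
    # Salt (per-user) and pepper (site-wide) are mixed into the input
    # before hashing, so equal passwords no longer produce equal digests.
    return hashlib.sha256(salt + password.encode() + PEPPER).hexdigest()

# Two users pick the SAME weak password...
salt_a, salt_b = os.urandom(16), os.urandom(16)  # unique random salt per user
digest_a = hash_password("123456", salt_a)
digest_b = hash_password("123456", salt_b)

# ...but their stored digests differ, thanks to the per-user salts.
assert digest_a != digest_b
```

Only the salts and digests would go into the database; an attacker who dumps it still lacks the pepper, which is exactly the property the transcript describes.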
This way, even if the servers get hacked, the attackers will not have the right pepper needed to crack all the passwords. Many websites use a combination of salting and peppering to solve the problem of hash collisions and bolster security. Since brute force takes such a long time, many hackers avoid making the effort; the returns are mostly not worth it, and the number of possible combinations when using both salting and peppering is humongous.

In today's world, the most widely used symmetric encryption algorithm is AES-256, which stands for Advanced Encryption Standard with a key size of 256 bits, with 128-bit and 192-bit key sizes also being available. Other, older algorithms like the Data Encryption Standard (DES), the Triple Data Encryption Standard (3DES), and Blowfish have all fallen out of favor due to the rise of AES.

The DES algorithm stands for Data Encryption Standard. It is a symmetric key cipher that is used to encrypt and decrypt information in a block-by-block manner. Each block is encrypted individually, and they are later chained together to form the final ciphertext, which is then sent to the receiver. DES takes the original, unaltered piece of data, called the plaintext, in a 64-bit block, and converts it into an encrypted text called the ciphertext. It uses 48-bit round keys, derived from a 56-bit key, during the encryption process, and follows a specific structure called the Feistel cipher structure throughout. It is a symmetric key algorithm, which means DES can reuse the key used in the encryption process to decrypt the ciphertext back to the original plaintext. Once the 64-bit blocks are encrypted, they can be combined together before being transmitted.

Let's take a look at the origin of DES and the reason it was founded. DES is based on a Feistel block cipher called Lucifer, developed in 1971 by IBM cryptography researcher Horst Feistel. DES uses 16 rounds of the Feistel structure, using a different round key for each round. It also utilizes a round function with two inputs that produces a single output.
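The round-based structure DES is built on can be illustrated with a toy two-branch Feistel network. The round function below is an arbitrary stand-in for DES's real round function; the point of the sketch is that decryption is the same procedure run with the round keys in reverse order, regardless of what the round function is.

```python
# Toy Feistel network: split the block into two halves; each round XORs one
# half with a keyed round function of the other, then swaps the halves.

def round_fn(half: int, key: int) -> int:
    # Arbitrary stand-in for the real round function -- any function works,
    # because the Feistel structure itself guarantees reversibility.
    return (half * 31 + key) & 0xFFFF

def feistel(left: int, right: int, round_keys) -> tuple:
    for k in round_keys:
        left, right = right, left ^ round_fn(right, k)
    return left, right

keys = [3, 141, 59, 265]              # toy round keys (DES derives 16 of these)
l, r = feistel(0x1234, 0x5678, keys)  # encrypt a 32-bit block as two 16-bit halves

# Decrypt: swap the halves, run the SAME rounds with reversed keys, swap back.
l2, r2 = feistel(r, l, list(reversed(keys)))
assert (r2, l2) == (0x1234, 0x5678)
```

Notice that `round_fn` never needs to be invertible; the XOR-and-swap structure is what makes the whole cipher reversible, which is the key insight behind DES's design.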
single output variable des became the officially approved encryption standard in november 1976 and was later reaffirmed as a standard in 1983 1988 and finally in 1999 but eventually des was cracked and it was no longer considered a secure solution for all official routes of communication consequently triple des was developed triple des is a symmetric key block cipher that applies the des cipher three times encrypt with the first key decrypt with the second key and encrypt again with a third key there is also a two key variation where the first and third key are duplicates of each other but triple des was ultimately deemed too slow for the growing need for fast communication channels and people eventually fell back to using des for encrypting messages in order to search for a better alternative a worldwide public competition was organized which helped cryptographers develop their own algorithms as proposals for the next global standard this is where the rijndael algorithm came into play and was later credited to be the next advanced encryption standard for a long time des was the standard for data encryption and data security its reign ended in 2002 when the advanced encryption standard finally replaced des as an acceptable standard following a public competition for a replacement to understand the structure of a feistel cipher we can use the following image as a reference the block being encrypted is divided into two parts one of which is passed on to the function while the other part is xored with the function's output the function also uses an encryption key that differs for each individual round this keeps going on until the last step where the right hand side and the left hand side are swapped here we receive our final ciphertext for the decryption process the entire procedure is reversed starting from the order of the keys to the block sorting if the entire process is repeated in reverse order we will eventually get back our plain text and
this simplicity helps the speed overall but it was later detrimental to the efficiency of the algorithm and hence the security was compromised a feistel block cipher is a structure used to derive many symmetric block ciphers such as des as we have discussed in our previous section the feistel cipher proposed a structure that implements substitution and permutation alternately so that we can obtain the cipher text from the plain text and vice versa this helps in reducing the redundancy of the program and increases the complexity to combat brute force attacks the feistel cipher is actually based on the shannon structure that was proposed in 1945 the feistel cipher is the structure suggested by horst feistel which was considered to be a backbone while developing many symmetric block ciphers the shannon structure highlights the implementation of alternate confusion and diffusion and like we already discussed the feistel cipher structure can be completely reversed depending on the data however we must consider the fact that to decrypt the information by reversing the feistel structure we will need the exact round functions and the key orders to understand how the blocks are being calculated we take a plain text which is of 64 bits and it is later divided into two equal halves of 32 bits each in this the right half is immediately transferred to the next round to become the new left half of the second round the right half is also passed on to a function which uses an encryption key that is unique to each round in the feistel cipher whatever the function gives off as an output is passed on as an xor input with the left half of the initial plain text and the output of the xor will become the right half of the second round this entire process constitutes a single round in the feistel cipher taking into account what happens in the round function we take one half of the block and pass it through an expansion box the work of the expansion box is to increase the size of
the half from 32 bits to 48 bits this is done to make the text compatible with the 48-bit round keys we have generated beforehand once we pass it through the xor function with the round key we get 48 bits as an output now remember a half should be 32 bits so this 48-bit output is later passed on to a substitution box this substitution box reduces the size from 48 bits back to a 32-bit output which is then xored with the first half of the plain text a block cipher is considered the safest if the size of the block is large but large block sizes can also slow down encryption and decryption speed generally the size is 64 bits though modern block ciphers like aes have a 128-bit block size as well the security of the block cipher increases with increasing key size but larger key sizes may also reduce the speed of the process earlier 64-bit keys were considered sufficient but modern ciphers need to use 128-bit keys due to the increasing complexity of today's computational standards an increasing number of rounds also increases the security of the block cipher but similarly the rounds are inversely proportional to the speed of encryption a highly complex round function enhances the security of the block cipher but as always we must maintain a balance between speed and security a symmetric block cipher is implemented in software to achieve better execution speed since there is no use for an algorithm that cannot be implemented in a real-life framework to help organizations encrypt or decrypt their data in a timely manner now that we understand the basics of feistel ciphers we can take a look at how des manages to run through 16 rounds of this structure and provide a cipher text in simple terms des takes the 64-bit plaintext and converts it into a 64-bit cipher text and since we are talking about symmetric
algorithms the same key is used when decrypting the data as well we first take the 64-bit plain text and pass it to an initial permutation function the initial permutation function has the job of dividing the block into two different parts so that we can perform the feistel cipher structure on it there are multiple rounds performed in the des algorithm namely 16 rounds of the feistel cipher structure each of these rounds needs keys initially we take a 56-bit cipher key but it is a single key we pass it on to a round key generator which generates 16 different keys one for each round that the feistel cipher is run these keys are passed on to the rounds as 48 bits the size of these 48-bit keys is the reason we use the substitution and permutation boxes in the round functions of the feistel cipher after passing through all these rounds we reach round 16 where the final key is passed on from the round key generator and we get a final permutation in the final permutation the values are swapped and we get our final cipher text this is the entire process of des with 16 rounds of feistel ciphers encompassed in it to decrypt the cipher text back to the plain text we just have to reverse the process we followed in the des algorithm and reverse the key order along with the functions this kind of simplicity is what gave des a bonus when it comes to speed but eventually it was detrimental to the overall efficiency of the program when it comes to security factors des has five different modes of operation to choose from one of those is electronic code book where each 64-bit block is encrypted and decrypted independently we also have cipher block chaining or the cbc method here each 64-bit block depends on the previous one and all of them use an initialization vector we have a cipher feedback block mechanism where the preceding ciphertext becomes the input for the encryption algorithm it produces a pseudo random output
which in turn is xored with the plain text there is an output feedback method as well which is the same as cipher feedback except that the encryption algorithm input is the output from the preceding des stage the counter method has a different approach where each plaintext block is xored with an encrypted counter the counter is then incremented for each subsequent block there are a few other alternatives to these modes of operation but the five mentioned above are the most widely used in the industry and recommended by cryptographers worldwide let's take a look at the future of des the dominance of des ended in 2002 when the advanced encryption standard replaced the des encryption algorithm as the accepted standard it was done by following a public competition to find a replacement nist officially withdrew the des standard in may 2005 although triple des is approved for some sensitive government information through 2030. nist also had to replace the des algorithm because its key length was too short given the increased processing power of new computers encryption power is related to the size of the key and des found itself a victim of ongoing technological advances in computing we have reached a point where 56-bit keys were no longer a challenge to the computers doing the cracking note that just because des is no longer the nist federal standard does not mean that it is no longer in use triple des is still used today and is still considered a legacy encryption algorithm in our last video we saw how the data encryption standard also known as des became the global standard for encryption and data security eventually with so much growth in computing power a stronger algorithm was necessary to safeguard our personal data as solid as des was the computers of today could easily break the encryption with repeated attempts thereby rendering the data security helpless to counter this dilemma a new standard was introduced which was termed the advanced encryption
standard or the aes algorithm let's learn what the advanced encryption standard is the aes algorithm also known as the rijndael algorithm is a symmetric block cipher with a block size of 128 bits plain text is converted into cipher text using keys of 128 192 or 256 bits it is implemented in software and hardware throughout the world to encrypt sensitive data the national institute of standards and technology also known as nist started development on aes in 1997 when it announced the need for an alternative to the data encryption standard the new internet needed a replacement for des because of its small key size with increasing computing power it was considered unsafe against exhaustive key search attacks triple des was designed to overcome this problem however it was deemed to be too slow to be deployed in machines worldwide strong cases were presented by the mars rc6 serpent and twofish algorithms but it was the rijndael encryption algorithm also known as aes which was eventually chosen as the standard symmetric key encryption algorithm to be used its selection was formalized with the release of federal information processing standards publication 197 in the november of 2001.
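the exhaustive key search problem mentioned above comes down to simple arithmetic the following sketch compares the des and aes-128 keyspaces assuming a hypothetical attacker testing one trillion keys per second the attack rate is an assumption chosen only for illustration

```python
# rough brute-force comparison: a 56-bit des key vs a 128-bit aes key
# the attack rate below is an assumed figure for illustration only
rate = 10 ** 12                     # assumed: one trillion key trials per second

des_keys = 2 ** 56                  # des keyspace
aes128_keys = 2 ** 128              # aes-128 keyspace

des_hours = des_keys / rate / 3600
aes_years = aes128_keys / rate / (3600 * 24 * 365)

print(f"des: ~{des_hours:.0f} hours to try every key")
print(f"aes-128: ~{aes_years:.1e} years to try every key")
```

at this assumed rate the entire des keyspace falls in under a day while the aes-128 keyspace would take on the order of ten billion billion years which is why the larger key sizes ended the exhaustive search threat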
it was approved by the u.s secretary of commerce now that we understand the origin of aes let us have a look at the features that make the aes encryption algorithm unique the aes algorithm uses a substitution permutation or sp network it consists of multiple rounds to produce a ciphertext it has a series of linked operations including replacing inputs with specific outputs that is substitutions and others that involve bit shuffling which is permutations at the beginning of the encryption process we only start out with a single key which can be either a 128-bit key a 192-bit key or a 256-bit key eventually this one key is expanded to be used in multiple rounds throughout the encryption and the decryption cycle interestingly aes performs all its calculations on byte data instead of bit data as seen in the case of the des algorithm therefore aes treats the 128 bits of a clear text block as 16 bytes the number of rounds during the encryption process depends on the key size that is being used the 128-bit key size fixes 10 rounds the 192-bit key size fixes 12 rounds and the 256-bit key holds 14 rounds a round key is required for each of these rounds but since only one key is input into the algorithm the single key needs to be expanded to get the key for each round including round zero with so many mathematical calculations going on in the background there are bound to be a lot of steps throughout the procedure let's have a look at the steps followed in aes before we move ahead we need to understand how data is stored during the process of aes encryption everything in the process is stored in a 4 by 4 matrix format this matrix is also known as a state array and we'll be using these state arrays to transmit data from one step to another and from one round to the next round each round takes a state array as input and gives a state array as output to be transferred into the next round it is a 16-byte matrix with each cell representing one byte and with each four bytes
representing a word so every state array will have a total of four words representing it as we previously discussed we take a single key and expand it to the number of rounds that we need the key to be used in let's say the number of rounds is n then the key has to be expanded to be used with n plus 1 rounds because the first round is the key 0 round each round key is also a state array having 4 words in it every key is used for a single round and the first key is used as a round key before any round begins in the very beginning the plain text is captured and passed through an xor function with the round key as a supplement this key can be considered the first key from the n plus 1 expanded set moving on the state array resulting from the above step is passed on to a byte substitution process beyond that there is a provision to shift rows in the state arrays later on the state array is mixed with a constant matrix to shuffle its columns in the mix columns segment after which we add the round key for that particular round the last four steps mentioned are part of every single round that the encryption algorithm goes through the state arrays are then passed from one round to the next as an input in the last round however we skip the mix columns portion with the rest of the process remaining unchanged but what are these byte substitution and row shifting processes let's find out about each step in more detail in the first step the plain text is stored in a state array and is xored with k0 which is the first key in the expanded key set this step is performed only once on a block while being repeated at the end of each round as per iteration demands the state array is xored with the key to get a new state array which is then passed off as input to the sub bytes process in the second stage we have byte substitution we leverage an s-box called a substitution box to randomly switch data
among each element every single byte is converted into a hexadecimal value having two parts the first part denotes the row value and the second part denotes the column value the entire state array is passed through the s-box to create a brand new state array which is then passed off as an input to the row shifting process the 16 input bytes are replaced by looking at a fixed table given in the design we finally get a matrix with four rows and four columns when it comes to row shifting each byte in the four rows of the matrix is shifted to the left an entry that falls off is reinserted to the right of the row the shift is done as follows the first row is not moved in any way the second row is shifted a single position to the left the third row is shifted two positions to the left and the fourth row is shifted three positions to the left the result is a new matrix that contains the same 16 bytes but moved in relation to each other to boost the complexity of the program in mix columns each column of 4 bytes is replaced using a special mathematical function the function takes the 4 bytes of a column as input and outputs 4 completely new bytes we get a new matrix with the same size of 16 bytes and it should be noted that this phase is not performed in the last round of the iteration when it comes to adding a round key the 16 bytes of the matrix are treated as 128 bits and the 128 bits of the round key are xored if it is the last round the output is the cipher text if we still have a few rounds remaining the resulting 128 bits are interpreted as 16 bytes and we start another similar round let's take an example to understand how all these processes work if our plain text is the string two one nine two we first convert it into a hexadecimal format as follows we use an encryption key which is thats my kung fu and it is converted into a hexadecimal format as well as per the guidelines we use a single key which is then later expanded into n plus 1 number of
keys which in this case is supposed to be 11 keys for 10 different rounds in round 0 we add the round key the plain text is xored with k0 and we get a state array that is passed off as an input to the substitution bytes process when it comes to the substitution bytes process we leverage an s-box to substitute the elements of each byte with a completely new byte this way the state array that we receive is passed off as an input to the row shifting process in the next step when it comes to row shifting each element is shifted a few places to the left with the first row being shifted by zero places the second row by one place the third row by two places and the last by three the state array that we receive from the row shifting is passed off as an input to mix columns in mix columns we multiply the state array with a constant matrix after which we receive a new state array to be passed on to the next step we xor the new state array with the round key of the particular iteration and whatever state array we receive here becomes an output for this particular round now since this is the first round of the entire encryption process the state array that we receive is passed off as an input to the next round we repeat this process for the remaining rounds and we finally receive a ciphertext the final state array can be denoted in the hexadecimal format and this becomes our final ciphertext that we can use for transferring information between the sender and receiver let's take a look at the applications of aes in this world aes finds most use in the area of wireless security in order to establish a secure mode of authentication between routers and clients highly secure mechanisms like wpa and wpa2 psk are extensively used in securing wi-fi endpoints with the help of the rijndael algorithm it also helps in ssl tls encryption that is instrumental in encrypting our internet browser sessions aes works in tandem with other asymmetric encryption algorithms to make sure the web browser
and web server are properly configured and use encrypted channels for communication aes is also prevalent in general file encryption of various formats ranging from critical documents to media files having a large key allows people to encrypt and decrypt data with the maximum security possible aes is also used for processor security in hardware appliances to prevent machine hijacking among other things as a direct successor to the des algorithm there are some aspects where aes provides an immediate advantage let us take a look when it comes to key length the biggest flaw in the des algorithm was its small key length which is easily vulnerable by today's standards aes has managed to offer 128 192 and 256 bit key lengths to bolster the security further the block size is also larger in aes leading to more complexity in the algorithm the number of rounds in des is fixed irrespective of the plain text being used in aes the number of rounds depends on the key length that is being used for the particular iteration thereby providing more randomness and complexity in the algorithm the des algorithm is considered to be simpler than aes even though aes beats des when it comes to relative speed of encryption and decryption this makes the advanced encryption standard much more streamlined to be deployed in frameworks and systems worldwide when compared to the data encryption standard with the decline of symmetric encryption algorithms for private message encryption new cipher suites came into light the glaring issue with symmetric algorithms was key exchange having to pass secret keys is a risk in its own right and one that we were unable to solve in a lot of scenarios thankfully the dsa algorithm has managed to solve this problem the use cases for this cipher are rather niche but crucial nonetheless let's understand what digital
signatures are before moving on to the algorithm the objective of digital signatures is to authenticate and verify documents and data this is necessary to avoid tampering and digital modification or forgery of any kind during the transmission of official documents they work on the public key cryptography architecture with one exception typically an asymmetric key system encrypts using a public key and decrypts with the private key for digital signatures however the reverse is true the signature is encrypted using a private key and is decrypted with the public key because the keys are linked together decoding it with the public key verifies that the proper private key was used to sign the document thereby verifying the signature's provenance let's go through each step to understand the procedure thoroughly in step 1 we have m which is the original plain text message and it is passed on to a hash function denoted by h to create a digest next the sender bundles the message together with the hash digest and encrypts it using the sender's private key the sender then sends the encrypted bundle to the receiver who can decrypt it using the sender's public key once the message is decrypted it is passed through the same hash function h to generate a similar digest the receiver compares the newly generated hash with the bundled hash value received along with the message if they match it verifies data integrity in many instances they provide a layer of validation and security to messages sent through a non-secure channel properly implemented a digital signature gives the receiver reason to believe that the message was sent by the claimed sender digital signatures are equivalent to traditional handwritten signatures in many respects but properly implemented digital signatures are more difficult to forge than the handwritten type digital signature schemes in the sense used here are cryptographically based and must be implemented properly to be effective they can also provide non-repudiation meaning that the signer
cannot successfully claim that they did not sign a message while also claiming their private key remains secret further some non-repudiation schemes offer a timestamp for the digital signature so that even if the private key is exposed the signature remains valid to implement the concept of digital signatures in the real world we have two primary algorithms to follow the rsa algorithm and the dsa algorithm the latter is our topic of learning today so let's go ahead and see what the digital signature algorithm is supposed to do the digital signature algorithm is a fips standard which is a federal information processing standard for digital signatures it was proposed in 1991 and globally standardized in 1994 by the national institute of standards and technology also known as nist it functions on the framework of modular exponentiation and discrete logarithm problems which are difficult to compute by brute force unlike dsa most signature types are generated by signing the message digest with the private key of the originator this creates a digital thumbprint of the data since just the message digest is signed the signature is generally much smaller compared to the data that was signed as a result digital signatures impose less load on processors at the time of signing execution and they use small volumes of bandwidth dsa on the other hand does not encrypt the message digest using the private key or decrypt the message digest using the public key instead it uses mathematical functions to create a digital signature consisting of two 160-bit numbers which are originated from the message digest and the private key dsa makes use of the public key for authenticating the signature but the authentication process is much more complicated when compared with rsa dsa also provides three benefits which are message authentication integrity verification and non-repudiation in the image we can see the entire process of dsa validation a plain text message is passed on to a hash function where the digest is
generated which is passed on to a signing function the signing function also has other parameters like a global variable g a random variable k and the private key of the sender the outputs are then bundled into a single pack with the plain text and sent to the receiver the two outputs we receive from the signing function are the two 160-bit numbers denoted by s and r on the receiver end we pass the plain text to the same hash function to regenerate the message digest it is passed on to a verification function which has other requirements such as the public key of the sender the global variable g and the s and r received from the sender the value generated by the function is then compared to r if they match then the verification process is complete and data integrity is verified this was an overview of the way the dsa algorithm works as you already know it depends on logarithmic functions to calculate the outputs so let us see how we can do the same in our next section we have three phases here the first of which is key generation to generate the keys we need some prerequisites we select a q which becomes a prime divisor we select a prime number p such that p minus 1 mod q equals 0 we also select a random integer g which must satisfy the two formulas being mentioned on the screen right now once these values are selected we can go ahead with generating the keys the private key can be denoted by x and it is any random integer that falls between 0 and the value of q the public key can be calculated as y equals g to the power x mod p where y stands for the public key the private key can then be packaged as a bundle which comprises the values of p q g and x similarly the public key can also be packaged as a bundle having the values of p q g and y once we are done with key generation we can move on to signature generation and verification once the keys are generated we can start generating the signature the message is passed through a hash function to generate
the digest h first we choose any random integer k which falls between 0 and q to calculate the first 160-bit number of the signing function r we use the formula g to the power k mod p whole mod q similarly to calculate the value of the second output that is s we use the following formula that is shown on the screen the signature can then be packaged as a bundle having r and s this bundle along with the plain text message is then passed on to the receiver now in the third phase we have to verify the signature we first calculate the digest of the message received in the bundle by passing it through the same hash function we calculate the values of w u1 and u2 using the formulas shown on the screen we then have to calculate a verification component which is to be compared with the value of r sent by the sender this verification component can be calculated using the following formula once calculated it can be compared with the value of r if the values match then the signature verification is successful and the entire process is complete starting from key generation to the signature generation all the way up to the verification of the signature with so many steps to follow we are bound to have a few advantages to boast and we would be right to think so dsa is highly robust in the security and stability aspect when compared to alternative signature verification algorithms we have a few other ciphers that aim to achieve the simplicity and the flexibility of dsa but it has been a tough ask for all the other suites key generation is much faster when compared to the rsa algorithm and while the actual encryption and decryption process may falter a little in comparison a quicker start in the beginning is well known to optimize a lot of frameworks dsa requires less storage space to work its entire cycle in contrast its direct counterpart the rsa algorithm needs a certain amount of computational and storage space to function efficiently this is not
the case with dsa which has been optimized to work with weaker hardware and lesser resources dsa is patented but nist has made this patent available worldwide royalty free a draft version of the specification fips 186-5 indicates that dsa will no longer be approved for digital signature generation but it may be used to verify signatures generated prior to the implementation date of that standard the rsa algorithm can be used for general data encryption and decryption as well functioning on a similar public key cryptography architecture it is seen as a more complex solution to bolster security so let's go ahead and see what the rsa algorithm is supposed to do the rsa algorithm is a public key signature algorithm developed by ron rivest adi shamir and leonard adleman the paper was first published in 1977 and the algorithm uses logarithmic functions to keep the working complex enough to withstand brute force and streamlined enough to be fast post deployment rsa can also encrypt and decrypt general information to securely exchange data along with handling digital signature verification let us understand how it achieves this we take our plain text message m we pass it through a hash function to generate the digest h which is then encrypted using the sender's private key this is appended to the original plain text message and sent over to the receiver once the receiver receives the bundle they can pass the plain text message to the same hash function to generate a digest and the ciphertext can be decrypted using the public key of the sender the two hashes are then compared if the values match then the data integrity is verified and the sender is authenticated apart from digital signatures the main use case of rsa is encryption and decryption of private information before it is transmitted across communication channels this is where data encryption comes into play when using rsa for encryption and decryption of general data it reverses the key set usage unlike signature
verification it uses the receiver's public key to encrypt the data and uses the receiver's private key to decrypt the data thus there is no need to exchange any keys in this scenario there are two broad components when it comes to rsa cryptography one of them is key generation key generation employs the steps of generating the private and the public keys that are going to be used for encrypting and decrypting the data the second part is the encryption and decryption functions these are the ciphers and steps that need to be run when scrambling the data or recovering the data from the ciphertext you will now understand each of these steps in the next subtopic keeping the previous two concepts in mind let us go ahead and see how the entire process works starting from creating the key pair to encrypting and decrypting the information we need to generate the public and private keys before running the functions to generate ciphertext and plain text we use certain variables and parameters all of which are explained below we first choose two large prime numbers which can be denoted by p and q we can compute the value of n as n equals p into q and compute the value of z as p minus 1 into q minus 1. a number e is chosen at random satisfying the following conditions and a number d is also selected following the formula e d mod z equal to 1 and it can be calculated with the formula given below the public key is then packaged as a bundle with n and e and the private key is packaged as a bundle using n and d this sums up the key generation process for the encryption and decryption functions we use the formulas for c and m the ciphertext can be calculated as c equal to m to the power e mod n and the plain text can be calculated from the cipher text as m equal to c to the power d mod n when it comes to a data encryption example let's take p and q as 7 and 13. the value of n can be calculated as 91.
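the key generation and encryption formulas above can be checked with a short python sketch using the small demonstration primes p equals 7 and q equals 13 from the example keep in mind that real rsa uses primes hundreds of digits long along with proper padding so this is a toy illustration of the math only

```python
# toy rsa with the example primes; never use numbers this small in practice
p, q = 7, 13
n = p * q                  # n = p * q = 91
z = (p - 1) * (q - 1)      # z = (p-1)(q-1) = 72

e = 5                      # chosen so that gcd(e, z) == 1
d = pow(e, -1, z)          # modular inverse: e*d mod z == 1, gives d = 29

m = 10                     # plain text as a number smaller than n
c = pow(m, e, n)           # ciphertext c = m^e mod n = 82
back = pow(c, d, n)        # decryption m = c^d mod n recovers 10

print(n, d, c, back)       # 91 29 82 10
```

the three-argument pow built-in performs fast modular exponentiation and since python 3.8 it also computes the modular inverse when the exponent is -1 which is exactly the relation e d mod z equal to 1 described above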
if we select the value of e to be 5 it satisfies all the criteria that we needed the value of d can be calculated using the following formula which gives it as 29 the public key can then be packaged as 91 comma 5 and the private key can then be packaged as 91 comma 29. if the plain text which is denoted by m is 10 the ciphertext can be calculated using the formula c equal to m to the power e mod n which gives us 82 if somebody receives this ciphertext they can calculate the plain text using the formula c to the power d mod n which gives us the value of 10 which we selected as our plain text we can now look at the factors that make the rsa algorithm stand out versus its competitors in the advantages portion of this lesson rsa encryption depends on using the receiver's public key so that you don't have to share any secret key to receive messages from others this was the most glaring flaw faced by symmetric algorithms which was eventually fixed by the asymmetric cryptography structure since the key pairs are related to each other a third party cannot read an intercepted message since they don't have the correct private key to decrypt the information if a public key can decrypt the information the sender cannot refuse signing it with his private key without admitting the private key is not in fact private anymore the encryption process is faster than that of the dsa algorithm even if the key generation is slower in rsa many systems across the world tend to reuse the same keys so that they can spend less time in key generation and more time on actual ciphertext management data will be tamper proof in transit since meddling with the data will alter the usage of the keys the private key won't be able to decrypt the information hence alerting the receiver of any kind of manipulation in between the receiver must be aware of any third party who possesses the private key since they can alter the data mid transit the cases of which are rather low the md5 algorithm was one of the first hashing
algorithms to take the global stage as a successor to md4 despite the security vulnerabilities encountered later md5 still remains a crucial part of data infrastructure in a multitude of environments the md5 hashing algorithm is a one-way cryptographic function that accepts a message of any length as input and returns as output a fixed length digest value to be used for authenticating the original message the digest size is always 128 bits irrespective of the input the md5 hash function was originally designed for use as a secure cryptographic hash algorithm to authenticate digital signatures md5 has since been deprecated for uses other than as a non-cryptographic checksum to verify data integrity and detect unintentional data corruption ronald rivest founder of rsa data security and institute professor at mit designed md5 as an improvement to a prior message digest algorithm which was md4 as already iterated before the process is straightforward we pass a plain text message to the md5 hash function which in turn performs certain mathematical operations on the clear text to scramble the data the 128-bit digest received from this is going to be radically different from the plain text the goal of any message digest function is to produce digests that appear to be random to be considered cryptographically secure the hash function should meet two requirements first that it is impossible for an attacker to generate a message that matches a specific hash value and second that it is impossible for an attacker to create two messages that produce the same hash value even a slight change in the plain text should trigger a drastic difference in the two digests this goes a long way in preventing hash collisions which take place when two different plaintexts have the same digest to achieve this level of intricacy there are a number of steps to be followed before we receive the digest let us take a look at the detailed procedure as to how the md5 hash
algorithm works the first step is to make the plain text compatible with the hash function to do this we need to pad the bits in the message when we receive the input string we have to make sure the size is 64 bits short of a multiple of 512. when it comes to padding the bits we must add a one first followed by zeroes to round out the extra characters this prepares the string to have a length of just 64 bits less than any multiple of 512. from here on out we can proceed to the next step where we have to pad the length bits initially in the first step we padded the message in such a way that the total length of the bits in the message was 64 bits short of any multiple of 512. now we add the length bits in such a way that the total number of bits in the message is perfectly a multiple of 512. that means 64 length bits to be precise are added to the message our final string to be hashed is now a definite multiple of 512 the next step would be to initialize the message digest buffer the entire padded plain text is now broken down into 512-bit blocks there are four buffers or registers that are of 32 bits each named a b c and d these are the four words that are going to store the values of each of these sub blocks for the first iteration to follow these registers will have fixed hexadecimal values as shown on the screen below once these values are initialized we can divide each of these 512-bit blocks into 16 further sub blocks of 32 bits each for each of these sub blocks we run four rounds of operations involving the four buffer variables a b c and d these rounds require other constant values as well which differ with each round of operation the constant values are stored in an array of 64 elements since each of the 16 sub blocks is run through 4 rounds 64 constant values are needed for a single block iteration the sub blocks can be denoted by the letter m and the constant values are denoted by the letter t coming to the actual round of operation we see
our four buffers which already have pre-initialized values for the first iteration at the very beginning the values of buffers b c and d are passed on to a non-linear logical function the formula behind this function changes by the particular round being worked on as we shall see later in this video once the output is calculated it is added to the raw value stored in buffer a the output of this addition is then added to the particular 32-bit sub block on which we are running the four operations the output of this function then needs to be added to a constant value derived from the constant array t since we have 64 different elements in the array we can use a distinct element for each iteration of a particular block the next step involves a circular shift that increases the complexity of the hash algorithm and is necessary to create a unique digest for each individual input the output generated is then added to the value stored in buffer b the final output is now stored in buffer b of the output register the individual values of c d and a are derived from the preceding element before the iteration started meaning the value of b gets stored in c the value of c gets stored in d and the value of d in a now that we have a full register ready for this sub block the values of a b c and d are moved on as input to the next sub block once all 16 sub blocks are completed the final register value is saved and the next 512-bit block begins at the end of all these blocks we get the final digest of the md5 algorithm regarding the non-linear process mentioned in the first step the formula changes for each round it's being run on this is done to maintain the computational complexity of the algorithm and to increase the randomness of the procedure the formula for each of the four rounds uses the same parameters that is b c and d to generate a single output the formulas being used are shown on the screen right now
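The end-to-end behavior described above — a fixed 128-bit digest and a drastic change for even a one-character change in input — can be observed with Python's standard `hashlib` (using the library implementation rather than re-implementing the rounds; the sample strings are arbitrary):

```python
import hashlib

# two inputs differing by a single character produce radically
# different digests (the avalanche effect described above)
d1 = hashlib.md5(b"simplilearn").hexdigest()
d2 = hashlib.md5(b"simplilearm").hexdigest()
print(d1)
print(d2)

# the digest is always 128 bits (32 hex characters) regardless of
# whether the input is 11 bytes or 10,000 bytes
print(len(d1) * 4)
print(len(hashlib.md5(b"a" * 10000).hexdigest()) * 4)
```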
algorithm unlike the latest hash algorithm families a 128-bit digest is relatively easier to compare when verifying the digests they don't consume a noticeable amount of disk storage and are comparatively easier to remember and reiterate passwords need not be stored in plain text format which would make them accessible to hackers and malicious actors when using digests the database security also gets a boost since the size of all the hash values will be the same in the event of a hack or a breach the malicious actor will only receive the hashed values so there is no way to regenerate the plain text which would be the user passwords in this case since the functions are irreversible by design hashing has become a compulsion when storing user credentials on the server nowadays a relatively low memory footprint is necessary when it comes to integrating multiple services into the same framework without a cpu overhead the digest size is the same and the same steps are run to get the hash value irrespective of the size of the input string this helps in creating a low requirement for computational power and is much easier to run on older hardware which is pretty common in server farms around the world we can monitor file corruption by comparing hash values before and after transit once the hashes match file integrity checks are valid and we can avoid data corruption hash functions will always give the same output for the same input irrespective of the iteration parameters this also helps in ensuring that the data hasn't been tampered with en route to the receiver of the message we use our wi-fi every day for work and we use the internet for entertainment and communication the dependency on technology is at an all-time high thanks to the radical developments and innovation in these last two decades a big portion of this belongs to ensuring secure channels of communication and data transmission the secure hash algorithms are a family of cryptographic hash functions that are published
by the national institute of standards and technology along with the nsa it was passed as a federal information processing standard also known as fips it has four different families of hash functions sha-0 is a 160 bit hash function published in 1993 and it was withdrawn later after an undisclosed significant flaw sha-1 is also a 160 bit hash function which resembles the earlier md5 algorithm this was designed by the nsa to be a part of the digital signature algorithm sha-2 is a family of two similar hash functions with different block sizes known as sha-256 and sha-512 they differ in the word size sha-256 uses 32-bit words while sha-512 uses 64-bit words sha-3 is a hash function formerly known as keccak it was chosen in 2012 after a public competition among non-nsa designers it supports the same hash lengths as sha-2 and its internal structure differs significantly from the rest of the sha family as we have already iterated the process is straightforward we pass a plain text message to the sha hash function which in turn performs certain mathematical operations on the clear text to scramble the data the 160 bit digest received from this is going to be radically different from the plain text the goal of any hash function is to produce digests that appear to be random to be considered cryptographically secure the hash function should meet two requirements first that it is impossible for an attacker to generate a message that matches a specific hash value and second it should be impossible for an attacker to create two messages producing the exact same hash value even a slight change in the plaintext should trigger a drastic difference in the two digests this goes a long way in preventing hash collisions which take place when two different plain texts have the same digest the sha family functions have some characteristics that they need to follow while generating the digest let's go through a few of them the length of the clear text should be less than 2 to
the power 64 bits in the case of sha-1 and sha-256 this is essential to keep the plain text compatible with the hash function and this size constraint helps keep the digest as random as possible the length of the hash digest should be 256 bits in the sha-256 algorithm 512 bits in the sha-512 algorithm and so on a bigger digest usually suggests significantly more calculations at the cost of speed and space we typically go for the longest digest to bolster security but there must be a definite balance between the speed and security of a hash function by design all hash functions of the sha family like sha-512 and sha-256 are irreversible you should neither get the plain text when you have the digest beforehand nor should the digest provide the original value when you pass it through the same hash function again another layer of protection is that when the hash digest is passed into the sha function for a second time we should get a completely different digest from the first instance this is done to reduce the chance of brute force attacks to achieve this level of intricacy there are a number of steps to be followed before we receive the digest let us take a look at the detailed procedure as to how the sha algorithm works the first step is to make the plain text compatible with the hash function to do this we need to pad the bits in the message when you receive the input string you have to make sure the size is 64 bits short of a multiple of 512. when it comes to padding the bits you must add a one first followed by the remaining zeros to round out the extra characters this prepares the string to have a length just 64 bits less than any multiple of 512 from here on out we can proceed to the next step where we have to pad the length bits initially in the first step we padded the message in such a way that the total number of bits in the message was 64 bits short of becoming a multiple of 512.
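The two padding steps just described — a single one bit followed by zeros up to 64 bits short of a multiple of 512, then the original message length as a 64-bit value — can be sketched as a small helper (a byte-level sketch using SHA's big-endian length convention; function and variable names are illustrative):

```python
def pad_message(message: bytes) -> bytes:
    """Pad a message per the scheme above: append 0x80 (a one bit
    then seven zero bits), then zero bytes until the length is
    64 bits (8 bytes) short of a multiple of 512 bits (64 bytes),
    then the original bit length as a 64-bit big-endian integer."""
    bit_length = len(message) * 8
    padded = message + b"\x80"
    while len(padded) % 64 != 56:        # 56 bytes == 448 bits
        padded += b"\x00"
    padded += bit_length.to_bytes(8, "big")
    return padded

padded = pad_message(b"abc")
print(len(padded) * 8)   # total bit length, now a multiple of 512
```

For a 3-byte input like `b"abc"` this yields exactly one 512-bit block: 24 message bits, a one bit, zeros up to bit 448, and the length 24 in the final 64 bits.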
now we add the length bits in such a way that the total number of bits in the message is a perfect multiple of 512 that means the 64 length bits plus the padded message become a multiple of 512 this becomes the final string that needs to be hashed in the next step we have to initialize the chaining variables the entire plain text message can now be broken down into blocks of 512 bits each unlike other hash algorithms like md5 which use 4 registers or buffers the sha family uses 5 buffers of 32 bits each they are named a b c d and e these registers go through multiple rounds of operation but the first iteration has fixed hexadecimal values as can be seen on the screen moving on we have to process each of the 512-bit blocks by breaking each of them into 16 sub blocks of 32 bits each each of them goes through 4 rounds of operation that use the entire register and the 512-bit block along with a constant array out of those four rounds each round has 20 iterations so in general we have 80 iterations in total the constant value k is an array of 80 elements of which one is used in each iteration covering all 80 iterations the value of kt differs by the number of rounds as can be seen in the table below a single formula is necessary to calculate the output of each round and iteration the formula is the abcde register is equal to e plus the non-linear process p of b c and d plus a circular shift of a by 5 bits plus wt plus kt in this formula abcde is the register value of the chaining variables as we discussed before p is the logical process which has a different formula for each round s5 is a circular shift by 5 bits and wt is a 32-bit string derived from the existing sub block this can be calculated depending on the iteration at hand and kt signifies a single element of the 80 element array which changes depending on the particular round at hand for the values of wt the first 16 values are the same as those of the sub blocks so
there is no extra calculation needed for the next 64 elements the value of wt can be calculated as shown in the formula here to better understand this let's take a look at how each of these steps goes in a sequential process we have our initial register using the 5 words of 32 bits each in the first step we pass the values of a b c and d to the subsequent register as the output next we use a non-linear process p that changes depending on the round and uses the values of b c and d as input whatever output is generated from the non-linear process is added with the value of the e register next the value of a is circular shifted by 5 bits and is added with the output generated in the previous step the next step is adding the value of wt and the constant element kt the current output is then stored in register a similarly this iteration is repeated every round and for each sub block in the process once all the registers are complete and all the sub blocks are joined together we will have our final hashed output regarding the non-linear process p that uses the values of b c and d as input the formula changes every round to maintain a complexity that can withstand brute force attacks depending on the round the values are passed through a logical operation which is then added with the values of wt kt and so on now that we understand how to get our hash digest from the plain text let us learn about the advantages we obtain when using the sha hash algorithm instead of relying on data in a plain text format digital signatures follow asymmetric encryption methodology to verify the authenticity of a document or a file hash algorithms like sha-256 and the industry standard sha-512 go a long way in ensuring the verification of signatures passwords need not be stored in a plain text format which would make them accessible to hackers and other malicious actors when using digests the database security also gets a boost since the size of all
hash values will be the same in the event of a hack or a breach the malicious actor will only receive the hash values with no way to regenerate the plain text in this case the plain text would be user credentials since the hash functions are irreversible by design it has become a compulsion when storing passwords on the servers the ssl handshake is a crucial segment of web browsing sessions and it's done using sha functions it consists of your web browser and the web server agreeing on encryption keys and hashing authentication to prepare a secure connection it relies on a combination of symmetric and asymmetric algorithms which ensure the confidentiality of the data transmitted between a web server and a web client like the browser you can monitor file corruption by comparing hash values before and after transit once the hashes match file integrity checks are valid and data corruption is avoided hash functions will always give the same output for the same input irrespective of the iteration parameters this also helps in ensuring that the data hasn't been tampered with en route to the receiver of the message with every passing year our dependence on digital technology increases for instance we book transport via mobiles and shop for clothes on our laptops among all these nitty-gritty day-to-day activities information such as payment authentication personal data and private credentials are transported over the internet to various websites as gradual as this reliance on the internet has been its impact has been every bit as definite this statistic represents the daily time spent online by internet users worldwide starting from 2011 all the way up to 2021 sorted by device according to zenith optimedia in 2018 the average daily minutes of desktop internet consumption per capita amounted to 39 minutes and it is slowly projected to decline until 2020.
however the daily mobile internet consumption is set to increase to 155 minutes in 2021 if our current lifestyle is any indication of the future these statistics are just a window into what is going to be a future full of artificial intelligence and its derivatives as observed by researchers around the world this level of dependability has made us vulnerable to the worst of the internet according to forbes magazine in 2019 the cumulative damage and costs of cyber attacks are far more significant than those inflicted by natural disasters in a year every 11 seconds a business falls victim to a ransomware attack the average cost of a data breach in 2020 is 3.86 million dollars the huge increase in cyber crimes is a major contributor to the 12 percent annual growth rate of cyber security spending the united states has the world's highest average data breach cost at 8.64 million dollars as per 2020 records opportunities breed innovation and this has been observed in the cybersecurity domain as well to safeguard against such threats the ssl protocol was developed let us learn about the ssl protocol first ssl or secure sockets layer is an encryption based internet security protocol it was first developed by netscape in 1995 for the purpose of ensuring privacy authentication and data integrity in internet communication ssl is the predecessor to the modern tls encryption in use today tls is a cryptographic layer protocol that provides privacy and security to communication between a client and a web server a website that implements ssl authentication has https in its url instead of http in order to provide a high degree of privacy ssl encrypts data that is transmitted across the web this means that anyone who tries to intercept this data will only see a garbled mix of characters that is nearly impossible to decrypt ssl initiates an authentication process called a handshake between two communicating devices to ensure that both of them are really who they claim to be
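In Python the handshake just described is performed by the standard `ssl` module when a TCP socket is wrapped; a minimal client-side sketch (the helper name `fetch_over_tls` and the idea of returning the negotiated version are illustrative, and no connection is actually opened here):

```python
import socket
import ssl

# a default context enables certificate verification and hostname
# checking, which is the authentication half of the handshake
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)
print(context.check_hostname)

# wrapping a tcp socket triggers the full handshake described above
# (hello messages, certificate exchange, key agreement) before any
# application data is sent
def fetch_over_tls(host: str, port: int = 443) -> str:
    with socket.create_connection((host, port)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version()   # negotiated protocol version
```

If any authentication step fails — an untrusted certificate, a hostname mismatch — `wrap_socket` raises an exception, mirroring the transcript's point that a failed step terminates the session.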
ssl also digitally signs data in order to provide data integrity the internet we use today follows the osi model or open systems interconnection model it is a reference model that characterizes and standardizes communication on the internet among its multiple layers the ssl layer functions between the application layer and the transport layer the application layer provides services for an application to ensure that effective communication with another application program is possible the transport layer is responsible for error-free end-to-end delivery of data from the source host to the destination host since the ssl protocol functions in between these two layers the data is encrypted and is authenticated after being passed through the application layer and before it is transmitted over the network to further understand this layered model let us look at how it works hypertext transfer protocol or http is an application layer protocol that is used for transmitting information between computers on the world wide web http is based on a request response standard between a client and server once the data is ready to be shared we can pass it on to the ssl layer as you can notice the protocol being used right now is http it signifies that the data is unencrypted and hence vulnerable to malicious attacks the data is then passed on to the ssl layer where we have the record protocol for the confidentiality protection and the handshake protocol for the authentication of client and server the sub protocols of ssl will be discussed in detail later after the data is encrypted and ready to be transmitted it is moved on to the transport layer where we use tcp packets to send the data along to the internet layer and from that point forward the data can move to its destination using the internet protocol or ip addressing tables with so many sub protocols and sub layers of the ssl protocol the work is divided into multiple layers and aspects let's have a look at some of the ways
the ssl layer makes the internet safe and secure for server authentication the client uses the server's public key to encrypt the data which is used to compute the secret key the server can generate the secret key only if it can decrypt that information with the correct private key for client authentication the server uses the public key in the client certificate if any of the authentication steps fail the handshake fails and the session terminates this exchange of digital certificates during the ssl handshake is a part of the authentication process ssl provides data integrity by calculating a message digest or a hash use of ssl does ensure integrity provided that the cipher spec in your channel definition uses a hash algorithm in the ssl record protocol the hash value is generated for the data to be transmitted hence providing a necessary way to verify data corruption ssl uses a combination of symmetric and asymmetric encryption to ensure message privacy during the ssl handshake the ssl client and server agree on an encryption algorithm and a shared secret key to be used for that particular session all the messages transmitted between the client and server are then encrypted using that algorithm and key ensuring that the message remains private even if it is intercepted regarding the multiple layers in the ssl protocol we have many sub protocols that ensure the three protection aspects of security let us learn more about these the ssl record protocol provides two services to the ssl connection confidentiality and message integrity in the ssl record protocol the data is divided into fragments each fragment comprises the encrypted sha code and the md5 code after the encryption of the data is done the ssl header is appended to the data at the end the handshake protocol is used to establish sessions this protocol allows the client and server to authenticate each other by sending a series of messages the handshake protocol uses a four phase system to complete its cycle in
phase one both the client and server send hello packets to each other in phase 2 they exchange the certificates with the correct private and public keys in phase 3 they reply to each other with the encryption algorithms and the secret keys while in phase 4 the handshake is completed for the change cipher spec protocol we use a part of the ssl record protocol until the handshake is completed the ssl record output will be in a pending state after the handshake the pending state is converted into a current state the change cipher spec protocol consists of a single message which is one byte in length and can hold only one value the ssl alert protocol is used to convey ssl related alerts to the peer entity each message in this protocol contains only two bytes let's see how the ssl handshake works in a step-by-step format we have divided the handshake into four distinct phases in phase one the client and server get acquainted with the hello message from each side the client sends the ssl version cipher suite and the session id the server returns an encryption algorithm which is chosen from the cipher suite and a compression algorithm which was sent in the client hello message this helps in setting up a common encryption algorithm and a common hash value to be used throughout the handshake process in phase 2 the server sends its authentication certificate and requests client authentication the server also sends its public encryption key and ends the phase with the server hello done message once the server sends its public key the client can use it to encrypt its own secret key which will be later used to encrypt the data being exchanged between the client and the server in phase 3 the client sends its authentication certificate after verifying the server with the respective certificate authorities the client now sends a secret key which is encrypted using the server's previously sent public key in phase 4 the client sends the status of the cipher functions along with a finished
message to end the handshake from its side the server also sends the status of the cipher algorithm and ends with a finished signal the data is now encrypted with the symmetric key the client sent in phase 3. with the end of phase 4 authentication is complete and the ssl handshake has maintained the authenticity of the entire session between the client and server to reiterate the entire process let's go through each step one more time the client sends a hello request to the server the server responds with its own hello message and sends the server authentication certificate for verification at this point the server's hello signal is complete the client exchanges a secret key with the server to encrypt the data and the cipher spec parameters are changed accordingly the client has now finished its own handshake activities the server uses the secret key provided by the client to encrypt the data and alters its own cipher spec parameters as a final step before it sends a finished signal at this point the ssl handshake is complete with an encryption algorithm and a secure data channel to transmit information with the ssl protocol being developed in the early 90s it was bound to be a little underpowered when pitted against current day computers and hackers let's see its future implications and replacements ssl encryption has been deprecated now with its version 2 and version 3 being retired by the internet engineering task force in 2011 and 2015 respectively there had been far too many security vulnerabilities to carry on official activities during the ssl era to further enhance security the transport layer security or tls has been developed as a successor to the ssl protocol tls is a proposed ietf standard first developed in 1999 and the current version is tls 1.3 which was defined in august 2018.
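The transition away from old protocol versions can also be enforced in code; with Python's `ssl` module, for example, a context can refuse anything below TLS 1.2 (a configuration sketch rather than a full client):

```python
import ssl

context = ssl.create_default_context()
# modern python already disables ssl 2 and ssl 3 by default;
# additionally pin the floor so tls 1.0/1.1 are never negotiated
context.minimum_version = ssl.TLSVersion.TLSv1_2
print(context.minimum_version)
```

Any handshake attempted through this context with a server that only supports the deprecated SSL/early-TLS versions will fail rather than silently downgrade.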
even though a major part of the internet is still using tls 1.2 as a safety net the transition to the latest version should be completed before any security deficiencies are exploited a significant portion of today's internet is encrypted using secret encryption keys and for good reason apart from bolstering security encrypting communication channels maintains both the confidentiality and the integrity of the data being transmitted the exchange of these secret keys had always been an arduous task irrespective of the scope of operation since globally standardized symmetric encryption had seen a lot of application in the it industry exchanging the encryption and decryption key needed a safe approach to highlight the issue at hand let's take a look at henry and stella both users want to communicate with each other in a safe and secure manner while using encrypted messages to move forward with this system though they need to have a secret key that can both encrypt and decrypt the message being transmitted the most glaring issue here is the inability to exchange this specific key before they start communicating since they don't have a secure channel anyone looking at the communication channel can easily intercept this key if the key is intercepted all the messages that are exchanged between henry and stella can easily be read by the malicious actors since they already have the decryption key this is where the diffie-hellman key exchange algorithm comes into play it made the process much more streamlined and more secure let's start by learning a bit more about this exchange process the diffie-hellman algorithm is a method for securely exchanging cryptographic keys over insecure channels without compromising the security and integrity of the data transmission it was developed and published in 1976 by whitfield diffie and martin hellman the main usp of this algorithm was the ability to exchange such critical keys over insecure networks once the keys are exchanged without
any malicious actors intercepting them they can be used to transfer encrypted data and thereby establish a secure channel for communication until the arrival of asymmetric encryption algorithms which never relied on any category of key exchange symmetric encryption was the only way to communicate securely so a secure method to exchange the private keys for this brand of cryptography was much needed a very important part of how the diffie-hellman exchange works is the mechanism behind one-way functions where the output should not be able to be deduced back to the original input to better understand how the one-way functions work let's take a look at this exchange process using color theory initially both the users must have a private color decided on their own and it should not be made public even to the user with whom they'll be exchanging keys let's say the first user has red as his private color and the second user has blue as his private color next they should have a common public color which can be decided over insecure channels if need be in this case we are choosing this common color to be yellow in the next step both users have to mix their private colors and the publicly decided common color after mixing we're going to get a new color mixture for each of the users once we receive a new mixture for each user this mixture is then sent to the other user now this is done over an insecure channel which means this communication can easily be intercepted by any malicious actors who are listening on the channel at this point of time even if the mixtures are exchanged the hackers have only the mixtures they do not possess the private colors of both users after the mixtures are received each user is supposed to mix their own private color with the received mixture of the opposite user once the colors are mixed they will both get a single color which will be selected as the final secret color or it can be the final secret key now even though this entire communication took place over an
insecure channel because of the inability of the hacker to obtain the private colors of both users he has no way of knowing that the secret color in this transmission is actually brown and not the green or yellow he intercepted during the transmission this is how one-way functions work and the keys can be exchanged over insecure channels without compromising on privacy now that we understand the theory and the root logic behind this exchange let us learn more about the mathematical side of the diffie-hellman key exchange in our next section we can break down the entire key exchange process into three distinct steps step one is choosing q and alpha first we choose a prime number q which will stay consistent throughout the process once we have selected q we need to select a primitive root of q and denote it by the symbol alpha alpha will be a primitive root modulo q if every number coprime to q is congruent to some power of alpha mod q now to satisfy the primitive root requirement of a prime number the values of alpha mod q alpha squared mod q all the way up to alpha to the power q minus 1 mod q must all be distinct covering every value in the range from 1 up to q minus 1.
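the primitive root condition just described can be checked by brute force for small primes; here is a rough python sketch (the function name is ours, and an exhaustive check like this is only practical for small q):

```python
def is_primitive_root(alpha: int, q: int) -> bool:
    # alpha is a primitive root of the prime q when the powers
    # alpha^1, alpha^2, ..., alpha^(q-1) taken mod q produce every
    # value from 1 up to q - 1 exactly once
    return {pow(alpha, k, q) for k in range(1, q)} == set(range(1, q))

print(is_primitive_root(3, 17))  # True  -> 3 can serve as alpha for q = 17
print(is_primitive_root(2, 17))  # False -> the powers of 2 repeat too early
```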
if this condition is satisfied alpha can be selected as a primitive root for q moving on to step 2 we have the process of deriving the key pairs let's say for the first user we assume the private key to be denoted by xa where xa's value should always be less than q after we decide on the private key the public key which will be denoted by ya can be calculated using the formula alpha to the power xa mod q now that we have both the private key and the public key settled the key pair of the first user becomes xa and ya similarly for the second user let's say the private key is xb which must also be less than q the public key of the second user can also be calculated using the formula alpha to the power xb mod q similar to the first user the key pair will have both the private key and the public key which become xb and yb finally in step 3 we have the key generation steps we already have the parameters for the first user which will be his private key the other user's public key and the value of q which will stay constant to calculate the secret key or the secret color as denoted by brown in the color theory the formula we're going to use is yb to the power xa mod q for the second user we're going to use a similar formula but with slightly different parameters it will be the private key of the user the public key of the other user and the value of q the formula will be similar in the form of ya to the power xb mod q to satisfy the entire diffie-hellman key exchange process both of these generated k values must be equal the key generated will then be used to encrypt the messages while being transmitted over insecure channels as denoted by the brown color to get a clearer idea of how this is going to work we can take a real world example with mathematical values to prove this theory in step one we have to choose q and alpha so let's say we choose the prime number q to be 17 in choosing alpha which will be
the primitive root of q we have decided we're going to take alpha as 3 now to satisfy the condition of the primitive root we can check that 3 mod 17 is 3, 3 squared mod 17 is 9, and so on all the way up to 3 to the power 16 mod 17, and since these values are all distinct and cover every number from 1 to 16 alpha satisfies the condition to be a primitive root of the prime number 17. in step 2 we need to settle on the private and the public key for the first user the private key which will be considered as xa let's say we decide to be 15. now that we have the value of the private key settled at 15 we can calculate the public key with the formula 3 to the power 15 mod 17 which is alpha to the power xa mod q the value becomes 6 so our key pair becomes 15 and 6 which is the private key and the public key of the first user for the second user let's consider the private key xb to be 13 the public key can then be calculated with the formula 3 to the power 13 mod 17 which has the value of 12. so in the second user's case the key pair becomes 13 and 12. in step 3 which is the key generation we already have the requisite parameters for both users to calculate the secret key for the first user we're going to use the formula k equals 12 to the power 15 mod 17 which gives the output as 10. similarly for the second user we're going to calculate the secret key using the formula 6 to the power 13 mod 17 which gives the value as 10.
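the arithmetic of this worked example can be reproduced with python's built-in three-argument pow, which performs modular exponentiation; the variable names below mirror the notation above and are illustrative only:

```python
# worked example from the text: q = 17, alpha = 3
q, alpha = 17, 3

xa = 15                 # first user's private key (must be less than q)
ya = pow(alpha, xa, q)  # first user's public key: 3^15 mod 17 = 6

xb = 13                 # second user's private key
yb = pow(alpha, xb, q)  # second user's public key: 3^13 mod 17 = 12

# each user combines the other's public key with their own private key
ka = pow(yb, xa, q)     # 12^15 mod 17 = 10
kb = pow(ya, xb, q)     # 6^13 mod 17 = 10

print(ya, yb, ka, kb)   # 6 12 10 10 -- both sides derive the same secret key
```

both k values equal alpha to the power xa times xb mod q, which is why the two sides always agree.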
now that we have both values of k at 10 this becomes our common secret encryption and decryption key both users can encrypt their messages with this value of k before transmitting the data and the receiver can generate the key on their side too as seen in this video so unless someone has the private keys in addition to the publicly decided values no amount of eavesdropping can break the secure channel thankfully this key exchange process was very well received and is being used nowadays in a lot of places let's read about some of these occasions the public key infrastructure or pki is a set of tools and rules to enforce public key cryptography with multiple entities it governs the issuance of digital certificates around the internet to maintain data confidentiality our internet browsers are authenticated with website servers using an ssl or tls certificate and many encryption keys in between this is only possible due to the diffie-hellman key exchange algorithm which enables the secure exchange of cryptographic entities over insecure channels we also have secure shell access or ssh for short it is a cryptographic protocol used to access system terminals from a third party appliance or application the diffie-hellman algorithm helps in exchanging the keys between both systems so that remote access can be enabled here we will focus on the popular cryptography questions that can be asked in a cyber security interview our experienced instructor bipin will take you through these questions so let's get started the first question define cryptography encryption and decryption now cryptography is used by security professionals to scramble data into a non-readable format which is used in securing that information so it involves converting data from a readable format into a non-readable format and then reversing it back to a readable format again for example the word computer is scrambled into an unreadable format now if you look at the word it has been scrambled into it would
be very difficult for a human to figure out what the actual word was now in this scenario we have taken an algorithm where we have made a shift of the alphabet adding two letters to the current one so c plus two becomes e o plus two becomes q m plus two becomes o so we have done a shift of two and thus the key over here for this algorithm is the alphabet plus two so any person who figures that out will be able to unscramble this and convert it back into readable text the act of scrambling readable text data into something that is unreadable by using a particular key is what cryptography is all about now as we discussed the decryption is again replacing the alphabet and taking it back by two characters so e minus two becomes c q minus two becomes o o minus two becomes m and so on and so forth so anybody who knows this key the shift key will be able to decrypt this particular text the shift depends on the user if i want to utilize alphabet plus five then each character is shifted to the fifth character after it and so on and so forth the next question what is the difference between ciphertext and cleartext ciphertext refers to the text which is encrypted and totally unreadable the message received after decryption is known as cleartext this text is comprehensible so the word computer is cleartext that means that it has not been treated with any cryptographic measures it reads exactly as it is intended to however the moment we encrypt it that means we scramble it into unreadable text by using any of the algorithms that we'll be looking at that text is known as ciphertext and without the key it becomes unreadable the cleartext as discussed is the plain word that we have utilized we are using the english language in this instance so the plain word computer is the cleartext once we add the encryption layer to it we get the ciphertext moving on to the next question what is a block
cipher this refers to the method of encrypting the plain message block by block the plain message is broken down into fixed size blocks and then encrypted now a block cipher is normally used for data that is stored so if data is stored on a hard disk and we want to encrypt that data that is known as block encryption or a block cipher so a block cipher is an algorithm that will allow you to encrypt data that is stored on a hard disk so in this example we've got a plaintext which is 64 bits in size and we have added a layer of encryption to it so plaintext plus the key that we have studied in the previous questions gives the scrambled data out of it which is unreadable and thus encrypted then the next question what is public key infrastructure now the public key infrastructure is a set of policies which secures the communication between a server and a client it uses two cryptographic keys public and private so the infrastructure itself is a set of policies people procedures and techniques which are standardized in nature and are globally accepted which allow us to use digital certificates to encrypt data and decrypt the data at the other end we use asymmetric encryption over here which means that we use two keys one is a public key to encrypt and the private key to decrypt the other kind of encryption is symmetric encryption where the same key is used to encrypt and the same key is used to decrypt now in a public key infrastructure like i said we have standardized this so in the standardizing part of it these are the various players that have been defined in the public key infrastructure the first is the user or the sender in this scenario the one who requires this digital signature to digitally sign a particular transaction or a communication a registration authority with whom they are going to register for that particular key the certification authority who issues that key the validation authority who validates the key itself and
the recipient who is going to be the other party of that particular transaction so how is this utilized a sender or user who requires this digital signature will request or apply for a digital signature with the registration authority the registration authority will validate the genuineness of the user so they might do some identity verification or ask for proof of residence or something like that once they've identified the person and validated the information they will then send the request to the certification authority stating that the sender has been validated and the certification authority can issue the digital certificate to that particular user they will send the key pair to the sender which will be utilized by the sender for further transactions so when the sender is going to sign some data and wants to send it across to the recipient they will use their private key to sign it and send it across the recipient will then check with the validation authority to see if the signed data is correct or not now while the certification authority sends the key pair to the sender the certification authority also registers the sender's public key with the validation authority so when whatever is signed by the sender is received by the recipient and they want to validate it they will send it to the validation authority the validation authority will validate the signature using the sender's public key once the signature is validated it will then send the ok signal back to the recipient thus allowing the validation of that particular transaction if the signature is tampered with or the validation authority is not able to validate the signature it will then send a denial message back to the recipient and the transaction will not go through so the pki enables trusted digital identities for people and grants secure access to digital resources based on the infrastructure that has been created and the core of the pki is a certification authority which ensures
the trustworthiness of the digital data so going back to the previous slide these are the key players that have been standardized in the public key infrastructure the certification authority is the authority that issues the digital certificates the validation authority is the one who validates that digital certificate moving on what is rsa rsa is one of the first public key cryptosystems that is used for secure data transmission it stands for rivest shamir and adleman now these are the three people who created this algorithm ron rivest adi shamir and leonard adleman who are the inventors of this technique it is an asymmetric cryptography algorithm which works on both public and private keys hence the encryption key is public and the decryption key is kept private now as we have discussed earlier symmetric and asymmetric cryptography symmetric cryptography is where the same key is used to encrypt and decrypt whereas asymmetric cryptography is where there are two keys to encrypt and decrypt what are the few alternatives to rsa now rsa is an algorithm that is used for encryption there are a lot of other algorithms and tools that can be utilized to alter or to scramble data depending on your requirements so in the previous question we have talked about what rsa is it is named after rivest shamir and adleman the three creators of that particular algorithm but there are a lot of alternatives depending on how secure you want that data to be and some of them are listed here on your screen duo security okta google authenticator and lastpass lastpass is a password manager duo security is a two-factor authentication service google authenticator is something that we all utilize it is an application that we can download and store on our mobile devices and we can set it up to authenticate ourselves with certain portals so it issues a unique id to us which once utilized will allow us access to those particular portals okta is an identity manager
where you have created different digital identities and you have assigned them certain permissions and based on your authentication mechanisms okta will allow or disallow access to those different applications or different portals as you have configured it so all four are authorization and authentication mechanisms which can be used as alternatives to rsa next question what are the prime objectives of modern cryptography and this is a very important question because we've so far looked at what cryptography is and what public key infrastructure is but what do we achieve out of it why are we utilizing it and what do we want to gain out of it so the prime objectives of modern cryptography are as mentioned on your screen the first one is confidentiality the second one is non-repudiation the third one is authenticity and the fourth one is integrity now if i go back to the first one confidentiality that is where i want to keep data confidential which means it will only be visible to authorized users right so here i've created a list of people whom i've deemed authorized users created digital identities for them and given them access controls now that is how confidentiality is ensured so when we want to keep data confidential we create a list of users who we are going to allow access to certain resources and we are going to define what access controls are to be utilized what accesses are allowed whether they get administrative access or user level access and only those authorized users are going to be able to access these resources that is how we maintain confidentiality the next one is non-repudiation non-repudiation is the prevention of denial of having been a part of a particular transaction so in the public key infrastructure that we discussed where a
digital signature was utilized to sign a particular transaction and then sent to the recipient the sender would not be able to deny having originated that transaction because it was done using their digital certificate thus non-repudiation comes into the picture one more example that we can have here is on our mobile phones when we use sms short messaging service and we send a message to another person when the person receives the message the number is validated by the service operator and thus the sender cannot deny having sent that message the sender at the same time can have a delivery report sent to them confirming that the message was delivered to the inbox of the recipient and thus if the recipient denies having received that message the delivery report becomes proof of that message being delivered to their inbox thus neither party can deny being a part of that particular transaction then comes the part of authenticity now in confidentiality we have created a digital identity assigned it to a particular person and we have given them digital signatures so they cannot deny having been a part of that transaction but authenticity is the part where they try to prove that they are who they claim to be so if i am claiming a digital identity i have to prove that i am the person i am claiming to be and an example of that is when we go to the gmail.com website it first asks us what our username is our username is normally our email address which identifies the account that we are trying to access right so this account is confidential because it is only authorized for a particular person and once they identify themselves by providing the email address that's when the authentication part comes into the picture where it asks for the password now it has never happened that we just go on to gmail.com type in a password and then it figures out which account we are talking about so the first step is always the
confidentiality part where we identify which account we are talking about and then we try to authenticate as the owner of that particular account by providing the appropriate password to that account if both of these match only then do we get access to that account and we are able to make whatever transactions we want to make now when we are making those transactions non-repudiation comes into the picture where all our activities are also being logged so we have identified our account we have authenticated ourselves by providing the password so the proof is there that it is us who are trying to access it and then whatever activity we do send an email receive an email delete something attach something all of those activities are logged and stored as proof of what actions have been done so tomorrow if we deny having sent that email gmail can still prove to us through those logs that the activity was done by us and the fourth part is integrity which ensures that the data sent by the sender and received by the recipient has not been modified while in motion so the integrity part is the trustworthiness of that data that the data has not been modified by any hacker or any other entity and is still trustworthy so these are the four prime objectives of modern cryptography once i have scrambled that data using my public key it is only my private key that is going to decrypt it right using these mechanisms i will be able to achieve all these four aspects of cryptography and security next question what is safer now safer stands for secure and fast encryption routine which is also a block cipher as we have discussed previously a block cipher is a cipher that is used to encrypt data that is stored so it has a 64-bit block size and a byte-oriented algorithm safer's encryption and decryption procedures are highly secure this technology is widely used in applications like digital payment cards so when you're using a digital payment
gateway to make transactions so you have gone on to an online portal you want to purchase a particular item and then it takes you to a payment gateway where you have to fill in your credit card information sensitive information like your expiry date cvv and then the otp or the password that you have created for your particular account now all of these need to be secured or highly secured based on pci dss which is the payment card industry data security standard so these standards ensure that certain protocols are utilized to attain that level of security safer is one of those block ciphers that is used under the digital payment gateway infrastructure next question how does the public key infrastructure work now we have already discussed this in the previous diagrams we have identified the key players the certification authority the registration authority the end user who requires the digital certificate the validation authority who is going to validate it and the recipient the end user with whom the transaction is going to be conducted so the first point here is the request for the digital certificate is sent to the registration authority they validate it and then give the okay to the certification authority who then processes the request and the digital certificate is issued to the person who has requested it so when the person wants to conduct that transaction they use that digital certificate to sign that transaction with the end user the end user validates that with the validation authority and once validated the transaction goes through and now the last question what is the blowfish algorithm it is a 64-bit symmetric encryption algorithm so this is an algorithm that uses the same key to encrypt and the same key to decrypt the same secret key is used to encrypt and decrypt the messages here the operations are based on exclusive-ors and additions on 32-bit words the key has a maximum length of 448 bits now
this is a little bit technical you might not want to get this technical in an interview you just need to identify what the algorithm is used for so whether it is a symmetric algorithm which means it uses the same key or an asymmetric algorithm where it uses a public key to encrypt and a private key to decrypt the blowfish algorithm is just one more algorithm which uses symmetric encryption to encrypt and decrypt data as far as the interview questions are concerned what we need to remember is which algorithms are symmetric which algorithms are asymmetric what symmetric algorithms do and what asymmetric algorithms do and we also look at block ciphers and stream ciphers block ciphers are utilized to encrypt data that is stored stationary data data at rest and stream ciphers are utilized for data in motion while it is being streamed so ssl and tls are protocols that come into the picture when you're looking at streaming data with this we have reached the end of our full course on cryptography we hope you learned a lot of new concepts today and will be able to make use of your newly acquired knowledge to propel your career forward if you have any questions or queries regarding the topics explained in this video please feel free to ask us in the comments section and we will be happy to assist you in your learning stay safe and thank you for watching
Info
Channel: Simplilearn
Views: 167,971
Keywords: DSA, algorithms, asymmetri key cryptography, crypgraphy explained, cryptography, cryptography course, cryptography for beginners, cryptography full course, cryptography tutorial, cryptography tutorial for beginners, data encryption stand, digital signatures, encryption algorithms, learn cryptography, learn cryptography for beginners, sha algorithm, simplilearn, ssl hanshake, symmetric key cryptography, what is cryptography, what is cryptography and how it works, why cryptography
Id: C7vmouDOJYM
Length: 135min 0sec (8100 seconds)
Published: Tue Aug 17 2021