Web scraping with Python | Learn How To Scrape Data Using Python | Great Learning

Video Statistics and Information

Captions
Hey guys, I welcome you all to this live session on web scraping with Python. Before we start off, just give me a confirmation that you can see my screen and that I'm audible. This session will be a quick introduction to web scraping; we have a specific library for that known as Beautiful Soup, and we'll be implementing the web scraping concept with that library. I'll wait for some of the other folks to join in, and till then let me tell you about our free learning initiative, Great Learning Academy. On this platform we provide almost 100 courses related to data science, AI, digital marketing, statistics, cloud and a lot more, and the best part is that once you enroll in a course and complete it, along with its assignments and quizzes, you get a course completion certificate. You can add that certificate to your LinkedIn page or your resume, which could be a huge value add. The link for that will be available in the chat, and if you'd rather learn through an app, we've got the Great Learning app; you'll find the link for that in the chat as well. To the folks who have just joined or who are new to our channel, I'd request you to hit the subscribe button and click on the bell icon so you're notified whenever we come up with new live sessions or upload new videos; it also encourages us to come up with more such tutorials and will help make your lockdown as productive as possible. I want this session to be as interactive as possible, and today it will be an entirely coding session: we'll have two short demos where we'll be scraping some data from the Cricbuzz website. I believe all of you would have visited Cricbuzz at some point, because all of us are cricket fanatics, and then we'll also be scraping some iPhone reviews, so we'll be scraping data from these two websites. Let me see if there are any questions right now. A viewer is asking, is Beautiful Soup part of a Python library? Yes, it is a Python library, and our first task is to install it: you just have to type in pip install beautifulsoup4. Since I already have it installed, I won't run that command here. Then we also have to install the requests library. When it comes to web scraping, we have to send a request to a particular website to get the data, and that is where we need requests. If you are running this code along with me, I'd request you to install both of these libraries: the first is beautifulsoup4 and the second is requests. Once this is done, we go ahead and import the libraries which we need: first sys, then time, then Beautiful Soup (bs4 basically stands for Beautiful Soup, so from bs4 we import BeautifulSoup), then requests, and then pandas, although we're not really using pandas in this particular session. Let me just go ahead and import all of these now.
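For reference, the setup described above looks roughly like this (a minimal sketch; the exact notebook cells in the session may differ slightly):

    # One-time installs (terminal or notebook cell):
    #   pip install beautifulsoup4
    #   pip install requests

    import sys
    import time

    from bs4 import BeautifulSoup
    import requests
    import pandas as pd   # imported in the session, though not really used in this demo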
Now that this is done, what I'll do is open up the Cricbuzz website. This is the home page of Cricbuzz, and what we're going to extract is what you see here: we've got headlines, and under each headline we've got a short summary, and that is what we'll be extracting today. This is one such headline, this is the next headline, then the next news article, and so on. What I'll actually do now is clear all of the outputs and restart my kernel so that you can see exactly how this happens. Let me see if there are any questions. A viewer is asking, can we scrape the most secure sites? I believe you're confusing scraping with hacking. All we are doing is taking this data; it's as simple as Ctrl+C, Ctrl+V, isn't it? Even on the most secure website you can still copy visible text like this and paste it somewhere else. What we're doing is automating that Ctrl+C, Ctrl+V process with a library called Beautiful Soup, and that is what is known as web scraping. Duncan is asking, can we do this in Colab? Absolutely. To all the folks asking whether we can do this in Colab or in PyCharm: these are just IDEs, so irrespective of which IDE you're using, the Python code will be the same. Another viewer is asking what we have to download: we start by installing beautifulsoup4, so if you are using a Jupyter notebook you'd install it with the pip command shown earlier, and similarly we need the requests library because we'll be sending requests to the websites, so install these two to start off with. After we install Beautiful Soup and requests, we import all of the libraries we need, and then we'll be scraping the data you see here: the headlines, and after the headlines a short summary of each news article. Our first task is to use the requests library to get the data from this particular website, and we'll put this in a try-except block, because there's a very good chance that when we send a request, the website may not accept it; if the website doesn't accept our request, we'll get an error and we'll have to handle it. That is why we put the requests.get call inside the try block. We type requests.get, and inside it we give the URL of the website or the page from which we want to extract the data. I want to extract data from this particular page, so I'll call requests.get with that URL and store the result in a new object which I'll call page. Once this is done, we have to handle the exception, so we type except Exception as e, and these are the different pieces of information we get if the request fails: the error type, the error object, and the error info, which we get using sys.exc_info().
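A minimal sketch of that request step, assuming the Cricbuzz home page as the target URL; the exact error-printing lines are an approximation of what is described:

    import sys
    import requests

    url = "https://www.cricbuzz.com/"   # page used in the first demo

    try:
        page = requests.get(url)
    except Exception as e:
        # sys.exc_info() gives (error type, error object, traceback)
        error_type, error_obj, error_info = sys.exc_info()
        print("ERROR FOR LINK:", url)
        print(error_type, "Line:", error_info.tb_lineno)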
If there's an error, we'll just print out "error for link" along with the URL (we're not really using the URL variable right now) and then print the error type; that is how we handle the error if we come across one. Now, during web scraping, if you automate a process and continuously send requests to a website, there's a good chance that you'll be blocked by that website. To make sure the website doesn't flag our program as a bot, we have to time our requests, so when I call time.sleep and give it a value of two, we are adding a delay between requests so that we are less likely to be blocked; that is why this is very important. If you don't add a delay and keep sending requests continuously, there's a very good chance you will be blocked. After we add the delay, we extract the text from the response. We have already sent a request to cricbuzz.com and we have the result in page, and what I want to do next is parse all of that with the HTML parser. Let me actually run this and see what is happening. Let me add a new cell here and print page: we just get a response. We sent a request to cricbuzz.com and, as you see, we got a result. If there were an error, you would not get this; so if you want to know whether the request was successful or not, just print the response and check whether you get Response [200]. A 200 response means that the request was answered properly. Once this is done, we have to transform the response into proper text, and for this we use Beautiful Soup, which is used to parse HTML text and, as you already know, is used for web scraping. Let me print out the soup object. BeautifulSoup takes two parameters: the first is page.text, because we want the text of whatever we fetched, and the second is the HTML parser, because what we're extracting is actually HTML tags. Let me print this out, and what you see is the HTML of the entire web page.
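Continuing from the request above, the delay and the parsing step look roughly like this (page is the response object returned by requests.get):

    import time
    from bs4 import BeautifulSoup

    time.sleep(2)     # small delay so repeated requests are less likely to be flagged as a bot

    print(page)       # <Response [200]> means the request was answered successfully

    soup = BeautifulSoup(page.text, "html.parser")   # parse the raw HTML of the page
    print(soup)       # full HTML, the same markup DevTools shows with Ctrl+Shift+I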
Let me make this even clearer. This is the Cricbuzz website; now press Ctrl+Shift+I. When you press Ctrl+Shift+I you can inspect the HTML code of the page, and what we've extracted in the notebook is that same HTML code; if you look closely, it's the same thing. This is the entire HTML for the entire page. Every website has different elements, and if you want to inspect an individual element, you press Ctrl+Shift+C; when you do that, you can select individual elements or blocks on the page. Let me show you what happens: as you see, if I put my cursor here, the highlighted code changes; if I put my cursor there, the code changes again, and so on. Now let's look at the HTML code for these news parts. Let me start from the top: if I put my cursor over this headline, this is the code for it, and this piece of code is present inside a division (div) tag with a class attribute. Let me go back to the notebook. What we are basically doing is, once we have parsed the HTML text, we want to find all the div tags, because as you see this text is present in a div tag and it has an attribute: class equal to cb-nws-intr. Let me scroll down a bit and click on the next item; if I click on this, you will see that it is again present in a div tag and the class attribute is cb-nws-intr. Similarly, let me scroll down, press Ctrl+Shift+C again, put my cursor over the next one and click, and you will see that this too is inside a div tag with class equal to cb-nws-intr. That is what I want to extract. So, once we store the entire HTML code in the soup object, from that soup object I want to find all of the text which is present in div tags; but as you see, we've got a lot of div tags, so from all of those I want to filter out only the ones which have this particular class attribute, cb-nws-intr. Once we have that, we store the result in a new object called links. Earlier I printed out soup and we saw the whole page; now, instead of soup, let me print out links. When I print links you will see that we have extracted all of these news intros. Let me start with the first one: you see the line starts with the div tag with class equal to cb-nws-intr, and the news is that after a 143-day pandemic-induced break, cricket finally returns as England take on West Indies in Southampton. Similarly, the next line is the stats preview for the upcoming Wisden Trophy; the next one is that former India captain and current BCCI president Sourav Ganguly turns 48 today; and the next says England will also host Ireland for three ODIs from July 30th, with the limited-overs series against Australia to be rescheduled at some stage. As you see, each of these matches the corresponding headline on the site, so we have extracted exactly this part of the website, and this is what is known as web scraping.
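The filtering step described above, as a short sketch; the class name cb-nws-intr is the one read out in the session and may have changed on the live site:

    # Keep only the headline blocks: <div> tags whose class is cb-nws-intr
    links = soup.find_all("div", attrs={"class": "cb-nws-intr"})
    print(links)   # a list of <div> tags, one per news intro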
Let me see if there are any questions; I believe there would be some doubts here. A viewer is asking, can we get the source code? Yes, you will have the link in the chat, and you can get the source code from there. Another viewer is asking, can you tell us how to scrape with PHP? In this session we'll see how to scrape with Python, but we'll keep a note of that and see if we can take up a live session on scraping with PHP. A viewer is asking, is it illegal to scrape data? Data scraping, or web scraping, is totally legal. For all of you who are confusing web scraping with hacking: these are totally different things. If you log into a website with fake credentials, or if you steal private information, that is illegal; but here all of this data is publicly available. This is the Cricbuzz website, and whatever news articles you see are public; all I'm doing is extracting a portion of that data. If I just copy this particular text (seems like my mouse cursor is acting up) and paste it somewhere else, I can absolutely do that, and it wouldn't be illegal. All I'm doing is automating this task, because manually selecting all of these news intros would take up a lot of my time; so I'm writing a Python program which automatically extracts all of this information for me. I hope that answers your question. Wilson is asking, can we scrape table data? Yes, whatever you see on this website we can scrape: image data, table data, textual data, all of it. Another viewer is asking, how can we structure this data? What you see here are lines of HTML; we have to remove all of these tags, and then we'll just have text data, which we can store in a simple pandas DataFrame (there's a small sketch of this at the end of this segment). A viewer is asking, what is the use of scraping? Let's say I just want cricket news on a daily basis; if all I want are the highlights and I don't want to read anything else, then this is the perfect task for scraping, isn't it? I just write this simple piece of code and it gives me the gist of what is happening in the cricket world. Another viewer is asking: some pages refresh or reload frequently, so when we read data sequentially, will the refresh interfere with the read? Absolutely, and that is why we put the request inside the try-except block. Whenever we send a request there are several possibilities: the website could identify our Python program as a bot and block us, or the website's server could be slow and our request might not get a response. That is why it's always better to put whatever request you send inside a try-except block. Chandler is asking where to get the source code; we will have the link in the chat, so you can access it through that. Priyanka is asking, can we scrape dynamic websites like Facebook profiles? These commands only help you scrape static data, not dynamic data; dynamic data is something we'll cover in a different session. Beautiful Soup and this set of commands extract only static data. What is the difference between static and dynamic? Static data basically stays constant and does not change; dynamic data changes continuously.
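As mentioned in the answer about structuring the data, one way to tidy it up is a small pandas DataFrame. This is a hedged sketch rather than code shown in the session, and the cricbuzz_headlines.csv filename is just an example:

    import pandas as pd

    # Strip the tags, keep only the text of each headline block,
    # then store the results in a one-column DataFrame and save to CSV.
    headlines = [tag.text.strip() for tag in links]
    df = pd.DataFrame({"headline": headlines})
    print(df.head())
    df.to_csv("cricbuzz_headlines.csv", index=False)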
Sarkar is asking, can we put this kind of project on a resume, and will it be helpful? Yes, definitely; if you are looking for a Python developer job profile, or maybe a web developer profile, this is something which will definitely help you. Let's get on to the next piece of code. What we have done till now is extract these lines of information, but as you see we still have the div tags and the attributes, and I don't want those; what I want is only the text. If I want only the text, I'll start a for loop: all of this data is present in links, so I'll write for i in links and just print out i.text (see the short sketch after this segment). I don't want the tags, I don't want the attributes, I want just the text. Here, i basically refers to each individual div tag, so if you have around ten div tags, the loop will iterate over all ten and print whatever text is present inside each one. Let me run this, and as you see I have successfully extracted only the text from the div tags and attributes, as simple as that. This is a very simple application of web scraping where we have extracted all of this Cricbuzz data. Isn't Beautiful Soup beautiful? As you see, the page had a lot of data, a lot of pictures, a lot of headings, but out of all of that we were able to extract only these news intros, so this is a very interesting application of web scraping. Chandler is asking, is it possible to scrape any website or any page? That would depend on whether your request is accepted. Most pages are public, so for most public websites you can definitely scrape data, and the data also has to be static; if the data is static and the website is public, you can go ahead and scrape whatever is present there. A viewer is asking, is web scraping similar to data mining? Not really. Data mining is actually an umbrella term which covers a lot of things; it is not just related to web data. The data could come from anywhere: maybe a simple Excel file, a website, the Hadoop file system, Spark, or Kafka. In data mining you have a process or life cycle where first you get the data, then you perform some sort of pre-processing on it, and then you apply some machine learning algorithm; that is the overall process, and web scraping is just the one part of it where the data comes from websites, as simple as that. Viren is asking, which Chrome plugin is being used that highlights the tags? I'm not using any plugins; if you want to inspect a website, all you have to do is press Ctrl+Shift+I, and here you have the Elements tab along with the other tabs: Console, Sources, Network and so on.
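The loop described above, for completeness:

    # Print only the text of each extracted <div>, without tags or attributes
    for i in links:
        print(i.text)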
If I want to inspect whatever is present on the page, I press Ctrl+Shift+I and it gives me all of this HTML code, and if I want to highlight a particular block I press Ctrl+Shift+C; that automatically gives me the code for whichever block my cursor is currently on. So I hope that answers it: I'm not using any Chrome plugin; all I'm doing is first pressing Ctrl+Shift+I and then Ctrl+Shift+C. I'm getting a lot of requests asking if this session will be in Hindi; we will definitely have this session in Hindi as well, we'll keep that in mind. Priya is asking, can we scrape LinkedIn data? Yes, we can, but again the data has to be textual, so you can maybe scrape posts, or profile information; all of that can be extracted. A viewer is asking, can we use regular expressions in web scraping to get more filtered data? Absolutely, and that is what happens after scraping: all of this is raw data, and if you want it in a tidier format you can use regular expressions to clean it up. Agatha is asking, can we use Visual Studio Code instead of a Jupyter notebook? Absolutely; I've been saying this again and again, guys, the IDE is not really important, it depends entirely on you. The same Python code will run on Visual Studio Code, on PyCharm, on a Jupyter notebook, and on Google Colab as well; irrespective of whichever IDE you're using, all of this code will run smoothly. Right, so we are done with this particular program. Now we'll do a similar operation where we'll be extracting the reviews of an iPhone, so let me open up the link, copy it and paste it over here. As you see, this is an iPhone 11 Pro, 256 GB, Space Grey, and this time we are going to scrape all of the reviews. We've got all of these reviews here, and that is what we're going to scrape. Maybe if you have a lot of time and you just want to read a lot of iPhone reviews, you can do this; I found it to be an interesting application, so I thought let's go ahead and extract some reviews of the iPhone 11 Pro. Before we start with this, I'd like to remind you again about Great Learning Academy. Go ahead and check it out; we have a lot of courses there related to web development, web scraping, machine learning, data science, artificial intelligence, data mining, cloud and DevOps, and you'll find the link in the chat. Once you complete those courses you get a course completion certificate, which you can add to your LinkedIn page or your resume, and that will be a huge value add. Also, if you want to learn through an app for ease of learning, or if you don't have your laptop with you in this lockdown, you can learn through the Great Learning app; you'll find the link for that as well. And for the folks who are new to the channel, I'd request you to hit the subscribe button and click the bell icon.
That way you'll get all of the notifications for new sessions and the live sessions we are taking, and it'll also encourage us to come up with more such videos on a regular basis. So, as I was telling you, we'll be scraping the reviews of this iPhone 11 Pro, and we'll follow the same approach. We import all of the libraries, BeautifulSoup, requests and pandas (again, we're not really using pandas here), and this time in requests.get we pass the page from which we want to extract the data; all I've done is copy the link of the product page and paste it in here. Once that is done, I have the exception handling again, so if the website does not send me a response, the error will be handled. After that I add the timeout, so that if I send requests continuously Amazon doesn't flag me as a bot and block me; that is why I put in the delay of two seconds. Then I extract all of the HTML code, and to do that I use BeautifulSoup with the same two parameters: the first is page.text (page being the object we created with requests.get), and the second is html.parser, with the help of which we parse the entire HTML text. Let me run this, print out soup, and show you what we're actually getting: as you see, this is the entire HTML of the page. Let me press Ctrl+Shift+I again, and you'll see this matches the HTML of the page. Now from this I want to extract only the reviews, so let me click on a review: as you see, this review is present inside a span tag, and that is why I have used find_all with span. If you want to find out the attribute, just expand this element, and as you see the class is a-size-base, and that is what I've given over here. Similarly, let me press Ctrl+Shift+C and click on another review, and you'll see the tag is again span and the class is a-size-base. So what I've done is: with the help of BeautifulSoup I extracted the entire HTML, then I use soup.find_all to find all the span tags, and from all of those span tags I keep only the ones where the class is equal to a-size-base, storing the result in the links object. Let me print out links and scroll down: as you see, we've got all of these reviews. Actually, let me also run the for loop with print inside it; hmm, it seems like we're getting an empty list for that one, so we'll leave that out for now. Let me go back up.
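Putting the Amazon review demo together, a rough sketch under a few assumptions: the product-page URL below is a placeholder for the link pasted in the session, the a-size-base class is the one read out on screen and may have changed, and the User-Agent header is an addition not shown in the session (Amazon often rejects requests without one, which is a likely reason for an empty result):

    import sys
    import time
    import requests
    from bs4 import BeautifulSoup

    url = "https://www.amazon.in/<product-page-url>"   # placeholder -- substitute the page from the session

    try:
        # browser-like User-Agent header is an assumption, not part of the session's code
        page = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
    except Exception as e:
        error_type, error_obj, error_info = sys.exc_info()
        print("ERROR FOR LINK:", url)
        print(error_type, "Line:", error_info.tb_lineno)

    time.sleep(2)
    soup = BeautifulSoup(page.text, "html.parser")

    # 'a-size-base' is the review-text class identified during inspection in the session
    links = soup.find_all("span", attrs={"class": "a-size-base"})
    for i in links:
        print(i.text)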
So again we are using the for loop, and with its help we are printing whatever is present inside each of those span tags. Let me look at the first review: "You have to love Apple because they normally make amazing products, they make magic," and that is what we have extracted. Similarly, let me scroll down: "A smartphone that is not smart" and "a 1-lakh phone for just a phone call," that's a funny one. We have extracted most of these reviews, and this is how we can extract from different websites: if the data is static and the website is public, we can extract all of this. Let me see if there are any more questions. A viewer is asking, is it possible to store the scraped data in a CSV file? Yes, all you have to do is use pandas to put the raw scraped text into a DataFrame and then write it out with to_csv, which gives you a tabular format. Keep your questions coming, guys. We are done with these two demos, and what we've basically done comes down to two important things: the requests library and the Beautiful Soup library. With requests we send a request to the web page and get a response; to process that response, we need Beautiful Soup, as simple as that. We've got about six lines of code, and with these six lines you'll be able to scrape textual data from pretty much any website. If you want the code file, the link will be in the chat. Rohit is asking, how can we know which site is private and which is public? From whichever site you want to scrape, just run this command; if you get an error, that means the website will not support scraping. Most websites, 99% of them, are public, and at least the information on the home page can usually be scraped very easily, so that shouldn't be a problem. Jemima is asking, instead of text can we extract other things, like images? Yes, absolutely. If there's an image, it will be present inside an img tag, so instead of the div tag we'd put in img, and we'd specify the relevant attribute; that will give us the image, or perhaps the image in the form of an array, and we'd then have to convert that array back into an image. So there is some image processing involved there, not just web scraping, which makes it a bit more complicated; maybe we can have another session where we extract dynamic data as well as images from different websites. Not as of now, but I see that most of you would want this session in Hindi as well, and we'll definitely conduct it in Hindi, we'll keep that in mind. A viewer is asking, can you tell us more use cases of web scraping? Sure, let me go to a news site; let's see if we can extract data from the Times of India. Let me open it up: we have the Times of India here, and we've got headlines over there. Let's say I want to extract those bulleted points, or these one-liners; let me just go ahead and copy the URL.
I'll paste it over here, so we are going to scrape data from the Times of India. Now let me inspect what we actually want: I'll press Ctrl+Shift+I, and we have all of this HTML code. Let me scroll down and press Ctrl+Shift+C; here we have this short set of news articles, these one-liners. Let's say I want something like this: it's present inside an a tag, which in turn sits inside an li tag, so we've got all of this as a list, and let me see if there's a class as well; it's in an li tag with a class that reads as "main story". So what I'm going to do is find all the text where the tag is li and the class is main story. Let me click run and see what we get: we have extracted the entire HTML code this time, but when I print links it's blank. You have to make sure that whatever you pass in is actually correct, only then will it work, otherwise you may get an empty result or an error. Let me see what else we can scrape; let's close this, there are a lot of links here, or maybe let's say I want this one. Again I'll press Ctrl+Shift+I and then Ctrl+Shift+C and check it: this is a span, and the class is w_tle. So let me change the tag to span and the class to w_tle, and let's see if everything is set; seems like everything is proper, now let me run this. Oh, sorry, the error was actually up here: I was still trying to send the request to the Amazon URL, I apologize for that; this should be the Times of India. The tag is span and the class is w_tle, if I'm not wrong, yes, w_tle. Now let me run this, and this time we definitely get a result; let me print out links. We have successfully extracted whatever is present over here, and now I'll print it out in a for loop and see what we get. As you see, we've got all of these individual headlines which were present on the page; whatever was inside a span tag with class equal to w_tle, we have extracted all of it (I'll put a small sketch of this step below). A viewer is asking, can we share the recorded video link? Once the live session is done, the video will be available on YouTube itself; we will not take it down, so you don't have to worry about that.
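And the Times of India variant, sketched under the assumption that the home page URL and the w_tle class read out in the session are still valid:

    import sys
    import time
    import requests
    from bs4 import BeautifulSoup

    url = "https://timesofindia.indiatimes.com/"

    try:
        page = requests.get(url)
    except Exception as e:
        error_type, error_obj, error_info = sys.exc_info()
        print("ERROR FOR LINK:", url)
        print(error_type, "Line:", error_info.tb_lineno)

    time.sleep(2)
    soup = BeautifulSoup(page.text, "html.parser")

    # the one-line headlines sit in <span class="w_tle"> per the inspection done in the session
    links = soup.find_all("span", attrs={"class": "w_tle"})
    for i in links:
        print(i.text)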
There's another question asking whether knowledge of Python is enough for web scraping, and what the use of it is if we can simply do the same work using copy and paste. I've just shown you a very simple demo of web scraping, but many times you want to automate a particular task, and that is where this is very useful. If I wanted to Ctrl+C, Ctrl+V all of this news manually, say hundreds of these small one-liners, it would take up a lot of time; with the help of Beautiful Soup, that automation becomes very easy. And yes, Ctrl+Shift+I and Ctrl+Shift+C are very handy as well. A viewer is asking, can we scrape data from websites in other regional languages? If the text is present in the tags, then I believe we can do it; personally I have never tried to extract data in other languages, so I can't really tell you for sure. I'm just going through the questions, guys; if you have any more, put them in the chat, we've got a couple more minutes before we end. Before we wrap up, let me remind you again that we have a free learning platform called Great Learning Academy; you can go ahead and check it out, the link is in the chat. We have a lot of courses there, all taught by industry experts, and some of the faculty have PhDs and master's degrees from schools such as Stanford, as well as the IITs and NITs in India. I believe most of you have seen the tutorials by Dr. Sarkar and Professor Mukesh on our YouTube channel; if you want their full courses, or more courses by these esteemed faculty members, you can find them on Great Learning Academy as well. You will also get a course completion certificate which you can add to your LinkedIn page or your resume, and that will definitely help you get an advantage over other folks appearing for the same interview. If you'd rather learn through an app, we've got the Great Learning app for that. If you are new to our channel, I'd request you to subscribe and click the bell icon, as it'll encourage us to come up with more such live sessions. If you're still at home and want to upskill yourself, we'll be coming up with more such outcome-based live sessions on these Amazon reviews: right now we've just extracted the data, and in the next live session we'll be doing some sentiment analysis on the Amazon reviews, finding out whether the reviews are positive, negative or neutral. You can check out all of these live sessions on the Great Learning page. A viewer is asking, are all the courses available on the Great Learning app? Yes, whatever courses you find on the Great Learning Academy website, you will find on the app as well; we created the app for your ease of use. All right, I believe I've covered most of the questions today, and we have covered three examples, so I'll take your leave for today. Thank you very much for attending this session; I hope you are keeping safe and maintaining proper social distancing. Let's meet in the next one.
Info
Channel: Great Learning
Views: 14,118
Rating: 4.9620252 out of 5
Keywords: data scraping with python, web scraping with python, what is data scraping, what is web scraping python, data scraping tutorial, web scraping tutorial, python programming language, python for beginners, introduction to web scraping, learn web scraping with python, learn web scraping, great learning, great lakes
Id: ktNAhLp0ymg
Length: 50min 1sec (3001 seconds)
Published: Wed Jul 08 2020