This AI Agent can Scrape ANY WEBSITE!!!

Published 2024-05-23
In this video, we'll create a Python script together that can scrape any website with only minor modifications.

_______ 👇 Links 👇 _______

๐Ÿค Discord: discord.gg/jUe948xsv4
๐Ÿ’ผ ๐—Ÿ๐—ถ๐—ป๐—ธ๐—ฒ๐—ฑ๐—œ๐—ป: www.linkedin.com/in/reda-marzouk-rpa/
๐Ÿ“ธ ๐—œ๐—ป๐˜€๐˜๐—ฎ๐—ด๐—ฟ๐—ฎ๐—บ: www.instagram.com/redamarzouk.rpa/
๐Ÿค– ๐—ฌ๐—ผ๐˜‚๐—ง๐˜‚๐—ฏ๐—ฒ: ย ย ย /ย @redamarzoukย ย 
Website: www.automation-campus.com/

FireCrawl: www.firecrawl.dev/
Github repo: github.com/redamarzouk/Scraping_Agent
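
For a quick taste, scraping a page with FireCrawl's Python SDK looks roughly like the sketch below. This is a minimal example assuming the firecrawl-py package; the exact return fields can differ between SDK versions, so check firecrawl.dev for the current API.

# pip install firecrawl-py
from firecrawl import FirecrawlApp

# The API key comes from your FireCrawl dashboard (link above)
app = FirecrawlApp(api_key="fc-YOUR_API_KEY")

# scrape_url fetches the page and returns it as LLM-ready markdown
result = app.scrape_url("https://example.com")
print(result.get("markdown", ""))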


_______ 👇 Content 👇 _______

Introduction to Web Scraping with AI - 0:00
Advantages Over Traditional Methods - 0:36
Overview of FireCrawl Library - 1:13
Setting Up FireCrawl Account and API Key - 1:24
Scraping with FireCrawl: Example and Explanation - 1:36
Universal Web Scraping Agent Workflow - 2:33
Setting Up the Project in VS Code - 3:52
Writing the Scrape Data Function - 5:41
Formatting and Saving Data - 6:58
Running the Code: First Example - 10:14
Handling Large Data and Foreign Languages - 13:17
Conclusion and Recap - 17:21
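
The chapters above boil down to three steps: scrape, format, save. Here is a rough sketch of that flow; the function names are illustrative rather than the repo's actual ones, and the formatting step assumes the OpenAI client as one possible LLM backend.

import json
from firecrawl import FirecrawlApp
from openai import OpenAI

def scrape_data(url: str) -> str:
    """Fetch a page as markdown via FireCrawl (illustrative wrapper name)."""
    app = FirecrawlApp(api_key="fc-YOUR_API_KEY")
    return app.scrape_url(url).get("markdown", "")

def format_data(markdown: str, fields: list[str]) -> dict:
    """Ask an LLM to pull the requested fields out of the scraped markdown."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract the requested fields and reply with JSON only."},
            {"role": "user", "content": f"Fields: {fields}\n\n{markdown}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

def save_data(data: dict, path: str = "output.json") -> None:
    """Persist the structured result to disk."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)

Chained together as save_data(format_data(scrape_data(url), ["title", "price"])), this gives the scrape-format-save loop the video walks through.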

All Comments (21)
  • @redamarzouk
    Hey everyone! 😊 I'm curious about your thoughts: was the explanation and flow of the video too fast, or was it clear and to the point?
  • @todordonev
    Web scraping as it is right now is here to stay, and AI will not replace it (it can just enhance it in certain scenarios).

    First of all, the term "scraping" is tossed around everywhere and used vaguely. When you "scrape", all you do is move information from one place to another, for example getting a website's HTML into your computer's memory. Then comes "parsing", which is extracting different entities from that information, for example extracting a product's price and title from the HTML we "scraped". These are separate actions: they are not interchangeable, one is not more important than the other, and one can't work without the other. Both come with their own challenges. What these kinds of videos promise to fix is the "parsing" part. It doesn't matter how advanced AI gets, there is only ONE way to "scrape" information, and that is to make a connection to the place the information is stored (whether it's an HTTP request, browser navigation, an RSS feed request, an FTP download, or a stream of data). It's just semi-automated in the background.

    Now that we have the fundamentals, let me clearly state this: for the vast majority (99%) of cases, "web scraping with AI" is a waste of time, money, resources, and our environment.

    Time: it's deceiving. AI promises to extract information with a "simple prompt", but you'll need to iterate over that prompt quite a few times to get a somewhat reliable data-parsing solution. In that time you could have built a simple Python script to extract the data required. More complicated scenarios will affect both the AI and the traditional route.

    Money: you either use third-party services for LLM inference or you self-host an LLM. Both solutions will, in the long term, be orders of magnitude more expensive than a traditional Python script.

    Resources: a lot of people don't realize this, but running an LLM for cases where an LLM is not needed is extremely wasteful. I've run scrapers on old computers, Raspberry Pis, and serverless functions; that is a speck of dust in hardware requirements compared to running an LLM on an industrial-grade computer with powerful GPU(s).

    Environment: given the resources needed, this affects our environment greatly, as new and more powerful hardware needs to be invented, manufactured, and run. For the people that don't know, AI inference machines (whether self-hosted or third-party) are powerhouses, so a lot of watt-hours are wasted, fossil fuels burnt, etc.

    Reliability: "parsing" information with AI is quite unreliable, mainly because of the nature of how LLMs work, but also because many more points of failure are introduced (information has to travel multiple times between services, LLM models change, you hit usage and/or budget limits, LLMs experience high loads and inference speed suffers or it fails altogether, etc.).

    Finally: most "AI extraction" is just marketing BS letting you believe that you'll achieve something that requires a human brain and workforce with just "a simple prompt". I've been doing web automation and data extraction for more than a decade for a living. I've also started incorporating AI in some rare cases, where traditional methods just don't cut it.

    All that being said, for the last 1% of cases where AI data parsing does make sense, here's what I typically do (after the information is already scraped): 1. First I remove the vast majority of the HTML. If you need an article from a website, it's not going to be in the
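
As a concrete sketch of the pruning step described above (strip the boilerplate HTML before parsing it, or before handing it to an LLM in the rare cases that warrant one), a traditional scrape-then-parse pass might look like this. The tag list is an assumption to tune per site.

# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

# "Scraping": move the information to us -- here, one HTTP request
html = requests.get("https://example.com/article", timeout=30).text

# "Parsing": extract entities from it -- here, with BeautifulSoup
soup = BeautifulSoup(html, "html.parser")

# Prune the bulk of the HTML that can't contain the article
for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
    tag.decompose()

# What's left is a far smaller payload, cheap to parse directly
text = soup.get_text(separator="\n", strip=True)
print(text[:500])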