• UraniumBlazer@lemm.ee · 1 year ago

    Sorry, I’m ignorant in this matter. Why exactly would you want to scrape websites aside from collecting data for ML? What kind of irreplaceable API are you using? Someone please educate me here.

    • coltorl · 1 year ago

      An API might cost a lot of money for the number of requests you want to send. The API may not include some fields in the data you want. The API is rate limited; scraping might not be. The API requires agreement to usage terms; scraping does not (though the recent LinkedIn scraping case might weaken that argument).
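
      For the rate-limit point, here is a rough sketch of the retry logic you typically end up wrapping around an official API (Python with requests; the endpoint URL is a made-up placeholder, not a real API):

          import time
          import requests

          API_URL = "https://api.example.com/v1/items"  # hypothetical endpoint

          def fetch_page(page: int) -> dict:
              """Call the API, backing off whenever it tells us to slow down."""
              while True:
                  resp = requests.get(API_URL, params={"page": page})
                  if resp.status_code == 429:  # rate limited
                      # Respect Retry-After if the server sends it, otherwise wait a minute.
                      time.sleep(int(resp.headers.get("Retry-After", 60)))
                      continue
                  resp.raise_for_status()
                  return resp.json()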

      • Kuma@lemmy.world · 1 year ago

        This kinda reminds me of pirating vs. paying. Using an API = you know it will always have the same structure and you will get the data you asked for; if that ever changes, you will be notified, or they will version their API. There is usually good documentation, and you can always ask for help.

        Scraping = you need to scout the whole website yourself. You need to keep up to date with the website's structure and make sure they haven't added ways to block bots (scraping). Error handling is a lot more intense on your end: missing content, hidden content, querying for data. The website may not follow the same standards/structure everywhere, so you need checks for when to use X to get Y. The data may need multiple requests because, for example, they don't show all the user settings on one page (whereas an API call would return them all), or it's an AJAX page and you need to run JavaScript and click buttons whose id, class, or text may change, and data may only load when you do X with JavaScript, so you need to emulate the webpage.
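
        To give a feel for those checks, here is a rough sketch in Python with requests and BeautifulSoup; the URL, selectors, and page structure are all invented for illustration:

            import requests
            from bs4 import BeautifulSoup

            def scrape_user_settings(url: str) -> dict:
                """Scrape a (hypothetical) settings page, defending against layout changes."""
                resp = requests.get(url, timeout=30)
                resp.raise_for_status()
                soup = BeautifulSoup(resp.text, "html.parser")

                # The site may not use one consistent structure, so try more than one selector.
                container = soup.select_one("table.settings") or soup.select_one("div#settings")
                if container is None:
                    raise RuntimeError("settings section not found - did the layout change?")

                settings = {}
                for row in container.select("tr, div.setting-row"):
                    cells = row.find_all(["td", "span"])
                    if len(cells) >= 2:  # skip rows that aren't key/value pairs
                        settings[cells[0].get_text(strip=True)] = cells[1].get_text(strip=True)
                return settings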

        So my guess is that scraping is used most often when you only need to fetch simple data structures and you are fine with cleaning up the data afterwards: grabbing all the text/images on a page, checking whether a page has been updated, or just saving the whole page like the Wayback Machine does.
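
        For the "has the page been updated" case, something as small as this can be enough (Python; the URL is just a placeholder):

            import hashlib
            import requests

            def page_fingerprint(url: str) -> str:
                """Hash the page body so a later run can tell whether it changed."""
                resp = requests.get(url, timeout=30)
                resp.raise_for_status()
                return hashlib.sha256(resp.content).hexdigest()

            old = page_fingerprint("https://example.com/some-page")
            # ...persist `old` somewhere and compare against it on the next run...
            new = page_fingerprint("https://example.com/some-page")
            if new != old:
                print("page changed since last check")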

        • TheHarpyEagle@lemmy.world · 1 year ago

          As someone who used to scrape government websites for a living (with their permission, because they'd rather have us break their single 100-year-old server than give us a CSV), I can confirm that maintaining scraping scripts is a huge pain in the ass.

          • Kuma@lemmy.world · 1 year ago

            Ooof, I am glad you don't have to do it anymore. I have a customer who is in the same situation. The company with the site was also OK with it (it was a running joke that "this [bot] is our fastest user"), but it was very sketchy because you had to log in as someone to run the bot. Thankfully they always told us when they made changes, so we were never caught by surprise.

      • olympicyes@lemmy.world · 1 year ago

        My understanding is that the result of the LinkedIn case is that you can scrape data you have permission to view, but not access data you were not intended to see. The end result is that clickwrap agreements are unenforceable.