With great power comes great responsibility: a large part of the web is built on data, and on services that scrape those data. Now that we are starting to scrape ourselves, we need to think about how to apply those skills without becoming a burden to the rest of the internet.
Find sources on ethical web scraping - some readings that might help you get started with that are:
Do not become a burden to the website you are scraping. If a public API already exposes the desired data, there is no need to scrape at all. Otherwise, bow to the host and get permission first; after that, only a nod is needed for each subsequent request (a sketch follows this list).
Be open about the scraper’s identity by providing a user agent string, and respond if the website owner reaches out.
Respect the intellectual property of the website. Keep only the data necessary for the project, do not pass it off as your own, and give credit to the website.
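A minimal sketch of these etiquette points with the polite package; the user-agent string, contact address, and second article path below are illustrative assumptions, not part of the original exercise.

library(polite)

# bow() introduces the scraper to the host: it reads robots.txt, respects the
# crawl delay, and identifies us through the user agent string.
session <- bow(
  "https://en.wikipedia.org/wiki/Hummingbird",
  user_agent = "course exercise scraper (contact: your-email@example.com)"
)
session

# For further pages on the same host, a nod() is enough: it re-checks
# permissions for the new path without starting a new session.
session <- nod(session, path = "wiki/Bee_hummingbird")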
What is a robots.txt file? Identify one instance and explain what it allows/prevents.
From JAMI @ EMPIRICAL: a robots.txt file tells web-crawling software which parts of the website it is (or is not) allowed to visit. It is part of the Robots Exclusion Protocol (REP), a group of web standards created to regulate how robots crawl the web.
The instance here is Overleaf’s robots.txt: it disallows all crawlers (robots.txt applies to bots, not to logged-in users) from the project pages, the GitHub repo pages, the recurly.com pages, and the set-password page. It also bans AhrefsBot, XoviBot, RankSonicBot, and SMTBot from visiting the Overleaf website at all.
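A quick way to inspect a robots.txt file and check it programmatically; the robotstxt package used here is an extra dependency not needed for the exercise below, and the "/project" path is an illustrative guess at one of the disallowed paths.

library(robotstxt)

# Look at the raw file.
readLines("https://www.overleaf.com/robots.txt")

# Is a given path allowed for a generic crawler ("*")?
paths_allowed(paths = "/project", domain = "www.overleaf.com", bot = "*")

# Is the site open at all to one of the explicitly banned bots?
paths_allowed(paths = "/", domain = "www.overleaf.com", bot = "AhrefsBot")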
Identify a website that you would like to scrape (or use an example from class) and implement a scrape using the polite package.
I want to scrape the Wikipedia page on hummingbirds to retrieve their scientific classification.
library(polite)
library(rvest)
library(purrr)

# Bow to Wikipedia: consult robots.txt and open a polite session.
session <- bow("https://en.wikipedia.org/wiki/Hummingbird", force = TRUE)

# Scrape the page within the negotiated session.
result <- scrape(session)

# Pull out the taxonomy infobox ("infobox biota") and parse it as a table.
info <- result %>%
  html_elements(xpath = '//table[@class="infobox biota"]') %>%
  html_table()

info[[1]]
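The first element of info should be the infobox itself. A small tidying sketch, assuming the infobox parses into the usual two-column layout (rank labels such as "Kingdom:" in the first column, names in the second); the column count and the trailing colon are assumptions about how html_table() renders this particular page.

library(dplyr)

# Keep only the rank/name rows of the classification (rows whose first
# column ends in a colon, e.g. "Kingdom:"), under the assumptions above.
info[[1]] %>%
  setNames(c("rank", "name")) %>%
  filter(grepl(":$", rank))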