3  Web scraping

3.1 Getting Started

Important tool

Our approach to web scraping relies on the Chrome browser and an extension called SelectorGadget. Download them here:

Data Acquisition

Reading The Student Life

How often do you read The Student Life?
a. Every day
b. 3-5 times a week
c. Once a week
d. Rarely

What do you think is the most common word in the titles of The Student Life opinion pieces?

Analyzing The Student Life

Using the titles of the opinion pieces from The Student Life website, we can figure out the most common words.

How do you think the sentiments in opinion pieces in The Student Life compare across authors?
Roughly the same?
Wildly different?
Somewhere in between?

Using the first paragraph of each opinion article, we can see which authors write the most positive and most negative articles.

All of this analysis is done in R! {.centered}

(mostly) with tools you already know!

Common words in The Student Life titles {.smaller}

Code for the earlier plot:

data(stop_words)  # from tidytext
tsl_opinion_titles |>
  tidytext::unnest_tokens(word, title) |>
  anti_join(stop_words) |>
  count(word, sort = TRUE) |>
  slice_head(n = 20) |>
  mutate(word = fct_reorder(word, n)) |>
  ggplot(aes(y = word, x = n, fill = log(n))) +
  geom_col(show.legend = FALSE) +
  theme_minimal(base_size = 16) +
  labs(
    x = "Number of mentions",
    y = "Word",
    title = "The Student Life - Opinion pieces",
    subtitle = "Common words in the 500 most recent opinion pieces",
    caption = "Source: Data scraped from The Student Life on Nov 4, 2024"
  ) +
  theme(
    plot.title.position = "plot",
    plot.caption = element_text(color = "gray30")
  )

Avg sentiment scores of first paragraph {.smaller}

Code for the earlier plot:

afinn_sentiments <- get_sentiments("afinn")  # need tidytext and textdata
tsl_opinion_titles |>
  tidytext::unnest_tokens(word, first_p) |>
  anti_join(stop_words) |>
  left_join(afinn_sentiments) |> 
  group_by(authors, title) |>
  summarize(total_sentiment = sum(value, na.rm = TRUE), .groups = "drop") |>
  group_by(authors) |>
  summarize(
    n_articles = n(),
    avg_sentiment = mean(total_sentiment, na.rm = TRUE)
  ) |>
  filter(n_articles > 1 & !is.na(authors)) |>
  arrange(desc(avg_sentiment)) |>
  slice(c(1:10, 69:78)) |>
  mutate(
    authors = fct_reorder(authors, avg_sentiment),
    neg_pos = if_else(avg_sentiment < 0, "neg", "pos"),
    label_position = if_else(neg_pos == "neg", 0.25, -0.25)
  ) |>
  ggplot(aes(y = authors, x = avg_sentiment)) +
  geom_col(aes(fill = neg_pos), show.legend = FALSE) +
  geom_text(
    aes(x = label_position, label = authors, color = neg_pos),
    hjust = c(rep(1,10), rep(0, 10)),
    show.legend = FALSE,
    fontface = "bold"
  ) +
  geom_text(
    aes(label = round(avg_sentiment, 1)),
    hjust = c(rep(1.25,10), rep(-0.25, 10)),
    color = "white",
    fontface = "bold"
  ) +
  scale_fill_manual(values = c("neg" = "#4d4009", "pos" = "#FF4B91")) +
  scale_color_manual(values = c("neg" = "#4d4009", "pos" = "#FF4B91")) +
  scale_x_continuous(breaks = -5:5, minor_breaks = NULL) +
  scale_y_discrete(breaks = NULL) +
  coord_cartesian(xlim = c(-5, 5)) +
  labs(
    x = "negative  ←     Average sentiment score (AFINN)     →  positive",
    y = NULL,
    title = "The Student Life - Opinion pieces\nAverage sentiment scores of first paragraph by author",
    subtitle = "Top 10 average positive and negative scores",
    caption = "Source: Data scraped from The Student Life on Nov 4, 2024"
  ) +
  theme_void(base_size = 16) +
  theme(
    plot.title = element_text(hjust = 0.5),
    plot.subtitle = element_text(hjust = 0.5, margin = unit(c(0.5, 0, 1, 0), "lines")),
    axis.text.y = element_blank(),
    plot.caption = element_text(color = "gray30")
  )

3.2 Where is the data coming from?

tsl_opinion_titles
# A tibble: 500 × 4
  title                                      authors date                first_p
  <chr>                                      <chr>   <dttm>              <chr>  
1 Elon Musk’s million-dollar-a-day rewards … Celest… 2024-11-01 16:27:00 have y…
2 The politics behind apolitical acts        Eric Lu 2024-11-01 16:21:00 while …
3 In Defense of the Pomona College Judicial… Henri … 2024-11-01 16:15:00 former…
4 ‘Yakking’ isn’t a canon event, party resp… Kabir … 2024-11-01 16:10:00 whirri…
5 The ‘if he wanted to, he would’ mentality… Tess M… 2024-11-01 16:01:00 ladies…
6 You can’t silence us: A united front agai… Outbac… 2024-10-25 11:23:00 in the…
# ℹ 494 more rows

3.3 Web scraping

3.3.1 Scraping the web: what? why?

  • An increasing amount of data is available on the web

  • These data are often provided in an unstructured format: you can always copy & paste, but doing so is time-consuming and prone to errors

  • Web scraping is the process of extracting this information automatically and transforming it into a structured dataset

  • Two different scenarios:

    • Screen scraping: extract data from the source code of a website, with an HTML parser (easy) or regular expression matching (less easy).

    • Web APIs (application programming interfaces): the website offers a set of structured HTTP requests that return JSON or XML files.
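In the second scenario, a web API returns machine-readable JSON that parses directly into R structures. A minimal sketch, assuming the jsonlite package is installed; the JSON string here is made up for illustration:

```r
library(jsonlite)

# a made-up JSON payload of the kind a web API might return
json <- '{"title": "Example piece", "authors": ["A. Author"], "date": "2024-11-04"}'

parsed <- fromJSON(json)
parsed$title
```

fromJSON() converts JSON objects to named lists and JSON arrays to vectors, so the result is already close to tidy — no HTML parsing required.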

3.3.2 Hypertext Markup Language

Most of the data on the web is available as HTML. While it is structured (hierarchical), it is often not available in a form useful for analysis (flat / tidy).

<html>
  <head>
    <title>This is a title</title>
  </head>
  <body>
    <p align="center">Hello world!</p>
    <br/>
    <div class="name" id="first">John</div>
    <div class="name" id="last">Doe</div>
    <div class="contact">
      <div class="home">555-555-1234</div>
      <div class="home">555-555-2345</div>
      <div class="work">555-555-9999</div>
      <div class="fax">555-555-8888</div>
    </div>
  </body>
</html>

Some HTML elements

  • <html>: start of the HTML page
  • <head>: header information (metadata about the page)
  • <body>: everything that is on the page
  • <p>: paragraphs
  • <b>: bold
  • <table>: table
  • <div>: a container to group content together
  • <a>: the “anchor” element that creates a hyperlink

3.3.3 rvest

  • The rvest package makes basic processing and manipulation of HTML data straightforward
  • It is designed to work with pipelines built with |>
  • rvest.tidyverse.org

rvest hex logo

Core functions:

  • read_html() - read HTML data from a URL or character string.

  • html_elements() - select specified elements/tags from the HTML document using CSS selectors.

  • html_element() - select a single element/tag from the HTML document using CSS selectors.

  • html_table() - parse an HTML table into a data frame.

  • html_text() / html_text2() - extract element’s/tag’s text content.

  • html_name() - extract an element’s/tag’s name(s).

  • html_attrs() - extract all attributes.

  • html_attr() - extract attribute value(s) by name.

html, rvest, & xml2 {.smaller}

html <- 
'<html>
  <head>
    <title>This is a title</title>
  </head>
  <body>
    <p align="center">Hello world!</p>
    <br/>
    <div class="name" id="first">John</div>
    <div class="name" id="last">Doe</div>
    <div class="contact">
      <div class="home">555-555-1234</div>
      <div class="home">555-555-2345</div>
      <div class="work">555-555-9999</div>
      <div class="fax">555-555-8888</div>
    </div>
  </body>
</html>'
read_html(html)
{html_document}
<html>
[1] <head>\n<meta http-equiv="Content-Type" content="text/html; charset=UTF-8 ...
[2] <body>\n    <p align="center">Hello world!</p>\n    <br><div class="name" ...

3.3.4 Selecting elements

read_html(html) |> html_elements("p")
{xml_nodeset (1)}
[1] <p align="center">Hello world!</p>
read_html(html) |> html_elements("p") |> html_text()
[1] "Hello world!"
read_html(html) |> html_elements("p") |> html_name()
[1] "p"
read_html(html) |> html_elements("p") |> html_attrs()
[[1]]
   align 
"center" 
read_html(html) |> html_elements("p") |> html_attr("align")
[1] "center"
read_html(html) |> html_elements("div")
{xml_nodeset (7)}
[1] <div class="name" id="first">John</div>
[2] <div class="name" id="last">Doe</div>
[3] <div class="contact">\n      <div class="home">555-555-1234</div>\n       ...
[4] <div class="home">555-555-1234</div>
[5] <div class="home">555-555-2345</div>
[6] <div class="work">555-555-9999</div>
[7] <div class="fax">555-555-8888</div>
read_html(html) |> html_elements("div") |> html_text()
[1] "John"                                                                                  
[2] "Doe"                                                                                   
[3] "\n      555-555-1234\n      555-555-2345\n      555-555-9999\n      555-555-8888\n    "
[4] "555-555-1234"                                                                          
[5] "555-555-2345"                                                                          
[6] "555-555-9999"                                                                          
[7] "555-555-8888"                                                                          

3.3.5 CSS selectors

  • We will use a tool called SelectorGadget to help us identify the HTML elements of interest by constructing a CSS selector which can be used to subset the HTML document.

  • Some examples of basic selector syntax are shown below:

| Selector          | Example        | Description                                         |
|-------------------|----------------|-----------------------------------------------------|
| .class            | `.title`       | Select all elements with `class="title"`            |
| #id               | `#name`        | Select all elements with `id="name"`                |
| element           | `p`            | Select all `<p>` elements                           |
| element element   | `div p`        | Select all `<p>` elements inside a `<div>` element  |
| element>element   | `div > p`      | Select all `<p>` elements with `<div>` as a parent  |
| [attribute]       | `[class]`      | Select all elements with a class attribute          |
| [attribute=value] | `[class=title]`| Select all elements with `class="title"`            |
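The descendant selector (`div div`-style) can be tried on a small document. A minimal sketch, assuming rvest is installed; the HTML fragment is made up:

```r
library(rvest)

html <- '
  <div class="contact">
    <div class="home">555-555-1234</div>
  </div>
  <p>Hello world!</p>'

# "div div": select <div> elements that sit inside another <div>
inner <- read_html(html) |>
  html_elements("div div") |>
  html_text()
inner
```

Only the nested `<div class="home">` matches; the outer `<div class="contact">` and the `<p>` do not.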

CSS classes and ids

  • class and id are used to style elements (e.g., change their color!)

  • class can be applied to multiple different elements

  • id is unique to each element

read_html(html) |> html_elements(".name")
{xml_nodeset (2)}
[1] <div class="name" id="first">John</div>
[2] <div class="name" id="last">Doe</div>
read_html(html) |> html_elements("div.name")
{xml_nodeset (2)}
[1] <div class="name" id="first">John</div>
[2] <div class="name" id="last">Doe</div>
read_html(html) |> html_elements("#first")
{xml_nodeset (1)}
[1] <div class="name" id="first">John</div>

3.3.6 Text with html_text() vs. html_text2()

html <- read_html(
  "<p>  
    This is the first sentence in the paragraph.
    This is the second sentence that should be on the same line as the first sentence.<br>This third sentence should start on a new line.
  </p>"
)
html |> html_text()
[1] "  \n    This is the first sentence in the paragraph.\n    This is the second sentence that should be on the same line as the first sentence.This third sentence should start on a new line.\n  "
html |> html_text2()
[1] "This is the first sentence in the paragraph. This is the second sentence that should be on the same line as the first sentence.\nThis third sentence should start on a new line."

3.3.7 HTML tables with html_table()

html_table <- 
'<html>
  <head>
    <title>This is a title</title>
  </head>
  <body>
    <table>
      <tr> <th>a</th> <th>b</th> <th>c</th> </tr>
      <tr> <td>1</td> <td>2</td> <td>3</td> </tr>
      <tr> <td>2</td> <td>3</td> <td>4</td> </tr>
      <tr> <td>3</td> <td>4</td> <td>5</td> </tr>
    </table>
  </body>
</html>'
read_html(html_table) |>
  html_elements("table") |> 
  html_table()
[[1]]
# A tibble: 3 × 3
      a     b     c
  <int> <int> <int>
1     1     2     3
2     2     3     4
3     3     4     5

3.3.8 SelectorGadget

SelectorGadget (selectorgadget.com) is a JavaScript-based tool that helps you interactively build an appropriate CSS selector for the content you are interested in.

3.3.9 Recap

  • Use SelectorGadget to identify the tags for the elements you want to grab
  • Use the rvest R package to first read the entire page into R and then parse the object you’ve read in, extracting the elements you’re interested in
  • Put the components together in a data frame (a tibble) and analyze it like you analyze any other data
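The recap steps can be sketched end-to-end on the toy HTML from earlier; a minimal example, assuming rvest and tibble are installed:

```r
library(rvest)
library(tibble)

html <- '
  <div class="name" id="first">John</div>
  <div class="name" id="last">Doe</div>'

# 1. read in the page; 2. parse out the elements; 3. assemble a tibble
page <- read_html(html)

people <- tibble(
  id   = page |> html_elements(".name") |> html_attr("id"),
  name = page |> html_elements(".name") |> html_text()
)
people
```

Because each selector returns elements in document order, the columns line up row by row — the same idea the full pipeline below relies on.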

3.4 Plan for web scraping

  1. Read in the entire page
  2. Scrape opinion title and save as title
  3. Scrape author and save as author
  4. Scrape date and save as date
  5. Create a new data frame called tsl_opinion with variables title, author, and date

3.4.1 Read in the entire page

tsl_page <- read_html("https://tsl.news/category/opinions/")
tsl_page
{html_document}
<html lang="en-US">
[1] <head>\n<meta http-equiv="Content-Type" content="text/html; charset=UTF-8 ...
[2] <body class="archive category category-opinions category-2244 custom-back ...
typeof(tsl_page)
[1] "list"
class(tsl_page)
[1] "xml_document" "xml_node"    
  • we need to convert this into something more familiar, like a data frame

3.4.2 Scrape title and save as title

tsl_page |> 
  html_elements(".entry-title a") 
{xml_nodeset (10)}
 [1] <a href="https://tsl.news/opinion-on-poppin-and-lockin-how-i-developed-a ...
 [2] <a href="https://tsl.news/opinion-in-defense-of-celebrity-body-autonomy/ ...
 [3] <a href="https://tsl.news/opinion-neoliberalism-handed-a-nazis-son-chile ...
 [4] <a href="https://tsl.news/opinion-social-media-makes-you-sexless-and-bor ...
 [5] <a href="https://tsl.news/opinion-rural-america-deserves-its-place-in-th ...
 [6] <a href="https://tsl.news/opinion-maga-conservatism-has-no-place-in-chri ...
 [7] <a href="https://tsl.news/opinion-hey-jonas-brothers-you-man-children-ca ...
 [8] <a href="https://tsl.news/opinion-3-years-ago-the-claremont-colleges-opt ...
 [9] <a href="https://tsl.news/opinion-the-mexicanization-of-american-politic ...
[10] <a href="https://tsl.news/opinion-democrats-cannot-bow-to-trump-in-the-g ...
title <- tsl_page |> 
  html_elements(".entry-title a") |> 
  html_text()
title
 [1] "OPINION: On Poppin’ and Lockin’: How I developed a breakdancing addiction"                                                          
 [2] "OPINION: Celebrities can lose weight and still preach body positivity"                                                              
 [3] "OPINION: Neoliberalism handed a Nazi’s son Chile’s presidency"                                                                      
 [4] "OPINION: Social media makes you sexless and boring"                                                                                 
 [5] "OPINION: Rural America deserves its place in the Abundance agenda"                                                                  
 [6] "OPINION: MAGA conservatism has no place in Christianity"                                                                            
 [7] "OPINION: Hey Jonas Brothers, you man-children cannot revive the dying boy band trend!"                                              
 [8] "OPINION: Three years ago, the Claremont Colleges opted out of 100 percent renewable energy purchasing; this fall, they must opt in."
 [9] "OPINION: The Mexicanization of American politics brought to you by Trump and the libs"                                              
[10] "OPINION: Democrats cannot bow to Trump in the government shutdown fight"                                                            
title <- title |> 
  str_remove("OPINION: ")

title
 [1] "On Poppin’ and Lockin’: How I developed a breakdancing addiction"                                                          
 [2] "Celebrities can lose weight and still preach body positivity"                                                              
 [3] "Neoliberalism handed a Nazi’s son Chile’s presidency"                                                                      
 [4] "Social media makes you sexless and boring"                                                                                 
 [5] "Rural America deserves its place in the Abundance agenda"                                                                  
 [6] "MAGA conservatism has no place in Christianity"                                                                            
 [7] "Hey Jonas Brothers, you man-children cannot revive the dying boy band trend!"                                              
 [8] "Three years ago, the Claremont Colleges opted out of 100 percent renewable energy purchasing; this fall, they must opt in."
 [9] "The Mexicanization of American politics brought to you by Trump and the libs"                                              
[10] "Democrats cannot bow to Trump in the government shutdown fight"                                                            

3.4.3 Scrape author and save as author

author <- tsl_page |> 
  html_elements(".author") |> 
  html_text()
author
 [1] "By Leili Kamali"                                                                      
 [2] "Leili Kamali"                                                                         
 [3] "By Joelle Rudolf"                                                                     
 [4] "Joelle Rudolf"                                                                        
 [5] "By Rafael Hernandez Guerrero"                                                         
 [6] "Rafael Hernandez Guerrero"                                                            
 [7] "By Kate Eisenreich"                                                                   
 [8] "Kate Eisenreich"                                                                      
 [9] "By Caleb Rasor"                                                                       
[10] "Caleb Rasor"                                                                          
[11] "By Ansley Kang"                                                                       
[12] "Ansley Kang"                                                                          
[13] "By Joelle Rudolf"                                                                     
[14] "Joelle Rudolf"                                                                        
[15] "By Annika Weber, Wilbur Moffitt, Lucy Reed and 5C Environmental Justice Campaign Team"
[16] "Annika Weber"                                                                         
[17] "Wilbur Moffitt"                                                                       
[18] "Lucy Reed"                                                                            
[19] "5C Environmental Justice Campaign Team"                                               
[20] "By Rafael Hernandez Guerrero"                                                         
[21] "Rafael Hernandez Guerrero"                                                            
[22] "By Nicholas Steinman"                                                                 
[23] "Nicholas Steinman"                                                                    
author <- tsl_page |> 
  html_elements(".author") |> 
  html_text() |> 
  tibble() |> 
  set_names(nm = "authors") |> 
  filter(str_detect(authors, "By "))
author 
# A tibble: 10 × 1
  authors                     
  <chr>                       
1 By Leili Kamali             
2 By Joelle Rudolf            
3 By Rafael Hernandez Guerrero
4 By Kate Eisenreich          
5 By Caleb Rasor              
6 By Ansley Kang              
# ℹ 4 more rows
author <- author |> 
  mutate(authors = str_replace(authors, "By ", ""))

author
# A tibble: 10 × 1
  authors                  
  <chr>                    
1 Leili Kamali             
2 Joelle Rudolf            
3 Rafael Hernandez Guerrero
4 Kate Eisenreich          
5 Caleb Rasor              
6 Ansley Kang              
# ℹ 4 more rows

3.4.4 Scrape date and save as date

date <- tsl_page |> 
  html_elements(".published") |> 
  html_text()
date
 [1] "October 10, 2025 2:47 am"   "October 10, 2025 2:41 am"  
 [3] "October 10, 2025 1:15 am"   "October 10, 2025 12:46 am" 
 [5] "October 10, 2025 12:41 am"  "October 3, 2025 1:58 am"   
 [7] "October 3, 2025 1:17 am"    "October 3, 2025 1:05 am"   
 [9] "October 2, 2025 11:57 pm"   "September 26, 2025 2:18 am"
date <- date |> 
  lubridate::mdy_hm(tz = "America/Los_Angeles")
date
 [1] "2025-10-10 02:47:00 PDT" "2025-10-10 02:41:00 PDT"
 [3] "2025-10-10 01:15:00 PDT" "2025-10-10 00:46:00 PDT"
 [5] "2025-10-10 00:41:00 PDT" "2025-10-03 01:58:00 PDT"
 [7] "2025-10-03 01:17:00 PDT" "2025-10-03 01:05:00 PDT"
 [9] "2025-10-02 23:57:00 PDT" "2025-09-26 02:18:00 PDT"

3.4.5 Create a new data frame

tsl_opinion <- tibble(
    title,
    author,
    date
)

tsl_opinion
# A tibble: 10 × 3
  title                                              authors date               
  <chr>                                              <chr>   <dttm>             
1 On Poppin’ and Lockin’: How I developed a breakda… Leili … 2025-10-10 02:47:00
2 Celebrities can lose weight and still preach body… Joelle… 2025-10-10 02:41:00
3 Neoliberalism handed a Nazi’s son Chile’s preside… Rafael… 2025-10-10 01:15:00
4 Social media makes you sexless and boring          Kate E… 2025-10-10 00:46:00
5 Rural America deserves its place in the Abundance… Caleb … 2025-10-10 00:41:00
6 MAGA conservatism has no place in Christianity     Ansley… 2025-10-03 01:58:00
# ℹ 4 more rows

3.4.6 map() over multiple pages

tsl_opinions <- function(i) {
  tsl_page <- read_html(paste0("https://tsl.news/category/opinions/page/", i))
  
  title <- tsl_page |> 
    html_elements(".entry-title a") |> 
    html_text() |> 
    str_remove("OPINION: ")
  
  author <- tsl_page |> 
    html_elements(".author") |> 
    html_text() |> 
    tibble() |> 
    set_names(nm = "authors") |> 
    filter(str_detect(authors, "By ")) |> 
    mutate(authors = str_replace(authors, "By ", ""))
  
  date <- tsl_page |> 
    html_elements(".published") |> 
    html_text() |> 
    lubridate::mdy_hm(tz = "America/Los_Angeles")
  
  first_p <- tsl_page |> 
    html_elements(".entry-content p") |> 
    html_text() |> 
    tolower()
  
  tibble(
    title,
    author,
    date,
    first_p
  )
}

tsl_opinion_titles <- 1:50 |> 
  purrr::map(tsl_opinions) |> 
  list_rbind()
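One caution: the call above fires 50 requests back to back. A hypothetical wrapper that pauses between requests is sketched below; the names politely() and fake_scrape() are made up, with fake_scrape() standing in for tsl_opinions() so the sketch runs without network access:

```r
library(purrr)

# hypothetical helper: wraps a per-page scraping function so there is a
# pause between successive requests
politely <- function(scrape_page, delay = 1) {
  function(i) {
    Sys.sleep(delay)
    scrape_page(i)
  }
}

# stand-in for tsl_opinions(), so the example needs no network access
fake_scrape <- function(i) data.frame(page = i)

results <- map(1:3, politely(fake_scrape, delay = 0.1)) |> list_rbind()
results$page
```

The same wrapper could be applied to tsl_opinions() itself, e.g. `map(1:50, politely(tsl_opinions))`, to be gentler on the server.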

3.5 Web scraping considerations

3.5.1 Check if you are allowed!

library(robotstxt)
paths_allowed("https://tsl.news/category/opinions/")
[1] TRUE
paths_allowed("http://www.facebook.com")
[1] FALSE

3.5.2 Ethics: “Can you?” vs “Should you?”

3.5.3 Challenges: Unreliable formatting

3.5.4 Challenges: Data broken into many pages

3.6 robots.txt

robots.txt is a file that some websites publish to clarify what can and cannot be scraped, along with other constraints on scraping. When a website publishes a robots.txt file, we need to comply with the information in it for both ethical and legal reasons.

Tutorial about robots.txt.
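A hypothetical robots.txt might look like the following (illustrative only, not any site's actual file): the rules apply per user agent, Disallow marks paths that must not be crawled, and Crawl-delay asks crawlers to pause between requests.

```
User-agent: *
Disallow: /wp-admin/
Crawl-delay: 10

User-agent: BadBot
Disallow: /
```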

3.7 Reflection questions

3.8 Ethics considerations