- To navigate in this HTML document tree we can use the methods `.contents()` (to access direct children nodes), `.parent()` (to access the parent node), `.next_sibling()`, and `.previous_sibling()` (to access the siblings of a node) methods. For example, if we want to access the second row of the table, which is the second child of the table element we could use the following code.
+ To navigate this HTML document tree we can use the following properties of the `bs4.element.Tag` object: `.contents` (to access direct children nodes), `.parent` (to access the parent node), and `.next_sibling` and `.previous_sibling` (to access the siblings of a node). For example, if we want to access the second row of the table, which is the second child of the table element, we could use the following code.

```python
# The second [1 in Python indexing] child of our table element
second_row = table.contents[1]
```
@@ -424,10 +426,9 @@ print(req.status_code)
::::::::::::::::::::::::::::::::::::: keypoints
- - Use `.md` files for episodes when you want static content
- - Use `.Rmd` files for episodes when you need to generate output
- - Run `sandpaper::check_lesson()` to identify any issues with your lesson
- - Run `sandpaper::build_lesson()` to preview your lesson locally
+ - We can get the HTML behind any website using the `requests` package: `requests.get('website_url').text`.
+ - An HTML document is a nested tree of elements. Therefore, from a given element, we can access its children, parent, or siblings using `.contents`, `.parent`, `.next_sibling`, and `.previous_sibling`.
+ - It's polite not to send too many requests to a website in a short period of time. For that, we can use the `sleep()` function of the built-in Python module `time`.
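A minimal sketch of that last keypoint, pausing between consecutive requests. The `fetch` helper below is a hypothetical stand-in for `requests.get(url).text`, so the sketch runs without network access:

```python
import time

def fetch(url):
    # Stand-in for requests.get(url).text; returns a dummy HTML string
    return f"<html><body>Fetched {url}</body></html>"

urls = ["https://example.com/a", "https://example.com/b"]
pages = []
for url in urls:
    pages.append(fetch(url))
    time.sleep(0.5)  # be polite: pause before sending the next request
```

In a real script you would replace `fetch` with `requests.get(url).text` and choose a delay appropriate for the site you are scraping.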
episodes/dynamic-websites.md (17 additions, 8 deletions)
@@ -6,14 +6,17 @@ exercises: 5
:::::::::::::::::::::::::::::::::::::: questions
- - How do you write a lesson using Markdown and `{sandpaper}`?
+ - What are the differences between static and dynamic websites?
+ - Why is it important to understand these differences when doing web scraping?
+ - How can I start my own web scraping project?
::::::::::::::::::::::::::::::::::::::::::::::::
::::::::::::::::::::::::::::::::::::: objectives
- - Explain how to use markdown with The Carpentries Workbench
- - Demonstrate how to include pieces of code, figures, and nested challenge blocks
+ - Use the `Selenium` package to scrape dynamic websites.
+ - Identify the elements of interest using the browser's "Inspect" tool.
+ - Understand the usual pipeline of a web scraping project.
::::::::::::::::::::::::::::::::::::::::::::::::
@@ -257,11 +260,17 @@ This scraping pipeline helps break down complex scraping tasks into manageable s
::::::::::::::::::::::::::::::::::::: keypoints
- - Use `.md` files for episodes when you want static content
- - Use `.Rmd` files for episodes when you need to generate output
- - Run `sandpaper::check_lesson()` to identify any issues with your lesson
- - Run `sandpaper::build_lesson()` to preview your lesson locally
+ - Dynamic websites load content using JavaScript, which isn't present in the initial or source HTML. It's important to distinguish between static and dynamic content when planning your scraping approach.
+ - The `Selenium` package and its `webdriver` module simulate a real user interacting with a browser, allowing it to execute JavaScript and to click, scroll, or fill in text boxes.
+ - These are the `Selenium` commands we learned:
+   - `webdriver.Chrome()`: start the Google Chrome browser simulator
+   - `.get("website_url")`: go to a given website
+   - `.find_element(by, value)` and `.find_elements(by, value)`: get a given element
+   - `.click()`: click the selected element
+   - `.page_source`: get the HTML after JavaScript has executed, which can later be parsed with BeautifulSoup
+   - `.quit()`: close the browser simulator
+ - The browser's "Inspect" tool allows users to view the HTML document after dynamic content has loaded, revealing elements added by JavaScript. This tool helps identify the specific elements you are interested in scraping.
+ - A typical scraping pipeline involves understanding the website's structure, determining the content type (static or dynamic), using the appropriate tools (requests and BeautifulSoup for static, Selenium and BeautifulSoup for dynamic), and structuring the scraped data for analysis.
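That pipeline can be sketched as a single function. This is a sketch, assuming Selenium, a matching Chrome driver, and BeautifulSoup are installed; the imports sit inside the function so the sketch can be defined even where they are not:

```python
def scrape_dynamic(url):
    """Sketch of the dynamic-scraping pipeline: Selenium renders, BeautifulSoup parses."""
    # Real scripts would import at module level; kept here so the sketch
    # can be defined without Selenium/BeautifulSoup installed.
    from selenium import webdriver
    from bs4 import BeautifulSoup

    driver = webdriver.Chrome()   # start the Chrome browser simulator
    driver.get(url)               # go to the given website
    html = driver.page_source     # HTML after JavaScript has executed
    driver.quit()                 # close the browser simulator
    return BeautifulSoup(html, "html.parser")
```

For a static page you would skip Selenium entirely and pass `requests.get(url).text` straight to `BeautifulSoup`.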
episodes/hello-scraping.md (13 additions, 9 deletions)
@@ -1,19 +1,21 @@
---
title: "Hello-Scraping"
- teaching: 30
- exercises: 5
+ teaching: 40
+ exercises: 10
---
:::::::::::::::::::::::::::::::::::::: questions
- - How do you write a lesson using Markdown and `{sandpaper}`?
+ - What is behind a website and how can I extract its information?
+ - What is there to consider before I do web scraping?
::::::::::::::::::::::::::::::::::::::::::::::::
::::::::::::::::::::::::::::::::::::: objectives
- - Explain how to use markdown with The Carpentries Workbench
- - Demonstrate how to include pieces of code, figures, and nested challenge blocks
+ - Identify the structure and basic components of an HTML document.
+ - Use BeautifulSoup to locate elements, tags, attributes, and text in an HTML document.
+ - Understand the situations in which web scraping is not suitable for obtaining the desired data.
::::::::::::::::::::::::::::::::::::::::::::::::
@@ -320,10 +322,12 @@ To conclude, here is a brief code of conduct you should consider when doing web
::::::::::::::::::::::::::::::::::::: keypoints
- - Use `.md` files for episodes when you want static content
- - Use `.Rmd` files for episodes when you need to generate output
- - Run `sandpaper::check_lesson()` to identify any issues with your lesson
- - Run `sandpaper::build_lesson()` to preview your lesson locally
+ - Every website has an HTML document behind it that gives structure to its content.
+ - An HTML document is composed of elements, which usually have an opening `<tag>` and a closing `</tag>`.
+ - Elements can have different properties, assigned by attributes in the form of `<tag attribute_name="value">`.
+ - We can parse any HTML document with `BeautifulSoup()` and find elements using the `.find()` and `.find_all()` methods.
+ - We can access the text of an element using the `.get_text()` method, and its attribute values as we do with Python dictionaries (`element["attribute_name"]`).
+ - We must be careful not to violate the Terms of Service (TOS) of the website we are scraping.
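A small self-contained sketch of those BeautifulSoup keypoints, using a made-up HTML snippet:

```python
from bs4 import BeautifulSoup

# A made-up HTML document for illustration
html = """
<html><body>
  <p class="intro">Hello, <a href="https://example.com">world</a>!</p>
  <p>Second paragraph</p>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
link = soup.find("a")            # first <a> element in the document
text = link.get_text()           # the element's text: "world"
href = link["href"]              # attribute access, like a dictionary
paragraphs = soup.find_all("p")  # list of all <p> elements
```

Note that `class` is a multi-valued attribute, so `paragraphs[0]["class"]` returns a list (`['intro']`) rather than a string.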
In this workshop you will learn how to extract data from websites, what is known as web scraping, using Python. In Episode 1 we begin by reviewing the structure of websites in HTML and how to retrieve information from it using your browser and the `BeautifulSoup` package. In Episode 2 we'll dive deep into how to get the HTML behind any website using the `requests` package and how to parse it and find information with `BeautifulSoup`. At the end, you'll learn about the differences between static and dynamic webpages, and how to scrape the latter with the `Selenium` package.
- ## Data Sets
+ This workshop is designed for participants who already have a basic understanding of Python programming. In particular, it's best to know how to:
- <!--
- FIXME: place any data you want learners to use in `episodes/data` and then use
- a relative link ( [data zip file](data/lesson-data.zip) ) to provide a
- link to it, replacing the example.com link.
- -->
- Download the [data zip file](https://example.com/FIXME) and unzip it to your Desktop
+ - Install and import packages and modules
+ - Use lists and dictionaries
+ - Use conditional statements (`if`, `else`, `elif`)
+ - Use `for` loops
+ - Call functions, understanding parameters/arguments and return values
+ 1. If you already have Anaconda, JupyterLab, or Jupyter Notebook installed on your computer, skip to step 2. Otherwise, follow Miniforge's [download](https://github.com/conda-forge/miniforge?tab=readme-ov-file#download) and [installation](https://github.com/conda-forge/miniforge?tab=readme-ov-file#install) instructions for your operating system. If you are using a Windows machine, make sure you check the option to "Add Miniforge3 to my PATH environment variable".
+ 2. If you are using Mac or Linux, open the 'Terminal'. If you are using Windows, open the 'Command Prompt' or 'Miniforge Prompt'.
+ 3. Activate the base conda environment by running the `conda activate` command.