```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, message = FALSE, warning = FALSE)
```
Many researchers rely on data obtained from a wide variety of online sources, including websites, social media, and external data providers. This course introduces you to procedures for collecting, preparing, analysing, and visualising such data. Participants will learn core ideas in data visualisation, web scraping, and text analysis while gaining practice writing, debugging, and tracking changes to code in R.
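As a flavour of the kind of text analysis covered in the course, the sketch below counts word frequencies in a short sample text using only base R; the sample sentence is made up for illustration.

```r
# Tokenise a sample text and tabulate word frequencies (base R only).
text  <- "Data are everywhere and data analysis starts with counting data"
words <- tolower(unlist(strsplit(text, "\\s+")))  # lowercase, split on whitespace
freq  <- sort(table(words), decreasing = TRUE)    # count and sort by frequency
freq[["data"]]  # "data" occurs three times in the sample
```

The same pattern (tokenise, tabulate, sort) scales to real corpora once dedicated packages are introduced in the sessions.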
The main objectives of this course are the following:
There are four sessions of four hours each, taking place over two days. Sessions will include a mix of brief lectures, coding demonstrations, and in-class exercises. You will need to bring a laptop to these sessions on which you have the necessary rights to install software. Students will work with data sets supplied for the course, as well as obtain their own data from the Internet by applying what they have learned.
- git (version control software) to track changes to this markdown file over time
- the ggplot2 R package

Sessions are both iterative and cumulative, so attendance at all four sessions is mandatory. During the sessions, you will work on exercises that allow you to practise new skills. These exercises will not be graded, but completing them is mandatory. Between sessions 2 and 3, you will complete additional exercises based on your own research interests. Students will also review and replicate each other's code.
Students are expected to satisfy the following entry requirements: