Experiments and tutorials from the wide field of cultural analytics based on textual and multimodal corpora.
In this tutorial you will learn to:
- web-scrape relatively unstructured data from Wikipedia
- transform unstructured data into tabular data to facilitate processing with Python
- create graph data from the tabular data to visualize it as a network
- export Python-created data to use it with JavaScript visualization libraries such as D3.js
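The steps above can be sketched end-to-end in a few lines. This is only an illustration of the data flow, not the tutorial's actual code: the scraped records, names, and field keys below are invented, and the sketch uses the standard library only (the tutorial itself relies on packages from requirements.txt such as pandas).

```python
import json

# Hypothetical scraped records, standing in for rows extracted from a
# Wikipedia page (names and keys are made up for this sketch).
scraped = [
    {"person": "Ada Lovelace", "collaborator": "Charles Babbage"},
    {"person": "Charles Babbage", "collaborator": "John Herschel"},
]

# Step 1: tabular form -- a list of (source, target) rows.
rows = [(r["person"], r["collaborator"]) for r in scraped]

# Step 2: graph data -- unique nodes plus an edge list over node indices.
nodes = sorted({name for edge in rows for name in edge})
index = {name: i for i, name in enumerate(nodes)}
links = [{"source": index[s], "target": index[t]} for s, t in rows]

# Step 3: export in the node-link JSON format that D3.js force layouts expect.
graph = {"nodes": [{"id": n} for n in nodes], "links": links}
print(json.dumps(graph, indent=2))
```

Writing `graph` to a `.json` file is enough for a D3.js force-directed layout to pick it up.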
What you should already know:
- a little Python 3
- some basic HTML
- some JavaScript if you want to understand the web-based visualization at the end of the tutorial
This notebook comes with a requirements.txt file to facilitate the installation of package dependencies. To install the dependencies, launch the following command from the command line before you start the notebook:
pip3 install -r requirements.txt
This notebook eventually evolved into a TPDL publication. Attention: the notebook is no longer maintained here; it has been moved to a separate repository.
- In this tutorial, you will learn to read metadata from an OAI-PMH data provider and how to convert the retrieved data from Dublin Core to a pandas data frame.
- Furthermore, you will carry out some basic data analysis on your data in order to find out if the data is corrupt or unclean. Based on an example, you will clean some aspects of your data using techniques borrowed from machine learning.
- Finally, you will visualize data with the help of a network graph.
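To give an idea of the first step, here is a minimal sketch of flattening one Dublin Core record from an OAI-PMH response into a plain dict; one such dict per record is enough to build a pandas data frame via `pd.DataFrame(list_of_dicts)`. The XML snippet and its field values are invented for this example, and the sketch skips the actual HTTP request to the data provider.

```python
import xml.etree.ElementTree as ET

# A made-up Dublin Core record, standing in for one <record> from an
# OAI-PMH ListRecords response.
xml = """
<record xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
        xmlns:dc="http://purl.org/dc/elements/1.1/">
  <oai_dc:dc>
    <dc:title>Example Title</dc:title>
    <dc:creator>Doe, Jane</dc:creator>
    <dc:date>1923</dc:date>
  </oai_dc:dc>
</record>
"""

DC = "{http://purl.org/dc/elements/1.1/}"
root = ET.fromstring(xml)

# Keep only elements in the Dublin Core namespace and strip the
# namespace prefix from the tag names.
row = {el.tag.removeprefix(DC): el.text
       for el in root.iter() if el.tag.startswith(DC)}
print(row)  # {'title': 'Example Title', 'creator': 'Doe, Jane', 'date': '1923'}
```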
In this tutorial, you will learn how to read an unstructured and a structured dataset, create a dataframe from the raw data, and visualize characteristics of the data in order to find out whether the titles held by a research library are truly neutral from a sentiment-analysis perspective, and how they compare to a sample of books sold by Amazon.
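The core idea of scoring titles for sentiment can be illustrated with a toy lexicon-based scorer. This is only a sketch of the principle: the word lists and titles below are invented, and the tutorial itself uses a proper sentiment-analysis setup rather than these tiny hand-made lexica.

```python
# Tiny, made-up sentiment lexica for illustration only.
POSITIVE = {"great", "wonderful", "joy", "love"}
NEGATIVE = {"war", "death", "crisis", "terrible"}

def title_sentiment(title: str) -> int:
    """Return (#positive - #negative) words; 0 means neutral."""
    words = title.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Invented example titles: one mixed, one positive, one neutral.
titles = ["The Great War", "A History of Joy and Love", "Annual Report"]
scores = {t: title_sentiment(t) for t in titles}
print(scores)
```

Aggregating such scores over a whole catalogue, and over the Amazon sample, is what makes the two collections comparable.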