Val Healy, Nolan Essigmann, Ceri Riley
The goal of our web scroller is to reveal how humans exacerbate the social, economic, and environmental impacts of drought through the current structure of industrial agriculture and through government subsidies that reduce the cost of meat and dairy products (which have large water footprints).
Our initial research involved laying out questions that interested us and focusing on the link between drought and food security. Once we had a general idea of the story we wanted to tell, we surveyed potentially useful datasets and compiled a document with at least 15 data sources and 17 news articles (containing narrative ideas as well as links to alternate data sources) to use as starting points for our project sketches. As we iterated on the final project, we produced a hand-drawn rough sketch of the scroller; an abbreviated document listing the datasets we still needed for our revised story, along with tentative visualization ideas; a near-final version of the project entitled ‘Where is the Water Going’ (the black text and associated citations located in this document); and two final revisions based on feedback from Rahul and discussions with our peers (the green and red text located in this same document).
The first, very basic prototype of our visualization was a creative chart based on 2010 California water use data. That dataset led us to a report of 2010 United States water use, from which we extracted Domestic, Industrial, and Agricultural figures to compare water withdrawals between states and nationwide (spreadsheets located in this folder, among others). We eventually narrowed the visualization down to a visual area comparison of these three daily water consumption metrics, including a calculation of approximately how many gallons of water each person uses per day.
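The per-person figure boils down to dividing each category's daily withdrawals by the US population. A minimal sketch of that arithmetic, using rough illustrative magnitudes rather than the exact values from the 2010 report:

```python
# Sketch of the per-person daily water use calculation. The magnitudes
# below are rough illustrative round numbers, not the exact figures from
# the 2010 USGS water use report we worked from.
US_POPULATION = 313_000_000  # approximate 2010 US population

# Daily withdrawals in million gallons per day (Mgal/d), rough magnitudes
withdrawals_mgal_per_day = {
    "domestic": 27_000,
    "industrial": 16_000,
    "agricultural (irrigation)": 115_000,
}

for category, mgal in withdrawals_mgal_per_day.items():
    gallons_per_person = mgal * 1_000_000 / US_POPULATION
    print(f"{category}: ~{gallons_per_person:.0f} gallons per person per day")
```

Even with placeholder numbers, the area comparison falls out of the same loop: agricultural withdrawals dwarf the domestic figure on a per-person basis.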
As we experimented with other sketches, we used map data from the U.S. Drought Monitor, along with manually copied and pasted tabular data tracking the weekly severity of drought across the entire United States and in each of the 50 states. This information was used to make the regional drought maps included in our final scroller. In addition, we researched qualitative information explaining the causes of drought to help us write the narrative hook that leads into the remainder of our story. This GitHub repository contains the data for our West Coast drought map, and this folder contains all the images that comprise our small multiples map.
In order to research the water cost of foods, we used water footprint statistics for both crops and farm animals, supplemented by Angela Morelli’s water visualization, statistical data from a report on The Water Footprint of Humanity, and the Mekonnen and Hoekstra paper entitled A global assessment of the water footprint of farm animal products. We focused on pages 24-29 of that report and manually transcribed data on the total water footprints of different animal meats and food products, the total water footprints of different feed crops, and comparisons of the water footprint of animal products against their nutritional breakdown (with a focus on protein). Although we analyzed data on the nutritional content of different food options, we ultimately omitted it from our final scroller in order to tell a more succinct story focused on the water costs of meats and crops. We also read and extracted data from part of the paper on The green, blue and grey water footprint of crops and derived crop products in order to analyze the water footprints of specific crops, both globally and in the US, and to create an interactive small multiples visualization; the cleaned data can be found in the ‘usfoodwaterusage’ file within this folder (some of our data analysis was compiled on c9.io, and the downloadable file format is .gz for some reason).
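The core comparison behind the small multiples is footprint per kilogram across foods. A sketch using approximate global-average values of the kind these papers report (our visualizations used the exact transcribed figures, not these round numbers):

```python
# Approximate global-average water footprints in liters per kilogram.
# Illustrative values only -- the scroller used the transcribed data
# from the Mekonnen & Hoekstra reports.
footprints_l_per_kg = {
    "beef": 15_400,
    "pork": 6_000,
    "chicken": 4_300,
    "soybeans": 2_100,
    "wheat": 1_800,
    "potatoes": 290,
}

baseline = footprints_l_per_kg["potatoes"]
for food, liters in sorted(footprints_l_per_kg.items(), key=lambda kv: -kv[1]):
    print(f"{food:>9}: {liters:>6,} L/kg (~{liters / baseline:.0f}x potatoes)")
```

Ranking foods against a low-footprint crop baseline like this is what makes the meat-versus-crop contrast legible at a glance.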
In order to support the end of our narrative, which connects the water footprint of meat to the fact that the symbolic choice not to eat meat isn’t feasible for many people, we did a quick analysis of SNAP data, focusing on the average monthly participation table. In addition, we researched Census information about poverty in the United States, including national-level estimates of the number of people in poverty. We used this information to create a visualization with data that can be found in the ‘data’ file within this same folder.
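The underlying comparison is a simple ratio between average monthly SNAP participation and the population totals. A sketch with hypothetical round numbers (the scroller used the actual USDA table and Census estimates):

```python
# All numbers here are hypothetical round figures standing in for the
# USDA average-monthly-participation table and Census poverty estimates.
us_population = 313_000_000
avg_monthly_snap_participants = 46_000_000
people_in_poverty = 45_000_000

snap_share = avg_monthly_snap_participants / us_population
print(f"~{snap_share:.0%} of the US population receives SNAP in an average month")
print(f"SNAP participants per person in poverty: "
      f"{avg_monthly_snap_participants / people_in_poverty:.2f}")
```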
Lastly, we researched data on farm subsidies across the United States and cleaned the data so that it presented only information about food products, removing items like disaster payments and incentive programs. We used this information to create a visualization of the different food subsidies and a visualization of the price of meat with and without subsidies. The cleaned data can be found partially in this spreadsheet and partially in the ‘viz’ file within this same folder.
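The with-and-without-subsidies price comparison can be sketched as below. Both the retail price and the subsidy share here are hypothetical placeholders; the actual visualization was driven by the cleaned subsidy spreadsheet:

```python
# Hypothetical placeholder numbers -- the real visualization used the
# cleaned farm subsidy data described above.
retail_price_per_lb = 4.50   # assumed retail price of meat, $/lb
subsidy_share = 0.15         # assumed fraction of cost offset by subsidies

# If subsidies cover `subsidy_share` of the true cost, the unsubsidized
# price is the retail price scaled back up by that share.
unsubsidized_price = retail_price_per_lb / (1 - subsidy_share)
print(f"with subsidies:    ${retail_price_per_lb:.2f}/lb")
print(f"without subsidies: ${unsubsidized_price:.2f}/lb")
```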
We also collectively invested a lot of time in learning basic web programming and in implementing skrollr.js and D3 to create our scroller and visualizations. None of us came into this class with much programming experience beyond Python, so there was definitely a learning curve in developing the final project, along with a lot of experimentation with different scrolling-webpage tools.
Links to additional cleaned datasets:
- This other folder contains some of our Excel spreadsheets with downloaded/cleaned/analyzed data about drought, US crops and associated revenue (by state), US water use, California water use (by national classification and crop/land), and SNAP participants.
- As you can see in the bottom half of our final planning document, we initially analyzed data about personal water use (such as showers or sprinkler systems) and did additional analysis of industrial water use data before cutting that information and narrowing our narrative to focus on agriculture.
- Our c9 workspace has restricted access (though we believe we can grant access on request) and contains several of our cleaned datasets in their original file formats (not .tar.gz).