Natural Disasters around the World from 1960-2018

By: Zahra H. Mohamed for SA8905 @RyersonGeo

You can download the code here!

Introduction

Natural disasters are major events that result from natural processes of the planet. With global warming and our changing climate, it’s rare to go through a week without mention of a flood, earthquake, or a bad storm happening somewhere in the world. I chose to make my web map about natural disasters because they are at the front of a lot of people’s minds lately, and because reliable, historical public data on disasters around the world is available. My main goal is to make an informative, easy-to-use web page that is accessible to anyone from any educational level or background. The web page displays all of the recorded natural disasters around the world from 1960 to 2018, and lets you see which parts of the world are more prone to certain types of disasters in a clear and understandable format.

Figure 1. Map displaying natural disaster data points, zoomed into Africa.

In order to make my web map I used:

  • JavaScript – programming language
  • HTML/CSS – markup and stylesheets for structuring and styling the page
  • Leaflet – a JavaScript library for interactive maps
  • jQuery – a JavaScript library that simplifies DOM manipulation and data loading
  • JSCharting – a JavaScript charting library that creates charts using SVG (Scalable Vector Graphics)

Data & Map Creation

The data for this web map was taken from the Geocoded Disasters (GDIS) Dataset, v1 (1960-2018), from NASA’s Socioeconomic Data and Applications Centre (SEDAC). The data was originally downloaded as a comma-separated values (CSV) file. CSV files are simple text files that make it easy to share data and generally take up less space.

A major hurdle in preparing this map was adding the data file to the map, because the CSV file was so large (30,000+ records). I originally added the CSV file to Mapbox Studio as a dataset, and then as tiles, but I ended up switching to Leaflet and accessing the CSV file locally instead. Because the file was so large, I used QGIS to split the data by disaster type, and then loaded the resulting files in my JavaScript file using jQuery.

Data can come in different types and formats, so it is important to convert it into a format that is useful for whatever you hope to extract or use it for. In order to display this data, the marker data is first read from the CSV file, and then I used Papa Parse to convert the string into an array of objects. Papa Parse is a CSV parsing library for JavaScript that lets you parse large files on the local system or download them from the internet. Having the data in an array of objects allows you to loop through it, making it easier to access particular information. For example, when including text in the popup for the markers (Figure 2), I had to access particular fields from the disaster data, which was very easy to do because each row was an object.

Code snippet for reading the CSV and creating the markers and popups (the lines beginning with // are comments; they are just notes for the reader and are not actually part of the executed code):

// Read markers data from extreme_temp.csv
$.get('./extreme_temp.csv', function (csvString) {

  // Use Papa Parse to convert the CSV string to an array of objects
  var data = Papa.parse(csvString, { header: true, dynamicTyping: true }).data;

  // For each row in data, create a marker and add it to the map
  for (var i in data) {
    var row = data[i];

    // skip incomplete rows (e.g. the blank line at the end of the file)
    if (!row.latitude || !row.longitude) continue;

    // create popup contents
    var customPopup = "<h1>" + row.year + " " + row.location +
      "<b> Extreme Temperature Event</b></h1><h2><br>Disaster Level: " +
      row.level + "<br>Country: " + row.country + ".</h2>";

    // specify popup options
    var customOptions = {
      maxWidth: 500,
      className: 'custom'
    };

    // create the circle marker (Leaflet opacities range from 0 to 1)
    var marker = L.circleMarker([row.latitude, row.longitude], {
      opacity: 1
    }).bindPopup(customPopup, customOptions);

    // show popup on hover
    marker.on('mouseover', function (e) {
      this.openPopup();
    });
    marker.on('mouseout', function (e) {
      this.closePopup();
    });

    // style marker and add to map
    marker.setStyle({ fillColor: 'transparent', color: 'red' }).addTo(map);
  }

});
Figure 2. Marker Popup

I used L.circleMarker (a Leaflet vector layer) to assign a standard circular marker to each point. As you can see in Figures 1 and 3, the markers appear all over the map and are very clustered in certain areas. However, as shown in Figure 3, the marker size adjusts as you zoom in, so the points become easier to distinguish in the more clustered areas. The top-left corner of the map contains a zoom control, as well as four square buttons, vertically aligned, that are each assigned a continent (just 4 continents for now) and navigate to that continent when clicked; a sketch of how such a button can work follows Figure 3.

Figure 3. Map zoomed in to display marker size
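Below is a minimal sketch of how one of these continent buttons could work, assuming a plain HTML button and the Leaflet map object created earlier; the button ID, coordinates, and zoom level are placeholders rather than the exact values used in the app.

// Hypothetical continent button: clicking it pans and zooms the map to Africa.
// 'africa-btn' and the coordinates are example values only.
document.getElementById('africa-btn').addEventListener('click', function () {
  map.setView([1.5, 17.0], 3); // [lat, lng] centre of the continent, zoom level 3
});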

The bottom-left corner of the map contains the legend and toggle buttons to switch the theme of the map from light to dark. Changing the theme doesn’t alter any of the data on the map; it only changes the style of the basemap. Nowadays almost every browser and web page seems to have a dark mode option, so I thought it would be neat to include one (a sketch of the toggle logic is shown below). The title, legend, and theme toggles are all static, and their positions on the web page remain the same.
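As a rough sketch of that toggle logic (not the exact code from my page), the theme switch simply swaps which basemap tile layer is attached to the map; the tile URLs below are placeholders for any light and dark basemap:

// two basemaps: one light, one dark (URLs are placeholders)
var lightTiles = L.tileLayer('https://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png');
var darkTiles = L.tileLayer('https://{s}.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}.png');

lightTiles.addTo(map); // start on the light theme

// called by the light/dark toggle buttons
function setTheme(theme) {
  if (theme === 'dark') {
    map.removeLayer(lightTiles);
    darkTiles.addTo(map);
  } else {
    map.removeLayer(darkTiles);
    lightTiles.addTo(map);
  }
}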

Another component on the web page is the ‘Disaster Fact’ box in the bottom-right corner of the page. This text box is meant to display a random fact about natural disasters at a specified time interval. The idea is to keep an array of facts, as strings, in a variable, then use the setInterval() function together with a function that generates a random number between 0 and the length of the array minus 1, and use that number as an index to pick one of the facts from the array (see the sketch below). For the moment, however, the map only displays the first fact after the specified interval when the page loads, and that fact then remains on the page; refreshing the page causes the function to generate another random fact.
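A minimal sketch of the intended behaviour, assuming a placeholder element with the id 'fact-box' and a placeholder list of facts:

// array of disaster facts (placeholders)
var facts = [
  'Fact one...',
  'Fact two...',
  'Fact three...'
];

// pick a random index between 0 and facts.length - 1 and display that fact
function showRandomFact() {
  var i = Math.floor(Math.random() * facts.length);
  document.getElementById('fact-box').textContent = facts[i];
}

showRandomFact();                    // show a fact as soon as the page loads
setInterval(showRandomFact, 15000);  // then rotate to a new fact every 15 seconds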

Figure 4. Pie Chart displaying Distribution of Natural Disasters

One of the components of my web map page that I will expand on is the chart. For now I added a simple pie chart using JSCharting to display the total number of disasters per disaster type from 1960 to 2018. Using JSCharting was fairly simple, as you can see if you take a look at the code for it in my GitHub. I calculated the total number of disasters for each disaster type by looking at the number of lines in each of my already divided CSV files and manually entered them as the y values. Normally, however, especially if the data were in one large CSV file, I would calculate these totals in RStudio; they could also be tallied in JavaScript, as sketched below.
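For illustration, here is a sketch of how the per-type totals could be tallied from a single combined CSV using Papa Parse; the file name and the 'disastertype' column name are assumptions about the data layout:

// count the number of records per disaster type
$.get('./all_disasters.csv', function (csvString) {
  var rows = Papa.parse(csvString, { header: true, dynamicTyping: true }).data;
  var counts = {};
  for (var i in rows) {
    var type = rows[i].disastertype;
    if (!type) continue;                 // skip blank or incomplete rows
    counts[type] = (counts[type] || 0) + 1;
  }
  console.log(counts); // e.g. { flood: ..., storm: ... }; these totals would feed the pie chart's y values
});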

Something to keep in mind:

People view websites on different platforms nowadays, from laptops to tablets and phones. A challenge in creating web pages is that these different platforms have different screen sizes, so web pages need to be optimized to look good at a range of screen sizes, and this is largely done using CSS.

Looking Ahead

Overall my web map is still in progress, and there are many components I need to improve upon and others I would like to add. I would like to add a bar chart along the bottom of the map showing the total number of disasters per year for each disaster type, with options to toggle between disaster types. I would also like to add a slider bar that filters the markers on the map by year. A component I had trouble adding was an option to hide/show marker layers on the map: I was able to get it to work for just one marker of each disaster type, but it wouldn’t work for the entire layer, so looking ahead I will figure out how to fix that as well (one common approach is sketched below).
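One common Leaflet approach (sketched here, not yet part of my code) is to add every marker of a disaster type to an L.layerGroup instead of directly to the map, and then register the groups with L.control.layers so whole layers can be toggled at once:

// one group per disaster type
var extremeTempLayer = L.layerGroup().addTo(map);

// inside the marker loop, replace marker.addTo(map) with:
marker.addTo(extremeTempLayer);

// after all groups are built, add a layers control to toggle them
L.control.layers(null, {
  'Extreme temperature': extremeTempLayer
  // 'Floods': floodLayer, 'Storms': stormLayer, etc.
}).addTo(map);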

There was no major research question in making this web page; my goal was simply to make a web map that was appealing, interesting, and easy to use. I hope to expand on this map, add the components that I’ve mentioned, and fix the issues I wasn’t able to figure out. Overall, making a web page can be frustrating, and there is a lot of googling and watching YouTube videos involved, but making a dynamic web app is a useful skill to learn, as it allows you to convey information as specifically and creatively as you want.

Tracking the COVID-19 Pandemic in Toronto with R and Leaflet

By: Tavis Buckland

Geovisualization Project Assignment, SA8905, Fall 2020

Github Repository: https://github.com/Bucklandta/TorontoCovid19Cases.git

INTRO

Over the course of the pandemic, the City of Toronto has implemented a COVID-19 webpage focused on providing summary statistics on the current extent of COVID-19 cases in the city. Since the beginning of the pandemic, this webpage has greatly improved, yet it still lacks the functionality to analyze spatio-temporal trends in case counts. Despite not providing this functionality directly, the City has released the raw data for each reported case of COVID-19 since the beginning of the pandemic. Using RStudio with the leaflet and shiny libraries, a tool was designed to allow for the automated collection, cleaning and mapping of this raw case data.

Sample of COVID-19 case data obtained from the Toronto Data Portal

DATA

The raw case data was downloaded from the Toronto Open Data Portal in R and added to a data frame using read.csv. As shown in the image below, this data contained the neighbourhood name and episode date for each individual reported case. As of Nov. 30th, 2020, the dataset contained over 38,000 reported cases. Geometries and 2016 population counts for the City of Toronto neighbourhoods were also gathered from the Toronto Open Data Portal.

PREPARING THE DATA

After gathering the necessary inputs, an extensive amount of cleaning was required to allow the case data to be aggregated to Toronto’s 140 neighbourhoods, and this process had to be repeatable for each new instance of the COVID-19 case data that was downloaded. Hyphens, spaces, and other minor inconsistencies between the case and neighbourhood data were resolved. Approximately 2.5% of all COVID-19 cases in this dataset were also missing a neighbourhood name to join on. Instead of discarding these cases, a ‘Missing cases’ neighbourhood was created to hold them. The number of cases for each neighbourhood by day was then counted and transposed into a new data table. From there, using rowSums, the cumulative number of cases in each neighbourhood was obtained.

Example of some of the code used to clean the dataset and calculate cumulative cases

Unfortunately, in its current state, the R code will only gather the most recent case data and calculate cumulative cases by neighbourhood. Based on how the data was restructured, calculating cumulative cases for each day since the beginning of the pandemic was not achieved.

CREATING A SHINY APP USING LEAFLET

Using leaflet, all of this data was brought together into an interactive map. Raw case counts were converted to rates per 100,000 population and classified into quintiles. The two screenshots below show the output and the popup functionality added to the leaflet map.

In its current state, the map is only produced on a local instance and requires RStudio to run. A number of challenges were faced when attempting to deploy this map application, and unfortunately, the map could not be hosted on the shinyapps.io cloud server. As an alternative, the map code has been made available through the GitHub repository linked at the top of this blog post. This repository also includes a stand-alone HTML file with an interactive map.

Screenshot of HTML map produced by R Shiny App and Leaflet. Popups display neighbourhood names, population, raw count, and rate per 100,000 for the most recent case data.

LIMITATIONS

There are a couple of notable limitations to mention concerning the data and methods used in this project. For one, the case data only supports aggregation to Toronto neighbourhoods or forward sortation areas (FSAs). At this spatial scale, trends in case counts are summarized over very large areas and are not likely to accurately represent variation within those areas. Related to this is the modifiable areal unit problem (MAUP), which describes the statistical biases that can emerge from aggregating real-world phenomena into arbitrary boundaries. The reported cases derived from Toronto Public Health (TPH) are also likely subject to sampling bias and do not provide a complete record of the pandemic’s spread through Toronto. Among these limitations, I must also mention my limited experience building maps in R and deploying them onto the shinyapps.io platform.

FUTURE GOALS

With the power of R and its many libraries, there are a great many improvements to be made to this tool, but I will note a few of the significant updates I would like to implement over the coming months. Foremost is to use the ‘leaftime’ R package to add a timeline function, allowing map users to analyze changes over time in reported neighbourhood cases. Adding a function to quickly extract the map’s data into a CSV file, directly from the map’s interface, is another immediate goal for this tool. This CSV could contain a snapshot of the data based on a particular time frame identified by a user. The last functionality planned for this map is the ability to modify the classification method used. Currently, the neighbourhoods are classified into quintiles based on cumulative case counts per 100,000. Using leaflet’s ‘leafletProxy’ function would allow map users greater control over map elements; it should be possible to let users define the number of classes and the classification method (e.g. natural breaks, standard deviation, etc.) directly from the map application.

An Interactive Introduction to Retail Geography

by Jack Forsyth
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2020

Project Link: https://gis.jackforsyth.com/


Who shops at which store? Answers to this fundamentally geographic question often use a wide variety of models and data to understand consumer decision making, helping to locate new stores, target advertisements, and forecast sales. Understanding store trade areas, or where a store’s customers come from, plays an important role in this kind of retail analysis. The Trade Area Models web app lets users dip their toes into the world of retail geography in a dynamic, interactive fashion to learn about buffers, Voronoi polygons, and the Huff Model, some of the models that can underlie trade area modeling.

The Huff Model on display in the Trade Area Models web app

The web app features a tutorial that walks new users through the basics of trade area modeling and the app itself. Step by step, it introduces some of the underlying concepts in retail geography, and requires users to interact with the app to relocate a store and resize the square footage of another, giving them an introduction to the key interactions that they can use later when interacting with the models directly.

A tutorial screenshot showing users how to interact with the web app

The web app is designed to have a map dominate the screen. On the left of the browser window, users have a control panel where they can learn about the models displayed on the map, add and remove stores, and adjust model parameters where appropriate. As parameters are changed, users receive instant feedback on the map that displays the result of their parameter changes. This quick feedback loop is intended to encourage playful and exploratory interactions that are not available in desktop GIS software. At the top of the screen, users can navigate between tabs to see different trade area models, and they are also provided with an option to return to the tutorial, or read more about the web app in the About tab.

The Buffers tab allows for Euclidean distance and drive time buffers (pictured above)

Implementation

The Trade Area Models web app was implemented using HTML/CSS/JavaScript and third-party libraries including Bootstrap, jQuery, Leaflet, Mapbox, and Turf.js. Bootstrap and jQuery provided formatting and functionality frameworks that are common in web development. Leaflet provided the base for the web mapping components, including the map itself, most of the map-based user interactions, and the polygon layers. Mapbox was used for the basemap layer, and its Isochrone API was used to visualize drive time buffers. Turf.js is a JavaScript-based geospatial analysis library that makes many GIS-related functions and analyses simple to perform in web browsers; it was used for distance calculation, buffering, and creating Voronoi polygons. Toronto (Census Metropolitan Area) census tract data for 2016 were gathered from the CensusMapper API, which provides an easy-to-use interface for extracting census data from Statistics Canada. Data retrieved from the API included geospatial boundaries, number of households, and median household income. The Huff Model was written from scratch in JavaScript, but uses Turf.js’s distance calculation functionality to measure the distance from each store to each census tract’s centroid. Source code is available at https://github.com/mappinjack/spatial-model-viz
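As an illustration of the general idea (a sketch under simple assumptions, not the app’s actual source), a Huff Model probability for each store can be computed from store size and Turf.js distances, where lambda is the distance-decay exponent:

// stores: [{ coords: [lng, lat], size: squareFootage }, ...]
// tractCentroid: [lng, lat] of a census tract centroid
function huffProbabilities(tractCentroid, stores, lambda) {
  var utilities = stores.map(function (store) {
    var d = turf.distance(turf.point(tractCentroid), turf.point(store.coords),
                          { units: 'kilometers' });
    return store.size / Math.pow(d, lambda); // attractiveness / distance^lambda
  });
  var total = utilities.reduce(function (a, b) { return a + b; }, 0);
  // probability that residents of this tract shop at each store
  return utilities.map(function (u) { return u / total; });
}

// example: two hypothetical stores with a distance-decay exponent of 2
var probs = huffProbabilities([-79.38, 43.65],
  [{ coords: [-79.40, 43.64], size: 2000 },
   { coords: [-79.35, 43.70], size: 5000 }], 2);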

Limitations

One of the key limitations of the app is a lack of specificity in the models. Buffer sizes and store square footage areas are abstracted out of the app for simplicity, but this results in a lack of quantitative feedback. The Huff Model also uses Euclidean distance rather than drive time, which ignores the road network and alternative means of transit like the subway or foot traffic. The Huff Model also uses census tract centroids, which can lead to counter-intuitive results in large census tracts. The sales forecasting aspect of the Huff Model tab makes large assumptions about the amount of money spent by each household on goods, and is affected by edge effects from both stores and customers that may fall outside of the Toronto CMA. The drive time buffers also rely entirely on the road network (rather than incorporating transit) and are limited by an upper bound of 60 minutes of travel time from the Mapbox Isochrone API.

Future work

The application in its current form is useful for spurring interest and discussion around trade area modeling, but should be more analytical to be useful for genuine analysis. A future iteration should remove the abstractions of buffer sizes and square footage estimates to allow an experienced user to directly enter exact values into the models. Further, more demographic data to support the Huff Model, and parameter defaults for specific industries would help users more quickly create meaningful models. Applying demographic filters to the sales forecasting would allow, for example, a store that sells baby apparel to more appropriately identify areas where there are more new families. Another useful addition to the app would be integration of real estate data to show retail space that is actually available for lease in the city so that users can pick their candidate store locations in a more meaningful way.

Summary

The Trade Area Models web app gives experienced and inexperienced analysts alike the opportunity to learn more about retail geography. While more analytical components have been abstracted out of the app in favour of simplicity, users can not only learn about buffers, Voronoi polygons, and the Huff Model, but interact with them directly and see how changes in store location and model parameters affect the retail landscape of Toronto.

An interactive demo of Voronoi polygons that includes adding and moving stores

Ontario Demographics Data Visualization

Introduction

The purpose of this project is to visualize any kind of data on a webmap. Using open source software, such as QGIS, solves one aspect of this problem. The other part of this problem is to answer this question:

How and what data can be visualized? Data can be stored in a variety of formats and organized in different ways. The most important aspect of spatial data is the spatial information itself, so we need to figure out a way to display the data, using textual descriptions, symbols, colours, etc., at the right locations.

Methodology

In this visualization, I am using the census subdivisions (downloaded from the Statistics Canada website) as the basic geographical unit, plus the 2016 census profile for the census subdivisions (also downloaded from the Statistics Canada website). Once these data were downloaded, the next steps were to inspect the data and organize them so that they could be easily joined to the shapefile for visualization. Any relational database management system can be used to facilitate this task; my preference was SQL Server 2017 Express edition. Once the 2016 census profile has been imported into SQL Server, the “SQL Queries” [1] file can be run to organize the data into a relational table, which can be exported (or copied directly from the result set in Management Studio and pasted) into an Excel/CSV file; the sheet/file can then be opened in QGIS and joined to the shapefile of Ontario census subdivisions [2] using CSDUID as the common field between the two files.

Using the qgis2web plugin, all data and instructions are chosen manually on a number of tabs. You can choose the layers and groups you want to upload, and then customize the appearance and interactivity of the webmap based on the available options. In QGIS version 3.8 there is the option to use either Leaflet or OpenLayers styles. You can update the preview to see what the outcome will look like, and then export the map; the plugin converts all the data and instructions into JSON format. The most important file – index.html – is created in the directory you have specified.

index.html [1] is the file that can be used to visualize the map in a web browser; however, you first need to download all the files and folders from the source page [1]. This puts all the files on your (client) machine, which makes it possible to open index.html on localhost. If the map files are uploaded to a web server, then the map can be viewed on the world wide web.

Webmap

The data being visualized belong to population demographics (different age groups). The map of Ontario’s census subdivisions is visualized as a transparent choropleth map of 2016 population density. Other pieces of demographic information are embedded within the pop-up for each of the census subdivisions. If you hover your cursor over a census subdivision, it is highlighted with a transparent yellow colour so you can see the basemap information underneath more clearly. If you click on it, the pop-up appears on the screen, and you can scroll through it.

There are other interactive utilities on the map, such as controls for zooming in and out, a (ruler) widget to make measurements, a (magnifying glass) widget to search the entire globe, a (binocular) widget to search only the layers uploaded on the map, and a (layers) widget to turn layers and basemaps on and off.

Limitations

There are some limitations that I encountered after I created this webmap. The first, and most important, limitation is the projection of the data on the map. The original shapefile used EPSG code 3347, the Statistics Canada Lambert conic projection with the NAD 1983 datum. The plugin converted the data into the most common web format, WGS 1984, which is defined globally by longitude and latitude. Although WGS 1984 avoids the hassle of projected coordinate systems by using one unified geographic coordinate system for the entire globe, it distorts shapes as we move farther away from the equator.

The second limitation was the fact that my transparent colours were not coded into the index.html file: the opacities are defined as 1. In order to control the opacity levels, the index.html file must be opened in a text editor, the opacities changed to the proper values (ranging between 0 and 1), and the edits saved to the same index.html file.
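For example, the layer style functions that qgis2web writes into index.html look roughly like the following (the function and property names vary between exports, so treat this as illustrative); lowering fillOpacity from 1 to a value such as 0.5 restores the intended transparency:

// roughly what an exported style function looks like; values are illustrative
function style_CensusSubdivisions_4_0(feature) {
  return {
    color: '#232323',
    weight: 1.0,
    fill: true,
    fillColor: '#fdbb84',
    fillOpacity: 0.5   // exported as 1; edit here to restore transparency
  };
}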

The next limitation is the size of files that can be uploaded to GitHub [3]. There is a limit of 100 MB on files uploaded to GitHub repositories, and because the shapefile for all Canadian census subdivisions is over 100 MB when converted to JSON, it could not be uploaded to the repository [1] with all the other files. However, it is possible to add the GeoJSON-formatted file (of census subdivisions) to the data directory of the repository on the localhost machine, and manually add its location with a pair of opening and closing script tags inside the body tag of the index.html file. In my case, the script was:

<script src="data/CensusSubdivisions_4.js"></script>

The name of the file should be introduced on the very first line of the GeoJSON file as a variable:

var json_CensusSubdivisions_4 = {

And don’t forget that the last line should be a closing curly brace:

}

Now index.html is aware where to find the data for all of the Canadian census subdivisions.

What’s Next?

To conclude with the main goal of this project, which was stated in the introduction: we now have a framework to visualize any data we want. The data we choose to visualize may change the details of the methodology, because the scripts can be adapted accordingly. What is more important is the way we want the data to be visualized on the webmap. This tutorial presented the basics of the qgis2web plugin. Once the index.html file is generated, other JavaScript libraries can be added to this file, and depending on your level of comfort with JavaScript you can expand and go beyond the simple widgets and utilities on this webmap.

[1] https://github.com/Mahdy1989/GeoVisualization-Leaflet-Webmap/tree/master

[2] There is a simple way to limit the extent of the census subdivisions from all of Canada to the Ontario subset only: filter the shapefile by PRUID = '35', which is the code for Ontario.

[3] https://help.github.com/en/github/managing-large-files/what-is-my-disk-quota

HexBinning Ontario

By Andrew Thompson – Geovis course project, SA8905 (Dr. Rinner)

The power of data visualization is becoming increasingly robust and intricate in nature. The demand to deliver a variety of complex information has led to the development of highly responsive visual platforms. Libraries such as D3 provide increased flexibility to work across multiple web technologies (HTML, CSS, SVG), allowing for nearly unlimited customization and the capacity to handle large datasets.


In this development, a combination of D3 and Leaflet is used to provide a data-driven visualization within an easy-to-use mapping engine framework, made possible through the developments of Asymmetrik. This collection of plugins allows the creation of dynamic hexbin-based heatmaps and the ability to dynamically update and visualize transitions.
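For context, creating such a hexbin layer with Asymmetrik’s Leaflet-D3 plugin looks roughly like the sketch below; the option values are illustrative and 'points' stands in for an array of [longitude, latitude] pairs (check the plugin’s documentation for the exact API of the version used):

// hexbin layer with a 12-pixel radius and animated transitions
var hexLayer = L.hexbinLayer({ radius: 12, opacity: 0.6, duration: 500 }).addTo(map);

// colour ramp from light to dark as bins accumulate more points
hexLayer.colorScale().range(['#ffffcc', '#800026']);

// tell the layer how to read coordinates, then bind the data
hexLayer
  .lng(function (d) { return d[0]; })
  .lat(function (d) { return d[1]; })
  .data(points);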

The web mapping application is available at: HexBinning Ontario

Discussion of data & techniques follows below…
