Missing Migrants: The Mediterranean Sea

By: Austen Chiu

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

Background

The dangerous journey of migrants seeking a better life has existed for as long as countries have experienced political unrest. Advancements in technology have brought greater visibility to migrant groups than ever before. However, those who fail to make the journey often go unseen. Due to the undocumented nature of migrant routes, accurate numbers of survivors and deaths are difficult to track.

The data used in this project were obtained from the Missing Migrants Project. A dashboard was created in Tableau desktop to visualize the locations of missing migrant reports across the Mediterranean Sea, and to improve awareness of the scale at which the migrant crisis is occurring.

Creating an Animated Time Series Map

Following the prompts in Tableau, import your data. Data imported from an Excel file should appear like this.

Make sure the data contains a date column, and spatial coordinates. Tableau can read spatial coordinates such as latitude and longitude, or northing and easting, to create maps. You can designate a column to be read as a date, or assign its geographic role as a latitude or longitude, to draw a map.

The icon above each column reveals options for formatting that column's data.
Geographic roles can be assigned to your data, allowing Tableau to read them as a map.
Creating a new map can be done by clicking the new tab buttons at the bottom of your window.
This is a blank graph. You can create graphs by dragging data into the “columns” and “rows” fields.

Tableau will automatically generate a map if data assigned geographic roles are used to populate the “Columns” and “Rows” fields. If the “Pages” field is populated with the date data, a time slider module will appear below the “Marks” module. The “Pages” field drives Tableau’s animation capabilities.

A filter applied in the “Filters” field ensures that only cases occurring in the Mediterranean region are visualized on the map.

The “Pages” field in the top left has been populated by the date data and a time slider has appeared in the bottom left.
The time slider allows you to select a specific date to view. The right arrow below the slider starts the animation, and Tableau will run through each snapshot of time, much like a slideshow.

Tableau can produce many types of data visualizations to accompany the animated map. A histogram, a live counter, and a packed bubbles visual accompany the map on my dashboard.

The final product of the dashboard I created has been shared to Tableau Online. However, Tableau Online does not support the animation features. A GIF of the animated dashboard in Tableau Desktop has been shared through Google Drive and can be viewed here.

A Century of Airplane Crashes

Laine Gambeta
Geovisualization Project, @RyersonGeo, Fall 2019

Tableau is an exceptionally useful tool for visualizing data effectively. It offers many chart variations, and the software suggests the best type based on the data content. The following project uses a dataset obtained from the National Transportation Safety Board identifying the locations and details of plane crashes between 1908 and 2009. The following screenshot is the final product, followed by a run-through of how it was made.

Map Feature:

To create the map identifying accident locations, longitude and latitude are required. Once these are placed in Columns and Rows, Tableau automatically recognizes the location data and creates a map.

The Pages function is populated with the date of occurrence and filtered by month in order to create a time animation on a monthly scale. When the Pages function is populated with a date, the software automatically recognizes a time-series animation and creates a time slider.

The size of the map icon indicates the total number of fatalities at a specific location and time. To create this effect, the fatalities measure is placed on the Size function. The same measure is placed on the Label function so that the total is displayed with each icon.

When you hover over the icons on the map, the details of each occurrence appear. To create this tool, the measures you want to appear are placed on the Detail function. Here, the Date, Sum of Aboard, Sum of Fatalities, Sum of Survivors, and Summary of the accident appear when you hover over an icon on the map.

Vertical Bar Chart Feature:

To create the vertical bar chart, place the date on the X axis (Columns), and the people aboard and fatalities measures on the Y axis (Rows).

Next, we must create a calculation to derive the number of survivors by subtracting the two measures. To do so, right-click on a column title and choose Create Calculated Field. Within this calculation you select the two columns you want to subtract and the formula field is populated (for example, a hypothetical expression such as [Aboard] - [Fatalities]). We will use this calculated field to identify the number of survivors.

The next step is creating a dual-axis to show both values on the same chart. Right-click one of the measures in the Rows field and click Dual-Axis. This will combine the measures onto the same chart so that they overlap each other.

Following this, we need to filter the data so the animation advances by month: the monthly numbers are tallied and added to the chart. In order to combine the monthly tallies into an annual bar chart, the following filters are used. First, filter by year, which tallies the monthly counts into a single column on the bar chart. The Pages filter sets the time increments used in the time-slider animation; this value must be consistent across all charts in order for them to stay in sync. In this case, we are looking at statistics on a monthly basis.

To split the colours between green and red to identify survivors and fatalities, the Measure Names field (created automatically by Tableau) is placed on the Colour function. This assigns each variable a different colour.

When you hover over the bar chart, it highlights and identifies the statistics for the specific year. To create this feature, the measures must be added to the Tooltip function and formatted as you please.

Horizontal Bar Chart Feature:

The second bar chart is similar to the previous one. The sum of fatalities is put in Columns and the Date is put in Rows, switching the axes so that the date is on the Y axis. The Pages function uses the same time frame as the other charts, calculating monthly counts and adding the totals to the bar chart as time progresses.

Total Count Features:


Adding running counts is a simple calculation feature built into Tableau. You build the table by placing the measure on the Text function, which enables the value to show as text rather than a chart. You will notice below that the Pages function must be populated with a date measure on a monthly basis to be consistent with the other charts.

In order to create the running total values, a calculation must be added to the measure. Clicking the SUM measure opens its options and allows us to select Edit Table Calculation. This opens a menu where you can select Running Total, summed on a monthly basis. We apply this to three separate counters to total occurrences, fatalities, and survivors.

Pie Chart Feature:

To create the pie chart, select Pie under the Marks drop-down; this automatically creates an Angle function for the measure values. The fatality and survivor measures are used and filtered monthly. The Measure Values field, which is created automatically by Tableau, holds the values of these measures and is placed on the Angle function to calculate the pie slices. Again, Measure Names is placed on the Colour function to separate the values into fatalities and survivors. The Pages function is populated with the date of occurrence by month to sync with the other charts.

Lastly, a dashboard is created which allows the placement of the features across a single page.  They can be arranged to be aesthetically pleasing and informative.  Formatting can be done in the dashboard page to manipulate the colors and fonts.

Limitations:

Tableau does not allow you to select your map projection. Tableau Online has a public server to publish dashboards to; however, it does not support timeline animation. Therefore, the following link to my project is limited to selecting dates manually to observe the statistics.

https://prod-useast-a.online.tableau.com/t/lainegambeta/views/ACenturyofAirplaneCrashes/Dashboard2?:origin=card_share_link&:embed=n

Ontario Demographics Data Visualization

Introduction

The purpose of this project is to visualize any kind of data on a webmap. Using open-source software, such as QGIS, solves one aspect of this problem. The other part of the problem is to answer this question: how, and what data, can be visualized?

Data can be stored in a variety of formats and organized in different ways. The most important aspect of spatial data is the spatial information itself, so we need to figure out a way to display the data, using textual descriptions, symbols, colours, etc., at the right location.

Methodology

In this visualization, I am using the census subdivisions (downloaded from the Statistics Canada website) as the basic geographical unit, plus the 2016 census profile for the census subdivisions (also downloaded from the Statistics Canada website). Once these data were downloaded, the next steps were to inspect the data and organize them so that they could be easily joined to the shapefile. Any relational database management system can be used to facilitate this task; my preference was SQL Server 2017 Express Edition. Once the 2016 census profile has been imported into SQL Server, the “SQL Queries” [1] file can be run to organize the data into a relational table. The result set can then be exported, or copied directly from Management Studio and pasted, into an Excel/CSV file; the sheet/file can now be opened in QGIS and joined to the shapefile of Ontario census subdivisions [2] using CSDUID as the common field between the two files.
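The same join can also be scripted outside QGIS. Below is a minimal sketch using GeoPandas rather than the QGIS join dialog; the file names and the exact profile columns are hypothetical, and only the shared CSDUID field is taken from the workflow described above.

    import pandas as pd
    import geopandas as gpd

    # Census subdivision boundaries (shapefile) and the prepared census profile (CSV)
    csd = gpd.read_file("lcsd000b16a_e.shp")            # hypothetical boundary file name
    profile = pd.read_csv("census_profile_2016.csv",     # hypothetical profile export
                          dtype={"CSDUID": str})

    # Join the attribute table to the geometry on the common CSDUID field
    joined = csd.merge(profile, on="CSDUID", how="left")

    # Write a layer that QGIS (and qgis2web) can consume directly
    joined.to_file("ontario_csd_demographics.geojson", driver="GeoJSON")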

Using the qgis2web plugin, all data and instructions are chosen manually on a number of tabs. You can choose the layers and groups you want to upload, and then customize the appearance and interactivity of the webmap based on the available options. There is the option to use either Leaflet or OpenLayers styles in QGIS version 3.8. You can update the preview to see what the outcome will look like. You can then export the map, and the plugin will convert all the data and instructions into JSON format. The most important file – index.html – is created in the directory you have specified.

index.html [1] is the file used to visualize the map in a web browser; however, you first need to download all the files and folders from the source page [1]. This puts all the files on your (client) machine, which makes it possible to open index.html on localhost. If the map files are uploaded to a web server, the map can be viewed over the World Wide Web.

Webmap

The data being visualized are population demographics (different age groups). The map of Ontario's census subdivisions is shown as a transparent choropleth map of 2016 population density. Other pieces of demographic information are embedded within the pop-up for each census subdivision. If you hover your cursor over a census subdivision, it is highlighted with a transparent yellow colour so you can see the underlying basemap more clearly. If you click on it, the pop-up appears on the screen, and you can scroll through it.

There are other interactive utilities on the map such as controllers for zooming in and out, a (ruler) widget to make measurements, a (magnifying glass) widget to search the entire globe, a (binocular) widget to search only the layers uploaded on the map, and a (layers) widget to turn layers and basemaps on and off.

Limitations

There are some limitations that I encountered after creating this webmap. The first, and most important, is the projection of the data on the map. The original shapefile used EPSG:3347 (Statistics Canada Lambert, a conic projection on the NAD 1983 datum). The plugin converted the data into the most common web format, WGS 1984, which is defined globally by longitude and latitude. Although WGS 1984 avoids the hassle of projected coordinate systems by using one unified geographic coordinate system for the entire globe, it distorts shapes more and more as we move away from the equator.

The second limitation was that my transparent colours were not carried over into the index.html file; the opacities were all written out as 1. To control the opacity levels, the index.html file must be opened in a text editor, the opacities changed to the proper values between 0 and 1, and the edits saved back to the same index.html file.

The next limitation is the size of files that can be uploaded to GitHub [3]. There is a 100 MB limit on files uploaded to GitHub repositories, and because the shapefile for all Canadian census subdivisions is over 100 MB when converted to JSON, it could not be uploaded to the repository [1] with all the other files. However, it is possible to add the GeoJSON-formatted file (of census subdivisions) to the data directory of the repository on the local machine, and manually reference its location with a pair of opening and closing script tags inside the body tag of the index.html file. In my case, the script was:

<script src="data/CensusSubdivisions_4.js"></script>

The name of the file should be introduced as a variable on the very first line of the GeoJSON file:

var json_CensusSubdivisions_4 = {

And don't forget that the last line should be a closing curly brace:

}

Now index.html knows where to find the data for all of the Canadian census subdivisions.

What’s Next?

To conclude with the main goal of this project, stated in the introduction: we now have a framework to visualize any data we want. Whichever data we choose to visualize, the methodology barely changes, because the scripts can be adapted accordingly. What matters more is how we want the data to be visualized on the webmap. This tutorial presented the basics of the qgis2web plugin. Once the index.html file is generated, other JavaScript libraries can be added to it, and depending on your level of comfort with JavaScript you can expand and go beyond the simple widgets and utilities on this webmap.

[1] https://github.com/Mahdy1989/GeoVisualization-Leaflet-Webmap/tree/master

[2] There is a simple way to limit the extent of the census subdivisions from all of Canada to the Ontario subset only: filter the shapefile by PRUID = '35', which is the code for Ontario.

[3] https://help.github.com/en/github/managing-large-files/what-is-my-disk-quota

GeoVis: Mapdeck Package in R

Gregory Huang
Geovisualization Project, @RyersonGeo, Fall 2019

Introduction

This project is a demonstration of the abilities of the mapdeck package in R, including its compatibility with interactive Shiny apps.

Mapdeck is an R package created by David Cooley. Essentially, it integrates some of Mapbox's functionality into the R environment. Mapbox is a popular, community-driven, web-based mapping service that provides some great geovisualization functionality. Strava's global heat map is one example.

I am interested in looking at flight routes across global hubs and seeing whether there are destination overlaps among these routes. Since the arc layer provided by mapdeck visualizes flight routes impressively, I chose mapdeck to visualize some flight route data from around the world.

Example of a map generated by mapdeck: arcs, text, lines, and scatterplots are all available. Perspective changes can be made by holding Ctrl and clicking. The basemaps are customizable, with a massive selection of both Mapbox and user-generated maps. This map is one of the results from longest_flights.R, which uses the “decimal” basemap.
The map has some built-in interactivity: here is an example of a “tooltip”, where hovering over an arc highlights it and shows information about that particular route. Note that mapdeck doesn't want to draw flight routes across the Pacific – so if accuracy is key, do keep this in mind.

Software Requirements

To replicate this project, you'll need your own Mapbox access token. It is free as long as you have a valid email address. Since the code is written in R, you'll also need R and RStudio installed on your machine to run the code.

Tl;dr…

Here’s the Shiny App

The code I created and the data I used can also be found on my GitHub repository, Geovis. To run them on your personal machine, simply download the folder and follow the instructions on the README document at the bottom of the repository page.

Screenshot of the shiny app: The slide bar will tell the map which flights to show, based on the longitudes of the destinations. All flights depart out of YYZ/KEF/AMS/FRA/DXB.

Details: Code to Generate a Map

The code I've written contains two major parts, both utilizing flight route data. The first part is longest_flights.R, which demonstrates the capabilities of the mapdeck package using data I curated for the longest flights in the world. The second part is yyz_fra.R and shinyApp.R, which demonstrate the Shiny app compatibility and show how the package handles larger datasets (hint – very well). The Shiny app uses flight route data from five airports – Toronto, Iceland-Keflavik, Amsterdam, Frankfurt, and Dubai – pulled from openflights.org.

The flight route data for the five airports, in particular, needed cleaning to make the data frame usable by mapdeck. This involved removing empty rows, selecting only the relevant data, and merging the tables.

Code snippet for cleaning the data. After the for loop completes, the flight route data downloaded from openflights.org becomes available to be used for mapdeck.
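The author's loop is not reproduced here, but a rough sketch of this kind of cleaning – assuming openflights-style routes.dat and airports.dat files, with hypothetical column positions and names – might look like the following.

    library(dplyr)

    routes   <- read.csv("routes.dat",   header = FALSE, stringsAsFactors = FALSE)
    airports <- read.csv("airports.dat", header = FALSE, stringsAsFactors = FALSE)

    # openflights files are headerless, so name the columns we need (assumed positions)
    names(routes)[c(3, 5)]      <- c("origin_iata", "dest_iata")
    names(airports)[c(5, 7, 8)] <- c("iata", "lat", "lon")

    hubs <- c("YYZ", "KEF", "AMS", "FRA", "DXB")

    flights <- routes %>%
      filter(origin_iata %in% hubs) %>%
      # attach origin coordinates
      left_join(select(airports, iata, start_lat = lat, start_lon = lon),
                by = c("origin_iata" = "iata")) %>%
      # attach destination coordinates
      left_join(select(airports, iata, end_lat = lat, end_lon = lon),
                by = c("dest_iata" = "iata")) %>%
      # drop rows with missing coordinates so every arc can be drawn
      filter(!is.na(start_lat), !is.na(end_lat))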

Once the data were cleaned, I began using the mapdeck functions to map out the routes. The basic use of the mapdeck() function is to first declare your key, give it a style, and assign it a pitch if needed. There are many more parameters you can customize, but I changed only the style and pitch. Once the mapdeck map is created, use the “pipe” notation (%>%) to add any sort of layer to your map – for example, add_arc() to add the arcs seen in this post. There are many parameters you can set, but the most important are the first three: where your data come from, and which columns hold the origin and destination x–y coordinates.

An example creating an arc on a map. In addition to the previously mentioned parameters, tooltip generates the little chat boxes when you hover over a layer entry, and layer_id is important when there are multiple layers on the same map.
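A minimal sketch of that call is shown below. It is not the author's exact code; the token placeholder and the column names in the flights data frame (start_lon/start_lat, end_lon/end_lat, info) are assumptions carried over from the cleaning sketch above.

    library(mapdeck)

    key <- "YOUR_MAPBOX_ACCESS_TOKEN"    # replace with your own token

    mapdeck(token = key,
            style = mapdeck_style("dark"),   # any Mapbox style URL also works
            pitch = 45) %>%
      add_arc(data = flights,
              origin = c("start_lon", "start_lat"),
              destination = c("end_lon", "end_lat"),
              tooltip = "info",        # column shown when hovering over an arc
              layer_id = "routes")     # matters when several layers share a map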

Additional details on creating all different types of layers, including heatmaps, can be found on the documentation page HERE.

Details: Code to make a “Shiny” app

On top of the regular interactive functionality of mapdeck, incorporating a mapdeck map into Shiny can add more layers of interactivity to the map. In this particular instance, I added a slider bar in Shiny where the user can indicate the longitudes of the destinations they want to see. For example, I can filter to see just the flights going to East Asia by using that slider bar. Additional Shiny inputs include drop-down menus to select specific map layers, as well as checkboxes.

The Shiny code can roughly be broken down into three parts: ui, server, and shinyApp(ui, server). The ui handles the user interface and receives data from the server, while the server decides what map to produce based on the input given by the user in the ui. shinyApp(ui, server) combines the two to generate a Shiny app.

Mapdeck integrates into the Shiny app environment through mapdeckOutput() in the ui, which specifies where the map is displayed, and through renderMapdeck() and mapdeck_update() in the server, which generate the map (renderMapdeck) and update the layers to display (mapdeck_update).

Below is the code used to run the Shiny app demonstrated in this blog post. Note the ui and server portions of the code. To run the Shiny app after that, simply run shinyApp(ui, server) to generate the app.

Creating the UI
Snippet of the Server creation section. Note that the code listens to what the UI says with reactive() and observeEvent().
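For readers without the screenshots, here is a condensed sketch of that ui/server structure – not the author's exact app. The flights data frame and its dest_lon column are assumptions; the slider filters destinations by longitude.

    library(shiny)
    library(mapdeck)

    key <- "YOUR_MAPBOX_ACCESS_TOKEN"

    ui <- fluidPage(
      sliderInput("lon_range", "Destination longitude",
                  min = -180, max = 180, value = c(-180, 180)),
      mapdeckOutput("map", height = "600px")
    )

    server <- function(input, output) {
      # Render the base map once
      output$map <- renderMapdeck(
        mapdeck(token = key, style = mapdeck_style("dark"), pitch = 45)
      )

      # Subset of flights matching the slider, recomputed reactively
      filtered <- reactive({
        flights[flights$dest_lon >= input$lon_range[1] &
                flights$dest_lon <= input$lon_range[2], ]
      })

      # Redraw the arc layer whenever the slider changes
      observeEvent(input$lon_range, {
        mapdeck_update(map_id = "map") %>%
          add_arc(data = filtered(),
                  origin = c("start_lon", "start_lat"),
                  destination = c("end_lon", "end_lat"),
                  layer_id = "routes")
      })
    }

    shinyApp(ui, server)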

This concludes my geovis blog post. If you have any questions, please feel free to email me at gregory.huang@ryerson.ca.

Here is the link to my GitHub repository again: https://github.com/greghuang8/Geovis

Visualizing Tornado Occurrences in the United States of America from 1950 to 2018.

Kellie Manifold

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

With a large, growing population in the United States of America, more people are affected by tornadoes each year. A tornado is a mobile, destructive vortex of violently rotating winds with a funnel-shaped cloud and immense wind speeds. Over the past several decades, tracking tornadoes has become more common as more technology becomes available, and there has been an increase in the number of tornadoes recorded each year. This increase in recorded tornado occurrences has resulted in a large dataset covering the entire United States. All recorded tornado occurrences from 1950 to 2018 are kept in a database on the NOAA National Weather Service's website. This dataset is very large and difficult to visualize.

To help visualize the distribution of tornado occurrences in the USA, the dataset was used to make an ESRI Dashboard. ESRI Dashboards was chosen because it is well suited to this kind of data visualization: dashboards are very user-friendly and allow many charts, tables, and indicators to be created to display large amounts of data on a single web page. A dashboard also allows for user interaction, so that users get a better understanding of the data at hand.

The following steps were used to create the ESRI Dashboard on tornadoes in the United States of America.

First, it is important to make sure that you have an ArcGIS Online account.

The next step is to gather your data. The data for this dashboard were downloaded from the NOAA National Weather Service and the National Centers for Environmental Prediction. The downloaded file contains the locations of all recorded tornadoes in the USA from 1950 to 2018.

Next, import this data into ArcGIS Pro. The points are plotted using their latitude and longitude. For the purpose of this project, only the contiguous USA was examined, so Puerto Rico, Alaska, and Hawaii were removed. This was done through a Select By Attributes query, which can be seen below.
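The original query screenshot is not reproduced here; a sketch of an equivalent query, scripted with arcpy in the ArcGIS Pro Python window, is shown below. The layer name and the state field ("tornadoes", "st") are assumptions about the NOAA table, not the author's exact values.

    import arcpy

    # Select the states outside the contiguous USA...
    arcpy.management.SelectLayerByAttribute(
        in_layer_or_view="tornadoes",          # layer already loaded in the map
        selection_type="NEW_SELECTION",
        where_clause="st IN ('PR', 'AK', 'HI')"
    )

    # ...and delete the selected records from the layer
    arcpy.management.DeleteFeatures("tornadoes")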

These states were then removed from the map, and attribute table.

The next step in creating the dashboard is to upload the layer to your ArcGIS Online account. This is done by publishing your map as a web map, which adds the layers used to your ArcGIS Online account.

The following steps are done in ArcGIS Online. Once you are signed into your account and the published web layer is in your content, you can create the dashboard. First, you are prompted to choose a theme colour, light or dark. Dark was chosen for the tornado dashboard as it looks more visually appealing.

From here you can add the web layer you published – in this case the tornado layer – and add any other widgets you would like, such as a serial chart, pie chart, indicator, or list.

Once you decide which widget you would like, it is important that all other widgets and filters are selected in the Actions tab of the configuration window.

Having these selected allows all other map elements and widgets to respond to changes selected by the user. Make sure this is done for all the widgets that are added. In the case of the tornado dashboard, the widgets used include category selectors for state, year, and month, an indicator to show the number of tornadoes given the selection criteria, a pie chart to show the months in which tornadoes occurred, and a serial chart to display the number of tornadoes per year.

Once all the widgets are added you can drag them to rearrange the appearance on the dashboard.

Once you have configured all your widgets to get the layout you want, you have successfully created a dashboard.

Thank you for following along!

The final product for this process can be seen here: http://arcg.is/1bDib10

Please take a look and interact with the dashboard.

Create a Quick Web Map with Kepler.gl and Jupyter Notebook

Author: Jeremy Singh

SA8903

GeoVisualization Project Fall 2019

Background: This tutorial uses any CSV file with latitude and longitude columns in order to plot points on the web map. Make sure your CSV file is saved in the same folder as this notebook (it makes things easier).

I recommend downloading the Anaconda Distribution which comes with jupyter notebook.

There are 3 main important python libraries that are used in this tutorial

  1. Pandas: Pandas is a Python library used for data analysis and manipulation.
  2. kepler.gl: This is a FREE, open-source, web-based application that is capable of handling large-scale geospatial data to create beautiful visualizations.
  3. GeoPandas: Essentially, GeoPandas is an extension of Pandas that is fully capable of handling and processing geospatial data.

The first step, once Jupyter Notebook is launched, is to navigate from the main directory to the folder where you want this notebook to be saved. Then click ‘New’ -> Python 3; a tab will open up with your notebook (see image below).

Next, it is important to install these libraries from the terminal to ensure that this tutorial works and everything runs smoothly.

For more information on jupyter notebook see: https://jupyter.org/

Navigate back to the directory and open a terminal prompt via the ‘New’ tab.

A new tab will open up; this functions very similarly to the command prompt on Windows. Next, type "pip install pandas keplergl geopandas" (do not include the quotes). This will install the three libraries.
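With the libraries installed, the notebook workflow sketched below loads the CSV and renders it in a Kepler.gl map widget. The file name, and the assumption that it holds latitude and longitude columns, are placeholders; substitute your own data.

    import pandas as pd
    from keplergl import KeplerGl

    # Load the CSV that sits in the same folder as the notebook
    df = pd.read_csv("points.csv")   # must contain latitude and longitude columns

    # Create the map widget and add the dataframe as a layer
    web_map = KeplerGl(height=500)
    web_map.add_data(data=df, name="points")

    # Putting the variable on its own line renders the interactive map in the notebook
    web_map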

Below you will find what my data look like on the map before styling.

With some options

KeplerGL also allows for 3D visualizations. Here is my final map:

Lastly, if you wish to save your web map as an HTML file to host somewhere like GitHub or AWS, this command will do that for you:
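The screenshot of that command is not reproduced here; in the keplergl library this is done with save_to_html(), as in the sketch below (the file name is just an example).

    web_map.save_to_html(file_name="index.html")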

Link to my live web map here:

https://jeremysingh21.github.io/

The code and data I used for this tutorial is located on my GitHub page located here:

https://github.com/jeremysingh21/GeoVizJeremySingh

Monitoring Water Level Changes Using High Spatial and High Temporal Resolution Satellite Imagery

Author: Menglu Wang

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

Introduction

The disappearance of the Aral Sea, once the world's fourth-largest lake, was a shocking tragedy to the world: not just the shrinkage of lake volume from 1,093.0 km³ in 1960 to 98.1 km³ in 2010 (Gaybullaev et al., 2012), but also the rate of that shrinkage. The impacts on the environment, local climate, citizens' health, and agriculture are irreversible. This human-made disaster could have been prevented to some degree if the lake had been closely monitored and people were more educated about the importance of the ecosystem. One efficient approach to monitoring lake water level changes is the use of satellite imagery. The spread of free high spatial and high temporal resolution satellite imagery provides an excellent opportunity to study water level changes through time. In this study, PlanetScope Scene satellite imagery with a spatial resolution of 3 to 5 metres and a temporal resolution as high as one revisit every 3 days was obtained from the Planet website. Iso Cluster Unsupervised Classification in ArcGIS Desktop and the animation timeline in ArcGIS Pro were used. The study area is Claireville Reservoir, and 10 dates of imagery from April to late June were used to study water level changes.

Data Acquisition

To download the satellite imagery, a statement of research interest needs to be submitted to Planet sales personnel through their website (https://www.planet.com/). After getting access, type in the study area and select a drawing tool to delineate an area of interest. All available imagery will load after setting a time range, cloud cover percentage, area coverage, and imagery source. To download an image, select it and click "ORDER ITEM"; the items will be ready to download on the "Orders" tab when you click on your account profile. When downloading an item, notice that there is an option to choose between "Analytic", "Visual", and "Basic". Always select "Analytic" if analysis will be performed on the data, as "Analytic" indicates that geometric and radiometric calibration have already been applied to the imagery.

Methodology

ArcGIS Desktop is used to perform the classification and data conversion. After that, ArcGIS Pro is used to create an animated time slider. The steps are listed below:

  1. After creating a file geodatabase and opening a map, drag the imagery labelled with names ending in "SR" (surface reflectance) into the map.
  2. Find or search for "Mosaic To New Raster" and use it to merge multiple rasters into one covering the full study area (if needed).
  3. Create a new polygon feature class and use it to clip the imagery to a much smaller dataset using "Clip". This will speed up processing.
  4. Grab the "Image Classification" toolbar from the Customize tab at the top, after selecting "Toolbars".
  5. On the "Image Classification" toolbar, select the desired raster layer and click "Classification". Choose Iso Cluster Unsupervised classification. Please see Figure 1 for the classified result.
  6. Identify the classes that belong to the water body. Search for and use the "Reclassify" tool to set a new value (for example, 1) for the classes belonging to the water body, and leave the new value fields empty for all other classes. Check "Change missing values to NoData" and run the tool. You will get a new raster layer containing only one class, the water body, as the result (Figure 2 and Figure 3).
  7. Use the "Raster to Polygon" tool to convert the resulting raster layer to polygons and clean up misclassified data using the Editor toolbar. After selecting "Start Editing" from the Editor drop-down menu, select and delete the unwanted polygons (noise).
  8. Use the resulting polygons to clip the imagery in order to obtain imagery containing water bodies only.
  9. Repeat the above process for all the dates. (A scripted sketch of steps 5–8 appears after the figures below.)
  10. Open ArcGIS Pro and connect to the geodatabase that was used in ArcGIS Desktop.
  11. Search for and use the "Create Mosaic Dataset" tool to combine all the water body rasters into one dataset. Note: select "Build Raster Pyramids" and "Calculate Statistics" in the Advanced Options.
  12. After creating the mosaic dataset, find "Footprint" under the created layer and right-click it to open the attribute table.
  13. Add a new field, set its data type to "text", and type in the dates for the water body entries. Save the edited table.
  14. Right-click on the layer and go to Properties. Under the Time tab, select "Each feature has a single time field" for "Layer Time", select the newly created time field for "Time Field", and specify the time format to match the field.
  15. A new tab named "Time" will show up in the first row of tabs in the software interface.
  16. Click on the "Time" tab and specify the "Span". In my case, the highest temporal resolution of my dataset is 3 days, so I used 3 days as the "Span".
  17. Click the play symbol in the "Play Back" section of the tab and you should see the animated maps.
  18. If editing each frame is needed, go to the "Animation" tab at the top, select "Import", and choose "Time Slider Step". A series of frames will be added at the bottom, ready to be edited.
  19. To export the animated maps as a video, go to "Movie" in the "Export" section of the Animation tab. Choose the desired output format and resolution.
Figure 1. Classified Satellite Imagery
Figure 2. Reclassify tool example.
Figure 3. Reclassified satellite imagery
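The classification and conversion steps (5–8 in the list above) can also be scripted. Below is a rough arcpy sketch, assuming a Spatial Analyst licence and hypothetical dataset names; it is not the author's exact toolbar-driven workflow.

    import arcpy
    from arcpy.sa import IsoClusterUnsupervisedClassification, Reclassify, RemapValue

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\data\claireville.gdb"   # hypothetical geodatabase

    # Step 5: unsupervised classification into, e.g., 10 spectral classes
    classified = IsoClusterUnsupervisedClassification("imagery_SR_clip", 10)

    # Step 6: keep only the classes identified as water (classes 1 and 2 here are
    # placeholders) and send everything else to NoData
    water = Reclassify(classified, "Value",
                       RemapValue([[1, 1], [2, 1]]),
                       missing_values="NODATA")
    water.save("water_only")

    # Step 7: convert the water raster to polygons for manual cleanup
    arcpy.conversion.RasterToPolygon(water, "water_polygons")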

Conclusion

A set of high temporal and high spatial resolution imagery can effectively capture the water level changes of Claireville Reservoir. The time range is 10 dates from April to June, and, as expected, the water level changes as time passes. This is possibly due to heavy rains and flood events, which normally happen during the summer. Please see the animated map below.

Reference

Gaybullaev, B., Chen, S., & Gaybullaev, D. (2012). Changes in water volume of the Aral Sea after 1960. Applied Water Science, 2(4), 285–291. doi: 10.1007/s13201-012-0048-z

Automobile Collisions Involving TTC Vehicles

Eric Lum
SA8905 Geovis Project, Fall 2019

Toronto is the largest metropolis in Canada, attracting people from far and wide. As such, many forms of transportation pass through the city, including cars, bicycles, public transit, regional trains, and many more. The Toronto Transit Commission (TTC) is one of the main services that people rely on, as millions ride it each and every day. All of these forms of transportation must share the roads, and from time to time collisions occur. This project aims to animate collisions between TTC surface vehicles, such as buses or streetcars, and other forms of transportation (not including pedestrians). The visualization is built on the web-mapping service Carto, where a time-series map of the various TTC-related collisions is produced.

The collision data for this project were obtained from the Toronto Police open data portal. The “TTC Municipal Vehicle” dataset that was used is a subset of the “Killed and Seriously Injured” dataset, as these are the specific types of collisions that were collected. The data are available for the years 2008–2018, but only the five most recent years, 2014–2018, were used as the sample for this project. Information on the collisions provided in the dataset includes the latitude, longitude, intersection, vehicle collision type, time, date, year, and neighbourhood in which each occurred.

The first step in getting the time-series web map to work is to create a map and import the data into Carto. The collision data were downloaded from the Toronto Police as a .csv file, which can easily be uploaded to Carto. Other supporting data used for this map include the City of Toronto boundary file, retrieved from the City of Toronto open data portal, and the TTC routes, retrieved from Scholars GeoPortal. In order for these shapefiles to be imported into Carto, they must either be uploaded as .ZIP files or converted to another supported format such as a JSON file. Once all the data were ready, they were uploaded through the “Connect Dataset” option shown below.

The next step was to geocode the collision locations with the latitude and longitude provided in the collisions .csv file. This was done through Carto’s geocode feature shown below. To do this, the layer with the data was selected and the geocode option was chosen under the “Analysis” tab. The fields for latitude and longitude were then input.

Once geocoded, the “Aggregation” method for the data needed to be chosen. As this is a visualization project over a span of years, the time series option was chosen. The “Style” also needed to be set, referring to how the points would be displayed. The dataset contained information on the different vehicle types that were involved in the collisions, so the “point colour” was made different for each vehicle. These functions are both shown below.

The same “Style” method of visualization was also applied to the TTC routes layer, as each type of transportation should be shown with a unique colour. The last part of animating the data is to choose the field on which the time series is based. The date field in the collisions .csv file was used in the “Widgets” function on Carto. This allows all the data to be shown on a histogram for the entire time span.

To finalize the map, a legend and basemap were selected. Once happy with my map, I made it public by enabling the “Publish” option at the bottom of the interface. This generated a shareable link for anyone to view.

A snapshot of the final time series map is shown below.

Thank you for viewing my blog post!

To access the full web map on Carto, the link is provided here:

https://ericlum24.carto.com/builder/00c16070-d0b8-4efd-97db-42ad584b9e14/embed

Creating a Noise Model and a Facade Noise Map of the King Street Area Pilot using SoundPlan

Geovisualization Project Assignment, SA8905, Fall 2018 @RyersonGEO

By: Cody Connor

The City of Toronto is large and still growing; the influx of new people brings more vehicles and, as a result, more vehicular traffic. Noise across the city is highly correlated with vehicular traffic and will inevitably increase as more vehicles drive on our roads. To monitor this change in noise, vehicular traffic counts are collected by Toronto Public Health along with the University of Toronto and Ryerson University. The traffic counts can be assigned to specific road segments, which can then be used to create a model of noise in the city.

SoundPlan is noise-modelling software used to take the traffic data and estimate noise levels across the city. The program can create different types of maps, for example a grid-level map or a façade map. The façade map was used for this project to help distinguish noise levels on the faces of buildings.

The first step in creating a façade noise model is to insert a shapefile that includes all building assets of the selected study area (specifically the King Street area in this case). This includes information about the shape and size of the buildings, and even data as specific as the number of floors in each building and the number of residents who live there. This allows the program to assess the noise levels that will affect the residents of each building over time. In SoundPlan, the user can visualize the building assets in a three-dimensional environment, which helps to spot errors in building size and height.

The user has to connect the imported shapefile attributes to the assignment table within SoundPlan. This is critical: if some building properties are not imported properly, the model will have errors and likely won't run. The image above shows the shapefile connected to the SoundPlan properties.

After importing the building attributes, we can move on to the road network shapefile. This file has the physical characteristics of the road network in Toronto as well as the traffic information used to calculate noise in the city. The physical characteristics of the roads range from the shape and size of the road to more specific attributes such as the type of pavement and the incline or decline of the road. These become important in the noise model because noise varies with them: a vehicle's engine needs to work harder on a significant incline, and a different pavement type, such as brick, can increase the noise as well.

The traffic data also include the types of vehicles travelling on the road at the time of measurement, such as cars, larger trucks, large transport vehicles, buses, and even bicycles. Larger vehicles cause more noise, so it is important to make the distinction. Finally, the speeds of the vehicles are taken into account, as these drive the noise levels. It is well known that engine noise increases as speed increases. Interestingly, as vehicles approach 60 km/h, the noise from the tires on the pavement becomes louder than the engine noise, which makes this type of information vital when working with highways. Because this project only involved city streets, where speeds are mostly under 60 km/h, it was not necessary to import these properties in detail.

Once both of these datasets are imported into the program, there are still a few steps before a noise model can be run. First, a digital ground model (DGM) must be calculated and associated with both datasets. A digital ground model is essentially the plane to which the variables are attached so as to ground them to a common reference. SoundPlan allows the user to either import a DGM or calculate one based on the imported files.


Once the buildings and roads are set onto the ground model, there is only one step remaining before a noise model can be run.

The last step before running a model in SoundPlan is to create a calculation area. This area defines the boundaries within which the calculation will take place. The image below shows the buildings, roads, and the area defined for the calculation.

The roads are highlighted in red, while the buildings are shaded green with a blue outline. The calculation area that defines where the program will focus the model is the green box located in the middle of the King Street area. This area is smaller than the total study area because of the time it takes to calculate a façade map: for just the area contained within the green box, the model ran for over 16 hours. To run the entire study area could take a week.

After importing all the files necessary to run the model, which can be as simple as the set I used or include much more specific data, the program needs to know what type of calculation should be run. This is where the user indicates a façade or grid-level map.

The way the calculation is run for the façade map can be complicated. First, each building is assigned noise calculation points, which are spread across the façades at a spacing the user sets; in this case the points were every 2 metres. The number of points has a direct influence on the scale of the calculation being run. Because this is a three-dimensional environment, the points are not only spread across each façade but repeated at every floor.

The final map, shown above, displays the distribution of noise as modelled on the façades of the buildings in the study area. The building faces closest to the road are the noisiest, and the noise decreases as the faces get farther from the road. Overall, the downtown area is very noisy, and this map demonstrates that.


Visualizing Freshwater Resources: A Laser Cut Model of Lake Erie with Water Volume Representations

Author: Anna Brooker

Geovisualization  Project Assignment @RyersonGeo SA8905, Fall 2018

Freshwater is a limited resource that is essential to the sustenance of all life forms. Only 3% of the water on Earth is freshwater, and only 0.03% is accessible on the surface in the form of lakes, streams, and rivers. The Great Lakes, located in Southern Ontario and along the US border, contain one fifth of the world's surface freshwater. I wanted to visualize this scarcity of freshwater by modelling Lake Erie, the smallest of the Great Lakes by volume. Lake Erie is one of the largest freshwater lakes in the world, yet it has the smallest water volume of the Great Lakes. I decided to create a laser-cut model of the lake and use water spheres to represent its proportion of the world's surface freshwater resources. I used the infographic from Canadian Geographic for reference.

Process:

  • Retrieve bathymetric imagery and import into ArcScene
  • Generate contour lines for every 20 m of depth and export them each into individual CAD files
  • Prepare the CAD files in an Adobe Illustrator layout file to optimize them for laser printing
  • Paint and assemble the laser cut layers
  • Create spheres out of clay to scale with the model

The following images show the import of the bathymetric imaging and contour retrieval:

The bathymetry data used were collected in 1999 by the National Oceanic and Atmospheric Administration and come in a raster format; they were retrieved from Scholars GeoPortal. I used a shapefile of the Lake Erie shoreline from Michigan's GIS Open Data as a mask to clip the raster to the extent of the lake surface. I then created 20 m contours from the raster surface and exported each of the three contour vectors into individual shapefiles. These were added to the scene and exported again as CAD files so that I could manipulate them in Adobe Illustrator and prepare them on a template for laser cutting.
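For readers who prefer to script these steps, a rough arcpy sketch is shown below (assuming a Spatial Analyst licence; the file names are hypothetical, not the author's actual paths).

    import arcpy
    from arcpy.sa import ExtractByMask, Contour

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\data\lake_erie"

    # Clip the bathymetry raster to the lake shoreline mask
    bathy = ExtractByMask("erie_bathymetry.tif", "erie_shoreline.shp")

    # Generate contour lines every 20 m of depth
    Contour(bathy, "erie_contours_20m.shp", 20)

    # Export the contour lines to a CAD file for layout in Adobe Illustrator
    arcpy.conversion.ExportCAD("erie_contours_20m.shp", "DWG_R2010",
                               "erie_contours.dwg")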

The screenshots above show the template used for laser cutting. The template was downloaded from the Hot Pop Factory homepage; Hot Pop Factory is the service I used for laser cutting the plywood layers. I used their templates and arranged my vector files to reflect the size I wanted the model to be, 18″ × 7″. I added the rectangles around each contour to ensure that the final product would be a rectangular stacked model. I then sent this to the Factory for cutting. The photos below show what I received from Hot Pop.

Lake Erie is incredibly shallow, with a maximum depth of 64 m. In order to show the contours of the lake, I needed to exaggerate the depth. Limited by the thickness of the materials available to me, the final model had an exaggerated depth of approximately 130% at its deepest point. This exaggeration allowed me to create three layers of depth for Lake Erie and make it more visually engaging. I included as part of my model a flat cut-out of Lake Erie, which is what the model would have looked like had I not exaggerated it.

The water volume spheres were created using a material called porcelain clay. This air dry medium has a slightly translucent finish. I stained the clay with blue oil paint so that it would intuitively represent water. The size of the spheres is based on the information in the Canadian Geographic infographic linked in the introduction to this tutorial. The diameter of the spheres was made to scale with the scale bar on the models. A limitation with this model is that the scale bar only refers to the lateral size of the lake and spheres, and does not refer at all to the depth of the model.

The photos above show the final product. The photo on the right shows the scale bar that is included on both parts of the model. I painted the interior layers in blue, with the top two layers in the same shade; the third layer is slightly darker, and the deepest layer is the darkest shade of blue. I chose to paint the layers this way to draw attention to the deepest part of the lake, which is a very small area. I attached the layers together using wood glue and laid them beside each other for display. I painted the 3D and 2D models in slightly different hues of blue: the 2D model was made to better match the hue of the water spheres, to visually coordinate them, and I wanted the spheres to be distinct from the 3D model so that they would not be interpreted as representing the water volume of an exaggerated model.