Invasive Species in Ontario: An Animated-Interactive Map Using CARTO

By Samantha Perry
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

My goal was to create an animated time-series map using CARTO to visualize the spread of invasive species across Ontario. In Ontario there are dozens of invasive species posing a threat to the health of our lakes, rivers, and forests. These intruding species can spread quickly due to the absence of natural predators, often damaging native species and ecosystems and harming the economy and human health. Mapping their spread shows the extent of the affected areas, which can support research and remediation efforts and raise awareness of the ongoing issue. For this project, five of the most problematic or widespread invasive species were included in an animated-interactive map to show their spatial and temporal distribution.

The final animated-interactive map can be found at: https://perrys14.carto.com/builder/7785166c-d0cf-41ac-8441-602f224b1ae8/embed

Data

  1. The first dataset used was collected from the Ontario Ministry of Natural Resources and Forestry and contained information on invasive species observed in the province from 1982 to 2012. The data was provided as a shapefile, with polygons representing the affected areas.
  2. The second dataset was downloaded from the Early Detection & Distribution Mapping System (EDDMapS) Ontario website. The dataset included information about invasive species identified between 2010 and 2018. I obtained this dataset to supplement the Ontario Ministry dataset in order to provide a more up-to-date distribution of the species.

Software
CARTO is a location-intelligence platform that offers easy-to-use mapping and analysis software, allowing you to create visually appealing maps and discover key insights from location data. Using CARTO, I was able to create an animated-interactive map displaying the invasive species data. CARTO’s Time-Series Widget can display large numbers of points over time. This feature requires a map layer containing point geometries with a timestamp (date), which is included in the data collected for the invasive species.

CARTO also offers interactive features for its maps, allowing users to control some aspects of how they view the data. The Time-Series Widget includes animation controls such as play, stop, and pause to view a selected range of time. In addition, a Layer Selector can be added to the map so the user can choose which layer(s) they wish to view.

Limitations
In order to create the map, I created a free student account with CARTO. Limitations associated with a free student account include a cap on the amount of data that can be stored, as well as a maximum of eight layers per map. This limits the number of invasive species that can be mapped.

Additionally, only one Time-Series Widget can be included per map, meaning that I could not include a time-series animation for each species individually, as I had originally intended. Instead, I had to create one time-series animation layer that included all five species. Because this layer included thousands of points, the map looks dark and cluttered when zoomed out to the full extent of the province (Figure 1). However, when zoomed in to specific areas of the province, the points overlap less and the overall animation looks cleaner.

Another limitation to consider is that not all the species’ records start at the same time. As can be seen in Figure 1 below, the time slider on the map shows a large increase in species observations around 2004. While this could simply reflect an increase in observations around that time, it is more likely because some species’ records only begin around then.

Figure 1. Layer showing all five invasive species’ ranges.

Tutorial

Step 1: Downloading and reviewing the data
The Ontario Ministry of Natural Resources and Forestry data was downloaded as a polygon shapefile using Scholars GeoPortal, while the EDDMapS Ontario dataset was downloaded as a CSV file from their website.

Step 2: Selection of species to map
Since the datasets included dozens of different invasive species, it was necessary to select a smaller number to map. Determining which species to include involved some brief research into which species are most prevalent and problematic in the province. The five species selected were the Eurasian Water-Milfoil, Purple Loosestrife, Round Goby, Spiny Water Flea, and Zebra Mussel.

Step 3: Preparing the data for upload to CARTO
Since the time-series animation in CARTO is only available for point data, I had to convert the Ontario Ministry polygon data to points. To do this I used ArcMap’s “Feature to Point” tool, which created a new point layer from the polygon centroids. I then used the “Add XY Coordinates” tool to get the latitude and longitude of each point. Finally, I used the “Table to Excel” conversion tool to export the layer’s attribute table as an Excel file. This produced a table of all the invasive species point data collected by the Ontario Ministry that could be uploaded to CARTO.
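For reference, the same conversion can be scripted with arcpy; the sketch below is a minimal, hypothetical version of the workflow described above (paths and file names are placeholders, not the original project data).

```python
# Minimal arcpy sketch of the polygon-to-point conversion described above.
# Paths and file names are placeholders, not the original project data.
import arcpy

arcpy.env.workspace = r"C:\data\invasive_species"  # hypothetical workspace

# 1. Create a point layer from the polygon centroids
arcpy.FeatureToPoint_management("invasive_polygons.shp",
                                "invasive_points.shp",
                                "CENTROID")

# 2. Add POINT_X / POINT_Y fields (longitude/latitude when the layer uses a
#    geographic coordinate system)
arcpy.AddXY_management("invasive_points.shp")

# 3. Export the attribute table to an Excel file for upload to CARTO
arcpy.TableToExcel_conversion("invasive_points.shp", "invasive_points.xls")
```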

Next, I created a table that combined the information for the five selected species from both sources, keeping only the necessary columns: Species Name, Observation Date, Year, Latitude, Longitude, and Observation Source. This combined table was then saved as an Excel file to be uploaded to CARTO.

Finally, I created five additional tables, one for each species. These were later used to create map layers showing each species’ individual distribution.

Step 4: Uploading the datasets to CARTO
After creating a free student account with CARTO, I uploaded the six datasets as Excel files. Once they were uploaded, I had to change the “Observation Date” column from a “string” to a “date” data type for each dataset; a “date” data type is required for the time-series animation to run.

Step 5: Geocoding datasets
Each dataset added to the map as a layer had to be geocoded. Using the latitude and longitude columns previously added to the Excel file, I geocoded each of the five species’ layers.

Step 6: Create time-series widget to display temporal distribution of all species
After creating a blank map, I added the Excel file that included all the invasive species data as a layer. I then added a Time-Series Widget to allow for the temporal animation and selected Observation Date as the column to display, meaning that the point data are organized by observation date. I chose to organize the buckets, or groupings, for the corresponding time slider by year.

Since “cumulative” was not an option for the Time-Series layer, I had to use CartoCSS to edit the code for the aggregation style. Changing the style from “linear” to “cumulative” allowed the points to remain on the screen for the duration of the animation, letting the user see each species’ entire range in the province. The updated CartoCSS code can be seen in the screenshots below.

Step 7: Creating five additional layers for each species’ range
Since I could only add one Time-Series Widget per map, and the layer with the animation looks cluttered at some extents, I decided to create five additional layers that show each of the species’ individual observation data and range.

Step 8: Customizing layer styles
After adding all of the layers, a colour scheme was selected in which each species was represented by a different colour to clearly differentiate between them. Colours generally associated with each species were chosen; for example, purple was selected to represent Purple Loosestrife, a purple flowering plant. The “multiply” style option was selected, meaning that areas with more or overlapping occurrences of invasive species appear as a darker shade of the selected colour.

A layer selector was included in the legend so that users can turn layers on or off. This allows them to clearly see one species’ distribution at a time.

Step 9: Publish map
Once all of the layers were configured correctly, the map was published so it could be seen by the public.

Traffic.me: Mapping live traffic with ArcGIS Runtime SDK and HERE Technologies using Android App Developer

by Nicholas Pulsone
Geovis Class Project @RyersonGeo, SA8905, Fall 2018

Topic & Background

Driving through congested parts of Toronto is a tedious and troubling problem that many people would like to avoid. The goal was to create a mobile application in Android Studio that uses live traffic data to map traffic patterns across North America. Many companies, such as HERE Technologies, record traffic information that updates regularly and can be used to map and observe traffic patterns across the entire world. Using Android Studio, it is easy to add software development kits such as the ArcGIS Runtime SDK to develop new tools that can be used on a day-to-day basis.

Data

The first problem when creating an app for a purpose or goal is where to find the data. HERE Technologies, formerly owned by Nokia, currently has its headquarters in Amsterdam. HERE Technologies records live weather, routing, and traffic information using a combination of geolocation and intelligence algorithms. The geolocation services HERE draws on include:

  • Devices with location or GPS tracking
  • Tablets or other devices with Wi-Fi and signal strength
  • Phones while measuring varying strength of reception via cell tower signals

HERE Technologies maintains a global database of over 93 million cellular towers and over 2.3 billion Wi-Fi hotspots that record and store data. The data needed to be able to map varying levels of traffic density as well as potential collisions or other disruptions affecting driving conditions, and it needed to be displayable on a mapping platform and accessible from Android Studio.

Methods

There are multiple ways a live traffic application can be created using data from HERE technologies:

  1. Creating a live traffic app using HERE API and map creator
  2. Creating a live traffic app using HERE data in ArcGIS Runtime SDK (requires ArcGIS developer license)

The methods in this blog will be describing how to create an application using the data from HERE technologies with ArcGIS Runtime SDK.

The first step is to download the required software. Currently, the newest version of Android Studio is 3.2.1, available online for Windows, Mac, and Linux. Once Android Studio is installed, the next step is to download the second piece of software used in this project, the ArcGIS Runtime SDK for Android 4.0.

The second step is to set up the back end of the application. After specifying the operating system the application will run on and entering the application’s name, the first thing to set up is the Esri Bintray repository for ArcGIS.

Because Esri’s repository is not one of the default open-source repositories, the URL for the Esri Bintray must be added manually. The app dependencies then need to be updated to include the ArcGIS Runtime SDK.

Once the Gradle scripts were synced, the next step was to add a map view for the app. The default TextView element can be removed and a MapView created manually in the layout XML.

After adding the map view for the data, the next step was to specify a basemap and then access the data.

A basemap and a starting location can be specified to load when the app opens. The final step was to access the data itself. HERE Technologies has collaborated with Esri to develop a world traffic service that can be accessed from mobile and desktop clients using the URL:

http://traffic.arcgis.com/arcgis/rest/services/World/Traffic/MapServer

Additionally, Esri and HERE Technologies have created a layer, available in the ArcGIS developer portal, for users with an ArcGIS developer license. Once the layer is accessible, it is important to open and save it in ArcGIS Online and enable sharing and public-access permissions. Because layers used in ArcGIS require a login to be viewed, the next step is to set up a proxy to bypass the login error that would prevent the data from being used, even when permissions are set to public.

To set up a proxy using the ArcGIS developer server proxy, the application must be authenticated and registered in the ArcGIS developer platform. Once registered, the user has access to services such as the proxy service, which is used along with the traffic layer.

To enable this proxy under services, we must specify what type of proxy service and request limit the proxy will use. Once the requirements are specified, the service outputs a URL which contains a proxy service from ArcGIS.

To use the proxy, simply add the link as a layer from web in ArcGIS online, and the proxy should be active.

Figure 1: Adding Services to ArcGIS Online Map

The final step was to add the URL of the ArcGIS Online web map containing the traffic data into the Android Studio project.

Once the URL was added, just click Sync & Run and the app will appear on your device, similar to the picture below!

Figure 2: Example of Traffic App

Limitations

A limitation experienced while coding the application was ease of use: without a legend or slider, it is very hard to distinguish which areas of Toronto are affected by which kind of problem. The symbology can be changed; however, integrating a legend as a button feature in Android Studio was more useful and was ultimately included in the final iteration of the app shown in Figure 3.

Figure 3: Final Iteration of Traffic.me

 

Global Impacts of Earthquakes and Natural Disasters

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

Author: Katrina Mavrou

Date: Tuesday November 13th 2018

Topic: Global Impacts of Earthquakes and Natural Disasters

Link: Natural Disasters: Earthquake

Purpose:

To educate an audience with a wide variety of educational backgrounds on natural disaster impacts and side effects through visualization of spatial data using Esri’s Story Map applications, and to gain personal understanding and experience using ArcGIS Online and its available tools.

Data Source: USGS Earthquake Database, 2018; Global International Displacement Database, 2018.

Software: ArcGIS Pro, ArcGIS Online

Outline:

The topic of this project is natural disaster impacts, specifically earthquakes, analyzing global displacement and infrastructure loss. The platform used is ArcGIS Online, using hosted feature layers, web maps, and mapping applications such as a variety of Esri’s story map applications. The data are provided by the USGS Earthquake database and the Global International Displacement Database. The purpose of this exercise is to combine multiple Esri story map functions and other online platforms into one map.

The first step was to clean up the datasets. The CSV files were imported into Microsoft SQL Server Management Studio to perform spatial queries and extract the data needed from each dataset. For the displacement dataset, data was extracted by year and imported into a new table. The subset data was imported into ArcGIS Pro, where it was joined to the global shapefile and exported as new layers. Each year became its own layer so that the animation tool could be used on the time-enabled field to create a time lapse of displacement. Each new layer was exported as a zipped shapefile folder so that it could be imported into ArcGIS Online as a hosted feature layer.
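The extraction itself was done with SQL queries in SQL Server Management Studio; purely as an illustration of the per-year split, an equivalent step could be scripted in Python (file and column names below are hypothetical):

```python
# Illustration only: split the displacement records into one table per year,
# mirroring the per-year extraction described above. File and column names
# are hypothetical placeholders.
import pandas as pd

displacement = pd.read_csv("global_displacement.csv")

for year, subset in displacement.groupby("YEAR"):
    subset.to_csv("displacement_{}.csv".format(year), index=False)
```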

The first section of the story map, the monthly earthquake report, is created using a live link to published data. To do this, a web map was created using the live-updating CSV from USGS. The data are collected and published every 5 minutes, so each time the story map is opened or refreshed the map will look different from the previous time. The data are displayed using the magnitude and depth of the earthquakes (Image 1). This process was repeated for the second section, the weekly earthquake report. Another feature used for this map is pop-ups: each instance displayed on the map, when clicked, opens a pop-up window that gives the user more information on that specific earthquake.


Image 1: Live data of Monthly Earthquakes

The next section introduces the user to fault lines, specifically the San Andreas Fault located in California, USA (Image 2). Using KML and KMZ files, LiDAR layers of the fault lines are displayed on a satellite image map. Data on the most tragic earthquakes, historic events with magnitude greater than 5, are also displayed on the map. Clicking on each earthquake location opens a pop-up with historical facts about that event.


Image 2: San Andreas Fault and Earthquakes

The following section of the story map regards displacement caused by earthquakes. The main screen includes a heat map of people displaced from 1980 to 2018. The side panel includes a story map slideshow, created from a simple web map with time-enabled layers. The presentation function was then used to create a time lapse of global displacement for each year from 2010 to 2017 (Image 3), and the presentation was embedded into the side panel of the displacement section of the story map.


Image 3: Presentation timelapse

The displacement vs. population slide includes a swipe-and-slide story map embedded in the main story map. This global map swipes between two layers: global displacement due to natural disasters in 2017 and world population in 2017 (Image 4). The swipe-and-slide map allows the user to easily compare the two layers side by side.


Image 4: Swipe and Slide Map

The last section uses a story map shortlist to display images of earthquake impacts, using geocoded images to place the photos on a map (Image 5). The main panel holds the map, with pointers indicating the coordinates of the geocoded images, while the side panel displays the images. When an image is clicked, it pops up over the map along with information about the image and the earthquake.


Image 5: Geocoded image and reference map

The purpose of this technology is to easily display spatial data for individuals who are unfamiliar with other Esri products. Story maps are an easy and interactive way for users of all backgrounds of knowledge to interact with spatial maps and learn new information on a topic.

Limitations and Summary:

The final project is a story map journal with easy navigation and an educational purpose. In the future I would like to incorporate even more of Esri’s online functions and applications to expand my understanding. A limitation of the project is the amount of ArcGIS Online space allotted to each student at Ryerson: some processing functions were unavailable or used too much space, so these procedures had to be run in ArcGIS Pro and the final products imported into ArcGIS Online. Overall, the project achieved its purpose of providing contextual information on the spatial impacts of earthquakes.

Earthquake Browser Web Application for the Country of Iran

Author: Atanaz Dorrani Arab
GeoVis Project Assignment @ RyersonGeo, SA 8905, Fall 2018

The website of the Iranian Seismological Center (ISC) is popular not only among geologists, geophysicists, and seismotectonics specialists, but also among the general public. When an earthquake happens, people who felt the temblor want to know its location and magnitude as quickly as possible, and the ISC website is the quickest way to obtain information on the latest earthquake. It gives the time and location of earthquakes and visualizes their spatial distribution across the country. However, the website is not user-friendly: it provides date, time, and location for only a limited number of earthquakes and maps earthquake locations for a period of only 24 hours.

The Earthquake Browser is an effort to make a user-friendly web app, which is easy to use and can serve both the general public and seismotectonics specialists.

The web app is designed to have three main sections:

  • A section to display the spatial distribution of the earthquakes on map
  • A section that provides a list of earthquakes with detailed information about each one, including magnitude, location, longitude, latitude, and time.
  • A section that visualizes the frequency distribution of earthquakes by year in the form of a vertical bar chart. This part is more beneficial to specialists, as the chart can give general information about the seismic structure of the earth within the political borders of Iran.

To use the app, users enter a time period for which they want information about the earthquakes that occurred. They can also narrow their search by selecting a magnitude range.

The earthquake data are retrieved online from the U.S. Geological Survey (USGS) website each time a user selects a time period and a magnitude range.

HTML, JavaScript and CSS are the markup, programming and style languages that are used to create the earthquake browser web application. HTML is used to describe the structure of information on the web application; CSS is used for controlling the appearance; and the functionality is programmed by JavaScript. Four libraries were utilized in designing this web application:

  • Mapbox GL JS, used as the web mapping tool because of its variety of visually pleasing map designs.
  • D3.js, used for the dynamic data visualization in the chart section of the app.
  • jQuery, used for sending Ajax requests to the USGS server.
  • Font Awesome, used for adding the icon fonts displayed in the sidebar of the web app. Icon fonts are fonts that contain symbols and glyphs instead of letters; they give the web app a more graphic design.

 

Welcome to the app

When a user opens the web application in the browser, a map centred on Iran’s political border is displayed. A sidebar with three icon fonts on the left side of the app lets the user navigate to the different sections of the web application; each icon represents one of the web app’s sections and its functionality.

The hamburger menu directs users to the place where the time period and magnitude range should be entered. The date-time fields default to the current local date-time of the user’s time zone. The web app also asks users not to enter dates prior to January 01, 1900, since there are no instrumental records of earthquakes before that year.

Users may make mistakes when filling in the fields of this section, such as entering a start date later than the end date, a maximum magnitude smaller than the minimum magnitude, a date later than the current date-time, or a date prior to January 01, 1900.

When a user submits a request containing any of these mistakes, an alert with an appropriate message corresponding to the specific error pops up and warns the user to correct it. In addition to the alerts, when a user inputs a date-time that exceeds the current date-time or is prior to 1900, the box around the date-time field changes from a solid black line to a dashed orange line to signal that something is not right before the request is submitted. The figure below shows examples of these alerts and how they look on screen.

Another alert notifies the user when no earthquakes occurred for the selected time period and magnitude range; it pops up when no data is received from the USGS server.

USGS provides a web API for accessing its earthquake data and sending requests to its server. When a user fills out the date-time and magnitude fields, a URL based on the parameters defined in the USGS API documentation and the values entered by the user is created and sent to the USGS server.

In that URL request, an area covering Iran needs to be defined so that USGS only sends data for earthquakes that occurred there. The area is defined by a rectangular extent, a bounding box given by four latitude and longitude values, that contains the country of Iran. Although the earthquake data USGS returns includes country names in the location parameter, there is no way to narrow the URL request down by country name. Thus, the data received from USGS also contains earthquakes that occurred outside the country’s borders.

After the data are received from the USGS server in GeoJSON format, they are filtered by the country name in the location parameter: earthquakes that do not include the name “Iran” in their location are removed from the data.
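The app performs this request and filtering with jQuery and JavaScript; the hedged Python sketch below illustrates the same logic using the public USGS FDSN event API, with an approximate bounding box for Iran:

```python
# Illustration only: the web app does this with jQuery/JavaScript.
# Parameter names follow the public USGS FDSN event API; the bounding-box
# values are an approximate rectangle around Iran.
import requests

params = {
    "format": "geojson",
    "starttime": "2018-01-01",
    "endtime": "2018-11-01",
    "minmagnitude": 4.0,
    "maxmagnitude": 8.0,
    "minlatitude": 24.0,   # approximate extent containing Iran
    "maxlatitude": 40.5,
    "minlongitude": 43.5,
    "maxlongitude": 64.0,
}
response = requests.get("https://earthquake.usgs.gov/fdsnws/event/1/query",
                        params=params)
features = response.json()["features"]

# The bounding box also catches neighbouring countries, so keep only events
# whose place name mentions Iran.
iran_quakes = [f for f in features
               if "Iran" in (f["properties"]["place"] or "")]
print(len(iran_quakes), "earthquakes in Iran for the selected period")
```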

The user can then view the spatial distribution of the earthquakes on the map based on the filtered data and zoom in to places where many events occurred. A legend is displayed in the bottom right corner of the map to give the reader an idea of the earthquakes’ magnitudes. A colour is assigned to each whole-number magnitude (the magnitudes range from 2 to 8); the colour selection is inspired by the USGS colour scheme for earthquake magnitudes. The colour of each point changes with its magnitude within the selected colour range, and earthquakes with greater magnitudes also have larger points, which makes the larger, more important earthquakes more visible. The figure below shows the map after the earthquake data have been received, filtered, and displayed.

The exact magnitude of an earthquake can be found by clicking on its point on the map. In addition to the magnitude, the user can find a row number that corresponds to the row number in the earthquake list section.

By going to the list section, the user can find extra information about that exact earthquake. The list is also useful when the user is simply looking for the latest earthquakes and wants their exact locations and magnitudes.

In the chart section of the app, a vertical bar chart shows the frequency of events by year for the selected time period and magnitude range. To make this chart, the earthquake data received from USGS are categorized by year: the years are displayed on the X axis and the count of earthquakes in each year on the Y axis. When the user hovers over a bar, the exact number of earthquakes in that year is displayed, which makes the chart more readable when the X axis covers a wide range of years.
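Continuing the illustrative Python sketch above (the app itself computes these counts with D3.js), the per-year totals behind the chart can be derived from the USGS "time" property, which is given in milliseconds since the Unix epoch:

```python
# Illustration only: count earthquakes per year from filtered USGS GeoJSON
# features (the app computes these counts with D3.js).
from collections import Counter
from datetime import datetime, timezone

def counts_by_year(features):
    """Return a Counter mapping year -> number of earthquakes."""
    return Counter(
        datetime.fromtimestamp(f["properties"]["time"] / 1000,
                               tz=timezone.utc).year
        for f in features
    )

# Example usage with the iran_quakes list from the previous sketch:
# for year, count in sorted(counts_by_year(iran_quakes).items()):
#     print(year, count)
```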

ArcPro Animation of 1923 Canoe Trip in Algonquin Park

By Sarah Medland

Geovis Course Project @RyersonGeo, SA8905, Fall 2018

Context

While searching the web for historic maps to inspire this project, I came across the personal website of Bob and Diane McElroy. Their website includes an extensive personal collection of present and historic records of the natural environment within the Ottawa Valley and Algonquin Park. The collection of thoughts and logs on their site includes those of their ancestors, dating back many decades. The following map is a section of the one chosen for this assignment; it dates back to 1921:

In July of 1923, a group of four men led by a guide embarked on a 12-day canoe trip, creating a log of their route as they traveled. The map log included handwritten details by W. H. McConnell about wildlife, weather, and their experience in the Park.

Purpose:

To animate an artistic rendering of a historic canoeing route which:

  • brings to life a historic map by integrating it with modern GIS technology
  • reveals information from approximately a hundred years prior about an ever-popular canoeing area

Methods

To begin, the map was downloaded as a JPEG and brought into ArcMap, and a DMTI Spatial minor water bodies shapefile was added. Using this present-day layer, labelled by lake name, it was fairly easy to align the lakes with those on the historic map. Some challenges arose because the map is from 1921 and its accuracy is questionable; however, I was able to georeference the map fairly well.

Historic Map in ArcMap where it was georeferenced to a present-day water bodies layer

Next, DEM tiles were downloaded from Scholars GeoPortal. These were converted into TINs using the Raster to TIN tool in ArcMap, and then into TIN nodes using the TIN Node tool. This allowed the tiles to be combined into one continuous TIN using the Create TIN tool, which could then be clipped to the extent of the map surface. Once the elevation surface was made, the map could be given height.
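A hedged arcpy sketch of that DEM-to-TIN sequence is shown below; the tile names and paths are placeholders, not the project's actual data, and the 3D Analyst extension is required:

```python
# Sketch (not the author's original steps verbatim) of the DEM-to-TIN workflow:
# raster tiles -> individual TINs -> TIN nodes -> one combined TIN.
# Paths and tile names are placeholders; requires the 3D Analyst extension.
import arcpy

arcpy.CheckOutExtension("3D")
arcpy.env.workspace = r"C:\data\algonquin"           # hypothetical workspace

node_features = []
for tile in ["dem_tile_1.tif", "dem_tile_2.tif"]:    # placeholder DEM tiles
    tin = tile.replace(".tif", "_tin")
    arcpy.RasterTin_3d(tile, tin)                    # DEM raster -> TIN
    nodes = tile.replace(".tif", "_nodes.shp")
    arcpy.TinNode_3d(tin, nodes)                     # TIN -> point nodes
    node_features.append("{} Shape.Z Mass_Points".format(nodes))

# Combine the node points from every tile into one continuous TIN
arcpy.CreateTin_3d("algonquin_tin", in_features=";".join(node_features))
```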

The map surface after it was draped over an elevated TIN surface and atmospheric effects were applied

To visualize the canoe route, a line shapefile was created over the route drawn on the map. Campsites were also added as a point shapefile that included a ‘Date’ field in the attribute table. In an ArcGIS Pro global scene, the map was draped over the TIN surface and the campsites were symbolized in 3D with the dates labelled.

An example of some of the original annotations on the map

Lastly, an animation following the canoe route was created in ArcGIS Pro to guide the viewer along the route of the 1923 trip; it included annotations such as those above and historic pictures from the time period.

Results: The following video is the final product:

Lexical Distance and Linguistic Diversity in the Balkans: A Network Map

By Zeljko Bavcevic

Geovis Class Project @RyersonGeo, SA8905, 2018

  1. Introduction

The purpose of this series of posts is to serve as a record of my work on the SA8905 geovisualization project. The broad aim of this project is to explore the complex relationship between language and geography, and how each serves as a mediating factor on the other. I have long been fascinated by how geography has an impact on language, and on populations, borders, and culture by proxy. Specifically, I was hoping to understand how changes in the traversability of landscapes would impact migrations of people and thus impact language.

Over the course of the research phase of this project, I realized that covering all of the languages of the world would require an amount of data collection and cleaning considerably beyond the temporal scope of a single class. As such, I narrowed my goal to examining only the geography of language in the Balkan Peninsula, and how the languages there relate to one another in terms of linguistic diversity, lexical distance, and speaker population.

  2. Research

The first stage in this process required finding data to operationalize my target variables. This proved more difficult than I had first expected for a number of reasons. Firstly, there is no single, global set of agreed-upon variables for understanding language. Instead there are a number of competing variable sets, each maintained by a different organization (with very different incentives).

Eventually I narrowed my focus to three primary linguistic variables for visualization:

  1. Speaker Population: The number of individuals in the world estimated to speak a certain language. The value is largely based on a projection.

 

  2. Lexical Distance: A linguistic variable measuring the conceptual distance between languages by comparing each along a number of criteria such as common words, verb formations, and other comparative measures.

 

  3. Linguistic Diversity: An index score that measures the different types of languages, dialects, or variations spoken within the regions of the primary language.

 

  3. Data

The data for these variables are generated and maintained by two primary organizations, SIL International and UNESCO. SIL International compiles data on a number of relevant linguistic variables for sale to organizations, while UNESCO is an international non-profit organization. The two datasets are very different in their methodologies and as such cannot be combined or used in conjunction; for this reason, I elected to use only one of them. Although the UNESCO data was free, the format it was kept in would have required a laborious process of cleaning and transformation before it could easily be used in my model. As such, I reached out to SIL International for a quote and acquired the data I needed. It must be noted, for the purposes of transparency, that SIL is a Christian organization and there have been several concerns about its methodology and incentive structure. To assess the impact of these on my outcome, I did a brief comparison between a sub-sample of the UNESCO and SIL datasets and was satisfied that the differences were within acceptable parameters.

During my research, I had found a number of illustrations of lexical distance. Most often these would take the form of node or network charts depicting the different languages (Figure 1).

Figure 1: Lexical Distance Network

However, these were all static, non-spatial, and often did not take into account other relevant linguistic variables such as linguistic diversity within a language class. As such, I wanted my own visualization to be dynamic, interactive, spatial, and to contain other relevant linguistic variables. To this end I needed to find a technology or platform that would allow me sufficient customizability and interaction. Inspired by the network or node maps I had consistently seen throughout my research phase, I knew that I wanted to build on this concept, adding a spatial and interactive component.

  4. Technology

I considered a number of options; the first and most obvious was using Python to code a network map using the Gephi platform. While pure coding would offer the most freedom and customizability, hosting the various tools I needed would prove very tedious and costly. As such, I set out in search of a hosted node or network analysis platform. After considerable research into a number of possible candidates, I opted for kumu.io.

Kumu was selected because it allowed me the freedom of coding most of my map to my specifications (on top of having a very user-friendly UI), while also hosting all of my data and tools natively. This reduced the technical “surface area” of the project, which reduces opportunities for code-breaking bugs and cross-platform communication errors. After paying the modest membership fee, I began adding my data to Kumu.

  5. Execution

The first stage of development was loading my SIL data into Kumu. This was made easy by Kumu’s data cleaning tool, which lets the user make sure all the input data meets Kumu’s formatting requirements and even allows spreadsheet documents to be changed dynamically before upload.


Kumu’s Upload Wizard

After this was complete, I created a bi-directional connection between each language (or element, in network analysis parlance). This resulted in an ugly and incomprehensible visual bundle of connections. The next stage of the process was coding the various variable symbologies and interface options (adding a search, zoom, and selection toolbar).


Kumu’s Advanced Code Based Editor

This was done using Kumu’s advanced code-based editor, and I encountered no issues during this stage. However, when I attempted to add the polygons of the various countries of the Balkan Peninsula, the map visualization would simply vanish, and I was not able to troubleshoot this with any success. As such, I ultimately had to abandon the spatial component of the project due to the constraints of time. I was still very satisfied with the resulting output.


    The Final Output

  6. Challenges

A number of challenges were encountered during the course of this project. The primary issue was that the geographic overlay failed to load; my every attempt to fix this was unsuccessful, and ultimately this radically undermined completing the project as I had conceived it at the design stage. Nonetheless, I believe that the other elements of the project satisfied the requirement of producing a novel and interesting geovisualization.

Literacy Percentages and Global Prevalence of HIV Rates

by Anwar Abu Ghosh

Geovis Project Assignment @ryersonGeo,

SA8905,  Fall 2018

 

For my geo-visualization project, I decided to use ArcGIS Online, a web-based interactive program, to visualize the data. One dataset, extracted from the World Health Organization, was a comma-delimited values (CSV) file containing the HIV prevalence rates and ranks of countries around the world. The other dataset, literacy rates across different countries, was downloaded as a CSV from the United Nations International Children’s Emergency Fund (UNICEF) website.

ArcGIS Online Definition

ArcGIS Online is a cloud-based application used to visualize and map data in a dynamic and interactive way. GIS stands for Geographic Information System, and the application works by adding different layers to create maps and visualize data. It can be used for 2D and 3D mapping; in this project I use proportional symbol mapping. It is a great collaborative web-based application with a secure infrastructure to store data, view data, add layers, manipulate data, and share maps with others. Basemaps and data layers are integrated into it, so little additional installation is required. I used the 30-day free trial of ArcGIS Online, which has limited features and storage.

ArcGIS Desktop

Due to the limited features of the ArcGIS Online free trial, many functions are disabled, such as joins, so different software was required to join the two datasets. To create a choropleth map in ArcGIS Online, the data must be provided in shapefile (SHP) format; therefore, ArcMap was used to join a country shapefile to the literacy rates CSV. The layer was first added by right-clicking ‘Layers’ in the table of contents and browsing to the country shapefile, originally downloaded from thematicmapping.org. Then right-click the added layer and select ‘Joins and Relates’, then ‘Join’. For ‘1. Choose the field in this layer that the join will be based on’, select the country column in the shapefile. For ‘2. Choose the table to join to this layer’, select the literacy CSV file. For ‘3. Choose the field in the table to base the join on’, select the country name column in the literacy data. Allow the join to retain all records and then press OK. The updated country shapefile will have the literacy columns appended to the end of its attribute table.

Now that the join is complete, some data cleanup, such as deleting unnecessary columns, can be done before exporting the shapefile. Exporting is how ArcMap saves the new shapefile: right-click the layer and select ‘Data’, then ‘Export Data’, and save the updated shapefile to the desired location. Note that to have the shapefile ready for ArcGIS Online, it should be saved in its own folder (multiple files are generated automatically when the shapefile is created) and the folder should then be zipped.
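The same join-and-export steps could also be scripted; the snippet below is a hypothetical arcpy equivalent of the right-click workflow described above (file names and the CSV column name are assumptions; NAME is the country-name field that ships with the thematicmapping.org world borders shapefile):

```python
# Hypothetical arcpy equivalent of the join-and-export steps described above.
# File names and the "Country" CSV column are placeholders.
import arcpy

arcpy.env.workspace = r"C:\data\literacy"  # hypothetical workspace

# In-memory views of the country shapefile and the literacy table
arcpy.MakeFeatureLayer_management("TM_WORLD_BORDERS-0.3.shp", "countries_lyr")
arcpy.MakeTableView_management("literacy_rates.csv", "literacy_view")

# Join on country name; KEEP_ALL retains all records, as in the dialog
arcpy.AddJoin_management("countries_lyr", "NAME",
                         "literacy_view", "Country", "KEEP_ALL")

# Export the joined layer as a new shapefile for upload to ArcGIS Online
arcpy.CopyFeatures_management("countries_lyr", "literacy_by_country.shp")
```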

ArcGIS Online Application

Adding the first layer

The first layer in this map will be the literacy data. Start by logging in to ArcGIS Online and opening the Map view from the top bar. Click ‘Add’ and select ‘Add Layer from File’, then browse to the zipped folder created in the previous step in ArcMap. To create the choropleth map, select the percentage column as the attribute to show and choose the ‘Counts and Amounts (Color)’ drawing style. Selecting blue in the symbol fill settings represents the literacy data so that a light shade indicates a low literacy rate and dark blue indicates a high literacy rate in a country.

Adding Second Layer

Select ‘Add Layer from File’ from the Add menu and browse to the CSV file stored on the computer. Several options will appear for how the file should be imported; use the locate-features option and select ‘World’ from the drop-down. Match the country column in the HIV data to the country field in ArcGIS Online’s preexisting data. The data will then be imported into the application as points. Create a proportional symbol map by sizing the symbols on the HIV rates. The colour used for the data points is red, since the red ribbon is an awareness symbol for HIV/AIDS.

Since many colours and shades are used in this interactive map, a simple light grey canvas was used as a basemap to make the colours and symbols more visible to the reader. The legend can be found on the left-hand side of the application, right under the toolbar, and consists of two components: the proportional symbol layer for the HIV/AIDS data, where symbol size refers to the HIV/AIDS rate, and the literacy layer, where the shade of blue refers to a country’s literacy rate.

 

Feel free to take a look at the map through this link https://arcg.is/1uPu14

References

http://thematicmapping.org/downloads/world_borders.php

https://data.unicef.org/topic/education/literacy/

http://apps.who.int/gho/data/node.main.617?lang=en

Surfer 15 Whistler-Blackcomb Geovisualization Using Data Retrieved From Google Earth

By: Ryan Wilkinson

Geovisualization Project Assignment, @RyersonGEO, SA8905, Fall 2018

 

In this project, a 3D surface map of Whistler-Blackcomb in British Columbia was created using XYZ data retrieved from Google Earth and the geovisualization software program Surfer 15. Surfer is an excellent geovisualization program capable of creating 2D contour maps and 3D surface maps from XYZ and DEM data. The following method works for any terrain location in the world that can be viewed in Google Earth and is certainly not limited to my chosen location.

Collection of Data from Google Earth:

  

The path tool in Google Earth was used to drop points on the Whistler-Blackcomb area; each red square represents a point with corresponding latitude, longitude, and elevation values.

The image above shows the trace of the path drawn to collect the XYZ data from the Whistler area needed to create an accurate 3D surface map in Surfer.

Once the desired path was drawn, it was saved under “My Places” in Google Earth as a .kml file.

Data Conversion:

The .kml file was then uploaded into TCX Converter. Altitude values are commonly missing at this stage, so TCX Converter’s “Update Altitude” tool was used to add them. Once the altitudes were successfully calculated, TCX Converter converted the file from .KML to .CSV in preparation for visualization in Surfer.
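The author used TCX Converter for this step; purely for illustration, the KML-to-CSV part of the conversion could also be scripted as in the sketch below (file names are placeholders, and a path drawn in Google Earth often stores altitudes of 0 until they are updated):

```python
# Illustration only (the author used TCX Converter): read the coordinates of a
# Google Earth path from a .kml file and write them out as an XYZ .csv.
# File names are placeholders; altitudes may be 0 until they are updated.
import csv
import xml.etree.ElementTree as ET

ns = {"kml": "http://www.opengis.net/kml/2.2"}
tree = ET.parse("whistler_path.kml")                     # placeholder file
coords = tree.find(".//kml:LineString/kml:coordinates", ns).text

with open("whistler_xyz.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["longitude", "latitude", "elevation"])
    for triple in coords.split():                        # "lon,lat,alt" triples
        lon, lat, *rest = triple.split(",")
        writer.writerow([lon, lat, rest[0] if rest else ""])
```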

 

Grid File and 3D Surface Map Creation:

The .CSV file was then uploaded into Surfer’s Grid Data tool, which creates grid files (.grd) from XYZ and DEM data. Grid files can be used to create 2D contour maps and 3D surface maps in Surfer.

The grid file was then used by the 3D Surface tool to create a 3D surface map of the Whistler area. Colour scales and variations can easily be changed in Surfer to achieve the desired effect and convey information the way the user chooses. The colour scheme above is called “terrain” and effectively visualizes elevation change. The model can also be rotated and viewed from any desired angle in Surfer using the “trackball” tool; multiple angles of the 3D surface map can be seen in the finished product at the beginning of this blog post.

 

 

 

Table-Top AR – Explore New York City in Your Living Room

By: Ben Simak

Geovisualization Class Project @RyersonGeo, SA8905, Fall 2018

Introduction:

For my geo-visualization project I decided to create a map that takes advantage of new interactive technologies. Augmented reality (AR) has been around since the 1990s and has been growing in use and capability. It was originally developed and implemented in Air Force research and then heavily developed in the gaming industry as a new way to interact with our surroundings and immerse the viewer on a new level. The same technology has since migrated into other industry applications such as education and navigation.

The type of augmented reality that I decided to use is called “Table Top” augmented reality. It essentially allows the camera on the device you are using to find a flat surface and showcase a 2D or 3D model or rendering on that surface. You are then able to anchor it and walk around the model to see all the sides and get closer or further away as if it was actually there.

I utilized Mapbox, Unity, ARKit, and Xcode to create an app that lets you use an iPhone’s camera to render a 3D scrollable model of New York City (and anywhere else in the world you want to scroll to). Mapbox provides the building and elevation models for building heights as well as terrain through its SDK. Unity is the platform that pulls together the Mapbox SDK, generates the augmented world, and allows you to alter how the map looks and set its starting point; it is where the majority of the app components get bundled before processing. ARKit provides the code that enables the app to run on the iPhone and use its camera. Xcode is the final step that takes the bundled Unity project and generates an app that can be opened on your personal iPhone.

How I Built the App:

*Requires Mac OSX and an iPhone (Android variant also available)

Step 1: Download MapBox SDK for Unity from https://www.mapbox.com/unity/.

Step 2: Download Unity (For Personal Use Free) https://store.unity.com/download?ref=personal

Step 3: Download Xcode on the Mac App Store https://itunes.apple.com/ca/app/xcode/id497799835?mt=12

Step 4: Open Unity and create a new 3-D project

Step 5: Go to Assets menu and go to import package and click custom package…

Navigate to your downloaded Mapbox SDK and click Open. After it loads, click Import All and wait for it to finish. A Mapbox setup window should open that looks like the one below.

Step 6: Click the mapbox.com link highlighted in blue. It will take you to the Mapbox website to generate your access token. Click the copy button and then go back to your Unity Project and paste it in the field and click submit.

Step 7: Some layers needed to run this scene aren’t included by default in a Unity project. Select ARTabletopKit in the Hierarchy view, then add the following layers by clicking Layer and selecting Add Layer.

Specify the following layers:

  • ARGameObject
  • Map
  • Path
  • Both

Step 8: Click on MapRoot and find the Latitude Longitude settings in the GENERAL settings of the Abstract Map script. Click Search and enter the coordinates of anywhere in the world. For my example, I used New York City.

Step 9: Now when you press the Play button at the top center of the Unity window you should see a rendering appear on the Game tab

Step 10: Go to Edit, Project Settings, Player. Make sure you fill in the Camera Usage Description and Location Usage Description fields with the following details.

 

Step 11: Go to File, Build Settings and open the window seen below. Click iOS (or Android if you are building for an Android device) and click Switch Platform. Make sure you click Add Open Scenes, check the Table Top AR scene, and uncheck Scene/main. Then click Build and save it wherever you want.

Step 12: Open Xcode and plug in your device. Before we can build to a device, we need to set up an Apple ID and add it to Xcode. Once you have obtained an Apple ID, you must add it to Xcode.

  • Open Xcode.
  • From the menu bar at the top of the screen choose Xcode > Preferences. This will open the Preferences window.
  • Choose Accounts at the top of the window to display information about the Apple IDs that have been added to Xcode.
  • To add your Apple ID, click the plus sign at the bottom left corner and choose Add Apple ID.

  • A popup will appear, requesting your Apple ID and password. Enter these.
  • Your Apple ID will then appear in the list. Select your Apple ID to see more information about it.
  • Under the heading Team, you will see a list of all Apple Developer Program teams that you are a part of. If you’re using a free Apple ID that isn’t enrolled in the Apple Developer Program, you will see your name followed by “(Personal Team)”.

Step 13: Go back to that original Unity file that you saved after pressing the build button. Double click the .xcodeproj file to open the project with Xcode.

  • In the top left, select Unity-iPhone to view the project settings. It will open with the General tab selected.
  • Under the topmost section called Identity, you may see a warning and a button that says Fix Issue. This warning doesn’t mean we’ve done anything wrong – it just means that Xcode needs to download or create some files for code signing.
  • Click the Fix Issue button.
  • Make sure that the correct team is shown in the dropdown – if you’re using a free Apple ID, it should be your name followed by “(Personal Team)”.

Step 14: Make sure the bundle identifier seen above is not a default name. If it is, just change it to whatever you want.

Step 15: Click the play button and make sure your iPhone is still connected. The device must stay on and not lock during this process.

Step 16: Once completed you will see the app on your iPhone and you can open it and point at a flat surface and watch the magic happen!

Telling a Story through a Time-series animation using Open Data

By: Brian Truong.

GeoVis Project @RyersonGEO SA8905, Fall 2018

Context 

As a student and photographer, I have frequently walked the streets of Toronto and would often see homeless individuals in certain neighbourhoods. While at the time I was aware of some shelters across Toronto, I never fully understood the Toronto shelter system, as I thought organizations in Toronto were one and the same in terms of providing shelter to those at risk. I also noticed that the City of Toronto updates its shelter occupancy data on a more or less daily basis, which led me to choose this topic for my GeoVis project. My lack of knowledge of the shelter system and the readily available data motivated me to make a time-series map and incorporate Esri’s Story Maps into the project, so that whoever viewed my project could be told the story of Toronto shelters as well.

Process

Toronto Open Data provides shelter occupancy data in multiple formats; the JSON format was chosen due to previous experience working with JSON data in Alteryx. The JSON data was provided through a link from Toronto Open Data. Using Alteryx, a script was created to download the live(ish) data, parse it, put it in a proper format, filter it, and create the columns needed to work with the data.

Above is an example of the JSON data that was used; the data is semi-structured, organized in a specific format. The data contained multiple entries per shelter location; these were filtered so that only one organizational program was kept for each shelter location, usually the program that housed the largest number of people. For a proper time series to be created, a date/time column must be present, so columns such as date/time and occupancy rate were created using the formula tool.

Above is the final Alteryx workflow used to get the data from JSON to .xlsx format. However, there was one problem: the data wasn’t geocoded, so I had to manually geocode each shelter location by running its address through Google Maps and copying the x and y values. These coordinates were then put into the same file as the output of the Alteryx script, in a different sheet, and shelters were assigned their coordinates with VLOOKUP by matching shelter names.
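The parsing and filtering were done in Alteryx; the sketch below shows roughly equivalent logic in Python, purely for illustration. The URL, column names, and sheet layout are placeholders rather than the exact Toronto Open Data schema.

```python
# Rough Python illustration of the Alteryx workflow described above.
# The URL and the column names are placeholders, not the exact Toronto
# Open Data schema.
import pandas as pd
import requests

URL = "https://example.org/daily-shelter-occupancy.json"   # placeholder link
records = requests.get(URL).json()
df = pd.json_normalize(records)

# Keep one organizational program per shelter per day: the program housing
# the largest number of people.
df = (df.sort_values("OCCUPANCY", ascending=False)
        .drop_duplicates(subset=["SHELTER_NAME", "OCCUPANCY_DATE"]))

# Build the date/time and occupancy-rate columns used for the time series.
df["datetime"] = pd.to_datetime(df["OCCUPANCY_DATE"])
df["occupancy_rate"] = df["OCCUPANCY"] / df["CAPACITY"]

# Attach the manually geocoded coordinates (the VLOOKUP step), then export.
coords = pd.read_excel("shelter_coordinates.xlsx")  # SHELTER_NAME, X, Y columns
df = df.merge(coords, on="SHELTER_NAME", how="left")
df.to_excel("shelter_occupancy.xlsx", index=False)
```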

Time Series Map

The time series portion of this GeoVis project was created using ArcGIS Pro: the Excel file was brought into ArcGIS Pro and points were created from the x and y coordinates. A shapefile was created, and to build a time series map, time had to be enabled on the layer. Below are the steps needed to enable time on a shapefile.

To actually enable time, a time column must already exist in the format dd/mm/yyyy HH:MM. From that point, the change in shelter occupancy could be viewed through a time slider running at any interval the user requires; for this project, it advanced day by day on a three-minute loop. To capture and export it as a video, the animation function was required, and within the Animation tab the Append tool was used.

The Append feature follows the time series map from the first frame, on the first day of the time series (Jan 1, 2018 00:00), to the last day (Nov 11, 2018 00:00). The animation is then created according to the specified settings. The video itself was exported as a 480p video at 15 frames per second, then uploaded to YouTube and embedded in the story map.

ESRI Story Maps

The decision to use Esri’s Story Maps was partly due to what motivated me to work on this: I wanted to tell the story of shelters, who they serve, and who is affected by them, especially after two major events in the past year that put Toronto shelters in the news. Both the cold snap in early 2018 and the large influx of migrants have had a huge effect on Toronto’s shelters.