Visual Story of GHG Emissions in Canada

By Sharon Seilman, Ryerson University
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

Background

Topic: 

An evaluation of annual Greenhouse Gas (GHG) emissions changes in Canada, with an in-depth analysis of which provinces/territories contribute most of the GHG emissions within national and regional geographies, as well as by economic sector.

  • The timeline for this analysis was from 1990-2015
  • Main data sources: Government of Canada Greenhouse Gas Emissions Inventory and Statistics Canada
Why? 

Greenhouse gases are compounds in the atmosphere that absorb infrared radiation, trapping and holding heat in the atmosphere. By increasing the heat in the atmosphere, greenhouse gases are responsible for the greenhouse effect, which ultimately leads to global climate change. GHG emissions are monitored in three respects: their abundance in the atmosphere, how long they stay in the atmosphere, and their global warming potential.

Audience: 

Government organizations, Environmental NGOs, Members of the public

Technology

An informative website was created with Webflow to visually tell the story of the annual emissions changes in Canada, the spread of those emissions and their expected trajectory. Webflow is a software as a service (SaaS) application that allows designers/users to build responsive websites without significant coding requirements. While the designer is creating the page in the front end, Webflow automatically generates HTML, CSS and JavaScript on the back end. Figure 1 below shows the user interface of Webflow in the editing process. All of the content used in the website was created externally, prior to integrating it into the website.

Figure 1: Webflow Editing Interface
The website: 

The website itself was designed in a user-friendly manner that enables users to follow the story quite easily. As seen in Figure 2, the information starts at a high level and gradually narrows down (national level, national trajectory, regional level and economic sector breakdown), thus guiding the audience towards the final findings and discussions. The maps and graphs used in the website were created from raw data with the use of various software, as elaborated in the next section.

Figure 2: Website created with the use of Webflow
Check out Canada’s GHG emissions story HERE!

Method

Below are the steps that were undertaken for the creation of this website. Figure 3 shows a breakdown of these steps, which are further elaborated below.

Figure 3:  Project Process
  1. Understanding the Topic:
    • Prior to beginning the process of creating a website, it is essential to evaluate and understand the topic overall to undertake the best approach to visualizing the data and content.
    • Evaluate the audience that the website would be geared towards and visualize the most suitable process to represent the chosen topic.
    • For this particular topic of understanding GHG emissions in Canada, Webflow was chosen because it allows the audience to interact with the website in a manner similar to a story, providing the content in a visually appealing and user-friendly manner.
  2. Data Collection:
    • For this analysis, the main data source used was the Greenhouse Gas Inventory from the Government of Canada (Environment and Climate Change Canada). The inventory provided raw values that could be mapped and analyzed across various geographies and sectors. Figure 4 shows an example of what the data looks like at a national scale, prior to being extracted. Similarly, data is also provided at a regional scale and by economic sector.

      Figure 4: Raw GHG Values Table from the Inventory
    • The second source for this visualization was the geographic boundaries. The geographic boundary shapefiles for Canada, at both a national and a regional scale, were obtained from Statistics Canada. Additionally, a rivers (lines) shapefile from Statistics Canada was also used to include water bodies in the maps that were created.
      • When downloading the files from Statistics Canada, the ArcGIS (.shp) format was chosen.
  3. Analysis:
    • Prior to undertaking any of the analysis, the data from the inventory report needed to be extracted to Excel. For the purpose of this analysis, national, regional and economic sector data were extracted from the report to Excel sheets:
      • National -from 1990 to 2015, annually
      • Regional -by province/territory from 1990 to 2015, annually
      • Economic Sector -by sector from 1990 to 2015, annually
    • Graphs:
      • Trend -after extracting the national level data from the inventory, a line graph was created in Excel with an added trendline. This graph shows the total emissions in Canada from 1990 to 2015 and the expected trajectory of emissions for the upcoming five years. In this particular graph, it is evident that the emissions show an increasing trajectory. Check out the trend graph here!
      • Economic Sector -similar to the trend graph, the economic sector annual data was extracted from the inventory to Excel. With the available data, a stacked bar graph was created for 1990 to 2015. This graph shows the breakdown of emissions by sector in Canada as well as the fluctuations of emissions within the sectors. It helps identify which sectors contribute the most and in which years those sectors saw a significant increase or decrease. With this graph, further analysis could be undertaken to understand what changes may have occurred in certain years to create such variation. Check out the economic sector graph here!
    •  Maps:
      • National map -the national map animation was created with the use of ArcMap and an online GIF maker. After the data was extracted to Excel, it was saved as a .csv file and uploaded to ArcMap. With the use of ArcMap, sixteen individual maps were made to visualize the varied emissions from 1990 to 2015. The provincial and territorial shapefile was dissolved using the ArcMap Dissolve tool (from ArcToolbox) to obtain a boundary file at a national scale (aligned with the regional boundary for the next map). Then, the uploaded table was joined to the boundary file (with the use of the table join feature). Both the dissolved national boundary shapefile and the river shapefile were used for this process, with the data that was initially exported from the inventory for national emissions. Each map was then exported as a .jpeg image and uploaded to the GIF maker, to create the animation shown in the website. With this visualization, the viewer can see the variation of emissions throughout the years in Canada. Check out the national animation map here!
      •  Regional map -similar to the national one, the regional map animation was created using the same process. However, for the regional emissions, data was only available for three years (1990, 2005 and 2015). The extracted data .csv file was uploaded and table-joined to the provinces and territories shapefile (undissolved), to create three choropleth maps. The three maps were then exported as .jpeg images and uploaded to the GIF maker to create the regional animation. From this animation, the viewer can distinctly see which regions in Canada have increased, decreased or remained the same in their emissions. Check out the regional animation map here!
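The trendline on the national graph is an ordinary least-squares line, and projecting it forward gives the five-year trajectory. The same idea can be sketched in Python; the emission values below are placeholders, not figures from the GHG inventory:

```python
# Least-squares linear trendline, similar in spirit to the Excel trendline
# on the national emissions graph. The emission values are made-up
# placeholders, not figures from the GHG inventory.

def linear_fit(xs, ys):
    """Return slope and intercept of the ordinary least-squares line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [1990, 1995, 2000, 2005, 2010, 2015]   # observation years
emissions = [600, 640, 720, 730, 690, 720]     # Mt CO2e, placeholder values

slope, intercept = linear_fit(years, emissions)

# Project the fitted line five years past the last observation,
# mirroring the "expected trajectory" shown on the website.
projection = {year: slope * year + intercept for year in range(2016, 2021)}
```

With an upward slope, the projected values keep rising over the five-year window, which is the increasing trajectory described above.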
  4. Final output/maps:
    • The graphs and maps discussed above were exported as images and GIFs to integrate into the website. By evaluating the varied visualizations, various conclusions were drawn about the current status of Canada as a nation with regard to its GHG emissions. Additional research was done to assess the targets and policies currently in place for GHG emissions reductions.
  5. Design and Context:
    • Once the final outputs and maps were created, and the content was drafted, Webflow's upload media tool made it easy to bring the external content into the site. The content was then organized with the graphs and maps to show a sequential evaluation of the topic.
    • For the purpose of this website, an introductory statement introduces the content discussed and Canada's place in the realm of global emissions. The emissions are then evaluated first at a national scale with the visual animation, followed by the national trend, the regional animation and finally, the economic sector breakdown. Each section has associated content and a description explaining what is shown by the visual.
    • The Learn More and Data Source buttons in the website include direct links to Government of Canada website about Canada’s emissions and the GHG inventory itself.
    • The concluding statement provides the viewer with an overall understanding of Canada’s status in GHG emissions from 1990 to 2015.
    • All of the font formatting and organizing of the content was done within the Webflow interface with the end user in mind.
  6. Webflow:
    • This particular format was chosen for the website because of its storytelling element. Giving the viewer the option to scroll through the page and read its contents works like a story, which suits the website's informative purpose.

Lessons Learned: 

  • While this website is informative, it could be further enhanced through the integration of an interactive map, with the use of additional coding. This, however, would require creating the website outside of the Webflow interface.
  • The analysis could also be further advanced with the addition of municipal emissions values and policies (which were not available in the inventory itself).

Overall, the use of Webflow for the creation of this website provides users with the flexibility to integrate various components and visualizations. The user-friendly interface enables users with minimal coding knowledge to create a website that could serve various purposes.

Thank you for reading. Hope you enjoyed this post!

Visualizing Urban Land Use Growth in Greater São Paulo

By: Kevin Miudo

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

https://www.youtube.com/watch?v=Il6nINBqNYw&feature=youtu.be

Introduction

In this online development blog for my map animation, I discuss the steps involved in producing my final geovisualization product, which can be viewed in the embedded YouTube link above. It is my hope that you, the reader, learn something new about GIS technologies and can apply some of the knowledge contained within this blog to your own projects. Prior to discussing the technical aspects of the map animation's development, I would like to provide some context behind its creation.

Cities within developing nations are experiencing urban growth at a rapid rate. Both population and sprawl are increasing at unpredictable rates, with consequences for environmental health and sustainability. In order to explore this topic, I have chosen to create a time series map animation visualizing the growth of urban land use in a developing city within the Global South: São Paulo, Brazil. São Paulo has been undergoing rapid urban growth over the last 20 years. This increase in population and urban sprawl has significant consequences for climate change, and as such it is important to understand the spatial trend of growth in developing cities that do not yet have the same level of controls and policies regarding environmental sustainability and urban planning. A map animation visualizing not only the extent of urban growth, but when and where sprawl occurs, can help the general public get an idea of how developing cities grow.

Data Collection

In-depth searches of online open data catalogues for vector-based land use data yielded few results. In the absence of detailed, well-collected and precise land use data for São Paulo, I chose to analyze urban growth through remote sensing. Imagery from Landsat satellites was collected and further processed in PCI Geomatica and ArcGIS Pro for land use classification.

Data collection involved the use of open data repositories. In particular, free remotely sensed imagery from Landsat 4, 5, 7 and 8 can be publicly accessed through the United States Geological Survey Earth Explorer web page. This open data portal allows the public to collect imagery from a variety of satellite platforms, at varying data levels. As this project aims to view land use change over time, imagery was selected at data type Level-1 for Landsat 4-5 Thematic Mapper and Landsat 8 OLI/TIRS. Imagery selected had to have less than 10% cloud cover, and had to be taken during the daytime so that spectral values would remain consistent across each unsupervised image classification.

Landsat 4-5 imagery at 30 m spatial resolution was used for the years between 2004 and 2010. Landsat-7 imagery at 15 m panchromatic resolution was excluded from the search criteria: in 2003 the scan-line corrector of Landsat-7 failed, making many of its images unsuitable for precise land use analysis. Landsat 8 imagery was collected for 2014 and 2017. All images were downloaded at the Level-1 GeoTIFF Data Product level. In total, seven images were collected, for years 2004, 2006, 2007, 2008, 2010, 2014 and 2017.

Data Processing

Imagery at the Level-1 GeoTIFF Data Product level contains a .tif file for each image band produced by Landsat 4-5 and Landsat-8. In order to analyze land use, the image data must be combined into a single .tiff. PCI Geomatica remote sensing software was employed for this process. Using the File->Utility->Translate command within the software, the user can create a new image based on one of the image bands from the Landsat imagery.

For this project, I selected the first spectral band from the Landsat 4-5 Thematic Mapper images, and then sequentially added bands 2, 3, 4, 5 and 7 to complete the final .tiff image for that year. Band 6 is skipped as it is the thermal band at 120 m spatial resolution, and is not necessary for land use classification. This process was repeated for each Landsat 4-5 image. Similarly, for the 2014 and 2017 Landsat-8 images, bands 2-7 were included in the same manner, and a combined image was produced for each of those years.
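The band bookkeeping above can be sketched in Python. The per-band grids here are tiny placeholders standing in for the .tif band files, and the stacking helper is a hypothetical illustration, not the PCI Geomatica Translate command:

```python
# Band selection used when combining Landsat scenes into one composite:
# Landsat 4-5 TM bands 1-5 and 7 (band 6, the 120 m thermal band, is
# skipped), and Landsat 8 OLI bands 2-7. The "rasters" are tiny
# placeholder grids standing in for the per-band .tif files.

TM_BANDS = [1, 2, 3, 4, 5, 7]    # Landsat 4-5 Thematic Mapper
OLI_BANDS = [2, 3, 4, 5, 6, 7]   # Landsat 8 OLI

def stack_bands(band_rasters, band_order):
    """Stack per-band grids into one multi-band composite (band, row, col)."""
    missing = [b for b in band_order if b not in band_rasters]
    if missing:
        raise ValueError(f"missing bands: {missing}")
    return [band_rasters[b] for b in band_order]

# Placeholder 2x2 grids keyed by band number for one Landsat 4-5 scene.
scene = {b: [[b, b], [b, b]] for b in [1, 2, 3, 4, 5, 6, 7]}
composite = stack_bands(scene, TM_BANDS)
```

The same helper handles the Landsat 8 scenes by passing `OLI_BANDS` instead.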

Each combined raster image contained more data than required to analyze the urban extent of São Paulo, so the full extent of each image was clipped. When doing your own map animation project, you may also wish to clip data to your study area, as it is very common for raw imagery to contain sections of no data or clouds that you do not wish to analyze. Using the clipping/subsetting option found under Tools in the main panel of PCI Geomatica Focus, you can clip any image to a subset of your choosing. For this project, I selected the coordinate type 'lat/long' extents and input the data for my selected 3000×3000 pixel subset. The input coordinates for my project were: Upper left: 46d59’38.30″ W, Upper right: 23d02’44.98″ S, Lower right: 46d07’21.44″ W, Lower Left: 23d52’02.18″ S.
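The clip step amounts to converting a lat/long extent into a pixel window. A minimal sketch, using a hypothetical geotransform rather than the actual scene's georeferencing:

```python
# Converting a lat/long clip extent to a pixel window, analogous to the
# clipping/subsetting step in PCI Geomatica Focus. The geotransform values
# are hypothetical, not those of the actual Sao Paulo scenes.

def window_from_extent(gt, min_lon, max_lon, min_lat, max_lat):
    """gt = (origin_lon, pixel_width, origin_lat, pixel_height);
    origin is the raster's upper-left corner, pixel_height is negative
    because rows run north to south. Returns (col_off, row_off, ncols, nrows)."""
    origin_lon, px_w, origin_lat, px_h = gt
    col_off = int((min_lon - origin_lon) / px_w)
    row_off = int((max_lat - origin_lat) / px_h)     # px_h < 0
    ncols = int(round((max_lon - min_lon) / px_w))
    nrows = int(round((min_lat - max_lat) / px_h))
    return col_off, row_off, ncols, nrows

# Hypothetical 30 m (~0.00027 degree) pixels with an upper-left origin
# just north-west of the study area.
gt = (-47.0, 0.00027, -23.0, -0.00027)
window = window_from_extent(gt, -46.99, -46.12, -23.87, -23.05)
```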

Land Use Classification

The 7 processed images were then imported into a new project in ArcPro. During importation, raster pyramids were created for each image in order to increase processing speeds. Within ArcPro, the Spatial Analyst extension was activated. This extension allows the user to perform analytical techniques such as unsupervised land use classification using iso-clusters. The unsupervised iso-clusters tool was run with each image layer as the raster input.

The tool generates a new raster that assigns all pixels with the same or similar spectral reflectance values to a class. The number of classes is selected by the user; 20 classes were selected as the unsupervised output classes for each raster. It is important to note that the more classes selected, the more precise your classification results will be. After this output was generated for each image, the 20 spectral classes were narrowed down into three simple land use classes: vegetated land, urban land cover, and water. As the project primarily seeks to visualize urban growth, and not all types of land use, only three classes were necessary. Furthermore, it is often difficult to discern agricultural land use from regular vegetated land cover, or industrial land use from residential land use, and so forth; such precision is out of scope for this exercise.
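The clustering idea behind the iso-clusters tool can be illustrated with a toy one-dimensional k-means. The real tool operates on all spectral bands at once and uses a more elaborate ISODATA-style procedure; this sketch only shows how pixels with similar values end up in the same class:

```python
# A minimal 1-D k-means, the idea underlying iso-cluster unsupervised
# classification: pixels with similar spectral values are grouped into a
# user-chosen number of classes. This toy version clusters single
# brightness values; the ArcGIS tool works on all bands at once.

def kmeans_1d(values, k, iterations=20):
    """Cluster scalar values into k classes; returns per-value class ids."""
    # Spread the initial centroids evenly across the data range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iterations):
        # Assign each value to its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Move each centroid to the mean of its assigned values.
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return labels

# Two obvious brightness clusters; with k=2 they separate cleanly.
pixels = [10, 12, 11, 200, 198, 205]
classes = kmeans_1d(pixels, k=2)
```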

The 20 classes were manually assigned, using the true colour .tiff image created in the image processing step as a reference. In cases where the spectral resolution was too low to precisely determine which land use class a spectral class belonged to, Google Earth imagery was referenced. This process was repeated for each of the 7 images.

After the 20 classes were assigned, the reclassify tool under raster processing in ArcPro was used to aggregate all of the similar classes together. This outputs a final, reclassified raster with a gridcode attribute that assigns respective pixel values to a land use class. This step was repeated for each of the 7 images. With the reclassify tool, you can assign each of the output spectral classes to new classes that you define. For this project, the three classes were urban land use, vegetated land, and water.
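The reclassify step is essentially a lookup table applied per pixel. A small sketch, with a hypothetical mapping from the 20 spectral classes to the three land use classes (the real assignment was done visually against the true-colour composite):

```python
# Reclassification sketch: collapsing 20 spectral classes into the three
# land use classes used in the project. The mapping below is hypothetical.

URBAN, VEGETATION, WATER = 1, 2, 3

# Hypothetical lookup: spectral class id -> simplified land use class.
remap = {**{c: VEGETATION for c in range(1, 9)},
         **{c: URBAN for c in range(9, 17)},
         **{c: WATER for c in range(17, 21)}}

def reclassify(grid, lookup):
    """Apply a class lookup to every pixel of a classified grid."""
    return [[lookup[c] for c in row] for row in grid]

classified = [[1, 5, 9], [12, 17, 20]]   # toy classified raster
simplified = reclassify(classified, remap)
```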

Cartographic Element Choices:

 It was at this point within ArcPro that I had decided to implement my cartographic design choices prior to creating my final map animation.

For each layer, urban land use was given a different shade of red: the later the year, the darker and more opaque the red. Using saturation and lightness in this manner helps the viewer see where urban growth is occurring; the darker the shade of red, the more recent the growth of urban land use in the greater São Paulo region. In the final map animation, this is visualized through the progression of colour as time moves on in the video.
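The year-to-shade rule can be expressed as a simple interpolation. The colour endpoints below are illustrative, not the exact symbology used in the map:

```python
# Colour ramp sketch: the later the year, the darker and more opaque the
# red. This interpolates RGBA values between a light, transparent red and
# a dark, opaque one; the endpoint colours are illustrative only.

def red_for_year(year, first=2004, last=2017):
    """Interpolate an (r, g, b, alpha) red shade for a layer's year."""
    t = (year - first) / (last - first)   # 0.0 oldest .. 1.0 newest
    r = int(255 - t * 120)                # 255 -> 135: darker red
    alpha = round(0.4 + t * 0.6, 2)       # 0.4 -> 1.0: more opaque
    return (r, 0, 0, alpha)

shades = {y: red_for_year(y) for y in [2004, 2010, 2017]}
```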

ArcPro Map Animation:

Creating an animation in ArcPro is very simple. First, locate the Animation tab through the 'View' panel, then select 'Add animation'. Doing so opens a new window below your workspace that allows the user to insert keyframes. The animation tab contains plenty of options for creating your animation, such as the time between keyframes, and effects such as transitions, text, and image overlays.

For the creation of my map animation, I started with a zoomed-out view of South America in order to provide the viewer with some context for the study area, as the audience may not be very familiar with the geography of São Paulo. Then, using the pan tool, I zoomed into selected areas within my study area, creating new keyframes every so often so that the animation tool produces a fly-by effect. The end result explores the very same mapping extents as I viewed while navigating through my data.

While making your own map animation, play through your animation frequently to confirm that the fly-by camera is navigating in the direction you want. The time between keyframes can be adjusted in the animation panel, and effects such as text overlays can be added. Each time I activated another layer to show the growth of urban land use from year to year, I created a new keyframe and added a text overlay indicating the date of the processed image.

Once you are satisfied with your results, you can export your final animation in a variety of formats, such as .avi, .mov, .gif and more. You can even select the resolution, or use a preset that automatically configures your video format for particular purposes. I chose the YouTube export preset for a final MPEG-4 file at 720p resolution.

I hope this blog was useful in creating your very own map animation on remotely sensed and classified raster data. Good luck!

Creating a 3D Holographic Map Display for Real-World Driving and Flight Navigation

By: Dylan Oldfield

Geovis Class Project @RyersonGeo, SA8905, Fall 2018

Introduction:

The inspiration for this project came from the visual utility and futuristic look of the holographic maps in James Cameron's 2009 film Avatar, which featured holograms in several unique scenarios: within aerial vehicles, on conference tables, and on air traffic control desks. From this came the concept to create, visualize and present a present-day possibility of this technology: a form of hologram that shows the user geographically where they are while operating a vehicle. For instance, a hologram in a car could display navigation through the city, guiding the driver to their destination; imagine a real-time 3D holographic version replacing the 2D screen of Google Maps or any dashboard-mounted navigation. The same application extends to aerial vehicles: imagine planes landing at airports close to urban areas, where fog or other weather conditions make safe landing and take-off difficult. With a 3D hologram, visualizing where to go and how to navigate the difficult weather would be significantly easier and safer. For these two use cases, two scenarios or maps were recorded as videos and made into 3D holograms to give a proof of concept for the use of this technology in cars and planes.

Data:

The data that made this project possible was taken from the City of Toronto Open Data Portal and consisted of the 3D massing and street .shp files. It is important to note that in order for the video to work and be seen properly, the background within the video, and in the real world, had to be as dark as possible, otherwise the video would not appear fully. To create this effect, features were made in ArcGIS Pro to ensure that the background, base, and ceiling of the 3D scene were black: a simple polygon for the ceiling, given a different base height, and 'walls' made from a line surrounding the scene and extruded up to the ceiling. The base of the scene was an imported night-time basemap.

Methodology:

  1. Map / Scene Creation Within ArcGIS-Pro

Within ArcGIS Pro, the function to visualize 3D features was used to extrude the aforementioned .shp files for the scene. All features were extruded in 3D from the base height, with meters as the measurement. The buildings were extruded to their real-world dimensions and given a fluorescent blue colour scheme to provide contrast in the video. The roads were extruded so as to give the impression that sidewalks existed: the roads were buffered by 6 meters, the buffer was dissolved to make it seamless, and the result was extruded from the base to create the roads. The inverse polygon of the newly created roads was then created and extruded slightly higher than the roads. The roads were given differing shades of grey, both to adhere to the darkness of the scene and to provide contrast with each other. This effect is seen in the picture below.

 

  2. Animation Videos Creation and Export

Following the creation of the scene, the animations, or videos of "driving" through the city and "flying" into Billy Bishop Airport, were created. Within ArcGIS Pro, animations are built through the consecutive placement of key frames, which allows for the seamless running of a video in any 3D scene. The key frames are essentially checkpoints in the video, and the program fills the time and space between them by travelling from frame to frame. The key frames are the boxes at the bottom of the image below.

Additionally, seen in the image above are the exporting options ArcGIS Pro makes available to the user. The video can be exported at differing qualities to YouTube, Vimeo, Twitter, MP4, and as a GIF, among other options. The 2 videos created for this project were exported at 1080p and 60 frames per second in MP4 format. Due to the large size of the videos with these options, the exporting process took over 2 hours per video.

  3. PowerPoint Video Transposition and Formatting

The hologram works by refracting the videos through each of the four lenses into the centre, creating the floating effect of an image. For this effect to work, the video exported from ArcGIS Pro was inserted into PowerPoint and transposed 3 times into the format seen in the image below. Once the placements were equal and exact, the background, as mentioned previously, was turned black. The videos were set to play at the same time, and the slide was then exported a second time as an MP4, the final product.
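The four-copy layout can be described with a little geometry: one copy of the video per side of the screen, each rotated to face its lens. The dimensions and rotations below are hypothetical; the actual placement was done by eye in PowerPoint:

```python
# Layout sketch for the four-copy arrangement: one copy of the video per
# side of the screen, each rotated to face its lens. Dimensions are
# hypothetical, not the actual PowerPoint placements.

def four_way_layout(screen_w, screen_h, video_w, video_h):
    """Return {position: (x, y, rotation_degrees)} for the four copies,
    where (x, y) is each copy's top-left corner."""
    cx, cy = screen_w / 2, screen_h / 2
    return {
        "top":    (cx - video_w / 2, 0, 180),
        "bottom": (cx - video_w / 2, screen_h - video_h, 0),
        "left":   (0, cy - video_h / 2, 90),
        "right":  (screen_w - video_w, cy - video_h / 2, 270),
    }

layout = four_way_layout(2048, 1536, 512, 384)   # iPad-like 4:3 canvas
```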

  4. Hologram Lenses Template Creation

The hologram lenses were created out of 4 clear CD cases. The templates for the lenses needed to be physically compatible with the screen displaying the video; the screen used was from a 5th generation iPad. After the template was defined, the lenses were cut out of the 4 CD cases with a box cutter and lightly sanded at all cut edges, both so that they would not cut anyone and so that the surfaces in contact with the epoxy would bond without issue. An epoxy resin was then used to glue the 4 lenses into their final shape. While the epoxy had a 10-minute setting time, it was left for 3 hours to ensure it was fully set. After this the lens was complete and ready for use. The final lens and the iPad used for the display are seen in the image below.

Finally, here is a screen shot of the City of Toronto “Driving Navigation” video:

Using LiDAR to create a 3D Basemap

By: Jessie Smith
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

INTRO

My Geovisualization Project focused on the use of LiDAR to create a 3D basemap. LiDAR, which stands for Light Detection and Ranging, is a form of active remote sensing: pulses of light are sent from a laser towards the ground, and the time it takes for each pulse to return is measured, which determines the distance between the laser and the surface the light touched. By measuring all light returns, millions of x,y,z points are created, allowing a 3D representation of the ground, whether just surface topography or elements such as vegetation and buildings. The LiDAR points can then be used to create DEMs or TINs, with imagery draped over them to create a 3D representation. The DEMs can also be used in ArcPro to create 3D buildings and vegetation, as seen in this project.
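The range calculation itself is simple: the pulse travels to the surface and back, so the one-way distance is half the round-trip travel time multiplied by the speed of light. A quick sketch:

```python
# Range from a LiDAR return: the pulse travels to the surface and back,
# so the one-way distance is half the round-trip travel time multiplied
# by the speed of light.

C = 299_792_458  # speed of light, m/s

def lidar_range(return_time_s):
    """One-way distance (m) from a round-trip pulse travel time (s)."""
    return C * return_time_s / 2

# A return after ~6.67 microseconds corresponds to roughly 1 km.
d = lidar_range(6.67e-6)
```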

ArcGIS SOLUTIONS

ArcGIS Solutions are a series of resources made available by Esri, marketed for industry and government use. I used the Local Government Solutions, which include a series of focused maps and applications to help local governments maximize their GIS efficiency, improve their workflows and enhance services to the public. I looked specifically at the Local Government 3D Basemaps solution. This solution includes an ArcGIS Pro package with various files, and an add-in to deploy the solution. Once the add-in is deployed, a series of tasks are made available, with built-in tools and information on how to use them. A sample dataset is also included that can be used to run all tasks as a way to explore the process with appropriate working data.

IMPLEMENTATION

The tasks that are provided have three different levels: basic, schematic and realistic. Each task requires only two data sources: a LAS (LiDAR) dataset and building footprints. Based on the task chosen, a different degree of detail is produced in the basemap. For my project I used a mix of the realistic and schematic tasks. Each task begins with the same steps: classifying the LiDAR by returns, creating a DTM and DSM, and assigning building heights and elevations to the building footprints' attribute table. From there the tasks diverge. The schematic task extracts roof forms to determine the shape of the roofs, such as a gabled type, whereas in the basic task the roofs remain flat and uniform. The DEMs are then used in conjunction with the building footprints and the rooftop types to 3D-enable the buildings. The realistic scheme creates vegetation point data with z values using the DEMs; a map preset is then added to assign a 3D realistic tree shape corresponding to the tree heights.
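The building-height assignment boils down to differencing the DSM and DTM over each footprint. A toy sketch with placeholder grids, not real elevation data:

```python
# Building height sketch: the height assigned to a footprint is the
# difference between the surface model (DSM, tops of features) and the
# terrain model (DTM, bare earth) over the footprint's pixels. The grids
# below are tiny placeholders, not real elevation data.

def building_height(dsm, dtm, footprint_cells):
    """Mean DSM - DTM difference over a footprint's (row, col) cells."""
    diffs = [dsm[r][c] - dtm[r][c] for r, c in footprint_cells]
    return sum(diffs) / len(diffs)

dsm = [[120.0, 121.0], [135.5, 136.0]]   # metres above sea level
dtm = [[100.0, 100.5], [100.5, 101.0]]
roof = [(1, 0), (1, 1)]                  # cells covered by one footprint

height = building_height(dsm, dtm, roof)
```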

DEMs Created

DSM

DTM

Basic Scene Example

Realistic Scene

 

ArcGIS ONLINE

The newly created 3D basemap, which can be viewed and used in ArcPro, can also be used on AGOL with the newly available Web Scene. The 3D data cannot be added to ArcGIS Online directly the way 2D data would be. Instead, a package for each scene was created and published directly to ArcGIS Online. The next step is to open this package on AGOL and create a hosted layer. This was done for both the 3D trees and the buildings, and these hosted layers were then added to a Web Scene. In the scene viewer, colours and basemaps can be edited, or additional contextual layers can be added. As an additional step, the scene was used to create a web mapping application using the Story Map template. The Story Map can then be viewed on ArcGIS Online, where the data can be rotated and explored.

Scene Viewer

Story Map

You can find my story map here:
http://ryerson.maps.arcgis.com/apps/Styler/index.html?appid=a3bb0e27688b4769a6629644ea817d94

APPLICATIONS

This type of project would be very doable for many organizations, especially local governments. All that is needed is LiDAR data and building footprints. This type of 3D map is often outsourced to planners or consulting companies when a 3D model is needed; now government GIS employees could create a 3D model themselves. The tasks can either be followed exactly with your own data, or the general workflow can be recreated. The tasks are mostly clear about the required steps and processes, but more reasoning could be provided when setting values or parameters specific to the data being used in a tool. This would make it easier to create a good model with less trial and error.

 

 

 

Creating Effective Landsat Timelapses

When asked to describe a satellite time-lapse, most people would picture a time series of natural colour images showing gradual change. In this method of displaying satellite imagery, however, a great deal of information is lost: viewers cannot draw from it quantifiable descriptors that may aid in conveying a message. Take, for example, the time-lapse provided below.

For what it is, a simple visual attempt at identifying change, the time-lapse above does an effective job of conveying change. Having said this, however, it does not provide the viewer with quantifiable observations, such as the area developed per year or the ratio of land to water. Such statistics would be extremely useful, not only in further illustrating a point, but in further engaging the viewer. That said, there are methods by which information can be derived from satellite imagery to provide viewers with a greater level of detail.

Within the realm of remote sensing, the field in which data are collected without a physical presence, there are a number of methods that allow individuals to derive quantifiable data from the images captured. The most significant is change detection: a temporal analysis between two or more periods of time, in which multi-spectral or hyper-spectral images, and subsequent band combinations, are used to identify quantifiable areas of change between the chosen dates. Change detection is significant because it gives the viewer a definitive measure of change, far beyond the interpreted observation of change found in a regular timelapse. One could quite easily argue that a more effective timelapse would combine aspects of change detection, creating an enhanced geo-visualization that bridges the gap between lay and academic audiences.
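A post-classification comparison, one common form of change detection, can be illustrated in a few lines of Python. The grids and class codes below are toy placeholders, not real classified Landsat data:

```python
# A minimal post-classification change detection: compare two classified
# grids pixel by pixel, tally each from -> to transition, and convert
# pixel counts to area. Class codes and the 30 m pixel size are
# illustrative.

from collections import Counter

def change_matrix(before, after, pixel_area_m2=30 * 30):
    """Area (m^2) of each (class_before, class_after) transition."""
    counts = Counter(
        (b, a)
        for row_b, row_a in zip(before, after)
        for b, a in zip(row_b, row_a)
    )
    return {pair: n * pixel_area_m2 for pair, n in counts.items()}

WATER, URBAN = 0, 1
t1990 = [[WATER, WATER], [URBAN, URBAN]]
t2015 = [[WATER, URBAN], [URBAN, URBAN]]

changes = change_matrix(t1990, t2015)
```

Each entry of `changes` is a quantifiable statement, for instance how many square metres of water became urban land between the two dates.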

But such a thought begs the question: how could this be accomplished? Better yet, how could a time-lapse comprised of hundreds of composite spectral images, displaying a quantifiable characteristic, be created in a timely and effective manner?

Put simply, programming is the answer to such questions. 

By creating a number of programs, a time-lapse can be built that is an effective evolution of what came before. The plan follows three steps:

  1. Create a program, a web driver, that downloads an entire dataset (in this case, all Landsat 4 and 5 imagery for a specific row and path) in the absence of an API or database. For an effective geo-visualization to be created, a great deal of data are needed.
  2. Create a program that opens each file, in this case a Landsat image, and pulls three bands to create a composite image that better shows a specific type of geographic change.
  3. Develop a program that classifies imagery based on a predefined signature class (a file containing information on predetermined training classifications), then finds and appends the total area of each classification to a CSV file.
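Step 2 can be sketched in a few lines of Python. This is a minimal illustration, not the Dropbox program itself: it assumes the three bands have already been read into NumPy arrays (in practice they would be read from each Landsat GeoTIFF, for example with rasterio), stretches each band to the 0-1 range, and stacks them into a composite.

```python
import numpy as np

def stretch(band):
    """Linearly stretch a band to the 0-1 range for display."""
    band = band.astype(float)
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

def make_composite(band_r, band_g, band_b):
    """Stack three stretched bands into a (rows, cols, 3) composite."""
    return np.dstack([stretch(b) for b in (band_r, band_g, band_b)])
```

The choice of which three bands to pull is what makes the composite highlight a specific type of geographic change (for example, bands emphasizing water versus built-up land).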

Put simply, this series of programs will create the intended geo-visualization, and do so in an efficient manner.

Attached are two links to a Dropbox folder; each is a direct connection to one of the programs needed to download and compose the images in focus.

Image Downloader:

https://www.dropbox.com/s/63ig83vd30ydk0a/Composite_Composer.py?dl=0

Image Composer:

https://www.dropbox.com/s/63ig83vd30ydk0a/Composite_Composer.py?dl=0

*Note: in-depth comments have been provided at each stage within each program, so the general idea and flow of each program should be easy to interpret.

It should be noted, however, that a significant error arose in the process of creating this geo-visualization. The GIS software, in this case ArcGIS accessed through ArcPy, failed on multiple attempts to read a signature file and create a classification for each image. After much research it was concluded that an internal, still unresolved issue was the main cause. As such, the final and most important program could not be created.

The images created were then edited in Adobe Lightroom, an image manipulation software, to remove any significant errors, and stitched together using Adobe Premiere Pro. Additional pieces of information, such as titles, summaries, and dates, were added in Adobe After Effects. The final geo-visualization can be seen below.

In summary, the Landsat time-lapse created is unique; arguably, no other geo-visualizations quite like it are currently available. It did not, however, accomplish all the tasks set out. From the viewer's perspective it makes the data easier to interpret, but it still lacks the quantifiable component.

That said, it is hoped that in the future, once the previously mentioned software bugs are fixed, another, more successful attempt at this geo-visualization can be made.

Time-series Animation of Power Centre Growth in the Greater Toronto Area for the Last 25 Years

By: Jennifer Nhieu
Geovisualization Class Project @RyersonGeo, SA8905, Fall 2018

Introduction:

In 1996, there were 29 power centres with 239 retail tenants accounting for just under five million square feet of retail space (Webber and Hernandez, 2018). 22 years later, in 2018, there are 125 power centres with 2,847 retail tenants accounting for 30 million more square feet of retail space (Webber and Hernandez, 2018). In addition, power centres expand in an incremental manner, either through the purchase and integration of adjoining parcels or the conversion of existing parking space into new stores (Webber and Hernandez, 2018). This development process often leads to retail centres becoming “major clusters of commercial activity that significantly exceed the original approved square footage total” (Webber and Hernandez, 2018, pg. 3).

Data and Technology:

To visualize this widespread growth of power centres from 1996 to 2017, a time-series animation map was created in Kepler.gl (beta version) using power centre growth data provided by the Centre for the Study of Commercial Activity (CSCA) at Ryerson University, which undertakes an annual field-survey-based inventory of retail activity in the Greater Toronto Area. Kepler.gl was created by Uber’s visualization team and released to the public in the summer of 2018. It is an “open source, high-performance web-based application for visual exploration of large-scale geolocation data sets. Kepler.gl can render millions of points representing thousands of trips and perform spatial aggregations on the fly” and is partnered with Mapbox, a “location data platform for mobile and web applications that provides open-source location features” (Uber Technologies Inc., 2018; Mapbox, 2018).

Methodology:

The data provided by the CSCA include information such as each shopping centre’s name, a unique identification code for each power centre, the longitude and latitude coordinates for each power centre, and its square footage from the year it was built to 2017. The data had to be restructured in Microsoft Excel to include a pseudo date-and-time column covering the years 1992 to 2017, with 1-hour time intervals, to allow Kepler.gl to create an animation based on date and time. The table below is an example of the data structure required to create a time animation in Kepler.gl.

Table 1: Data structure example*
*The data in this table has been modified due to confidentiality reasons.
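The restructuring described above can also be scripted instead of done by hand in Excel. The sketch below is illustrative (the real field names are confidential, so the YEAR column name is an assumption): it assigns each record a pseudo timestamp of January 1 of its year plus an hourly offset, which is enough for Kepler.gl's time filter to order and animate the points.

```python
import pandas as pd

def add_pseudo_datetime(df, year_col="YEAR"):
    """Add a pseudo date-and-time column built from a year column."""
    df = df.copy()
    # 1-hour offsets keep records within the same year distinct in time.
    offsets = df.groupby(year_col).cumcount()
    df["datetime"] = (pd.to_datetime(df[year_col].astype(str) + "-01-01")
                      + pd.to_timedelta(offsets, unit="h"))
    return df
```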

Below is the time-series animation setup for visualizing power centre growth in Kepler.gl. This process can also be replicated to produce other time-series animations:

  1. Visit https://kepler.gl/#/
  2. Click Get Started.
  3. Drag and drop .csv file onto application.
  4. Hold and drag to navigate to the Toronto GTA.
  5. On the contents bar, click + Add Layer on the Layers tab.
  6. Under Basic Layer Type, click Select A Type, then select Point.
  7. Under Columns, Lat* = COORDY and Lng* = COORDX.
  8. Under Color, click Color Based On, then select SQ FT.
  9. Under Color, click the color scheme bar, select a preferred light to dark colour scheme.
  10. Under Color, Color Scale, select quantize.
  11. Under Color, Opacity, set to 4.
  12. Under Radius, Radius Based On, select a field.
  13. Under Radius, Radius Range, set the range from 1 to 60.
  14. On the contents bar, click + Add Filter on the Filters tab.
  15. Click Select a field, then select
  16. In the slider, drag the rightmost square notch to highlight only 2 bars with the left square notch.
  17. Press the play button the start the animation.

Notes:

  • The speed of the animation can be adjusted.
  • The legend can be shown by clicking the bottom circular button in the top right corner of the screen.
  • Hover your mouse over a point to see the metadata of the selected power centre.
Figure 1: Power Centre Growth in the Toronto GTA (1992 – 2017)

Limitations:

During the implementation process, it became apparent that Kepler.gl focuses more on graphics and visuals than on cartographic standards. The program does not allow the user to manually adjust class ranges on the legend, nor does it accurately display continuous data. The proportional symbols used to represent power centre growth flash or blink rather than growing gradually. There was an attempt to correct this problem by duplicating the values in the date-and-time column and adding additional pseudo date-and-time values between each year. When tested, however, the animation exhibited the same flashing and blinking behaviour, so the problem evidently exists in the programming of Kepler.gl and not in the data itself. Furthermore, duplicating these values made the file exceed the maximum file size on Chrome (250 MB) and limited performance on Safari, the two web browsers it runs on.

Conclusion:

Regardless of these limitations, Kepler.gl is still in an early beta version, and it has a lot of potential as it incorporates user feedback from industry professionals and undergoes additional testing before the final release.

References:

Webber, S. and Hernandez, T. (2018). Retail Development and Planning Policy. Centre for the Study of Commercial Activity, Ryerson University. Toronto, CA.
Uber Technologies Inc. (2018). Kepler.gl. Retrieved from http://kepler.gl/#/
Mapbox. (2018). About Mapbox. Retrieved from https://www.mapbox.com/about/

App Building for the Technically Challenged Cartographer

By Kevin Duffin

Geovis Project Assignment @RyersonGeo SA 8905 Fall 2018

Creating a custom web mapping application can seem like a daunting task for an individual without much technical experience. These individuals, myself included, can feel as though they must first learn computer science before they are able to do web mapping. However, there are a variety of web services that allow for the creation of powerful geovisualization applications without detailed knowledge of computer programming.

One such service is Esri’s Web AppBuilder for ArcGIS.

Web AppBuilder is an application hosted on ArcGIS Online which allows users to add functionality to custom web maps. An exciting feature of Web AppBuilder is the ability to create maps not only in 2D, but also in 3D via the scene view.

The scene view environment allows users to create a variety of 3D interactive mapping applications on a virtual globe. Navigation around the virtual world is built in, and various customizations are possible via out-of-the-box functionalities that can be easily incorporated into any application.

Pretty great right? Follow along and learn how to make a 3D web app!

Introduction to my data

The economies of many nations around the world, including Canada, rely very heavily on natural resource use. In recent times there has been a push by many countries to decouple their economies from natural resource use to both increase the sustainability of their economy, and to decrease their environmental impact.

Material footprint is a measure of domestic material use developed by Wiedmann, T. O., Schandl, H., Lenzen, M., Moran, D., Suh, S., West, J., and Kanemoto, K. in 2015. The material footprint (MF) of a nation is the total amount of global raw material extraction that can be directly attributed to the final demand of that nation’s economy.

When developing the metric, Wiedmann et al. determined the MF per capita of every nation in the world. The group also calculated the MF per capita for various material types, such as biomass materials, construction materials, fossil fuels, and metal ores. I joined Wiedmann et al.’s MF data to a country shapefile using ArcMap, and the resulting shapefile was exported and saved to my local computer. This is the data I used to create my geovisualization project.
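The join itself was done in ArcMap, but as an illustration the same attribute join can be sketched with pandas; the ISO3 key and MF column names here are assumptions, not the project's actual field names.

```python
import pandas as pd

def join_mf(countries, mf_table, key="ISO3"):
    """Attribute-join MF records onto country records by a shared code."""
    # A left join keeps every country, including those without MF data,
    # which is why a separate "no data" layer is needed for some countries.
    return countries.merge(mf_table, on=key, how="left")
```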

The goal of my project was to create a web mapping application which allows the user to view the MF data on a virtual globe, and toggle between material types. By viewing which material types are important to various nations, the user can then make inferences about sustainability of those nations.

Getting Started: Publishing Layers to ArcGIS Online

In order to create the web application, I first needed to publish the spatial data to ArcGIS Online. To do this I logged in to my ArcGIS Online account. If you do not have an ArcGIS Online account, you can create a free personal account or sign up through your organization, such as a university or business. Once signed in, navigate to the “My Content” tab and click the “Add Item” button. For my project I added the MF data from my computer. If your data is saved on your local computer, ensure that it is in ZIP format. I added the MF layer six times, as I had six layers I wanted to display in my application: one named Material Footprint, four named after the material groups, and one representing the select few countries that did not have data.

Creating the Map

Once the layers were uploaded, I navigated to the “Create” button and hit “Map”. This opened a map viewer tab where I was able to create the basemap for my application on a 2D surface. I added the six layers to the map viewer and began making the basemap. The “Change Style” tab was used to classify and select a colour scheme for each layer. I then configured pop-ups to display the MF value of a nation when a country is selected. Once I was happy with all the layers, I saved the layers and the map, and navigated back to “My Content”.

Creating the Web Scene

I next needed to display the layers I had just created in a 3D environment. From the “Create” tab, I selected “Create Scene”. A scene viewer page opened and the virtual 3D globe used to display my layers was generated. Using the “Modify Scene” tab, I added from my content the six layers that I had formatted in the map viewer. As this scene would become the base of the mapping application, it was important to configure all the desired settings in the scene view, as these settings cannot be changed in the application itself. For example, I altered the order of the layers in my legend, chose a basemap, specified the sun’s position in the sky, and optimized the performance of the scene in the scene settings by ensuring the 3D graphics slider was set to Performance rather than Quality. I then saved the scene and navigated back to “My Content”.

Creating the Web App

In the “Create” tab, I then created an app using Web AppBuilder. In the “Create a web app” pop-up that follows, I specified 3D and gave the app a title, a tag, and a brief summary.

I then needed to specify the Scene I wanted to use as the base layer for my app. I navigated to the Scene I had just created through the “Choose web scene” button in the scene settings window. This projected my 3D MF layers onto the virtual globe in the map. I then navigated to the theme window and chose a graphic theme for my application.

Adding Functionality

Functionality is added to Web AppBuilder applications through widgets, tools that can be added to the application to perform a variety of functions. Note that the number of widgets you can incorporate into an app depends on the theme you have selected, so choose wisely. In my application I chose four main widgets: Legend, About, Layer List and 3D Extrusion. The Legend widget simply adds a legend that updates depending on the layer being displayed. I configured the About widget to display text introducing the application. The Layer List widget enables the toggling between MF layers. Finally, the 3D Extrusion widget allows for several different 3D functionalities. I selected the “Area Extrusion” visualization type to extrude the countries based on their Material Footprint per capita values. Along with the 3D extrusion, a display bar is added to the app which displays the MF values of each nation. By clicking on a nation’s name in the display bar, the view automatically zooms to that country. Neat!

Finishing Touches

After all the functionality was added to the application, I added a few finishing touches. A proper summary and description were added to the Web App page, and a simple splash widget was added to introduce the application.

Try it out here!

Try the application out here, and thanks for following along!

https://ryerson.maps.arcgis.com/apps/webappviewer3d/index.html?id=b72c5f9cb9194a1abbff695a7b5b275f

 

New York City Automobile Collisions

Creating An Interactive Web Map

By: Joshua Ali

Geovis project Assignment @RyersonGeo, SA8905, Fall 2018

Data

The data used for this map were retrieved from New York City Open Data (https://opendata.cityofnewyork.us/), specifically the automobile collisions dataset, which has information on collisions from 2011 to the present. This provides all the information needed for the map.

Using Mapbox

The interactive map uses the Mapbox GL JS library, so an account must be created with Mapbox. Sign-up is free, with a pay-as-you-go account (essentially, if you use it a lot you have to pay) (https://www.mapbox.com/signup).

Creating the Base Map

The next step was to create the base map used to display my data. To write the code I used a text editor; the two I switched between are Sublime (https://www.sublimetext.com) and CodePen (https://codepen.io), both of which are free. You will need to write an HTML document that will display your map. The document was written as below for optimal settings and will be built upon with more code later to customize the map.

Now that the style settings for the map are in place, the map needs to be linked with a Mapbox access token created from my account. By doing this, the HTML document is linked to my account.

A script was created using a var declaration to create a new map that uses a style option linked to your account. In my case I decided to use the dark map background as the style for my map. Also, in the script below, the latitude and longitude were set so that the map opens looking at New York City.

With all the current script in the text editor, this document can be opened in the Chrome browser to show the base map. The image below shows what comes up.

Customizing the Base Map

Now that the base map is created, I can begin adding and customizing the NYC automobile collision data on the map. To connect the downloaded data to the map, the data first need to be in the same project folder as the base map HTML document made above.

To do this, a local server has to be run on the computer so the base map can draw information from the NYC data to be projected. The API also needs this to continuously update the projected data for the interactive tool that will be added later. This was done by downloading Python and running Python’s SimpleHTTPServer; using the command prompt, the local server was run on my laptop. This is useful because changes made to the code in the text editor can be seen immediately in the HTML map, since the local host is constantly updating the files.
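In modern Python 3 the equivalent of SimpleHTTPServer is the http.server module, run from the project folder as `python -m http.server`. As a sketch, the same server can also be started programmatically:

```python
import threading
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

def start_local_server(port=8000):
    """Serve the current directory (the project folder) on localhost."""
    server = ThreadingHTTPServer(("127.0.0.1", port), SimpleHTTPRequestHandler)
    # A daemon thread lets the server run while you keep editing files.
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With the server running, the map page is opened at http://localhost:8000 so the browser can fetch the local data files.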

To connect the collision data to the base map, a map load function was used to link an id called collisions with the data file URL and the settings to display the collisions on the map as circles. The circle radius was based on casualty counts from 0 to 5, each with its own selected colour, and circle-opacity was set to 0.8, so that circles are coloured by the number of casualties and are partially transparent so they do not block each other.

With the data now linked to the base map, a legend was created in the code by making a div section inside the console. In addition, some CSS was added to style the colour gradient so it matches the colours of the circles.

This is what the map will look like with the data and legend.

Adding Time Slider and Interactivity

To add a time slider, a slide bar function was added as a div to the body of the HTML document. This pulls the time of accidents and displays them on the map. To add the interactivity, a filter was added to obtain the time of collisions from the data. The code is shown below, along with a screenshot of the functional map.

Final Touches

The map is almost complete; the last function added was a filter that compares automobile collisions occurring during the week with those on the weekend. To create this, an if statement was added in the text editor so that a collision is flagged true or false depending on whether it occurred on a weekend day such as Saturday. This allows the data to be shown for weekdays compared to weekends, as seen in the code below. To add the control that lets you choose which part of the data to look at, another div class section was added to filter the days.

The script below shows the div class section that created the slider bar and button selection for the final map legend.

Final Map

Below are screenshots of different settings selected for the interactive map.

Mapping Toronto Green Space in Android

By Jacob Lovie | GeoVis Project Assignment @RyersonGeo | SA8905 | Fall 2018

Introduction

With today’s technology becoming more and more mobile, and the ability to access everything you need on your mobile device, it is more important than ever to ensure that GIS is evolving to meet these trends. My geovisualization project focused on designing an Android application to allow users to explore Toronto’s green space and green initiatives, making layers such as parks and bike stations accessible in the palm of your hand. However, it is not just having access that matters: what’s important when working with these technologies is that a user can explore the map and retrieve information seamlessly and efficiently.

Data and Hosting Feature Services

All the data for the project were retrieved from the City of Toronto’s open data portal. From there, the data were uploaded to ArcGIS Online and set up as hosted feature services. A basemap was also designed using ArcGIS for Developers and hosted. The application targets these hosted feature layers and uses them in the map, keeping the size of the app small. The symbology and setup of the hosted feature layers were also done in ArcGIS Online, so the app didn’t have to change or set symbology when it wasn’t necessary.

Methods

The developer environment I worked in to design my app was Android Studio, the baseline for designing any Android app. The programming language used in Android Studio is Java. Within Android Studio, the ArcGIS Runtime Software Development Kit (SDK) for Android can be brought in, providing all the libraries and functions associated with it. With this I was able to use ArcGIS functionality in Android: designing maps, accessing hosted feature services, and performing geoprocessing.

Understanding how the ArcGIS SDK for Android works within Android Studio was key to designing my app. When creating a map, I first had to create a Map object. An object is a variable of a certain datatype; a text variable, for example, would be of datatype String, and the word itself would be an object that can be called and referenced. The Map object is what is displayed in an activity window (more on this later), which is what the user sees when using the app. The map can be set to view a certain area, which was Toronto in my app. A user can pan around the map as on any interactive map without additional coding in Android Studio (it is native to the Map datatype). The Map also has associated Layer objects with their own sets of parameters.

While designing my app, any time I wanted something done, such as creating a Map object or adding a layer to it, I created a function to perform that action. This reduces repetition in the code when doing something complex multiple times. I designed three functions: the first creates a Map; the second adds a Layer that can be activated and deactivated in the Map through a switch displayed in the main activity window; the third adds a layer that can be queried to extract information from it.

When designing an Android app, there are many fine details that are not necessarily considered when using an app on your phone. Simple things like having a window or text appear, opening a second window, or displaying information were things I very much appreciated after designing the app. Within my app, I wanted a second activity window to display information on neighbourhoods in Toronto when the user touched them. In Android Studio this required creating a second activity and transferring the information obtained in the map to it, which was done through my displayInformation function. I was then able to create the second activity and display this information using a custom list display showing the attribute data of a selected neighbourhood.


Setting up the display in Android Studio is relatively simple. There is an interface that allows you to anchor different objects to parts of the screen, which lets the app run smoothly across all devices regardless of their size and aspect ratio. The switches in my main activity window were anchored to the top left and to each other. My Map is in the background, but appears as white in this activity window.

The Application

Once all the coding and testing were completed, running the app was simple. I was able to bundle my code and send it to my personal phone, a Galaxy S9. The functions called the hosted service layers and displayed them in the map (a Wi-Fi or internet connection is required). I was also able to tap on neighbourhoods to open my second activity displaying that neighbourhood’s attribute information. If you want a more in-depth look at my code, it is available at https://github.com/jclovie/GeoVis-Ryerson/.

Urban Development of San Francisco

By Hannah Burdett

SA8905 Geovisualization Project, Ryerson University

The Development of San Francisco

San Francisco is located in the center of Northern California. Starting as a base for the gold rush of 1849, the city quickly became one of the most populated cities in the United States. Shortly thereafter, San Francisco was devastated by the 1906 earthquake. Development peaked in the early 1900s as San Francisco rebuilt areas demolished by the earthquake and fires to accommodate the growing population. During the 1930s the San Francisco-Oakland Bay Bridge and the Golden Gate Bridge were opened. Additionally, during World War II, San Francisco was a major mainland supply point and port of embarkation for the war in the Pacific. Both factors led to another peak in construction. After World War II, many American military personnel who had fallen in love with the city while leaving for or returning from the Pacific settled there, promoting the development of the Sunset District, Visitacion Valley, and the total build-out of San Francisco. Starting in the latter half of the 1960s, San Francisco became best known for the hippie movement. Today, San Francisco is known for its finance and technology industries; high demand for housing, driven by its close proximity to Silicon Valley, and a low supply of available housing have made the city one of America’s most expensive places to live.

Data

The data used for the time-series animation were imported from Data.gov, a repository for the US Government’s open data. The imported data included a land use shapefile for San Francisco, with information such as land use, shape area, street address, and street number. The land use shapefile also included the year each building was built; the building years range from 1848 to 2016, displaying 168 years of urbanization. The buildings are represented as polygons throughout San Francisco. Additionally, a greyscale basemap from ArcGIS Pro was displayed to create a more cohesive map design.

 

 

Time Series Animation

To develop the reconstruction of San Francisco through the years, both QGIS and ArcGIS Pro were utilized, to provide a comparison between the time-series animation tools of an open-source application and a non-open-source application.

QGIS is an open-source geographic information systems application that provides data visualization, editing, and analysis through functions and plugins. To create the time-series animation, the Time Manager plugin was utilized. The Time Manager plugin animates vector features based on a time attribute; for this study, the time attribute was the year built.

ArcGIS Pro is the latest professional desktop GIS from Esri. ArcGIS Pro enables users to view, explore, analyze, edit and share maps and data. Unlike QGIS, no additional plugins are required to create the animated time series.

QGIS Methodology

To generate the time series in QGIS, the land use shapefile was downloaded and opened in QGIS. The attribute table from the land use shapefile was then exported and opened in Excel so that the yrbuilt column could be reformatted to meet QGIS Time Manager requirements. The yrbuilt column presented building dates in YYYY format, whereas Time Manager requires timestamps in YYYY-MM-DD format. To correct this, -01-01 was appended to each building year, and the modified values were saved into a new column called yrbuilt1. The Excel sheet was then imported into QGIS and joined to the land use shapefile.
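The same reformatting can be scripted, avoiding the Excel round trip. A minimal pandas sketch, assuming the attribute table has been exported to a DataFrame with a yrbuilt column:

```python
import pandas as pd

def add_timestamp(df, year_col="yrbuilt"):
    """Append -01-01 to YYYY build years, as Time Manager expects."""
    df = df.copy()
    df["yrbuilt1"] = df[year_col].astype(int).astype(str) + "-01-01"
    return df
```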

In QGIS, each building is represented as a polygon. The shapefile symbology was changed from single symbol to graduated symbology; in other words, the polygons were broken down into seven classes defined by years. Each class was then distinguished by colour, so that one may differentiate the oldest buildings from the newest. Furthermore, a greyscale basemap was added to create a more cohesive map.

Next, in the Time Manager settings, “Add Layer” was selected and the land use shapefile chosen as the layer of interest. The start time was set to the yrbuilt1 attribute, whereas the end time was set to “No end time – accumulate features”. This allows newer buildings to be added without older buildings being removed from the map. For the animation, each time frame is shown for 100 milliseconds. The Time Manager plugin was then turned on so that the time series could run.

 

In order to export the time-series animation, Time Manager offers an “Export Video” option; however, this exports the animation as an image series, not an actual video. To correct this, the image series was uploaded to Mapbox, where additional Mapbox styles were used to render the map. It was then exported from Mapbox as a GIF.

ArcGIS Pro Methodology

In ArcGIS Pro, the land use shapefile was imported. The symbology for the polygons was broken down into the same seven classes defined by years, and the same colours used in QGIS were applied to differentiate between the building years. Within the layer’s properties, the layer’s time was set to “each feature has a single time field”, and the start and end times were set to the oldest and newest building years. The number of steps was assigned a value of sixteen. In View, the animation was added and the Time Slider Steps were imported. The time frames were set to match the QGIS animation so that both time-series animations would run at the same speed. The time-series animation was then exported as a GIF.

Final Animated Map

Finally, to create a cohesive animated map, the exported GIFs were compiled together in PowerPoint. Additional map features, such as a legend, were designed within PowerPoint, and a bar graph was added along the bottom of the map to show years of peak building construction. The final time-series map was then exported as a .mp4 and uploaded to YouTube.