The 100 largest wildfires in the province of Quebec from 1976 to 2019.

Author: Samuel Emard

Source: Forest fires – Open Government Portal (canada.ca)

Project link: Top 100 Fires in the Province of Quebec (1976-2019) (arcgis.com)

Web Experience Direct link: https://experience.arcgis.com/experience/b7a0987afdb1486fb97532788261cfd6/

Project background

The idea for this project originated from a curiosity about the numerous environmental catastrophes that the populace is often unaware of, especially wildfires. In recent years, every summer's news cycle has been dominated by terrible reports of wildfires rampaging through California, British Columbia or Alberta, and rightly so, but it is often only the largest fires that get mentioned on TV.

Being from the province of Quebec myself, I became curious about the wildfires that happen in my home province, because I hadn't heard of them quite as often as the ones in the US or the Canadian West. Fortunately, a dataset compiling data on the province's wildfires was available on the federal government's open data website. However, since 1976, which I assume is the year the government started compiling data on the phenomenon, 60,799 wildfires have occurred. Since this project focuses specifically on the online aspect of things, that many polygons would either be impossible to draw completely or take far too long to render. I juggled multiple possible solutions to remedy the issue, such as using a smaller temporal scale, but everything ultimately depended on the platform I would choose to portray the data. Speaking of which, here's a small description of ArcGIS Web Experience Designer.

Technology

Finding a platform to portray the data depended on my familiarity with it. Unfortunately, online GIS wasn't my forte, and I only knew of ArcGIS Online and its Story Maps. However, I felt that Story Maps were not novel enough. That's when I happened upon the Dashboard and Web Experience creators available on ArcGIS Online. After fiddling with both, I settled on the Web Experience to portray the data.

The ArcGIS Online Web Experience is, according to their own website, a tool that allows the "creation of unique web experiences using flexible layouts, content, and widgets that interact with 2D and 3D data". It produces a mobile-friendly output built from scratch without coding: interactive maps formatted to be viewable and usable on desktops, tablets and phones. It offers 26 widgets to place on the map, ranging from a legend to a 3D data viewer. For this project, I used a few simple widgets that would enhance the experience for users, described further below.

Data and Methods

The data and methodology for this project are fairly straightforward, and most of the work went into the Web Experience Designer (to ensure the optimal experience on desktop and mobile alike). The data came from a vast dataset on forest fires available on the federal government's open data website. On their page (link provided above), it is mentioned that the data was made available by multiple municipalities and governments (see Figure 2). However, they also mention that the creators of the dataset are the "Secteur des Forêts-Direction des inventaires forestiers" and the "Direction de la protection des forêts", which translate to the "Forest Sector – Forest Inventory Directorate" and the "Forest Protection Directorate" respectively.

Figure 2: Warning on data source on Open Data Website.

The dataset contains data on every forest fire that occurred in the province of Quebec between 1976 and 2019. That includes the geometry of each polygon, the year the fire started, how it started, the year it was "extinguished" and the area of the fire in hectares. Sadly, some of the variables are abbreviated and their meanings aren't explained on the website, so they couldn't be used in this project; fortunately, I didn't need them for what I intended to accomplish.

At first, I wanted to map all 60,799 polygons, but I decided otherwise due to the sheer size of the dataset. I then filtered the data by the year the fires started and extracted everything from 2013 to 2019. I hoped to display all the fires of the last few years, but even that was too big: there were just under 10,000 polygons, and ArcGIS Online was already warning that it would not be able to draw them all. While looking for a way around the problem of having too many polygons to draw, I figured that showing the 100 largest fires since 1976 would be an interesting, and informative, way to show what I wanted.

To that end, I filtered by the area burnt by each fire, in hectares, and extracted the top 100 fires. The data extraction was done offline, in ArcGIS Pro, because it was simply faster and easier to manipulate the dataset there. I then uploaded the 100 largest fires to ArcGIS Online to make a map, because the Web Experience Designer cannot create its own map; I had to make one beforehand and then bring it into the Web Experience Designer.
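As a sketch of that extraction step (assuming the polygons' attribute table was exported with its area field, named SUPERFICIE per the original legend discussed below; real field and file names may differ), the top-100 selection could look like this in Python:

    # Minimal sketch: keep the 100 largest fires by burned area (hectares).
    # Assumes the attribute table was exported to CSV with a SUPERFICIE column.
    import pandas as pd

    fires = pd.read_csv("quebec_fires_1976_2019.csv")
    top100 = fires.sort_values("SUPERFICIE", ascending=False).head(100)
    top100.to_csv("top_100_fires.csv", index=False)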

Once the map was done, I could then start working toward the creation of the web experience. Figure 3 shows the user interface of the Web Experience Designer.

Figure 3: Web Experience Desktop U.I.

The Web Experience Designer is fairly straightforward to use and is designed to be usable by people without coding experience. All of its widgets and tools are available on the left side of the screen and usable with a simple drag and drop. Each widget or tool is then adjustable through its settings, which appear on the right side of the screen. For this project, I used the following widgets/tools: Image (which serves, in fact, as the legend), Table, Share and Button. Here's a small description of each and how I used them:

Image/Legend: Sadly, legends on ArcGIS Online are very hard to modify without modifying the entire dataset and its variables, and the Web Experience Designer can only use the legends from ArcGIS Online. In my case, the original legend only said "SUPERFICIE" as the field for the area of the fires. That wasn't exactly what I wanted, so my workaround was simply to create the legend I wanted in ArcGIS Pro, screenshot it, and upload it as an image to the web experience. Figure 4 shows the end result.

Figure 4: Example of the W.E.D. Legend on the image widgets.

Table: The table widget is simple. It lets users see and interact with the dataset's attribute table. For simplicity's sake, I hid some of the more technical columns, especially those populated with geometry data; the table only shows the fire ID, the size and the year the fire started. The goal was to make the experience as straightforward as possible. The table also allows selecting specific fires without selecting them on the map (though you can also select them directly on the map).

Share: The share widget is a simple share button that any good online experience should have nowadays. It allows users to share the link to the web experience on a multitude of social media platforms.

Button: This widget was added to the web experience to allow users to go directly to the source of the dataset. The link to the open data portal was already available in the web experience's description, but this button makes it easier to use on mobile devices: one click, and the link to the dataset's source opens.

After making sure every widget worked, the next step was to ensure that the web experience displayed well on each type of device (computers, tablets and phones). That meant changing the formatting of the web experience to fit the resolution and screen size of each device.

Finally, the last step of the creation process was to make sure the map was fully interactive. That meant testing my own web experience and verifying that the polygons were selectable and that each polygon's information appeared on screen. I made sure the data table was correct (though it seems a bit buggy, as it is still in beta) and that the polygons were drawn correctly.

And there it was: the Web Experience was made. All that remained was to write descriptions and other small paragraphs on the info page of the web experience and then publish it. I thoroughly enjoyed using the Web Experience Designer to create an interactive map, but, as much as I liked it, there were many limitations that I had to overcome.

Limitations

The limitations of this project were many, but minor. The very first one I encountered was the lack of a clear description of the variables and the abbreviations used in the data. Maybe I overlooked it on the page or missed it in the metadata, but I couldn't find an explanation for some of the abbreviations used to describe the origin point (human-caused or naturally caused forest fires) and some other variables. Knowing those could've led me to display the data in a much different way.

Another limitation I encountered was the online capabilities of ArcGIS Online, such as the inability to draw large amounts of data and the inability to modify a legend's title. I could easily work around these offline in ArcGIS Pro, but not everyone has that option, so I'd count it as a limitation encountered in this project.

The Web Experience Designer, while quite advanced and easy to use, was a bit of a chore to understand in its intricacies, and the more in-depth features of the platform have a steep learning curve. By that, I mean that this project only uses a fraction of the options available in the Web Experience Designer. Beyond the additional widgets, every object that is part of the experience can be given actions to perform, set off by specific triggers. For example, if the user clicks on a polygon for a fire, it is possible to have the data table for that specific polygon appear (in a multitude of ways) on the map. There were many other actions and triggers to use, but the platform doesn't make it easy for new users to exploit the designer's full potential.

Future Work

In a perfect world, where unlimited resources were available for this project, I would have the web experience display the 100 largest forest fires in the province of Quebec for every year since the start of the record (1976).

In other words, I would set up a button for each year in the dataset. Users would simply click on one (e.g. 2012) and the web experience would display the 100 largest forest fires of that chosen year. That way, users could explore a much larger, and much more informative, dataset. The top 100 forest fires shown also focus on the southern half of the province, since most of the population (about 95%) lives there. So, with unlimited resources, the dataset would also include the forest fires that occurred in the northern half of the province.

In a perfect world, the dataset would cover the entirety of Canada, so that a top 100 could be produced for each province and for every year since 1976. That would, however, be a massive dataset.

Utility of the project

The goal of this project was to inform the population about the locations and sizes of wildfires in the province of Quebec. Specifically, it aims to inform fellow Quebecers about the largest forest fires that have occurred in their own province. The dataset can be updated every year, if needed, to display a more up-to-date picture of the wildfires. The interactive aspects let users see the information for every fire displayed (ID, year, size, etc.). It can also be used by forestry companies and environmental agencies that wish to visualize the largest forest fires.

Marie Kondo told us to spark joy…but where does our used clothing really go?

Janelle Lee
Geo-visualization project, SA 8905, Fall 2020

Project Link: click here (use full screen mode for optimal viewing)

Background and Inspiration

In 2015, I lived in Nairobi, Kenya for eight months to participate in a work abroad internship program. On weekends, the other interns and I would explore the city—or, as they say in Nairobi, “go into town”. One of our usual excursions was to go to Toi Market, an open market beside Kibera, which is one of the largest slums in Africa. The market primarily sells clothing, shoes, and miscellaneous household items. I dug up my old travel blog documenting one of our weekend trips to Toi Market:

“The market is essentially a maze—any turn left or right takes you deeper into the layers of Toi and once inside, it’s difficult to find a way out unless you can retrace your steps. Although hot, sunny, and noisy on the outside, most of Toi is shaded and traffic is muffled by the metal-sheeted roofs and make-shift walls that divide each stall from the next. Piles of clothing extend as far as you can see and you begin to wonder where all of it came from. Many items still have their Value Village, Saver Thrift Store, or Salvation Army tags on them which gives a clue.”

I recall having a conversation with a local who told me that a clothing shipment came in every Wednesday—a seemingly infinite supply of second-hand clothing, much of which isn’t even climate appropriate. One of my most memorable images from the market is an endless wall of shelves filled with used Ugg boots (apologies for the blurry photo):

Used clothing doesn't just stay in Nairobi. For about two weeks, I was working in Kisumu, a city in western Kenya about 450 km north-west of Nairobi. While walking around Kisumu one day, I saw a lady selling shirts and jeans, which included a bright red Tim Hortons t-shirt:

Seeing the volume and frequency of used clothing shipments into Kenya left an impression on how I view consumption and consumer waste, particularly in a time when minimalism and “sparking joy” by getting rid of things that we no longer use or wear have become lifestyle trends. To be fair, I haven’t read Marie Kondo’s book or watched her Netflix show, so perhaps the practices she advocates for are more nuanced and thoughtful about how and where people should get rid of their stuff. In any case, my goal for this geo-visualization project is to encourage us to be more aware and mindful of where our used clothing goes before we decide to donate it to “benefit others” (or before we even purchase new items in the first place).

Project Description

My geo-visualization includes two interactive maps and one graph:

Screenshot of my geo-visualization

The first map illustrates trade flow lines between countries—users can select an import and/or export country of interest to see where used clothing is shipped around the world. The other map is a choropleth map that shows whether a country is a net exporter or net importer of used clothing. Users can view the results for different years between 1995 and 2019.

The scatterplot compares each country’s GDP per capita and its net trade value of used clothing. A positive net trade value indicates that a country is a net exporter, while a negative trade value indicates that a country is a net importer. Users can press a play button to see how the scatterplot changes between 1995 and 2019. As a whole, the maps and scatterplot show a pattern in used clothing trade flows. Richer countries tend to export the most used clothing and poorer countries are the primary recipients of these shipments.

Technology

I used Tableau as the data visualization software for this project. My primary motivation for using Tableau was to learn the software, having had no prior experience with it. I also knew that it was an effective tool for visualizing and interacting with data, and I wanted users to be hands-on with the data in my geo-visualization.

Data & Methods

The UN Comtrade Database provides international trade data for thousands of different commodities. One of these commodities is "Clothing; worn, and other worn articles". Unfortunately, I was unable to find a more detailed description of this commodity, so I assumed that it referred to second-hand/used clothing of any kind. I retrieved the trade value (in USD) of used clothing exports between 1995 and 2019. The database also provides the weight (in kg) of used clothing shipments, but unfortunately most countries do not record their exports/imports by weight, so most of this data was missing from the database (and therefore not usable).

Screenshot of the UN Comtrade Database

After collecting the data from the UN Comtrade Database, I added it to ArcMap to create a shapefile of trade flow lines between countries using the "XY to Line" tool (see screenshot below). I then added this shapefile to a new worksheet in Tableau, where I was able to adjust the width of each line based on the trade value of used clothing shipments between countries. This formed the basis for my first map.

Trade flow lines created in ArcMap
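For readers who want to script this step rather than run it from the toolbox, a minimal arcpy sketch follows (the table and field names are illustrative; the input table needs origin and destination coordinates for each exporter-importer pair):

    # Sketch of the "XY to Line" step in arcpy; names are illustrative.
    import arcpy

    arcpy.XYToLine_management(
        in_table="trade_pairs.csv",
        out_featureclass="trade_flows.shp",
        startx_field="export_lon", starty_field="export_lat",
        endx_field="import_lon", endy_field="import_lat",
        line_type="GEODESIC",   # great-circle flow lines
        id_field="pair_id",
    )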

For my choropleth map, I first summarized the trade value data in Excel. More specifically, I calculated the following for each country: 1) the total value of used clothing exports, 2) the total value of used clothing imports, and 3) the net trade value (calculated by subtracting total imports from total exports). The net trade value data was added to Tableau and used as the variable for the choropleth map. A divergent colour scheme was applied to the map to differentiate between countries with a positive versus negative net trade value (i.e. net exporters and net importers, respectively). A filter was added to the map so users can view the results for different years between 1995 and 2019.
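The summary logic is simple arithmetic; a pandas equivalent of the Excel step might look like this (the column names are assumptions about the Comtrade extract's layout):

    # Sketch: net trade value per country per year (exports minus imports).
    # Column names are assumptions about the Comtrade extract.
    import pandas as pd

    trade = pd.read_csv("comtrade_used_clothing.csv")  # reporter, partner, year, trade_value_usd
    exports = trade.groupby(["reporter", "year"])["trade_value_usd"].sum().rename_axis(["country", "year"])
    imports = trade.groupby(["partner", "year"])["trade_value_usd"].sum().rename_axis(["country", "year"])
    net = exports.sub(imports, fill_value=0).rename("net_trade_value_usd").reset_index()
    # Positive = net exporter; negative = net importer.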

For the scatterplot, data on GDP per capita for each country were retrieved from the World Bank's open data catalogue. The data were added to Tableau and a scatterplot was created with GDP per capita on the x-axis and net trade value of used clothing on the y-axis. Points on the scatterplot were made into proportional symbols to easily visualize differences in GDP per capita. An animation function was added so that users can see how each country's GDP and net trade value change over time. The United States, United Kingdom, Ghana, and Ukraine were labeled in the scatterplot to act as reference points. The US and UK are two of the top net exporters of used clothing in recent years, while Ghana and Ukraine are two of the top net importers.

Geo-visualization Improvements Wish List

  • My initial idea for the trade flow map was to use a 3D model of the earth and animate the trade flow lines between countries. Users would be able to rotate the earth and the animated lines would more clearly and dynamically illustrate the direction of used clothing shipments (i.e. from exporter to importer).
  • The layout of the geo-visualization can be improved so that white space and text/visuals are better balanced when viewed on different devices. I had difficulty adjusting the layout in Tableau to suit one device type without interfering with the layout on another (e.g. smartphone versus desktop). With the current layout, the geo-visualization elements appear much more spread out, with a lot of white space in between.
  • When viewing the geo-visualization using the Tableau software on my computer, the playback speed of the scatterplot time lapse is fine; however, it is extremely slow when viewing it through the shareable link. I’d like to figure out how to resolve this so that the scatterplot animation doesn’t lag when others view it through the link.
  • For the countries that are labeled in the scatterplot (US, UK, Ghana, and Ukraine), I would like to add an outline to their points so that they are easily identifiable; currently, it's difficult to tell which circle each label refers to. I would also like to change the proportional symbols by reducing the number of classes for GDP per capita and increasing the size contrast between classes. Unfortunately, I wasn't able to figure out how to customize the proportional symbols (e.g. choosing the number of classes).

Limitations & Future Work

  • One of the primary limitations of this geo-visualization is in the data. I only downloaded export trade value between countries, as opposed to both exports and imports. Export values are reported by the “reporting country” (i.e. the country that is exporting the commodity). The reporting country must also identify the “partner country” of that export (i.e. the country that is receiving the commodity). It was therefore assumed that the trade value of the imports received by the partner country is equal to the trade value of the exports reported by the reporting country. However, there are often mismatches between the trade value reported by the exporter and the importer because of differences in commodity valuation by different countries. The UN International Trade Statistics Knowledgebase explains this discrepancy here.
  • It would be interesting to supplement the geo-visualization with additional information on the total amount of second-hand clothing that each country produces (including items that get exported and those that stay within a country). This would give us a better sense of the proportion of clothing that ends up getting exported rather than staying in the domestic market.

How Does Canada Generate Electricity?

by Arthur Tong

GeoVisualization Project @RyersonGeo, SA8905, FALL 2020

Project Weblink (Click Here)


  • INTRODUCTION

Getting electricity to a country's homes, buildings and industries is an extremely challenging task, especially for countries that are enormous in land area, since transporting power over long distances is much more difficult. To this day, electrical energy remains either very inconvenient or expensive to store, and with demand increasing over the years in Canada, balancing the two in real time is crucial.

How electricity is generated depends on the technologies and fuels available in a given area. According to Natural Resources Canada (2020), "the most important energy source in Canada is moving water, which accounts for 59.3% of electricity supply, making it the second largest producer of hydroelectricity in the world with over 378 terawatt hours in 2014."

The goal of this interactive map project is to show most of the power plants in Canada along with their respective sources and generating capacities (MW), which are proportional to the size of the circles shown in the project weblink above.


  • METHODOLOGY

In this section, I will introduce the methodology for this project. I will first describe how the data was collected, followed by the steps needed to produce the final dashboard with Tableau Public.

Data Collection

For the purpose of this study, I needed the pin-point (latitude/longitude) locations of all types of power plants across Canada: from primary energy, like nuclear and the renewables, to secondary energy produced from primary energy commodities like coal, natural gas and diesel. I tried various sources, such as the Open Government Portal, but most of the open data they provide does not contain the power plants' exact locations.

Therefore, I had to manually pin-point all the data from external sources, based mostly on two websites: Global Energy Observatory (GEO) and The Wind Power. Other plants were identified by looking up the publicly or privately owned electric utility companies' websites for each province, for example BC Hydro, Ontario Hydro, TransAlta, etc., and their coordinates were retrieved using Google Maps. A similar interactive map, "Electricity Generating Stations in British Columbia Map", had been created by researchers at the University of Victoria; it provided most of the data for British Columbia and a framework for what other relevant data to include for the remaining provinces (as shown in the figure below).

Figure 1: Snapshot of the columns included for the dataset.

In addition, all 13 provinces and territories were accounted for, and a total of 612 points were collected manually.


Construction of Tableau Dashboard

Tableau Public is the software used for this project. First, load the Excel data into Tableau through Data -> New Data Source -> Microsoft Excel. Here, make sure the latitude and longitude columns are assigned a geographic role, as shown in the snapshot below, so they can be used to map the data.

Figure 2: Snapshot showcasing the Geographic roles assigned to the Latitude and Longitude columns.

From the new worksheet screen, the sections on the left correspond to the columns of the table. Drag the non-generated latitude and longitude to Columns and Rows and choose the 'symbol map' under 'Show Me' at the top right. If an 'unknown locations' tab pops up at the bottom right, it means Tableau was not able to automatically match the province names in the column to its database; this can be fixed simply by clicking that tab and manually editing the unknown locations. After dragging in the essential elements you want to present, it will look something like the figure below. In addition, the base map can be changed to a dark theme under Map -> Background Maps.

Figure 3: Tableau Interactive Map Layout. 'Source' is represented by different colours while 'capacity' is represented by the sizes of the circles.

Moving on, to create a bar or pie chart, hover over the bar on the left to choose which graph would best visualize the data you are trying to present, then drag the essential data into Columns/Rows.

Figure 4: Bar graph showing “Total capacity by all provinces”.

Last but not least, add a new 'dashboard' sheet and drag all the maps/graphs into the dashboard to form the final product. Organizing the layout in the dashboard can be frustrating without the proper frame; consider making elements like the filters and smaller graphs into 'floating' items by right-clicking them, so that they can be placed on top of other elements on the dashboard. In this case, I made the bar graph floating so it is laid on top of the interactive map.

Figure 5: Dashboard Layout.

RESULTS & LIMITATIONS

Hydroelectricity contributes 56.67% of electricity generating capacity across the country, followed by natural gas (12.39%) and nuclear energy (11.29%). However, a lot of electricity generation in Alberta is still based on coal, which makes up 46.21% of the total capacity in that province.
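As an illustration of how such shares can be computed from the manually collected points, here is a minimal pandas sketch (the column names are assumptions about the spreadsheet described above):

    # Sketch: percent of total generating capacity by source, nationally and
    # within one province. Column names are assumptions.
    import pandas as pd

    plants = pd.read_excel("canada_power_plants.xlsx")  # Province, Source, Capacity_MW
    national = plants.groupby("Source")["Capacity_MW"].sum()
    print((100 * national / national.sum()).round(2))   # e.g. hydro near 56.67%

    alberta = plants[plants["Province"] == "Alberta"]
    ab = alberta.groupby("Source")["Capacity_MW"].sum()
    print((100 * ab / ab.sum()).round(2))               # coal share within Alberta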

Since all the data were collected manually, they may not be 100% accurate, but the idea is to give a sense of where each plant is approximately located. For example, a single wind farm containing ten wind turbines may occupy a large area across a mountain or field; the data collected were based on one wind turbine instead of plotting all ten of them.

Moreover, less developed regions like the Northwest Territories show very little electricity generation due to their lower populations (one diesel power plant per small town, located using Google satellite imagery); there could well be more power plants around those areas that were missed.

In conclusion, precise and consistent data are lacking for the provinces on open data portals, creating potential for similar future studies if more data become available. A timeline perspective could also be added to this interactive map, so that as users drag along the bar they can see how different types of power plants were built in different locations over time.

A Glimpse of Short Term Rentals in Calgary Using Tableau

by Bryan Willis
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2020

Project link: https://thebryanwillis.github.io/CalgaryShortTermRentals.html

Background

Over the years, many homeowners have decided to turn their places of residence into short term rentals, allowing them to be rented out for short periods of time. Short term rentals have also seen an increase in popularity due to their better pricing compared with hotels and the unique neighbourhood character they provide. Although Calgary has not seen an increase in short term rentals as dramatic as that of Toronto or Vancouver, it has continued to see growth in the short term rental supply. The City of Calgary defines a short term rental as a place of residence that provides temporary accommodation and lodging for up to 30 days, and all short term rentals in Calgary must legally obtain a business license to operate.

This interactive dashboard aims to highlight some key components of short term rentals in Calgary, such as their locations, license status, housing type composition, and licenses issued per month.

Data

The data used in this dashboard is based on the Short Term Rentals data set, which was acquired through the City of Calgary's Open Data Portal.

Methods

  1. Data Cleaning – After downloading the data from the open data portal, the data needed to be cleaned so that it would properly display the attributes we want. All rows containing NULL values were removed from the data set via MS Excel (a pandas sketch of this step appears after this list).
  2. Map Production – After importing the cleaned data into Tableau, we can quickly create the map showing where the short term rentals are located. To do this, drag both of the auto-generated Latitude and Longitude fields into the middle of the sheet, which should automatically generate a map with the location points. To differentiate LICENSED and CANCELLED points, drag the License Status column into the 'Color' box.
  3. Monthly Line Graph – To produce the line graph showing the number of licenses issued by month, drag the license date field into the COLUMNS section at the top, then right click on it and select MONTH. For the ROWS section, use the same field again, but right click on it after dragging and select MEASURE and then COUNT. Lastly, drag License Status into the 'Color' box.
Finalized monthly line graph
  4. City Quadrant Table – To create this table, we first need to create a new column for the city quadrant. Right click the white space under 'Tables' and click 'Create Calculated Field', which will bring up a new window. In the new window, input RIGHT([Address],2) into the blank space. This code creates a new field from the last two letters of the Address field, which are the quadrant (also mirrored in the sketch after this list). Once this field is created, drag it into the ROWS section, then drag it in again, this time right clicking it and selecting Measure and then Count. Finish off by dragging License Status to the 'Color' box.
Finalized City Quadrant Table
  5. Dwelling Type Pie Chart – For the pie chart, first right click on the ROWS section and click 'New Calculation'. In the box, type avg(0) to create a new 'Mark'. There should now be an AGG(avg(0)) section under 'Marks'; make sure its dropdown is set to 'Pie'. Then drag the Type of Residence column into the 'Angle' and 'Color' boxes. To compute the percentage for each dwelling type, right click on the angle tab holding the Type of Residence column, then go to 'Quick Table Calculation' and finally 'Percent of Total'.
Finalized pie chart
  6. Dashboard Creation – Once the above steps are complete, a dashboard can be made by combining all 4 sheets in the Dashboard tab.
Finalized dashboard with the 4 created components
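As referenced in steps 1 and 4, here is a minimal pandas sketch of the cleaning and quadrant logic (the file and column names are assumptions about the City of Calgary extract and may differ):

    # Sketch of the data cleaning (step 1) and quadrant derivation (step 4).
    # Column names are assumptions about the open-data extract.
    import pandas as pd

    df = pd.read_csv("short_term_rentals.csv")
    df = df.dropna(subset=["Address", "License Status"])  # drop rows with NULL values

    # The last two characters of the address are the city quadrant (NW/NE/SW/SE),
    # mirroring the Tableau calculation RIGHT([Address], 2).
    df["Quadrant"] = df["Address"].str[-2:]
    print(df.groupby(["Quadrant", "License Status"]).size())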

Limitations

The main limitations in this project come from the data. Older licensing data is removed from the data set when it is updated daily by city staff. This makes it impossible to compare full year-to-date data; as seen in the data set used in the dashboard, the majority of the January data has already been removed, with the exception of January 26, 2020. Additionally, quite a few entries in the data set had null addresses, which made it impossible to pinpoint where those rentals were. Lastly, as this data set is for 2020, the COVID-19 pandemic might have disrupted the number of short term rentals being licensed, due both to the city shifting priorities and to more people staying home, resulting in fewer vacant homes available for short term rentals.

Geovisualization of the York Region 2018 Business Directory


(Established Businesses across Region of York from 1806 through 2018)

Project Weblink (ArcGIS Online): Click here or direct weblink at https://ryerson.maps.arcgis.com/apps/opsdashboard/index.html#/82473f5563f8443ca52048c040f84ac1

Geovisualization Project @RyersonGeo
SA8905- Cartography and Geovisualization, Fall 2020
Author: Sridhar Lam

Introduction:

York Region, Ontario, as identified in Figure 1, is home to over one million people from a variety of cultural backgrounds and covers 1,776 square kilometres, stretching from Steeles Avenue in the south to Lake Simcoe and the Holland Marsh in the north. By 2031, projections indicate 1.5 million residents, 780,000 jobs, and 510,000 households. Over time, York Region has attracted a broad spectrum of business activity and over 30,000 businesses.

Fig.1: Region of York showing context within Ontario, Greater Toronto Area (GTA) and its nine Municipalities.
(Image-Sources: https://www.fin.gov.on.ca/en/economy/demographics/projections/ , https://peelarchivesblog.com/about-peel/ and https://www.forestsontario.ca/en/program/emerald-ash-borer-advisory-services-program)

Objective:

To create a geovisualization dashboard for the public to navigate, locate and compare established Businesses across the nine Municipalities within the Region of York.

The dashboard is intended to help Economic Development market research divisions sort and visualize businesses’ nature, year of establishment (1806 through 2018), and identify clusters (hot-spots) at various scales.

Data-Sources & References:

  1. Open-Data York Region
  2. York Region Official Plan 2010

Methodology:

First, the Business Directory (updated as of 2018) and the municipal boundary layer files, which are made available through York Region's open data portal, are downloaded. As shown in Figure 2, the raw data is analyzed to identify each record's municipality based on the address/municipal location distribution. The City of Markham and the City of Vaughan turn out to have the major share.

Fig.2: The number of businesses and the percentage of share within the nine Municipalities of the York Region.

The raw data is further analyzed, as shown in Figure 3, to identify the major business categories; the chart below presents the top categories within the dataset.

Fig.3: Major Business Categories identified within the dataset.

The raw data is then analyzed, as shown in Figure 4, to identify businesses by year of establishment, which reveals that most of the businesses within the dataset were established from the 1990s onward.

Fig 4: Business Establishment Years identified within the dataset.

The business address data is checked for consistency, and the Geocodio service is used to geocode the full list of business addresses. The resulting dataset is imported into ArcGIS, as shown in Figure 5, along with the municipal boundary layers, and checked for inconsistent data before being uploaded to ArcGIS Online as hosted layers.

Fig.5: Business Locations identified after geocoding of the addresses across the York Region.
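For reference, a minimal sketch of how this geocoding step could be scripted against Geocodio's batch endpoint follows (the API version path, parameters and response structure shown here are assumptions; consult the current Geocodio documentation):

    # Hypothetical sketch of batch geocoding the business addresses with Geocodio.
    # Endpoint version and response fields are assumptions; check the API docs.
    import requests

    addresses = ["100 Town Centre Blvd, Markham, ON", "2141 Major Mackenzie Dr, Vaughan, ON"]
    resp = requests.post(
        "https://api.geocod.io/v1.7/geocode",   # version path may differ
        params={"api_key": "YOUR_API_KEY"},     # placeholder key
        json=addresses,
    )
    for item in resp.json().get("results", []):
        match = item["response"]["results"][0]["location"]
        print(match["lat"], match["lng"])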

Once hosted on ArcGIS Online, a new dashboard titled 'Geovisualization of the York Region 2018 Business Directory' is created. The dashboard components are tested for visual hierarchy, and the following components are carefully selected to display the data:

  1. Dashboard Title
  2. Navigation (as shown in Figure 6, placed on the left of the interface, providing information and user controls for navigation)
  3. Pull-Down/ Slider Lists for the user to select and sort from the data
  4. Maps – One map to display the point data and the other to display cluster groups
  5. Serial Chart (List from the data)- To compare the selected data by the municipality
  6. Map Legend, and
  7. Embedded Content – A few images and videos to orient the context of the dashboard

The user is given a choice to select the data by:

Fig.6: User interface for the dashboard offering selection in dropdown and slider bar.

Thus a user of the dashboard can select or make choices using one or a combination of the following, to display the results in the right panes (map, data chart and cluster density map):

  1. Municipality: By each or all Municipalities within York Region
  2. Business Type: By each type or multiple selections
  3. Business Establishment Year Time-Range using the slider (the Year 1806 through 2018)

For the end-user of this dashboard, results are also provided based on the business locations identified after geocoding the addresses across York Region, comparable and quantifiable for each of the nine municipalities, as shown in Figure 7.

Fig.7: Data-Chart displayed once the dashboard user makes a selection.

The point locations are plotted on a map while the clusters within the selected range (region / municipality / business type / year of establishment) are shown simultaneously (Figure 8).

Fig.8: Point data map and cluster map indicate the exact geolocation as well as the cluster for the selection made by the user across the York Region at different scales.

Results:

Overall, the dashboard provides an effective geovisualization with spatial context and location detail for York Region's 2018 businesses. The business type index, with the option to select one or multiple types at a time, and the timeline slider bar let an end-user drill down to the information they seek. The dashboard design offers a dark-theme interface that maintains a visual hierarchy among the different map elements: the map title, legend, colour scheme and colour combinations ensuring contrast and balance, font face and size, background and map contrast, choice of hues, saturation, emphasis, etc. The maps also allow the end-user to change the background basemap layers to see the data in the context of their choice. As shown in Figure 9, with location data and quantifiable data at different scales, the dashboard interface offers visuals to display the 30,000+ businesses across York Region.


Fig.9: Geovisualization Dashboard to display the York Region 2018 Business Directory across the Nine Municipalities of the York Region.

The weblink to access the ArcGIS Online Dashboard where it is hosted is: https://ryerson.maps.arcgis.com/apps/opsdashboard/index.html#/82473f5563f8443ca52048c040f84ac1

(Please note an ArcGIS Online account is required)

Limitation:

The 2018 business data for York Region contains over 38,000 data points, and the index/legend of business types may look cluttered when a selection is made. The fixed width of the left navigation panel is a technical limitation, because the pull-down display cannot be made wider; however, the legend screen can be maximized to read all the business categories clearly. There may be errors, or incomplete or missing data, in the compilation of business addresses. The dashboard can be updated quickly, with a little effort, whenever a new release of the York Region business directory becomes available in the coming years.

An Interactive Introduction to Retail Geography

by Jack Forsyth
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2020

Project Link: https://gis.jackforsyth.com/


Who shops at which store? Answers to this fundamentally geographic question often use a wide variety of models and data to understand consumer decision making to help locate new stores, target advertisements, and forecast sales. Understanding store trade areas, or where a store’s customers come from, plays an important role in this kind of retail analysis. The Trade Area Models web app lets users dip their toes into the world of retail geography in a dynamic, interactive fashion to learn about buffers, Voronoi polygons, and the Huff Model, some of the models that can underlie trade area modeling.

The Huff Model on display in the Trade Area Models web app

The web app features a tutorial that walks new users through the basics of trade area modeling and the app itself. Step by step, it introduces some of the underlying concepts in retail geography, and requires users to interact with the app to relocate a store and resize the square footage of another, giving them an introduction to the key interactions that they can use later when interacting with the models directly.

A tutorial screenshot showing users how to interact with the web app

The web app is designed to have a map dominate the screen. On the left of the browser window, users have a control panel where they can learn about the models displayed on the map, add and remove stores, and adjust model parameters where appropriate. As parameters are changed, users receive instant feedback on the map that displays the result of their parameter changes. This quick feedback loop is intended to encourage playful and exploratory interactions that are not available in desktop GIS software. At the top of the screen, users can navigate between tabs to see different trade area models, and they are also provided with an option to return to the tutorial, or read more about the web app in the About tab.

The Buffers tab allows for Euclidean distance and drive time buffers (pictured above)

Implementation

The Trade Area Models web app was implemented using HTML/CSS/JavaScript and third party libraries including Bootstrap, JQuery, Leaflet, Mapbox, and Turf.js. Bootstrap and JQuery provided formatting and functionality frameworks that are common in web development. Leaflet provided the base for the web mapping components, including the map itself, most of the map-based user interactions, and the polygon layers. Mapbox was used for the base map layer and its Isochrone API was used to visualize drive time buffers. Turf.js is a JavaScript-based geospatial analysis library that makes performing many GIS-related functions and analysis simple to do in web browsers, and it was used for distance calculation, buffering, and creating Voronoi polygons. Toronto (Census Metropolitan Area) census tract data for 2016 were gathered from the CensusMapper API, which provides an easy to use interface to extract census data from Statistics Canada. Data retrieved from the API included geospatial boundaries, number of households, and median household income. The Huff Model was written from scratch in JavaScript, but uses Turf.js’s distance calculation functionality to understand the distance from each store to each census tract’s centroid. Source code is available at https://github.com/mappinjack/spatial-model-viz
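To make the Huff Model computation concrete, here is a minimal sketch of the probability calculation in Python (the app itself implements the same formula in JavaScript, with Turf.js supplying the store-to-centroid distances); the function and parameter names are generic, not taken from the app's source:

    # Sketch of the Huff Model: the probability that a consumer at location i
    # patronizes store j is the store's attractiveness (size^alpha) discounted
    # by distance (d^beta), normalized over all competing stores.
    def huff_probabilities(distances, sizes, alpha=1.0, beta=2.0):
        """distances: distance from one tract centroid to each store;
        sizes: store square footage, used as the attractiveness term."""
        utilities = [s ** alpha / d ** beta for s, d in zip(sizes, distances)]
        total = sum(utilities)
        return [u / total for u in utilities]

    # Example: a tract 1 km and 3 km from stores of 20,000 and 50,000 sq ft.
    print(huff_probabilities([1000, 3000], [20000, 50000]))  # ~[0.78, 0.22]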

Limitations

One of the key limitations of the app is a lack of specificity in the models. Buffer sizes and store square footage areas are abstracted out of the app for simplicity, but this results in a lack of quantitative feedback. The Huff Model also uses Euclidean distance rather than drive time, which ignores the road network and alternative modes of travel like subway or foot traffic. The Huff Model also uses census tract centroids, which can lead to counter-intuitive results in large census tracts. The sales forecasting aspect of the Huff Model tab makes large assumptions about the amount of money spent by each household on goods, and is affected by edge effects from both stores and customers that may fall outside the Toronto CMA. The drive time buffers also rely fully on the road network (rather than incorporating transit) and are limited by an upper bound of 60 minutes' travel time imposed by the Mapbox Isochrone API.

Future work

The application in its current form is useful for spurring interest and discussion around trade area modeling, but should be more analytical to be useful for genuine analysis. A future iteration should remove the abstractions of buffer sizes and square footage estimates to allow an experienced user to directly enter exact values into the models. Further, more demographic data to support the Huff Model, and parameter defaults for specific industries would help users more quickly create meaningful models. Applying demographic filters to the sales forecasting would allow, for example, a store that sells baby apparel to more appropriately identify areas where there are more new families. Another useful addition to the app would be integration of real estate data to show retail space that is actually available for lease in the city so that users can pick their candidate store locations in a more meaningful way.

Summary

The Trade Area Models web app gives experienced and inexperienced analysts alike the opportunity to learn more about retail geography. While more analytical components have been abstracted out of the app in favour of simplicity, users can not only learn about buffers, Voronoi polygons, and the Huff Model, but interact with them directly and see how changes in store location and model parameters affect the retail landscape of Toronto.

An interactive demo of Voronoi polygons that includes adding and moving stores

100 Years of Wildfires in California – Tableau Dashboard Time Series

Shanice Rodrigues

GeoVis Project Assignment @RyersonGeo, SA8905, Fall 2020

Natural phenomena can be challenging to map as they are dynamic through time and space. However, one solution is dynamic visualization itself, through the time series maps offered in Tableau. With this application, an interactive dashboard can be created that relays your data in various ways, including time series maps, graphs, text and graphics. If you are interested in creating a dashboard in Tableau with interactive time series and visuals, keep reading.

In this example, we will be creating a time series dashboard for the distribution of California's wildfires over time. The overall dashboard can be viewed on Tableau Public HERE.

First, let’s go over the history of these wildfires which will present an interesting context for what we observe from these fires over time.

History of Wildfires

There is a rich, complicated history between civilization and wildfires. While Indigenous communities found fires to be productive in producing soils rich in fertile ash, ideal for crops, colonizers dismissed all fires as destructive phenomena that needed to be extinguished, especially after the massive fires of the early 1900s caused many fatalities, such as the fire in the Rocky Mountains that killed 85 people. The United States Forest Service (USFS) decided to implement a severe fire suppression policy, requiring fires of 10 acres or less to be put out beginning in 1926, and then, from 1935, all fires to be put out by 10 A.M. the next day. It is expected that, with the immediate extinction of fires in the early to mid-1900s, natural fire fuels such as forest debris continued to build up. This is likely the cause of the massive fires that appeared in the late 1900s and persist to the current age, which continue to be both difficult and expensive to manage. The pattern is obvious in the bar graph below of the number of fires and acres burned over the years (1919-2019).

Dashboard Creation

Data Importation

Many types of spatial files can be imported into Tableau, such as shapefiles and KML files, to create point, line or polygon maps. For our purposes, we will be extracting wildfire perimeter data from the Fire and Resource Assessment Program (FRAP) as linked here or on ArcGIS here. This data contains fire perimeters in California dating back to 1878, up to the last full calendar year, 2019. Informative attribute data such as fire alarm dates, fire extinction dates, causes of fire and acre size of fires are included. While there is a file on prescribed burns, we will only be looking at the wildfire history file. The data is imported into Tableau as a 'Spatial file', where the perimeter polygons are automatically labelled as a geometry column by Tableau.

Timeseries

The data table is shown on the "Data Source" tab, where the table can be sorted by fields, edited or even joined to other data tables. The "Sheet" tabs are used to produce the maps or graphs individually, which can all then be assembled in the "Dashboard" tab. First, we will create the wildfire time series for California. Conveniently, Tableau categorizes table columns by their data types, such as date, geometry, string text or integers. We can add the column "Year" to the "Pages" card, which Tableau will use as the temporal reference for the time series.

The following time series toolbar will appear, and wildfire polygons will appear on the map depending on the year in which they occurred, as set by the scroll bar. The map can be shown as a looped animation at different speeds.

Additionally, the "Geometry" field can be added to the "Marks" card; these are the wildfire perimeter polygons. Tableau has also generated "Longitude" and "Latitude" fields spanning the total spatial extent of the wildfire geometries, which can be added to the "Columns" and "Rows" shelves.

In the upper-right "Show Me" panel, the map icon can be selected to generate the base map.

Proportionally Sized Point Features

Multiple features can be added to this map to improve the visualization. First, the polygon areas appear very small and hard to see on the map above, so it may be more effective to display them as point locations. In the "Marks" card, use the dropdown and select the "Shape" option.

From the shape tab, there are multiple symbols to select from, or symbols can be uploaded from your computer into Tableau. Here, we chose a glowing point symbol to represent wildfire locations.

Additionally, more information can be encoded in the points, such as proportional symbol sizes according to the area burned by each fire (the GIS ACRES field). A new calculated field has to be created for the point size magnitudes, as shown:
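Assuming the acreage field is named [Gis Acres], and following the description below, the calculation takes roughly this form:

    POWER([Gis Acres], 10)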

The field is named “Area Burned (acres)” and is brought to the power of 10 so that the differences in magnitude between the wildfire points are noticeable and large enough on the map to be spotted, even at the lowest magnitude.

Tool Tip

Another informative feature to add to the points is the "Tool Tip", the attribute box shown for a feature the reader hovers over. Often, attribute fields already available in the data table can be used in the tool tip, such as fire names or the year of the fire. However, some fields need to be calculated, such as the duration of each wildfire. This can be calculated from the Analysis tab, as shown:

For the new field, named "Fire Life Length (Days)", the following script was used:
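Assuming the FRAP date fields are named ALARM_DATE and CONT_DATE, it takes this form:

    DATEDIFF('day', [ALARM_DATE], [CONT_DATE])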

Essentially, this script finds the difference, in days, between the alarm date (when the fire started) and the contained date (when the fire ended).

For instance, here are some important attributes about each wildfire point that were added to the tool tip.

As shown, virtually limitless formatting options, such as font, text size, and hover behaviour, can be applied to the tool tip.

Graphics and Visualizations

The next aspect of the dashboard to incorporate is the set of graphs that better inform the reader about the statistics of wildfire history. The first graph will show not only the number of fires annually, but also the acres burned, which conveys the sizes of the fires.

As with the map, the appropriate data fields need to be added to the Columns and Rows to generate a graph. Here the alarm date (start of the fire) is added to the x-axis, whereas the number of fires and Gis Acres (acres burned) were added to the y-axis, filtered by year.

The number of fires came from a new calculated field, built with the following script:
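Assuming the fire name field is named FIRE_NAME, a distinct count, filtered by year, does the job:

    COUNTD([FIRE_NAME])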

Essentially, every row with a unique fire name is counted for every year under the "Alarm_Date" field, giving the number of fires per year.

Another graph added to this dashboard informs the reader about the causes of fires and whether they vary temporally. Tableau offers many novel ways of turning mundane data into visualizations that are both informative and appealing. Below is an example of a clustering graph, showing the number of fires by cause against month over the entire time series. A colour gradient was added to emphasize the causes that result in the most fires, displayed in bright yellow against less frequent causes shown in crimson red.

As with the map, "(Alarm_Date)" was added to the "Filters" card; however, since we want to look at causes per month rather than per year, we can use the dropdown to change the date of interest to "MONTH."

We also want to add the "Number of Fires" field to the "Marks" card to quantify how many fires are attributed to each cause. As shown, the same field can be added twice: once to edit its size attribute and once to edit its colour gradient attribute.

Putting it All Together

Finally, in the "Dashboard" tab, all the pages built above, the time series map and the graphs, can be dragged and dropped into the viewer. The left toolbar can be used to import sheets, change the extent size of the dashboard, and add or edit graphics and text.

Hopefully you've learned some of the basics of map and statistical visualization that can be done in Tableau through this tutorial. If you're interested in the history, recommendations and limits of this visualization, read on below.

Data Limitations and Recommendations

Firstly, the wildfire data itself has shortcomings, particularly that fires may not have been well documented prior to the mid-1900s due to the lack of observational technology. Additionally, only large fires were detected by surveyors, whereas smaller fires went unreported. With today's satellite imagery and LiDAR technology, fires of all sizes can be detected, so it may simply appear that fires of all sizes occur more frequently in the modern age than before. Besides the data, there are limitations with Tableau itself. First, all spatial data are transformed to the WGS84 (EPSG:4326) spatial reference system when imported into Tableau, and this conversion can introduce inaccuracies. It would therefore be helpful for Tableau to support other reference systems and give the user the choice of whether or not to convert. Another limitation concerns the proportional symbols for wildfires: the proportional symbol field had to be calculated and raised to the "power of 10" to show up on the map, with no legend of the size range produced. It would be easier if Tableau offered a "Proportional Symbol" option on the "Size" tab, as this is a basic parameter required for many maps and would communicate the data more clearly to the reader. Hopefully Tableau can resolve these technical limitations, making its mapping flexible enough to visualize many dataset types.

With gaps in the wildfire history data for California, many recommendations can be made. While this visualization looked at the general number of fires per month by cause, it would be interesting to go more in depth with climate or weather data, such as whether an increasing number of thunderstorms or warmer summers are sparking more fires in the 2000s than in the 1900s. It would also be worth visualizing wildfire distributions against urban sprawl, since fires within range of urban centres and people tend to be ranked as more serious hazards than those in the wilderness. Especially since the majority of wildfires are caused by people, it would be important to map major camping grounds and residential areas and their potential association with the wildfires around them. Also, mapping the time since areas last burned would quantify how long vegetation has had to regrow, as well as the build-up of natural fuels, which can help predict the size of future wildfires if sparked; this is important for residential areas near zones of high natural-fuel build-up, and even for insurance companies looking to locate large fire-prone areas. Overall, improving a visualization such as this requires building the context surrounding it, such as filling gaps in wildfire history through historical literature and surveying, and deriving wildfire risk data from environmental and anthropogenic data.

Hello world!

Welcome to https://spatial.blog.torontomu.ca! The blog was created as a teaching tool for the graduate course SA8905 Thematic Cartography and Geovisualization in the Master of Spatial Analysis (MSA) program at Toronto Metropolitan University’s Department of Geography and Environmental Studies.

Please read on below or use the search function, categories list, or tag cloud to find posts of interest. Keep in mind that most posts reflect student work summarizing one of two projects that had to be completed within a 12-week term. Happy reading!