Geospatial Assessment of Solar Glare Hazard Potential on Urban Road Network (Mississauga, ON)

1. Introduction and Objectives
This report documents the methodology and execution of a geospatial analysis aimed at identifying specific segments of the road network with a high potential for dangerous solar glare during critical commute times.
The analysis focuses on the high-risk window for solar glare in the Greater Toronto Area (GTA), typically the winter months (January) around the afternoon commute (4:00 PM EST), when the sun is low on the horizon and positioned in the southwest.
The primary objectives were to:
1. Calculate the average solar position (Azimuth and Elevation) for the defined high-risk period.
2. Determine the orientation (Azimuth) of all road segments in the StreetCentreline layer.
3. Calculate the acute angle between the road and the sun (R_S_ANGLE).
4. Filter the results to identify segments where the road is both highly aligned with the sun and the driver is traveling into the solar direction, marking them as High Glare Hazard Potential.
2. Phase I: ArcPy Scripting for Data Calculation
The first phase involved developing an ArcPy script to calculate the necessary astronomical and geometric values and append them to the input feature class. Due to database constraints (specifically the 10-character field name limit in certain geodatabase formats), field names were abbreviated.
2.1. Script Parameters and Solar Calculation
The script uses the approximate latitude and longitude of Mississauga, ON (43.59, -79.64), and calculates the average solar position for the first week of January 2025 at 4:00 PM EST.
2.2. Final ArcPy Script
The following Python code was executed in the ArcGIS Pro Python environment:
import arcpy
import datetime
import math

# --- User Inputs (ADJUST THESE VALUES AS NEEDED) ---
input_fc = "StreetCentreline"
MISSISSAUGA_LAT = 43.59
MISSISSAUGA_LON = -79.64
TARGET_TIME_HOUR = 16
YEAR = 2025

# --- Field Names for Output (MAX 10 CHARACTERS FOR COMPLIANCE) ---
ROAD_AZIMUTH_FIELD = "R_AZIMUTH"      # Road segment's direction (calculated)
SOLAR_AZIMUTH_FIELD = "S_AZIMUTH"     # Average sun direction
SOLAR_ELEVATION_FIELD = "S_ELEV"      # Average sun altitude
ROAD_SOLAR_ANGLE_FIELD = "R_S_ANGLE"  # Angle difference (glare indicator: 0 = worst glare)

# --- Helper Functions (Solar Geometry and Segment Azimuth) ---

def calculate_solar_position(lat, lon, dt_local):
    """Calculates solar azimuth and elevation (simplified NOAA standard)."""
    TIMEZONE = -5  # EST (UTC-5)
    day_of_year = dt_local.timetuple().tm_yday
    gamma = (2 * math.pi / 365) * (day_of_year - 1 + (dt_local.hour - 12) / 24)

    eqtime = 229.18 * (0.000075 + 0.001868 * math.cos(gamma) - 0.032077 * math.sin(gamma)
                       - 0.014615 * math.cos(2 * gamma) - 0.040849 * math.sin(2 * gamma))
    # The NOAA declination series already yields radians; no conversion is needed.
    decl = (0.006918 - 0.399912 * math.cos(gamma) + 0.070257 * math.sin(gamma)
            - 0.006758 * math.cos(2 * gamma) + 0.000907 * math.sin(2 * gamma)
            - 0.002697 * math.cos(3 * gamma) + 0.00148 * math.sin(3 * gamma))

    time_offset = eqtime + 4 * lon - 60 * TIMEZONE
    tst = dt_local.hour * 60 + dt_local.minute + dt_local.second / 60 + time_offset
    ha_deg = (tst / 4) - 180
    ha_rad = math.radians(ha_deg)
    lat_rad = math.radians(lat)

    cos_zenith = (math.sin(lat_rad) * math.sin(decl) +
                  math.cos(lat_rad) * math.cos(decl) * math.cos(ha_rad))
    zenith_rad = math.acos(min(max(cos_zenith, -1.0), 1.0))
    solar_elevation = 90 - math.degrees(zenith_rad)

    azimuth_num = -math.sin(ha_rad)
    azimuth_den = math.tan(decl) * math.cos(lat_rad) - math.sin(lat_rad) * math.cos(ha_rad)

    if azimuth_den == 0:
        solar_azimuth_deg = 180 if ha_deg > 0 else 0
    else:
        solar_azimuth_deg = math.degrees(math.atan2(azimuth_num, azimuth_den))

    solar_azimuth = (solar_azimuth_deg + 360) % 360
    return solar_azimuth, solar_elevation


def calculate_segment_azimuth(first_pt, last_pt):
    """Calculates the azimuth/bearing of a line segment (degrees clockwise from north)."""
    dx = last_pt.X - first_pt.X
    dy = last_pt.Y - first_pt.Y
    bearing_deg = math.degrees(math.atan2(dx, dy))
    return (bearing_deg + 360) % 360


# --- Main Script Execution ---
arcpy.env.overwriteOutput = True

try:
    # 1. Calculate the average solar position for the first week of January
    start_date = datetime.date(YEAR, 1, 1)
    end_date = datetime.date(YEAR, 1, 7)
    total_azimuth, total_elevation, day_count = 0, 0, 0
    current_date = start_date

    while current_date <= end_date:
        local_dt = datetime.datetime(current_date.year, current_date.month,
                                     current_date.day, TARGET_TIME_HOUR, 0, 0)
        az, el = calculate_solar_position(MISSISSAUGA_LAT, MISSISSAUGA_LON, local_dt)
        if el > 0:  # Only include times when the sun is above the horizon
            total_azimuth += az
            total_elevation += el
            day_count += 1
        current_date += datetime.timedelta(days=1)

    if day_count == 0:
        raise ValueError("The sun is below the horizon for all calculated dates/times.")

    avg_solar_azimuth = total_azimuth / day_count
    avg_solar_elevation = total_elevation / day_count

    # 2. Add the required fields if they do not already exist
    for field_name in [ROAD_AZIMUTH_FIELD, SOLAR_AZIMUTH_FIELD,
                       SOLAR_ELEVATION_FIELD, ROAD_SOLAR_ANGLE_FIELD]:
        if not arcpy.ListFields(input_fc, field_name):
            arcpy.AddField_management(input_fc, field_name, "DOUBLE")

    # 3. Use an UpdateCursor to calculate and populate the fields
    fields = ["SHAPE@", ROAD_AZIMUTH_FIELD, SOLAR_AZIMUTH_FIELD,
              SOLAR_ELEVATION_FIELD, ROAD_SOLAR_ANGLE_FIELD]

    with arcpy.da.UpdateCursor(input_fc, fields) as cursor:
        for row in cursor:
            geometry = row[0]
            segment_azimuth = None
            if geometry and geometry.partCount > 0 and geometry.getPart(0).count > 1:
                segment_azimuth = calculate_segment_azimuth(geometry.firstPoint,
                                                            geometry.lastPoint)

            road_solar_angle = None
            if segment_azimuth is not None:
                angle_diff = abs(segment_azimuth - avg_solar_azimuth)
                road_solar_angle = min(angle_diff, 360 - angle_diff)  # Smallest angular difference

            row[1] = segment_azimuth
            row[2] = avg_solar_azimuth
            row[3] = avg_solar_elevation
            row[4] = road_solar_angle
            cursor.updateRow(row)

except arcpy.ExecuteError:
    arcpy.AddError(arcpy.GetMessages(2))
except Exception as e:
    print(f"An unexpected error occurred: {e}")
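Because the script above depends on arcpy, the solar-geometry helper is easiest to sanity-check on its own. The sketch below repeats the same NOAA-style series in plain Python (function name and the Jan 4 test date are illustrative; note the declination series already yields radians) and confirms that an early-January 4:00 PM sun in Mississauga sits low in the southwest:

```python
import math

def solar_position(lat, lon, day_of_year, hour, tz=-5):
    """Simplified NOAA solar position, mirroring the ArcPy script's helper."""
    gamma = (2 * math.pi / 365) * (day_of_year - 1 + (hour - 12) / 24)
    # Equation of time (minutes) and solar declination (radians)
    eqtime = 229.18 * (0.000075 + 0.001868 * math.cos(gamma)
                       - 0.032077 * math.sin(gamma)
                       - 0.014615 * math.cos(2 * gamma)
                       - 0.040849 * math.sin(2 * gamma))
    decl = (0.006918 - 0.399912 * math.cos(gamma) + 0.070257 * math.sin(gamma)
            - 0.006758 * math.cos(2 * gamma) + 0.000907 * math.sin(2 * gamma)
            - 0.002697 * math.cos(3 * gamma) + 0.00148 * math.sin(3 * gamma))
    # True solar time (minutes) and hour angle
    tst = hour * 60 + eqtime + 4 * lon - 60 * tz
    ha = math.radians(tst / 4 - 180)
    lat_r = math.radians(lat)
    cos_zen = (math.sin(lat_r) * math.sin(decl)
               + math.cos(lat_r) * math.cos(decl) * math.cos(ha))
    elevation = 90 - math.degrees(math.acos(min(max(cos_zen, -1.0), 1.0)))
    az = math.degrees(math.atan2(-math.sin(ha),
                                 math.tan(decl) * math.cos(lat_r)
                                 - math.sin(lat_r) * math.cos(ha)))
    return (az + 360) % 360, elevation

# Jan 4 at 4:00 PM EST in Mississauga: expect a low sun in the southwest.
az, el = solar_position(43.59, -79.64, day_of_year=4, hour=16)
print(round(az), round(el))  # roughly 229 (southwest) and 7 (near the horizon)
```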
3. Phase II: Classification of True Hazard Potential (Arcade)
Calculating the R_S_ANGLE ($0^\circ$ to $90^\circ$) identifies road segments that are geometrically aligned with the sun. However, it does not distinguish between a driver traveling into the sun (High Hazard) versus traveling away from the sun (No Hazard).
To isolate the segments with a true hazard potential, a new field (HAZARD_DIR) was created and calculated using an Arcade Expression in ArcGIS Pro’s Calculate Field tool.
3.1. Classification Criteria
A segment is classified as having High Hazard Potential (HAZARD_DIR = 1) if both conditions are met:
1. Angle Alignment: The calculated R_S_ANGLE is $15^\circ$ or less (indicating maximum glare).
2. Directional Alignment: The segment’s azimuth (R_AZIMUTH) is oriented within $\pm 90^\circ$ of the sun’s azimuth (S_AZIMUTH), meaning the driver is facing the sun.
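These two checks can be prototyped outside ArcGIS before committing to a field calculation. A plain-Python sketch of the same logic (function name and sample values are hypothetical):

```python
def hazard_dir(road_az, solar_az, r_s_angle):
    """Mirror of the HAZARD_DIR classification: 1 = high glare hazard, 0 = none."""
    # 1. Angle alignment: road nearly parallel to the sun's bearing
    if r_s_angle <= 15:
        # 2. Directional alignment: heading within +/-90 degrees of the sun
        direction_diff = abs(road_az - solar_az)
        acute_dir_diff = min(direction_diff, 360 - direction_diff)
        if acute_dir_diff <= 90:
            return 1
    return 0

print(hazard_dir(250, 245, 5))   # aligned and facing the sun -> 1
print(hazard_dir(270, 245, 25))  # angle check fails (25 > 15) -> 0
```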
3.2. Final Arcade Expression for Field Calculation
The following Arcade script was used to populate the HAZARD_DIR field (Short Integer type):
// HAZARD_DIR Field Calculation (Language: Arcade)

// Define required input fields
var solarAz = $feature.S_AZIMUTH;   // Average sun azimuth (e.g., 245 degrees)
var roadAz = $feature.R_AZIMUTH;    // Road segment azimuth (0-360)
var angleDiff = $feature.R_S_ANGLE; // Angle between road and sun (from Phase I)

// 1. Check for a high glare angle (<= 15 degrees)
if (angleDiff <= 15) {

    // 2. Check whether the road direction faces INTO the solar direction.
    // Calculate the acute difference between roadAz and solarAz (0-180 degrees)
    var directionDiff = Abs(roadAz - solarAz);
    var acuteDirDiff = Min(directionDiff, 360 - directionDiff);

    // If the difference is <= 90 degrees, the driver is generally facing the sun
    if (acuteDirDiff <= 90) {
        return 1; // TRUE: HIGH Glare Hazard Potential
    }
}

return 0; // FALSE: NO Glare Hazard Potential (angle too wide or driving away from the sun)
4. Results and Mapping of Hazard Potential
The final classification based on the HAZARD_DIR field (where 1 indicates a High Glare Hazard Potential) was used to generate a thematic map of the Mississauga road network. The map isolates the segments that will experience direct, high-intensity sun glare during the 4:00 PM EST winter commute.
4.1. Map Output Description
The map, titled “Solar Glare Hazard Map of City of Mississauga for the First Week of The Year,” clearly differentiates between segments with no glare hazard (yellow) and those with a high solar glare hazard (red).
• Yellow Segments (Street with no Solar Glare Hazard in first week of the year): These represent the vast majority of the network. They include roads running generally north-south (where the sun is primarily hitting the side of the vehicle) or segments where the driver is traveling away from the low sun angle (i.e., eastbound/northeast-bound traffic).
• Red Segments (Street with High Solar Glare Hazard in first week of the year): These are the critical segments for this analysis. They represent roads that are:
1. Oriented in the southwest-to-west direction (similar to the sun’s average azimuth).
2. Where a driver traveling along that segment would be facing directly into the low sun angle.
4.2. Analysis of Identified Hazard Corridors
The high-hazard (red) segments are predominantly clustered, as shown in the following map, along major arterial roads that follow a strong East-West or Northeast-Southwest orientation.



• Major Corridors: A highly concentrated linear feature of red segments is visible running across the northern/central part of the city, strongly suggesting a major East-West highway or arterial road where the vast majority of segments are oriented to the west. This confirms that these major commuter corridors are the highest-risk areas for this specific time and season.
• Localized Hazards: Several smaller, isolated red segments are scattered throughout the map. These likely represent the East-West portions of minor residential streets or short segments of angled intersections where the road azimuth briefly aligns with the sun.
• Mitigation Focus: The results provide specific, actionable intelligence. Instead of deploying wide-scale mitigation efforts, the city can focus on the delineated red corridors for strategies such as:
o Targeted message boards warning drivers during the specific 3:30 PM–5:00 PM time window in January.
o Evaluating tree planting or physical barriers only along these identified segments to block the low western sun.
5. Conclusion and Next Steps
The integration of solar geometry (Python/ArcPy) and directional filtering (Arcade) successfully generated a definitive dataset of high-risk road segments. The final map, generated based on the HAZARD_DIR field, clearly highlights specific routes that pose a safety risk to westbound or southwest-bound drivers during the target time window.
Future steps for this analysis include:
• Expanding the calculation to include the morning commute period (e.g., 7:00 AM EST) when the sun is low in the East/Southeast.
• Integrating the analysis with collision data to validate the modeled hazard areas.
• Developing mitigation strategies, such as targeted placement of tree cover or glare-reducing signage, based on the identified high-hazard segments.

Unifying the “Megacity”: A Historical Interactive Animation of the Amalgamation of Toronto Using Canva and ArcGIS Pro

By Aria Brown

Geovis Project Assignment | TMU Geography | SA8905 | Fall 2025

Hello everyone! Welcome to my geovis blog post :)

Introduction & Context

As someone with an immense passion for geography, I have come across many opportunities to apply that passion alongside my other interests. Although geography is paramount among my interests, I am also quite an avid history buff. Thus, I wanted to see if I could capture both passions and merge them into one project.

Happily, I was able to produce a project that combines three important and personal aspects of myself and my interests: my passion for geography and history, as well as my appreciation for the City of Toronto, where my family’s roots in Canada first began. I decided to visualize the history of this great city, taking viewers through time to show how Toronto became what it is today, using the free-to-use website Canva with its animation and interactive features.

Therefore, I present to you Unifying the “Megacity,” A Historical Interactive Animation of the Amalgamation of Toronto. My project takes us back to 1834 when the City of Toronto was first created and progressively follows a timeline to bring viewers to the present.

Figure 1

Timeline that the project will follow

Data & Rationale

Recently, incorporating animation into the world of GIS has become quite a popular trend that many individuals, organizations, and companies have implemented in their work. However, animation tools may be hard to come by, and most require a fee. Therefore, I wanted to see if GIS could be visualized using an easy-to-use tool that supports interactive features and animation. As some may be aware, ArcGIS Pro itself does feature an animation tool that uses a timeline of key frames, which the software then compiles and presents as a rather static animation.


So you might be wondering: why not just use ArcGIS Pro? I found ArcGIS Pro’s animation and interactive features to be quite limiting. Users are restricted to key frames to present their animation and cannot include extensive interactive features that can be played with or toggled on. I wanted to create a fun, almost seamless, easy-to-follow interactive animation without being tied to ArcGIS Pro’s constraints, and without the fees that other animation tools and software may require. Also, Canva is widely used to create presentations, reports, and more, and I thought: why not showcase how this website that many know and love is capable of so much more?

Tools Used:

Canva - Free Graphic Design Tool

ArcGIS Pro - Desktop GIS Software

Data Used:

City of Toronto Historical Annexation Boundaries (1834-1967) - University of Toronto Dataverse

City of Toronto Community Council Boundaries - Toronto Open Data Portal

Municipal Boundaries as of 1996 - Government of Canada

Methodology

Step 1: Upload Data into ArcGIS Pro and Examine the Data and its Attributes
First, I created a new map project in ArcGIS Pro and uploaded my data. I then right-clicked my layers in the ‘Contents’ pane and selected ‘Attributes.’ Here you can investigate your data and look for key information such as time frames or date fields. With this particular dataset, the attribute table featured key fields such as the name of the annexed community (Name) and the year it was annexed (DtAnxd).

Figure 2

Opened attribute table in ArcGIS Pro highlighting the Name and DtAnxd fields

Step 2: Separate the Data into Key Time Frames

To keep my animation concise, I separated the data into key time frames instead of showing the progression of city boundaries one by one, as there are 51 records in total. I laid out a framework for how to group the data using Microsoft Excel, grouping records by decade or significant time frame. I exported the attribute table using the ‘Table to Excel’ geoprocessing tool, then colour-coded the spreadsheet to keep my records organized and to easily visualize the beginning and end of each time frame.

Figure 3

Table to Excel geoprocessing tool

Figure 4

Microsoft Excel spreadsheet of my data and key information sorted by the order it will be presented in

Step 3: Use the ‘Export Features’ Tool to Export Selected Attributes

Once my data and time frames were organized, I selected the date-specific polygons using the ‘Select By Attributes’ tool and ran the expression:

Where YrAnxd (text field that has just the year of annexation) includes the value(s) 1883, 1884 (years)
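In SQL terms, the selection expression takes roughly this form (a sketch, assuming YrAnxd is stored as text):

```sql
YrAnxd IN ('1883', '1884')
```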

Figure 5

Select By Attributes tool in ArcGIS Pro and the attribute table view after selecting specific attributes


After running this tool for each time frame, I exported the selected features by selecting the layer in the Contents pane, right-clicking, and choosing Data→Export Features. Each time frame was exported to a separate shapefile; the picture below shows the export of the 1883-1884 time frame to a shapefile entitled ‘1883_1884’, which was the naming format I maintained for each shapefile.


Step 4: Customize the Map and Layout

After exporting each time frame to a separate shapefile, I edited the look of my map by changing the basemap to a more historical-looking one that fit the theme of my project, and symbolized the boundary lines, scale bar, and north arrow to match the theme as well. I bookmarked the location of the map to ensure each time frame would have a seamless transition without any movement of the map itself.

To show Toronto’s boundaries gradually increasing over time, I toggled on the shapefiles of the previous years. For example, for the 1883-1884 time frame, I kept the earlier 1834 shapefile visible to show this boundary progression.

Figure 6

Map in ArcGIS Pro with the layers of 1834 and 1883_1884 turned on

Step 5: Export Maps

I then exported each map to a PNG file by selecting the Share tab and selecting Export Layout.

Step 6: Upload to Canva

Each map was uploaded to Canva using the ‘Uploads’ tool on the left menu bar and placed into a presentation-style template.

Figure 7

Uploads icon that is used to upload imagery files

Step 7: Use Presentation Template to Layout Maps and Customize

I customized each slide of my presentation by adding my maps, borders, images and icons.

Step 8: Enable Various Animation Tools Between Slides and Text

To create a rather seamless transition between time frames, I selected the ‘Dissolve’ animation tool by hovering my mouse over the space between each slide and selecting the ‘Add Transition’ option. Here, Canva presents a variety of different animation transitions to choose from, however I selected ‘Dissolve’ as I felt it was the most seamless transition due to its fading animation type. I also used different animation types for the display of different text components within the slides.

Figure 8

Selecting the ‘Dissolve’ animation under the Transitions tab. After selecting the transition style, the icon will then be present between slides (in this case, the ‘Dissolve’ tool is represented by an infinity symbol)

Step 9: Add Selectable Icons

I also wanted to make my animation more interactive and came up with the idea of allowing viewers to see a historic map of Toronto for the particular time frame that the slide was presenting. I got this idea from seeing historic maps used as basemaps to showcase the evolution of the city and its boundaries. I found historical maps on the City of Toronto’s Historical Maps and Atlases website and saved them. I then uploaded them to Canva, duplicated my slide with a blank map, and added the historic map to it. I georeferenced the historic maps by lining up the borders between them and my map, temporarily making the historic maps slightly transparent so I could align the borders accurately.

I added a ‘plus’ icon from Canva on the slide I wanted and configured the icon so that if it was selected, viewers would be able to see the historic map of Toronto at around the same time. This was done by selecting the plus icon and selecting the ellipsis and navigating to ‘Add Link.’ Then, I selected the particular slide I wanted in the ‘Enter a link or Search’ section. This configured the plus icon to allow viewers to select it, prompting Canva to present the map with the historic imagery overlaid on top. An additional selectable icon (back button) was created on the historic map slides to allow users to go back to the original slide they were viewing.

Figure 9

Under the ‘Elements’ tool on the left menu bar, I searched for ‘plus button’ and selected one that I liked

Figure 10

Adding a link to the plus icon and linking the desired slide

*Note: I kept all my historic map slides at the end of my presentation, as they are an optional aspect that can be viewed

Results

Link to Canva Presentation:

https://www.canva.com/design/DAG4ck_PXLQ/S60h1kAtqPjzCwqFgmYAVA/edit?utm_content=DAG4ck_PXLQ&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton

Presentation:

After the completion of the previous steps, the final product features an animated and interactive presentation of maps! The final product now has:

  • Select capabilities
  • Animated transitions between slides and text
  • Selectable pop up icons to present new information (historic map imagery)
  • Zoom in and out capabilities

I hope you may be inspired by this and try to make your own interactive animation

Thank you!

References

City of Toronto. (2018). Toronto Community Council Boundaries Options Paper. City of Toronto. https://www.toronto.ca/legdocs/mmis/2018/ex/bgrd/backgroundfile-116256.pdf

City of Toronto. (2025). Community Council Area Profiles. City of Toronto. https://www.toronto.ca/city-government/data-research-maps/neighbourhoods-communities/community-council-area-profiles/

City of Toronto. (2025). Historical Maps and Atlases. City of Toronto. https://www.toronto.ca/city-government/accountability-operations-customer-service/access-city-information-or-records/city-of-toronto-archives/whats-online/maps/historical-maps-and-atlases/

Government of Canada. (2005). Municipal boundaries as of 1996. Government of Canada. https://open.canada.ca/data/en/dataset/a3e19e02-36f0-4244-87ab-8f029c6846e2

Fortin, M. (2023). City of Toronto Historical Annexation Boundaries (1834 - 1967). University of Toronto. https://borealisdata.ca/dataset.xhtml?persistentId=doi:10.5683/SP3/XN2NRW

MacNamara, J. (2005). Toronto Chronology. Ontario Genealogical Society. https://web.archive.org/web/20070929044646/http://www.torontofamilyhistory.org/chronology.html

Auto Theft Trends in Toronto: Interactive Geospatial Dashboard Using Power BI

SA8905 – Cartography and Geovisualization
Geovisualization Project Assignment

By: Nishaan Grewal

I am sure that many of us have experienced or heard about the ever-growing issue of auto theft. This project, built in Power BI, not only analyzes but also visualizes Toronto auto theft patterns (2020-2024). Combining spatial insights with the business intelligence tools that Power BI provides allowed me to compare trends across the years and create an interactive dashboard that is easy for users to explore.

You may be asking: why not just use ArcGIS instead of Power BI? As informative and advanced as the mapping in ArcGIS is, it still lacks the storytelling and ease of understanding that the everyday user needs. I believe GIS should be used to help educate and inform individuals in a user-friendly and easy-to-access manner. Using Power BI for the first time myself, I can truly say it has helped me understand how GIS can be used in many other ways, rather than staying in my comfort zone, which is just using ArcGIS Pro. My goal was to create an easy-to-follow dashboard that allows the user to explore Toronto’s neighbourhood-level auto theft trends in an intuitive and meaningful way, without being confused.

So let’s get started with an overview of the project. Please remember, if your TorontoMU account does not work for Power BI, you must obtain licencing from TMU. I had to get help from the TMU help desk.

Datasets:

2. Neighbourhoods Boundary (JSON file) – City of Toronto Open Data (https://open.toronto.ca/dataset/neighbourhoods/)


STEP 1: Add and Clean Data

I started by importing the CSV file of the Auto Theft dataset into PowerBI.

Click: Home → Get Data → Text/CSV → Transform Data

With that, I cleaned the data, as I was only interested in auto theft crimes from the years 2020 to 2024. I manually removed unnecessary columns (e.g., other crime types) and verified column types (Year = whole number, Neighbourhood = text).

With the cleaning done, my dataset looked a bit like this.

(Keep in mind, this cleaned dataset is 790 rows of data)

STEP 2: Add Map

As mentioned earlier, although there is an ArcGIS feature in Power BI, I wanted to use only features that Power BI itself offers, eliminating the need for further licensing (for everyday users). So for this project, I used Shape Map, which is under the Visualizations pane.

With that added to the Format panel, I had to expand the Map settings, turn on Custom map, and upload the Neighbourhoods Boundary (JSON file).

With that, you will see the neighbourhood boundaries of Toronto.


I then dragged the Neighbourhood and AutoTheftCount columns from the cleaned dataset (in the Data pane) to the Location and Colour saturation fields as follows:

You get a choropleth map like the following:

Feel free to choose a colour scheme of your liking and make sure to add your title.

STEP 3: Add Interactive Year Slicer

With the map now present, it is time to build the interactive feature of the dashboard. I did this by adding a Slicer (a Year slicer) and adding the Year column from the dataset. The slicer can be found in the Visualizations pane.

After playing around with the format settings, I had finalized a Slicer that allows a user to click a year (2020 – 2024), which changes the map appearance with the Auto Theft counts within that specific year.


STEP 4: Create Measures for KPIs

With the slicer working, it was now time to create the KPIs to provide viewers with key indicators that not only updated with the click of a button, but also gave insightful and readable information. This allowed the viewer to understand the map with better context.

I did this by creating new measures (Go to the top ribbon → Modeling → New measure). As the formula bar popped up, I wrote some code that helped analyze different insights.

The insight measures created were:

1. Total Auto Thefts

2. Neighbourhood With Most Auto Theft

3. Neighbourhood With Least Auto Theft

4. Highest Auto Theft in Neighbourhood

5. Lowest Auto Theft in Neighbourhood

6. Change in Auto Theft Based on Previous Year (%)

With the help of ChatGPT, I was able to validate my code.
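The exact formulas are not shown here, but as a sketch, the first and last measures might look like the following in DAX (the table name AutoTheft and its column names are assumptions):

```
Total Auto Thefts = SUM ( AutoTheft[AutoTheftCount] )

Change vs Previous Year (%) =
VAR SelectedYear = SELECTEDVALUE ( AutoTheft[Year] )
VAR PrevTotal =
    CALCULATE ( [Total Auto Thefts], AutoTheft[Year] = SelectedYear - 1 )
RETURN
    DIVIDE ( [Total Auto Thefts] - PrevTotal, PrevTotal )
```

DIVIDE returns blank rather than erroring when no previous year exists, which keeps the KPI card clean for 2020.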

STEP 5: Turn Every Measure into a KPI Card

Now that the measures were all made, it is time for the easier step, which is to add KPI Cards.

In the Visualization pane, there is a feature called “Card”; make sure you use the dynamic version as seen below.

With 6 cards now on the dashboard, I then dragged each measure into a card from the Data pane.

This is what was created from steps 1 to 5.

STEP 6: Add Graphs

With graphs still missing, I decided to add a Line Chart that showed “Total Auto Thefts in Years”, and a Stacked Bar Chart that showed “Top 5 Neighbourhoods With Most Total Auto Theft”.

These graphs can be added by using the same Visualization Panel.

LINE CHART:

Make sure to drag the data to the respective X and Y axes.

STACKED BAR CHART:

Make sure to drag the data to the respective X and Y axes, and make sure to add a filter that takes only the top 5 neighbourhoods instead of all neighbourhoods.

After this step, I had a rough dashboard which, although not aesthetically finished, at least had an interactive map that changed based on the viewer’s selected year, updating the KPI and graph data as well.

STEP 7: Finalize the Dashboard

After playing around with each feature’s formatting (to change colour and look), I had now developed a finished product that I was very pleased with.


This is now an interactive dashboard that displays information on the Auto Theft trends in Toronto based on the years 2020 to 2024.

In Memoriam: A 3D Interactive Web Map of Toronto’s Decommissioned Speed Cameras (2020-2025)

By Matthew Cygan

GeoVis Project Assignment, TMU Geography, SA8905, Fall 2025

Introduction: Gone But Never Forgotten

Five days ago, on November 14, 2025, Ontario’s automated speed enforcement program ended¹. At midnight, all 150 of Toronto’s speed cameras stopped issuing tickets, closing a controversial five-year chapter in the city’s traffic safety infrastructure. Between January and August 2025 alone, these cameras issued 550,997 tickets and collected over $30 million². Since July 2020, they generated millions in revenue while reducing speeding in school zones by 45%³.

This memorial blog will aim to preserve the good memories. The result is a digital memorial: a 3D interactive web map documenting 198 speed camera locations across Toronto’s 25 wards, built with Mapbox GL JS and custom JavaScript. This map highlights geographic and temporal features along with the volume of tickets issued to drivers!

Users can scroll to zoom and hold Ctrl/Cmd and drag to rotate and tilt for 3D view

Preview

Context: The Automated Speed Camera Program and Its Ban

The automated speed enforcement program began in July 2020 after the provincial government enabled municipalities to deploy cameras⁴. By November 2022, Toronto had issued 560,000 tickets and collected $34 million⁵. The city expanded from 75 to 150 cameras in March 2025, using data-driven placement to target areas with speeding and collision histories⁷.

Despite some data indicating effectiveness (45% reduction in school zone speeding³, 87% reduction in vehicles exceeding limits by 20+ km/h⁹), Premier Doug Ford announced a provincial ban in September 2025, calling the cameras a “cash grab”⁸. The ban took effect November 14, 2025¹.

Consequences:

  • Potential layoff of 1,000 Toronto workers¹⁰
  • Loss of $30+ million in annual revenue¹⁰
  • Removal of all active 150 cameras and infrastructure
  • Provincial funding of $42 million deemed insufficient by city leaders¹⁰

Data Sources and Data Management

The project combines datasets from the City of Toronto’s Open Data Portal¹¹:

  • Speed Camera Locations (GeoJSON)
  • Monthly Charge Data (July 2020 to November 2025)

The Data Management Problem:

The biggest challenge was joining the location and charge datasets by my only linking variable: address. Unfortunately, both had a plethora of inconsistencies in address formats:

  • Random double spaces: “Kipling  Ave.” vs “Kipling Ave.”
  • Period inconsistencies: “St.” vs “St”
  • Abbreviation differences: “W” vs “West”, “Dr” vs “Drive”
  • Directional contradictions: Same location described as “south of X” in one dataset, “north of Y” in another
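Inconsistencies like these can be tackled by normalizing both datasets to a common join key before matching. A minimal Python sketch of the kind of rules involved (the exact rule list and function name here are illustrative, not the project's actual cleaning script):

```python
import re

# A few illustrative abbreviation expansions; the real list was longer.
ABBREVIATIONS = {
    r"\bW\b": "West", r"\bE\b": "East",
    r"\bDr\b": "Drive", r"\bAve\b": "Avenue",
}

def normalize_address(addr: str) -> str:
    addr = re.sub(r"\s+", " ", addr)  # collapse runs of whitespace
    addr = addr.replace(".", "")      # "St." -> "St"
    for pattern, full in ABBREVIATIONS.items():
        addr = re.sub(pattern, full, addr)
    return addr.strip().lower()

# Two differently formatted records now match on the cleaned key:
print(normalize_address("Kipling  Ave. W") == normalize_address("Kipling Avenue West"))  # -> True
```

Directional contradictions ("south of X" vs "north of Y") resisted this kind of rule-based cleanup and had to be resolved manually.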

The final dataset contains 198 documented locations, though some have incomplete enforcement data.

This experience showed me that real-world public datasets are messy. Data cleaning is often the most time-consuming part of any GIS project.

Building the Memorial Map

Step 1: GeoJSON Structure

All Speed Camera XY coordinates were exported to GeoJSON format to make the locations readable by JavaScript:
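A representative feature takes this shape (coordinates and property values below are illustrative placeholders, not records from the actual dataset):

```
{
  "type": "Feature",
  "geometry": {
    "type": "Point",
    "coordinates": [-79.3832, 43.6532]
  },
  "properties": {
    "ward": "Example Ward",
    "location": "Example St near Example Ave",
    "site_code": "EX-001",
    "enforcement_start": "2020-07-06",
    "enforcement_end": "2025-11-14"
  }
}
```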

Step 2: Custom Basemap

I designed a custom basemap in Mapbox Studio that displayed a visually appealing and cartographically sound map:

  • Grayscale base colors creating a neutral and smooth background
  • Blue roads for road infrastructure clarity and visual harmony
  • Minimal labels with clean typography creating a modern aesthetic

Step 3: Adding 3D Terrain

Late in development, I added Mapbox’s 3D terrain feature using the Terrain-DEM v1 tileset¹². This transformed the user experience:

  • Users can hold Ctrl/Cmd and drag to rotate and tilt the view
  • 3D visualization reveals how cameras are positioned in space and provides more real world context

Step 4: Bringing the Interactive Map to Life

Getting the interactive web map functional required three main files working together: the GeoJSON data, the HTML file, and the CSS styling.

The toronto_speed_cameras.geojson file, as seen in the image in Step 1, contains the coordinates and details for all 198 camera locations. Each camera is a Point feature with properties like ward, location name, site code, and enforcement dates. The JavaScript reads this file to position markers and populate the interactive popups.

The index.html file is the backbone. It sets up the page structure, loads the Mapbox GL JS library, and links everything together. JavaScript embedded in the HTML initializes the map and handles the interactive features.

The style.css file handles visual styling including map default extent and placement, marker appearance including hover effects and drop shadows, and responsive design for different screen sizes.

All three parts need each other. Without the HTML, there’s no web canvas. Without the GeoJSON, there are no locations to display. Without CSS, everything looks unpolished. Making sure each was properly linked and loaded was essential to creating a functional, interactive, and professional-looking final product.

Step 5: Interactive Markers

Using JavaScript, I implemented custom markers with the official speed camera icon:

Step 6: Deployment

I deployed the final map to Netlify’s free hosting service, making it publicly accessible at toronto-speed-camera-locations.netlify.app. Netlify’s continuous deployment means any updates automatically publish.

Technical Challenges and Solutions

Building the interactive map involved several practical technical challenges that shaped the final design. As mentioned earlier, the raw dataset required rigorous cleaning and restructuring, and a few locations could not be recovered from the inconsistent address data.

Additionally, browsers block pages from loading local files for security reasons, meaning the map wouldn’t work when opened directly from the file system; it had to be run through a live server and eventually deployed online to load the GeoJSON data correctly.

The popups also displayed incorrectly at first. Text would overflow outside the popup container, which required removing conflicting styles and adding proper wrapping and layout rules. Although these limitations introduced roadblocks, the final result is a smooth, fully functioning interactive 3D map.

References

¹ CP24. (November 14, 2025). “Speed enforcement camera ban is now in effect in Ontario.” YouTube. https://www.youtube.com/watch?v=zvDqLrSjoos

² The Globe and Mail. (October 2, 2025). “Speed cameras work, but do generate revenue. Is the solution investing more?” https://www.theglobeandmail.com/drive/culture/article-speed-cameras-work-but-do-generate-revenue-is-the-solution-investing/

³ Toronto Metropolitan University & The Hospital for Sick Children. (July 24, 2025). “Automated cameras cut speeding by 45 per cent in Toronto school zones, study finds.” https://www.torontomu.ca/news-events/news/2025/07/cameras-cut-speeding-by-45-per-cent-in-toronto-school-zones/

⁴ CBC News. (November 29, 2022). “Toronto drivers paying $34M in fines thanks to speed cameras.” https://www.cbc.ca/news/canada/toronto/toronto-speed-cameras-millions-in-fines-1.6668443

⁵ CBC News. (November 29, 2022). “Toronto drivers paying $34M in fines thanks to speed cameras.” https://www.cbc.ca/news/canada/toronto/toronto-speed-cameras-millions-in-fines-1.6668443

⁶ CityNews Toronto. (March 13, 2025). “Toronto doubling the number of speed cameras on city streets.” https://toronto.citynews.ca/2025/03/13/toronto-doubling-the-amount-of-speed-cameras-on-city-streets/

⁷ Narcity. (April 15, 2025). “Toronto just got 75 new speed cameras and they’re already active in ‘problematic’ areas.” https://www.narcity.com/toronto/toronto-new-speed-cameras-2025

⁸ Ontario Ministry of Transportation. (September 24, 2025). “Ontario Protecting Taxpayers by Banning Municipal Speed Cameras.” https://news.ontario.ca/en/release/1006534/

⁹ Toronto Metropolitan University. (July 2025). “Study on Automated Speed Enforcement Effectiveness.” (Referenced in multiple sources as demonstrating 87% reduction in vehicles exceeding speed limit by 20+ km/h)

¹⁰ ST Lawyers. (November 14, 2025). “Toronto Layoffs After Speed Camera Ban: Severance Pay for 1,000 Workers.” https://stlawyers.ca/blog-news/toronto-layoffs-speed-camera-ban-severance/

¹¹ City of Toronto. (2025). “Automated Speed Enforcement & Open Data Portal.” https://www.toronto.ca/services-payments/streets-parking-transportation/road-safety/vision-zero/safety-initiatives/automated-speed-enforcement/

¹² Mapbox. (2025). “Add 3D terrain to a map.” Mapbox GL JS Documentation. https://docs.mapbox.com/mapbox-gl-js/example/add-terrain/

¹³ Halliday, L. (April 15, 2019). “Mapbox – Interactive maps in React.” YouTube. https://youtu.be/JJatzkPcmoI?si=HMYAtzRkYQ-nbR4y

A One Stop Shop for Bitcoin Legality and Central Bank Digital Currency Development Statuses Across the World (Web Map Application)

You’ve heard about blockchain and Bitcoin about a million times by now, and may even own some, but did you know that many governments around the world are actually planning to use blockchain technology in a different way, by combining it with their own country’s official currency?

Governments plan to do this by implementing what’s known as a “Central Bank Digital Currency”, or CBDC for short. This approach allows governments to utilize the benefits of blockchain technology without inheriting the “risk” of decentralization. In fact, countries like Nigeria have already launched a fully operational CBDC for their citizens to use. Check out the maps I created below by clicking on the image, and try to spot some other countries that fully launched a CBDC! (Hint: Think islands!)

If the maps do not open from the image, try this link: https://torontomu.maps.arcgis.com/apps/instant/compare/index.html?appid=42c8571764d344949f7d80bd62e0be6b

CBDCs are important because they bring the benefits of blockchain technologies, such as transaction traceability (which can reduce fraud, scams, and other financial crimes), reduced friction, and improved transaction speeds with the benefit (or disadvantage, depending on who you ask) of a centralized government authority to monitor the network, meaning blockchain efficiency but without anonymous wallets. During Bitcoin’s inception, some early adopters called for the replacement of the US dollar with Bitcoin instead, making bold predictions that the world would transition to a decentralized financial system. Unfortunately for them, some governments around the world have banned Bitcoin by law.

What is interesting, however, is that despite some countries being pro-CBDC, they may be anti-Bitcoin and anti-crypto. For example, China outright banned the dealing and trading of Bitcoin by financial institutions, yet piloted the “eCNY” CBDC. (Try finding some other interesting patterns like this in the maps!) Some countries like the USA have researched a CBDC but have the “CBDC Anti-Surveillance State Act” in place, preventing the Federal Reserve from issuing one. (Notes like this can be observed for a few countries in my maps, under the fields “CBDC Notes” or “Bitcoin Notes”.)

I wanted to highlight some of these patterns by creating a global map of Bitcoin legality across different countries, with a second displayable layer showing CBDC development statuses. On top of that, I found that existing Bitcoin legality maps online tend to be static and do not allow convenient viewing of smaller countries, despite those countries having relevant Bitcoin legality or CBDC development status data. It is worth mentioning that atlanticcouncil.org has already created an interactive “CBDC Tracker” map; however, it does not contain any data regarding Bitcoin, preventing the instant viewing of a country’s stance on both topics (which my maps allow by clicking on a country!).

Initially, I planned to create a different map, with layers displaying not just Bitcoin legal status across different countries, but also the legal status of altcoins and stablecoins. However, many countries do not have laws regarding altcoins and stablecoins (many don’t even have laws on Bitcoin, as seen in the grey areas of my map), so I shifted my project’s trajectory to focus on Bitcoin and CBDCs instead, as there is clearly a lot of data on CBDCs, and it is an even newer and more novel concept compared to traditional decentralized cryptocurrencies.

Map Creation Process:

The creation of my maps took a few steps. First, I had to gather data. This proved relatively simple, as there was a convenient Wikipedia article (I know, I know, but its data was referenced!) that already listed Bitcoin legality across countries that had laws regarding it. I grouped legality into three categories: 1) Permitted by Law, 2) Partially Restricted by Law, and 3) Prohibited by Law. The next step was to find a shapefile that contained the world’s countries. This was found on ArcGIS Hub, and was as follows:

I noticed it could not be edited, so I had to save it as a new feature layer in ArcGIS Online in order to do the necessary joins.

Next came the creation of the .CSV file for joining. First, I copied the “Countries” column from the countries shapefile and pasted it into a new .CSV. The rest was pretty simple, but tedious, as I manually went back and forth inputting the Bitcoin legality data for each country that had it. This was done by inputting a 0 for prohibited, a 1 for permitted, and a 2 for partially restricted. I then used an IF statement to create the text column with “Permitted by Law”, “Prohibited by Law” and “Partially Restricted by Law”.
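The spreadsheet IF statement amounts to a simple lookup from numeric code to label. A minimal Python sketch of the same 0/1/2 coding; the country rows are illustrative placeholders, not values from the actual dataset:

```python
# Numeric coding used in the CSV: 0 = prohibited, 1 = permitted, 2 = partially restricted.
LABELS = {
    0: "Prohibited by Law",
    1: "Permitted by Law",
    2: "Partially Restricted by Law",
}

# Illustrative rows only; the real .CSV covers every country with applicable laws.
rows = [("Country A", 1), ("Country B", 0), ("Country C", 2)]

labelled = [(country, code, LABELS[code]) for country, code in rows]
for country, code, label in labelled:
    print(f"{country},{code},{label}")
```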

Next, I had to find data on CBDC development statuses around the world, and conveniently enough, atlanticcouncil.org has already created an interactive map that does just this, as mentioned earlier. I did, however, find their map’s use of colours quite confusing (why is “launched” in pink!?). To add the data, I manually looked up each country’s CBDC status and entered it into a new column in my .CSV. I also combined “Inactive” and “Cancelled” into a single “Inactive or Cancelled” category to reduce the number of legend items and simplify reading. Lastly, notes columns were added for any countries with, well, notable details regarding their CBDC status or Bitcoin legality status. In the end, my .CSV looked something like this:

Once the data was ready, it was time to map! I simply joined the countries shapefile to my data file, once for the “Bitcoin_Legal_Status” and once for the “CBDC_Development_Status”.

The resulting map layers were created:

Global Bitcoin Legal Status.

Global Central Bank Digital Currency Development Status.

Next, I had to publish each layer as individual web maps. This would allow me to get to my next step of creating the web app.

ArcGIS Online allows for the creation of web apps, known as “Instant Apps”. I created a “Compare” app, allowing for the side by side comparison of multiple web maps. I simply selected my maps, toggled a few settings, and my app was ready to go!

The world is changing fast. I think these maps highlight interesting dichotomies in some governments’ views on blockchain technologies; the difference may simply be whether they can track who is spending their CBDCs and on what, as tracking anonymous Bitcoin transactions can be challenging. Invasion of privacy and government overreach, or public safety and the future of finance? What do you think?

This blog post and map application was created for the SA8905 course at Toronto Metropolitan University taught by Dr. Claus Rinner.

Visualizing an Individual Tree (Quantitative Structural Model)

LiDAR point clouds are a challenging form of data to navigate. With perplexing noise, volume and distribution, point clouds require rigorous manipulation of both parameters and computational power. This blog post introduces a series of tools for the future biomass estimator, with a focus on detecting errors, and finding solutions through the process of visualization.

Programs you will need: RStudio, MATLAB, CloudCompare

Packages you will need: lidR, TreeQSM

Tree Delineation – R (lidR Package)

We begin with an unclassified point cloud. Point clouds are generated by LiDAR sensors mounted on aerial or terrestrial platforms, and provide precise three-dimensional measurements of target objects. This blog post focuses on an individual tree; however, entire forest stands, construction zones, and even cities may be mapped using LiDAR technology. This project tackles a smaller object for simplicity, replicability, and potential for future scaling.

Visualized point cloud in DJI Terra. Target tree outlined in red.

The canopy of these trees overlap. The tree is almost indistinguishable without colour values (RGB), hidden between all the other trees. No problem. Before we quantify this tree, we will delineate it (separate it from the others).

library(lidR) # Load packages

library(rgl)

las <- readLAS("Path to las file")

Before we can separate the trees, we must classify and normalize them to determine what part of the point cloud consists of tree, and what part does not. Normalizing the points helps adjust Z values to represent above ground height rather than absolute elevation.

las <- classify_ground(las, csf())

las <- normalize_height(las, tin()) 

plot(las, color = "Z")

Now that the point cloud has been appropriately classified and normalized, we can begin to separate the trees. Firstly, we will detect the tree heights. The tree tops will define the perimeter of each individual tree.

ttops <- locate_trees(las, lmf(ws = 3))  # initial attempt; ws and hmin are tuned below

hist(ttops$Z, breaks = 20, main = "Detected Tree Heights", xlab = "Height (m)")

plot(las, color = "Z") 

plot(sf::st_geometry(ttops), add = TRUE, pch = 3, col = "red", cex = 2)

The first iteration will not be the last. 158 trees were detected in this first round; an urban park does not have more than 15 trees in a small, open region such as this. Adjusting window size and height will help isolate trees more accurately. These parameters will depend on factors such as the density of your trees, the density of your point cloud, and tree shape, size, height and canopy cover. Even factors such as terrestrial versus aerial lasers will affect which parts of the tree are better detected. In this way, no two models will be perfect with the same parameters. Make sure to play around to accommodate for this.

ttops <- locate_trees(las, lmf(ws = 8, hmin = 10))

A new issue arises. Instead of too many trees, there are now no individual, fully delineated trees visible within the output. All trees consist of chopped up pieces, impossible for accurate quantification. We must continue to adjust our parameters.

This occurred due to the overlapping canopies between our trees. The algorithm was unable to grow each detected tree model into its full form. Luckily, the lidR package offers multiple methods for tree segmenting. We may move on to the dalponte2016 algorithm for segmenting trees.

chm <- rasterize_canopy(las, res = 0.5, p2r())

las <- segment_trees(las, dalponte2016(chm, ttops, th_tree = 1, th_seed = 0.3, th_cr = 0.5, max_cr = 20))

table(las$treeID, useNA = "always") # Check assignment

plot(las, color = "treeID")

A Canopy Height Model is produced to delineate the next set of trees. This canopy model is then turned into the segmented point cloud.

While this is much improved, the tree of choice (mustard yellow) has a significant portion of its limbs cut off. This is due to its irregular shape. This requires a much more aggressive application of the dalponte2016 parameters. We adjust parameters so that the minimum height (th_tree) is closer to the ground, the minimum height of the treetop seeds (th_seed) are inclusive of the smallest irregularities, the crown ratio (th_cr) is extremely relaxed so that the tree crowns can spread further, and the crown radius (max_cr) is allowed to expand further. Here is the code:

las <- segment_trees(las, dalponte2016(chm, ttops, th_tree = 0.2,  th_seed = 0.1, th_cr = 0.2, max_cr = 30))    

Finally, the trees are whole. Our tree (purple) is now distinct from all other trees. It is time to extract our favourite tree. You can decide which tree is your favourite by using the rgl package, which lets you visualize in 3D, even while in RStudio.

GIF created using Snipping Tool (Video Option) and Microsoft Clip Champ.

tree <- filter_poi(las, treeID == 13)
plot(tree, color = "Z")
coords <- data.frame(X = tree$X, Y = tree$Y, Z = tree$Z)
coords$X <- coords$X - min(coords$X)
coords$Y <- coords$Y - min(coords$Y)
coords$Z <- coords$Z - min(coords$Z)
write.table(coords, file = "tree_13_for_treeqsm.txt", row.names = FALSE, col.names = FALSE, sep = " ")

While the interactive file itself is too large to insert into this blog post, you can click on the exported visualization here. This will give you a good idea of the output. Unfortunately, Sketchfab is unable to accommodate the point data’s attribute table, but it does offer annotation if you wish to share a 3D model with simplified information. It even offers a virtual reality viewing option.

Now if you have taken the chance to click on the link, try and think of the new challenge we now encounter after having delineated the single tree. There is one more task to be done before we will be able to derive accurate tree metrics.

Segmentation – CloudCompare

If you didn’t notice from the final output, the fence was included in our most aggressive segmentation. If this occurs, fear not: we can trim the point cloud further with CloudCompare.

Download CloudCompare, drag and drop your freshly exported file, and utilize the segmentation tool.

Click on your file name under DB Tree so a cube appears, framing your tree.

Then click Tools –> Segmentation –> Cross Section.

Utilize the box to frame out the excess, and use the “Export Section as New Entity” function under the Slices section to obtain a fresh, fence- (and ground-) free layer.

Our tree can now be processed in MATLAB!

Quantitative Structural Modeling – MATLAB (TreeQSM)

TreeQSM allows visualization of your individual tree. A new window named “Figures” will pop up to show your progress.

The following parameters can be played around with. These are the initial parameters that worked for me. Further information about these parameters can be found here, in the official documentation of TreeQSM. Please ensure you have the Statistics and Machine Learning Toolbox installed if any errors occur.

inputs.PatchDiam1 = 0.10;
inputs.PatchDiam2Min = 0.02;
inputs.PatchDiam2Max = 0.05;
inputs.BallRad1 = 0.12;
inputs.BallRad2 = 0.10;
inputs.nmin1 = 3;
inputs.nmin2 = 1;
inputs.lcyl = 3;
inputs.FilRad = 3.5;
inputs.OnlyTree = 1;
inputs.Taper = 1;
inputs.ParentCor = 1;
inputs.TaperCor = 1;
inputs.GrowthVolCor = 1;
inputs.TreeHeight = 1;
inputs.Robust = 1;
inputs.name = 'tree_13';
inputs.disp = 1;
inputs.save = 0;
inputs.plot = 1;
inputs.tree = 1;
inputs.model = 1;
inputs.MinCylRad = 0;
inputs.Dist = 1;

P = load('tree_13_for_treeqsm.txt');  % X Y Z point matrix exported from R

QSM_clean = treeqsm(P, inputs);

fprintf('\n=== TREE 13 - NO FENCE ===\n');
fprintf('Total Volume: %.4f m³ (%.1f liters)\n', QSM_clean.treedata.TotalVolume/1000, QSM_clean.treedata.TotalVolume);

fprintf('Tree Height: %.2f m\n', QSM_clean.treedata.TreeHeight);
fprintf('DBH: %.1f cm\n', QSM_clean.treedata.DBHqsm * 100);
fprintf('Number of Branches: %d\n', QSM_clean.treedata.NumberBranches);

If this is your output, you have successfully created a Quantitative Structural Model of your tree of choice. This consists of a series of cylinders enveloping every branch, twig and trunk part of your tree. This cylindrical tree model can quantify every part of the tree and produce a spatial understanding of tree biomass distribution.

For now, we have structural indicators, such as total volume, number of branches, and height, which may be used as proxy for physical measurements. The summary of this process has been posted below in the form of a visualized workflow.

Summary of QSM Workflow

This project provides a foundational learning curve that encourages the reader to explore the tools available for visualizing and manipulating point clouds through classification, segmentation and volumetric modeling. This has potential for studies in carbon sequestration, where canopy height and volume can be linked to broaden the understanding of above-ground biomass and the spatio-temporal factors that affect our natural resources. There is potential for vegetation growth monitoring, where smaller plantations can be consistently and accurately monitored for structural changes (growth, rot, death, etc.). While a single tree is not the most exhilarating project, it lays the groundwork for defining ecological accounting systems for expansive 3D spatial analysis.

Limitations

This project was unable to obtain ground truth for accurate comparison of the point cloud. With ground truth, the parameters could have been more accurately defined. This workflow relied heavily on visual indicators to segment and quantify this tree. This project requires further expansion, with a focus on verifying the extent of accuracy. The next step in this process consists of comparison to allometric modelling, as well as more expansive forest stands.

For now, we end with one tree, in hopes of one day tackling a forest.

Resources

Moral Support

Professor Cheryl Rogers – For the brilliant ideas and positivity.

Data

Roussel, J., & Auty, D. (2022). lidRbook: R for airborne LiDAR data analysis. Retrieved from https://r-lidar.github.io/lidRbook/

Åkerblom, M., Raumonen, P., Kaasalainen, M., & Casella, E. (2017). TreeQSM documentation (Version 2.3) [PDF manual]. Inverse Tampere. https://github.com/InverseTampere/TreeQSM/blob/master/Manual/TreeQSM_documentation.pdf

Toronto Metropolitan University Library Collaboratory. (2023). Oblique LiDAR flight conducted in High Park, Toronto, Ontario [Dataset]. Flown by A. Millward, D. Djakubek, & J. Tran.

Mapping Toronto’s Post-War Urban Sprawl & Infill Growth (1945-2021)

A Geovisualization Project by Mandeep Rainal.

SA8905 – Master of Spatial Analysis, Toronto Metropolitan University.

For this project, I explore how Toronto has grown and intensified over time by creating a 3D animated geovisualization using Kepler.gl. I use 3D building massing data from the City of Toronto and construction period information from the 2021 Census (CHASS).

Instead of showing a static before-and-after map, I decided to build a 3D animated geovisualization that reveals how the city “fills in” over time, showing the early suburban expansion, mid-era infill, and rapid post-2000 intensification.

To do this, I combined the following:

  • Toronto’s 3D Massing Building Footprints
  • Age-Class construction era categories
  • A Custom “Built-Year” proxy
  • A timeline animation created in Kepler.gl and captured with Microsoft Windows screen recording.

The result is a dynamic sequence showing how Toronto physically grew upward and outward.

BACKGROUND

Toronto is Canada’s largest and fastest-growing city. Understanding where and when the built environment expanded helps explain patterns of suburbanization, identify older and newer development areas, and reveal infill and intensification. This also helps contextualize shifts in density and planning priorities for future development.

Although building-level construction years are not publicly available, the City of Toronto provides detailed 3D massing geometry, and Statistics Canada provides construction periods at the census tract level for private dwellings.

By combining these sources into a single animated geovisualization, we can visualize Toronto’s physical growth pattern over 75 years.

DATA

  • City of Toronto – 3D Building Massing (Polygon Data)
    1. Height attributes (average height)
    2. Building Footprints
    3. Used for 3D extrusions
  • City of Toronto – Municipal Boundary (Polygon Data)
    1. Used to clip the census metropolitan area down to the Toronto city core.
  • 2021 Census Tract Boundary
  • CHASS (2021 Census) – Construction Periods for Dwellings
    1. Total dwellings
    2. 1960 and before
    3. 1961-1980
    4. 1981-1990
    5. 1991-2010
    6. 2011-2015
    7. 2016-2021
    8. Used to assign Age classes and a generalized “BuiltYear” for each building.

METHODOLOGY

Step 1: Cleaning and Preparing the Data in ArcGIS Pro

  • I first imported the collected data into ArcGIS. I clipped the census tract layers to the City of Toronto boundary to get census tracts for Toronto only.
  • Next, I joined the census tract polygon layer we created to the construction period data that was imported. This gives us census tracts with construction period counts.
  • Because Toronto does not have building-year data, I assigned construction-era categories from the census as proxies for building age, and created an age classification system using proportions: summing the period counts and dividing by total dwellings, then assigning each tract to one of three classes:
    • Mostly Pre-1981 dwellings
    • Mixed-era dwellings
    • Mostly 2000+ dwellings
  • Next, I needed a numeric date field for Kepler to activate the time field. I assigned a representative year to each tract using the Age classes.
    • if age = Mostly Pre-1981 dwellings = 1945
    • if age = Mixed-era dwellings = 1990
    • if age = Mostly 2000+ dwellings = 2010
  • To make the built year Kepler-compatible, a new date field was created and formatted as 1945-01-01.
  • The data was then exported as GeoJSON files to import into Kepler.gl. The built-year data was also exported as a CSV because Kepler doesn’t easily pick up the time field from GeoJSON.
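The class assignment and representative-year logic above can be sketched in Python. The dwelling counts, function name and dominant-share rule are illustrative simplifications; the actual classification was done with field calculations in ArcGIS Pro:

```python
def age_class(pre_1981: int, mixed: int, post_2000: int, total: int) -> str:
    """Classify a tract by its dominant construction-era share of total dwellings."""
    shares = {
        "Mostly Pre-1981 dwellings": pre_1981 / total,
        "Mixed-era dwellings": mixed / total,
        "Mostly 2000+ dwellings": post_2000 / total,
    }
    return max(shares, key=shares.get)

# Representative year per class, then formatted as a Kepler-compatible date.
REP_YEAR = {
    "Mostly Pre-1981 dwellings": 1945,
    "Mixed-era dwellings": 1990,
    "Mostly 2000+ dwellings": 2010,
}

cls = age_class(pre_1981=620, mixed=180, post_2000=90, total=890)  # illustrative counts
built_date = f"{REP_YEAR[cls]}-01-01"
print(cls, built_date)  # Mostly Pre-1981 dwellings 1945-01-01
```

Collapsing each tract to a single representative date is what makes the time slider work, at the cost of the precision discussed under Limitations.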

Stage 2: Visualizing the Growth in Kepler

  • Once the layers are loaded into Kepler, the tool allows you to manipulate and visualize different attributes quickly.
  • First, the 3D Massing GeoJSON was set to show height extrusion based on the average height field. The layer’s colour was muted and based on the age classes and dwelling eras of the buildings.
  • The second layer was a point layer, also based on the age classes. It fills in the 3D massings as the time slider progresses and uses brighter colours.
  • The Built Date CSV was added as a time-based filter on the build date field.

The final visualization was screen recorded and shows an animation of Toronto being built from 1945 to 2021.

  • Teal = Mixed-era dwellings
  • Amber = Mostly 2000+ dwellings
  • Dark purple = Mostly Pre-1981 dwellings

RESULTS

The animation reveals key development patterns in the city.

  • Pre-1981 areas dominate older neighbourhoods; the purple-shaded areas show Old Toronto, Riverdale, High Park, and North York.
  • Mixed-era dwellings appear in more transitional suburbs, filling in teal and showing subdivisions with infill.
  • Mostly 2000+ dwellings fill in amber and highlight rapid intensification in areas like downtown with its high-rise boom, North York Centre, and Scarborough Town Centre.

The animation shows suburban sprawl expanding outward, before the vertical intensification era begins around the year 2000.

Because Kepler.gl cannot export 3D time-slider animations as standalone HTML files, I captured the final visualization using Microsoft Windows screen recording instead.

LIMITATIONS

This visualization used census tract–level construction-period data as a proxy for building age, which means the timing of development is generalized rather than precise. I had to collapse the CHASS construction ranges into age classes because the census periods span multiple decades and cannot be animated in Kepler.gl’s time slider, which only accepts a single built-year value per feature. Because all buildings within a tract inherit the same age class, fine-grained variation is lost and the results are affected by aggregation. Census construction categories are broad, and assigning a single representative “built year” further simplifies patterns. The Kepler animation therefore illustrates symbolic patterns of sprawl, infill, and intensification, not exact chronological construction patterns.

CONCLUSION

This project demonstrates how multiple datasets can be combined to produce a compelling 3D time-based visualization of a city’s growth. By integrating ArcGIS Pro preprocessing with Kepler’s dynamic visualization tools, I was able to:

  • Simplify census construction-era data
  • Generate meaningful urban age classes
  • Create temporal building representations
  • Visualize 75+ years of urban development in a single interactive tool

Kepler’s time slider adds an intuitive, animated story of how Toronto grew, revealing patterns of change that static maps cannot communicate.

Spatial Intelligence Dashboard for Community Safety Using ArcGIS for Power BI

Geovis Project Assignment, TMU Geography, SA8905, Fall 2025

By Debbie Verduga

Welcome to my Geo Viz Tutorial! 

I love ArcGIS and I love Power BI. Each shines in their own way, which is why I was excited to discover the new ArcGIS integration within Microsoft Power BI. Let’s dive in and explore what it can do! 

First off, traditional business intelligence (BI) technologies offer powerful analytical insights but often fall short in providing geospatial context. Conversely, geospatial technologies offer map-centric capabilities and advanced geoprocessing, but often lack integrated business insights.

ArcGIS for Power BI directly integrates spatial analytics with business intelligence without the need for geoprocessing tools or advanced expertise. This integration has the potential to empower everyday analysts to explore and leverage spatial data without needing to be GIS experts.

This tutorial will demonstrate how to use ArcGIS for Power BI to: 

  • Integrate multiple data sources – using City of Toronto neighbourhood socio-demographics, crime statistics, and gun violence data.
  • Perform data joins within Power BI – by linking datasets effortlessly without needing complex GIS tools.
  • Create interactive spatial dashboards – by blending BI insights with mapping capabilities, including thematic maps and hotspot analysis. 

You will need TMU Credentials for: 

  • Power BI Online 
  • ArcGIS Online
Data

Dataset | File Type | Source
Neighbourhood Socio-demographic Data | Excel | SimplyAnalytics (pre-aggregated census data)
Neighbourhood Crime Rates | SHP file (boundary) | ArcGIS Online, published by Toronto Police Service
Shooting & Firearm Discharge | SHP file (point/location) | ArcGIS Online, published by Toronto Police Service

Power BI Online

Log into https://app.powerbi.com/ using your TMU credentials. To help you out, watch this tutorial to get started with Power BI Online.

ArcGIS Online 

Log into https://www.arcgis.com/ using your TMU credentials. Make sure you explore the data available for your analysis. For this tutorial we will be using Toronto Police Service open data, including neighbourhood crime rates and shooting & firearm discharges.

Loading Data 

To create a new report in Power BI, you need a set of data to start. For this analysis, we will use neighbourhood-level socio-demographic data in Excel format. This data includes total population and variables often associated with marginalized communities, including low-income, unemployment, no-education and visible-minority rates. The data has no spatial reference; however, the neighbourhood name and a unique identifier are available.

Add Data > Click on New Report > Get Data > Choose Data Source > Upload File > Browse to your file location > Click Next > Select Excel Tab you wish to Import > Select Transform Data 

Semantic Model

This will take you to the report’s Semantic Model. Think of a semantic model as the way you transform raw data into a business-ready format for analysis and reporting. The opportunities for manipulating data here are endless!

Using the semantic model you can define relationships with other data, incorporate business logic, perform calculations, summarize, pivot and manipulate data in many ways.

What I love about semantic models is that they remember every transformation you have made to the data. If you made a mistake, don't sweat it: come back to the semantic model and undo it. It's that simple.

Once you load data you have two options. You can create a report or create a semantic model. Let’s create a report for now. 

Create Report > Name Semantic Model 

Report Layout 

The report layout functions as a canvas. This is where you add all your visualizations. On the right panel you have Filters, Visualizations and Data. 

Visuals are added by clicking on the icons. If you hover over an icon, you can see what it is called. There are default visuals, but you can add more visuals from the Microsoft store.

The section below the visual icons helps guide how to populate the visual with your data. 

Each visual is configured differently. For example, a chart/graph requires an x-axis and a y-axis. To populate these visuals, drag and drop a column from your data table into these fields/values.

Add a Visual 

Let's add an interactive visualization that displays the low income rate based on the neighbourhood selected from a list.

From the visualization panel > Add the Card Visual 

From the data panel > Drag the variable column into the Fields 

By default, Power BI summarizes the statistics across your entire data. Let's create an interactive filter to interact with this statistic based on a selected neighbourhood.

From the Visualizations Panel > Add the Slicer Visual > Drag and drop the column that has the neighbourhood name > Filter from the slicer any given neighbourhood. 

The slicer now interacts with the Card visual to filter out its respective statistic. 

Congrats! We have created our very first interactive visualization! Well done :) 

Pro Tip: To validate that calculations and visualizations are correct in Power BI, use Excel to manipulate the data in the same format and confirm that the visualizations match.
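
The same sanity check can also be scripted. Below is a minimal sketch of what the Card visual computes when a neighbourhood is selected in the Slicer; the field names ("Neighbourhood", "LowIncomeRate") and values are hypothetical stand-ins for whatever your Excel sheet actually contains:

```python
# Minimal validation sketch (hypothetical field names and values).
# Mirrors the Card visual's default summarization (an average) when
# a single neighbourhood is selected in the Slicer.

rows = [
    {"Neighbourhood": "Regent Park", "LowIncomeRate": 31.2},
    {"Neighbourhood": "Rosedale", "LowIncomeRate": 6.5},
    {"Neighbourhood": "Regent Park", "LowIncomeRate": 29.8},
]

def card_value(rows, neighbourhood):
    """Average LowIncomeRate over the rows matching the slicer selection."""
    vals = [r["LowIncomeRate"] for r in rows if r["Neighbourhood"] == neighbourhood]
    return sum(vals) / len(vals)

print(card_value(rows, "Regent Park"))  # 30.5
```

If the number your script (or Excel pivot) produces disagrees with the Card, check whether Power BI is summing rather than averaging the field.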

Load Data from ArcGIS Online

To demonstrate the full integration of Power BI and geospatial data, let’s bring data from ESRI’s ArcGIS Online. Authoritative open data is available in this portal and can be directly accessed through Power BI. 

Linking Data 

When you think about integrating non-spatial data with spatial/location data, keep in mind that, at the very least, you will need common identifiers to be able to link the data together. For example, the neighbourhood data has a neighbourhood name and identifier, which are also available in the data published by the Toronto Police Service, including the neighbourhood crime rates and the shootings and firearms discharges.

Add ArcGIS for Power BI Visual 

Add the ArcGIS for Power BI visual > Log in using your TMU Credentials. 

Add Layers

Click on Layers Icon > Switch to ArcGIS Online > Search for Layers by Name > Select Layer > Done 

This will add the layer to your map. You can go back and add more layers. You can also add layers from your own ArcGIS Online account. 

ArcGIS Navigation Menu 

The navigation menu on the left panel of this window allows you to perform the following geospatial functions:

  • Change and style symbology
  • Change the Base Map
  • Change Layer Properties 
  • Analysis tab allows you to
    • Conduct Drive Time and Buffer
    • Add demographic data 
    • Link/Join Data
    • Find Similar Data

For the purposes of this analysis, we will: 

  • Establish a join between our neighbourhood socio-demographic data and the spatial data including crime rates and shootings and firearms discharges 
  • Develop a thematic map of one crime type
  • Develop a shootings hot spot analysis 

Data Cleansing

When linking data, the common neighbourhood IDs from these different data sources are not in the same format. For example, in my data, the ID is in an integer format. However, in the Shooting and Firearms Data, this field is in a text format padded with zeros.

In Power BI, we can create a clean table with neighbourhood information that acts as a neighbourhood dimension table to link data together and manipulate the columns to facilitate data linkages. Let’s create a neighbourhood dimension table.

Create a clean Neighbourhood Table 

Open the Semantic Model > Change to Editing Mode > Transform Data > Right Click on Table > Duplicate Neighbourhood Table > Right Click on new table to Rename  >  Choose Column > Remove all Unwanted Columns > Click on Hood ID Column > Click Transform > Change to Whole Number > Right Click on Column to Rename > Add Custom Column (Formula Below) > Save

Formula = Text.PadStart(Text.From([HOOD_158]), 3, "0")

The custom column and formula takes the HOOD ID, changes it into a text field and adds padding of zeros. This will match the neighbourhood ID format in the shootings and firearm discharge data. Keep in mind, the only data you can manipulate is the data within your semantic model. You cannot change or manipulate the data sourced from ArcGIS Online. 
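
The padding logic can be double-checked outside Power BI before you rely on it for joins. Here is a sketch that replicates the Power Query step above in plain Python, using the HOOD_158 field name from the formula:

```python
# Replicates the Power Query step Text.PadStart(Text.From([HOOD_158]), 3, "0"):
# convert the integer Hood ID to text, then left-pad with zeros to 3 characters,
# matching the ID format in the shootings and firearm discharge data.

def pad_hood_id(hood_id: int) -> str:
    """Integer Hood ID -> 3-character, zero-padded text join key."""
    return str(hood_id).zfill(3)

print(pad_hood_id(7))    # "007"
print(pad_hood_id(158))  # "158"
```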

Relationships 

The newly created table in the semantic model needs to be related to the neighbourhood socio-demographic data to make sure that all tables are related to each other. 

Establish a Relationship

In the semantic model view > Click Manage Relationships > Add New Relationship > Select From Table and Identifier > Select To Table and Identifier > Establish Cardinality and Cross-Filter Direction to Both > Save > Close 

Congrats! We created a clean table that will function as a neighbourhood dimension table to facilitate data joins. We also learned how to establish a relationship with other tables in your semantic model. This comes in handy as you can integrate multiple sources of data.
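
Conceptually, the relationship behaves like a keyed lookup: the dimension table holds one row per padded Hood ID, and every fact row resolves to its one matching dimension row through that key. A minimal sketch of the idea, with hypothetical field names and values:

```python
# Conceptual sketch of a one-to-many relationship: one dimension row per
# padded Hood ID, many fact rows pointing at it. Names/values are hypothetical.

dimension = {
    "001": "West Humber-Clairville",
    "002": "Mount Olive-Silverstone-Jamestown",
}

shootings = [
    {"hood_id": "001", "occurrences": 4},
    {"hood_id": "002", "occurrences": 2},
    {"hood_id": "001", "occurrences": 1},
]

# Resolve each fact row to its neighbourhood name through the dimension key.
resolved = [(dimension[s["hood_id"]], s["occurrences"]) for s in shootings]
print(resolved[0])  # ('West Humber-Clairville', 4)
```

This is why the ID formats had to match first: a fact row whose key is missing from the dimension table simply fails to link.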

Let’s return to the report and continue building visualizations. 

Joining Non-Spatial Data with Spatial Data 

Neighbourhood Crime 

Now we will join our non-spatial data from our semantic model with our spatial data in ArcGIS Online.

Join Non-Spatial Data with ArcGIS Online Data

Click on the ArcGIS Visual > Analysis Tab > Join Layer > Target Layer > Power BI Data (Drag & Drop Hood ID into the Join Layer Field in the Visualization Pane) > Additional Options will now Appear on the Map > Select Join Field > Join Operation Aggregate > Interaction Filter > Create Join > Close

Congrats! We have created a join between the non-spatial data in Power BI and the spatial data in ArcGIS Online. 

Change the neighbourhood filter to see this map filter and zoom into the selected neighbourhood. 

Thematic Map 

Now, let’s create a thematic map with one of the crime rates variables in this dataset. 

On the left hand panel of the map: 

Click on Layers > Click on Symbology > Active Layer > Select Field Assault Rate 2024 > Change to Colour > Click on Style Options Tab > Change Colour Ramp to another Colour > Change Classification > Close Window. 

Congrats! We created a thematic map in Power BI using an ArcGIS Online boundary file.

Change the neighbourhood filter to see how the map interacts with the data. Since this is a thematic map, we may not want to filter all the data, instead, we just want to highlight the area of interest. 

Click on Analysis > Join Layer > Change the Interaction Behaviour to Highlight instead of Filter > Update Join

Check this again by changing the neighbourhood filter. Now, the map just highlights the neighbourhood instead of filtering it. 

Customizing Map Controls 

When you have the ArcGIS visual selected, you have the ability to customize your settings using the visualization panel. This controls the map tools available in the report. This can come in handy when setting up a default extent in your map for this report or allowing users to have control over the map. 

Shootings and Firearm Discharges 

Let’s visualize location data and create a hot spot analysis. To save time, copy and paste the map you created in the previous step.

In the new map, add the Shootings and Firearms Data. 

Challenge: Practice what you have learned thus far! 

  • Change the base layer to a dark theme. 
  • Add the Shootings and Firearm Discharge Data from ArcGIS Online. 
  • Create a join with this layer and the data in Power BI. 
  • Play around with changing the colour and shape of the symbology. 

Hotspot Analysis 

Now create a heat map from the Shooting and Firearm Discharges location data.

Click on Layers > Symbology > Change Symbol Type from Location to Heat Map > Style Options > Change Colour Ramp > Change Blur Radius > Close 

Congrats! We have created a heat map in Power BI! 

This map is dynamic: when you filter the neighbourhood from the list, the hot spot map filters as well.

Customizing your report

It’s time to customize the rest of the report by adding visualizations, incorporating additional maps, adjusting titles and text, changing the canvas background, and bringing in more data to enrich the overall analysis.

I hope you’re just as excited as I am to start using Power BI alongside ArcGIS. By blending these two powerful tools, you can easily bring different data sources together and unlock a blend of BI insights and spatial intelligence—all in one place. 

References

Neighbourhood Crime Rates – Toronto Police Service Open Data

Shooting and Firearm Discharges – Toronto Police Service Open Data

Environics Analytics – Simply Analytics

Geospatial Visualization of Runner Health Data: Toronto Waterfront Marathon

Geovis Project Assignment, TMU Geography, SA8905, Fall 2025

Hello everyone! I am excited to share my running geovisualization blog with you all. This blog will allow you to transform the way you use GPS data from your phone or smart watch!

This idea came to me as I recorded my half marathon run on my Apple Watch in 2023 in the app “Strava”. Since then, I have developed an interest in health-tracking data, and when I was assigned this project, I thought: maybe I can make this data my own.

As a result, I explored the options and was able to create a 3D representation of my run and how I was doing physically throughout.

Here is a Youtube link to the final product!

The steps are as follows if you want to give this type of geospatial analysis a try yourself!

Step 1.

You will need to have the app Strava installed. This health and fitness app will track GPS data from either your phone or watch, recording your speed, elevation, and heart rate (watch only). Apart from this, you will also need the app RunGap, which will allow you to transfer your activity data and export it to a “.fit” file. A .fit file is a data format that records heart rate, speed, and elevation, geolocated by x and y coordinates every second (each row is one second).

Step 2.

Once you have the apps downloaded, start a health activity on the Strava app. From there you can transfer your Strava data to RunGap.

After you sign in and import the Strava data, go to the activity you want to export as a .fit file. Save the .fit file and transfer it to your computer.

Step 3.

Now that you have the .fit file, you will need to download a tool to convert it to a CSV. This can be found at https://developer.garmin.com/fit/overview/. In Step 1 of that page, you will need to download the FIT SDK from https://developer.garmin.com/fit/download/. The file will be in your Downloads folder as FitSDKRelease_21.171.00.zip. You will need to unzip this file and navigate to >java>FitToCSV.bat. This is the tool that you will use on the .fit file. To do this, go to your .fit file’s properties and change the “Open with:” application to your >java>FitToCSV.bat path.

Now simply run the .fit file; the tool will open and convert it to a CSV in the same folder after you press any key to continue.

Step 4.

Now, open your CSV. The data is initially messy, and the fields are mixed. To clean it, I added a new sheet and then deleted rows from the original, continuing to narrow it down using the filter function. In the end, you only want the “data” rows in the Type column and rows with lat and long coordinates to create a point feature class. I also renamed the fields; for example, value 1 became Timestamp(s), which is used as the primary key to differentiate the rows. To get the coordinates in degrees, I used this calculation:

  • Lat_Degrees: Lat_semicircles / 11930464.71
  • Long_Degrees: Long_semicircles / 11930464.71
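
The divisor 11930464.71 is not arbitrary: Garmin stores coordinates as "semicircles", a signed 32-bit range mapped to ±180°, so the scale factor is 2³¹/180. A quick sketch of the conversion used in the bullets above:

```python
# Garmin "semicircles": the signed 32-bit integer range maps to -180°..+180°,
# so one degree equals 2**31 / 180 ≈ 11930464.71 semicircles.

SEMICIRCLES_PER_DEGREE = 2**31 / 180  # ≈ 11930464.711

def semicircles_to_degrees(semicircles: int) -> float:
    return semicircles / SEMICIRCLES_PER_DEGREE

# 45° is exactly 2**31 / 4 = 536870912 semicircles
print(semicircles_to_degrees(536870912))  # ≈ 45.0
```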

Furthermore, to display the points as lines in the final map, four more fields need to be added to the Excel sheet: start lat, start long, end lat, and end long. These can be calculated by taking the coordinates of the next row for the end lat and end long. You will also need to do this with altitude to make a 3D representation of the elevation.
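
The start/end fields described above amount to pairing each row with the next one. A small sketch of that step in plain Python, with hypothetical field names and sample coordinates:

```python
# Derive start/end coordinate (and altitude) fields by pairing each row with
# the next, mirroring the Excel step described above. Field names and sample
# values are hypothetical.

points = [
    {"lat": 43.6400, "long": -79.3800, "alt": 76.0},
    {"lat": 43.6402, "long": -79.3797, "alt": 76.4},
    {"lat": 43.6405, "long": -79.3793, "alt": 77.1},
]

segments = []
for start, end in zip(points, points[1:]):
    segments.append({
        "start_lat": start["lat"], "start_long": start["long"],
        "end_lat": end["lat"], "end_long": end["long"],
        "start_alt": start["alt"], "end_alt": end["alt"],
    })

print(len(segments))  # 2 segments from 3 points (the last row has no "next" row)
```

Note that the last point produces no segment, which is why the final row of the sheet ends up without end coordinates.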

Step 5.

Now that your CSV is cleaned, it is ready to be exported as spatial data. Open ArcGIS Pro and create a new project. From here, load your CSV into a new map. This table will be used in the XY to line geoprocessing tool using the start and end coordinates for the WGS_1984_UTM_Zone_17N projection in Toronto.

Once you run the tool, your data should look something like this, displaying lines connecting each initial point/row.

Step 6.

Now it is time to bring your route to life! Start by running the Feature To 3D By Attribute geoprocessing tool on your feature class using the height field as your elevation/altitude.

Your line should now appear in 3D when you open a 3D Map Scene and import the 3D layer.

Step 7.

To add more dimensions to the symbology colours, I used “Bivariate Colours”. This provides a unique combination of speed and heart rate at each leg of the race.

To make the elevation more visually appealing, I used the extrusion function on the line feature class. Then, I used the “Min Height” category with the expression “$feature.Altitude__m_ / 3”. To further add realism, I added the ground elevation surfaces layer called WorldElevation3D/Terrain3D, so that the surrounding topography pops out more.

Step 8.

Now that the layer and symbology are refined, the final part of the visualization is creating a bird’s-eye view of the race trail from start to finish. To do this, I once again used ArcGIS Pro and added an animation in the View tab. From here, I continuously added keyframes along the path until the end. Finally, I exported the video directly to my computer.

Step 9. Canva

To conclude, I used Canva to add a legend to the map, add music, and create a nice-looking title.

And now, you have a 3D running animation! I hope you have learned something from this blog and give it a try yourself. It was very satisfying taking a real-life achievement and converting it into an in-depth geospatial representation. :)

The Carolinian Zone: Traditional Ecological Knowledge (TEK) Plant Species Common in the Carolinian Zone

Geovis Project Assignment, TMU Geography, SA8905, Fall 2025

By: Danielle Lacka

INTRODUCTION:

Hello readers!

For my geo-visualization project, I wanted to weave together stories of land, knowledge, and technology through a Métis lens. My project, “Mapping Métis Traditional Ecological Knowledge (TEK): Where TEK Plant Species Are Found in the Carolinian Zone,” became a way to visualize how cultural knowledge and ecology intersect across southern Ontario’s most biodiverse landscape.

Inspired by the storytelling traditions that shape how knowledge is shared, I used ArcGIS StoryMaps to build an interactive narrative that brings TEK plant species to life on the map.

This project is more than just a map—it’s a story about connection, care, and the living relationships between people and the environment. Through digital tools and mapping in ArcGIS Pro, I aimed to highlight how Métis TEK continues to grow and adapt in today’s technological world.

See the finished story map here:

Join me as I walk through how I created this project where data meets story, and where land, plants, and knowledge come together on the screen.

PROJECT BACKGROUND:

In 2010, the Métis Nation of Ontario (MNO) released the Southern Ontario Métis Traditional Plant Use Study, the first of its kind to document Métis traditional ecological knowledge (TEK) related to plant and vegetation use in southern Ontario (Métis Nation of Ontario, 2010). The study, supported by Ontario Power Generation (OPG), was developed through collaboration with Métis Elders, traditional resource users, and community councils in the Northumberland, Oshawa, and Durham regions. It highlights Métis-specific traditional and medicinal practices that differ from those of neighbouring First Nations, while also recording environmental changes in southern Ontario and their effects on Métis relationships with plant life.

Since there are already extensive records documenting the plant species found across the Carolinian Zone, this project focuses on connecting those existing data sources with Métis Traditional Ecological Knowledge, revealing where cultural and ecological landscapes overlap and how they continue to shape our understanding of place. Not all species mentioned in the study are included in this storymap as some species mentioned were not found in the Carolinian Zone List of Vascular Plants by Michael J. Oldham. The video found at the end of this story is shared by the Métis Nation of Ontario as part of the Southern Ontario Métis Traditional Plant Use Study (2010). It is included to support the geovisualization of plant knowledge and landscapes in southern Ontario. The teachings and knowledge remain the intellectual and cultural property of the Métis Nation of Ontario and are presented with respect for community protocols, including acknowledging the Métis Nation of Ontario as the knowledge holders, not reproducing or claiming the teachings, and using them solely for the purposes of geovisualization and awareness in this project.

This foundational research of the MNO represents an important step in recognizing and protecting Métis ecological knowledge and cultural practices, ensuring they are considered in environmental assessments and future land-use decisions. Visualizing this knowledge on a map helps bring these relationships to life and helps in connecting traditional teachings to place, showing how Métis plant use patterns are tied to specific landscapes, and making this knowledge accessible in a meaningful, spatial way.

Let’s get started on how this project was built.

THE GEOVISUALIZATION PRODUCT:

The data that was used to build this StoryMap is as follows:

The software that was used to create this StoryMap is as follows:

  • ArcGIS StoryMaps to put the story together
  • ArcGIS Pro to build the map for the story
  • Microsoft Excel to build the dataset

Now that we have all the tools and data we need we can get started on building the project.

STEPS:

  1. Make your dataset: we have two sets of data, and it is easier when everything is in one place. This requires some manual labour of reading and searching the data to find out which plants mentioned in the MNO’s study are found within the Carolinian Zone and which census divisions they could *commonly be found in. 

*NOTE: I built this project around the status definition of “common” in the Carolinian Zone and CDs. There were many different status definitions in Oldham’s data, but I wanted to connect these datasets based on species being commonly found rather than the other definitions (rare, uncommon, no status, etc.) (Oldham, 2017).

In order to make this new dataset I used Excel to hold the columns: Plant Name, Scientific Name, Comments, and Métis use of description from the MNO’s study, as well as a column called “Common Status” to hold the CDs these species were commonly found in. 

  2. Fill your dataset: now that the dataset is set up, data can be put into it. I brought the list of species, as well as the rest of the columns mentioned, from the MNO’s plant use study into their respective columns: 

I included the comments column because it provides important context, ensuring that this data was used as a whole and told the full story of the dataset rather than bits and pieces.

Once the base data is in the sheet we can start locating their common status within the Carolinian zone using Oldham’s data records.

What I did was search for each species mentioned in the MNO plant use study within Oldham’s dataset. Then, if the species matched records in the dataset, I would include the CD’s name in the Common Status column.

Once the entire species list has been searched, the data collection step is complete and we can move on to the next step.
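
For readers who would rather script this matching step than do it by hand, here is a sketch of the same logic. The species names, CD names, and record structure are hypothetical simplifications of the two datasets:

```python
# Sketch of the matching step: for each species in the MNO plant use study,
# look up Oldham's records and collect the CDs where its status is "common".
# Species, CDs, and record structure here are hypothetical simplifications.

mno_species = ["Sweetgrass", "White Cedar", "Wild Ginger"]

oldham_records = [
    {"species": "White Cedar", "cd": "Essex", "status": "common"},
    {"species": "White Cedar", "cd": "Niagara", "status": "common"},
    {"species": "Wild Ginger", "cd": "Middlesex", "status": "rare"},
]

common_status = {}
for species in mno_species:
    cds = [r["cd"] for r in oldham_records
           if r["species"] == species and r["status"] == "common"]
    common_status[species] = ", ".join(cds)  # empty when no common records exist

print(common_status["White Cedar"])  # "Essex, Niagara"
```

Note how species with no common-status records (like the hypothetical Wild Ginger above) end up with an empty Common Status entry, which is exactly the documentation gap discussed in the limitations section.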

  3. Bring in your map layers: open ArcGIS Pro and create a new project. I changed my basemap to Imagery Hybrid to better match the theme of the project. Add in the Ontario GeoHub shapefile (the red outline). Rename this if you want, though it is pretty well named already. Next, bring in the Stats Canada CD shapefile. 
  4. Refine your map layers: first, I selected only 7E (the Carolinian Zone) using the Select By Attributes option: 

Then you filter based on this ecoregion: 

Then, once you run the selection, you can export it as a new layer containing only the Carolinian Zone.

Next, I took the CD layer and clipped it to the exported Carolinian Zone layer using the Clip tool:

This will show only the CDs that lie within the Carolinian Zone. Now you will add the PDF layer. We need to use this PDF to draw the boundary line for 7E4, an eco-district that includes several CDs. With the PDF layer selected, click Imagery and Georeference:

Next, you can right click on the layer and click zoom to layer. 

Then, in the Georeferencing tab, click Move and the PDF should appear so you can move it around the map.

Now, you can use the three options (in the figure above) to overlay the PDF and align it with the map as best you can, so it looks something like this:

Once it fits, you can draw the boundary line on the clipped CD layer with the create layer option.

If it is too tricky to see beyond the PDF, you can change its transparency to make it easier:

Now you can draw the boundary. Once that is complete, click Save, then export the drawn boundary as a new layer. Now you can change the symbology colours to show the distinctive divisions in the ecozone. 

For the labels, I added a new column in the Eco-divisions layer called Short for the abbreviations of the districts for a better look. I manually entered in the abbreviations for the CDs similar to how Oldham did it in his map.

Now you should have something like this:

Now that the map is completed, we can start on making the storymap.

  5. Make the storymap

I started by writing up the text for how I wanted the story map to flow in Google Docs, making an introduction and providing some background context: the data I used, why the work done by the MNO is important for Indigenous people and the environment, and what I hope the project achieves. I also planned where I wanted to put the maps and which images and plant knowledge tables to include.

I applied this plan to the story map and had to turn the map I made in ArcGIS Pro into a web map in order to access it in StoryMaps. (You can choose to make the map in ArcGIS Online to avoid this.)

I also found some awesome 3D models of some of the species mentioned on a site called Sketchfab, which I thought was a super cool way to visualize them!

And with that, you have created a story map about the Carolinian Zone and the Métis TEK plant species that are commonly found and used there!

CONCLUSIONS/LIMITATIONS:

One of the key limitations of this project is that some zones lacked common status plant species as described in the MNO Plant Use Study, resulting in no species being listed for those areas. This absence may reflect gaps in documentation rather than a true lack of plant use, pointing to the need for more comprehensive and localized research.

The uneven distribution of documented plant species across zones underscores both the complexity of Métis plant relationships and the urgency of further study. By embracing these limitations as a call to action, we affirm the value of Indigenous knowledge systems and encourage broader learning about the interdependence between people and place.

REFERENCES

Carolinian Canada Coalition. (2007). Caring for nature in Brant: Landowner action in Carolinian Canada [Fact sheet]. https://caroliniancanada.ca/sites/default/files/File%20Depository/Library/factsheets/Brant_Factsheet_Final.pdf 

Carolinian Canada Coalition. (2007). Caring for nature in Elgin: Landowner action in Carolinian Canada [Fact sheet]. https://caroliniancanada.ca/sites/default/files/File%20Depository/Library/factsheets/Elgin_Factsheet_Final.pdf 

Carolinian Canada Coalition. (2007). Caring for nature in Essex: Landowner action in Carolinian Canada [Fact sheet]. https://caroliniancanada.ca/sites/default/files/File%20Depository/Library/factsheets/Essex_Factsheet_Final.pdf 

Carolinian Canada Coalition. (2007). Caring for nature in Haldimand: Landowner action in Carolinian Canada [Fact sheet]. https://caroliniancanada.ca/sites/default/files/File%20Depository/Library/factsheets/Haldimand_Factsheet_Final.pdf 

Carolinian Canada Coalition. (2007). Caring for nature in Hamilton: Landowner action in Carolinian Canada [Fact sheet]. https://caroliniancanada.ca/sites/default/files/File%20Depository/Library/factsheets/Hamilton_Factsheet_Final.pdf 

Carolinian Canada Coalition. (2007). Caring for nature in Lambton: Landowner action in Carolinian Canada [Fact sheet]. https://caroliniancanada.ca/sites/default/files/File%20Depository/Library/factsheets/Lambton_Factsheet_Final.pdf 

Carolinian Canada Coalition. (2007). Caring for nature in Middlesex: Landowner action in Carolinian Canada [Fact sheet]. https://caroliniancanada.ca/sites/default/files/File%20Depository/Library/factsheets/Middlesex_Factsheet_Final.pdf 

Carolinian Canada Coalition. (2007). Caring for nature in Niagara: Landowner action in Carolinian Canada [Fact sheet]. https://caroliniancanada.ca/sites/default/files/File%20Depository/Library/factsheets/Niagara_Factsheet_Final.pdf 

Carolinian Canada Coalition. (2007). Caring for nature in Oxford: Landowner action in Carolinian Canada [Fact sheet]. https://caroliniancanada.ca/sites/default/files/File%20Depository/Library/factsheets/Oxford_Factsheet_Final.pdf 

Chatham-Kent Home. (2024, November 28). Agriculture & Agri-Food. https://www.chatham-kent.ca/EconomicDevelopment/invest/invest/Pages/Agriculture.aspx 

Métis Nation of Ontario. (2010). Traditional ecological knowledge study: Southern Ontario Métis traditional plant use [PDF]. Métis Nation of Ontario. https://www.metisnation.org/wp-content/uploads/2011/03/so_on_tek_darlington_report.pdf 

Oldham, M. (2017). List of the vascular plants of Ontario’s Carolinian Zone (Ecoregion 7E). Carolinian Canada. https://doi.org/10.13140/RG.2.2.34637.33764