Hello world!

Welcome to https://spatial.blog.torontomu.ca! The blog was created as a teaching tool for the graduate course SA8905 Thematic Cartography and Geovisualization in the Master of Spatial Analysis (MSA) program at Toronto Metropolitan University’s Department of Geography and Environmental Studies.

Please read on below or use the search function, categories list, or tag cloud to find posts of interest. Keep in mind that most posts reflect student work summarizing one of two projects that had to be completed within a 12-week term. Happy reading!

Visualizing Historical Tornado Events in the USA using Experience Builder

SA8905: Geovisualization Project

Erika French

Background


Tornado Alley has no definitive boundary. The NOAA National Severe Storms Laboratory explains that Tornado Alley is a term invented by the media to refer to a broad area of relatively high tornado occurrence in the central United States, and that the boundary of Tornado Alley changes based on the data and variables mapped (NOAA National Severe Storms Laboratory, 2024). This inspired my geovisualization project, in which I wanted to visualize all tornado occurrences in the United States and see how the spatial distribution of tornadoes, or what could be deemed Tornado Alley, would change based on different spatial queries.

Data

The data used for this project are all publicly available for download at the links provided below.

The NOAA’s National Weather Service Storm Prediction Center publishes tornado occurrence data dating back to 1950. This file can be found on their website and is named ‘1950-2024_all_tornadoes.csv’. Additionally, a data description file can be viewed here.

The US Boundary layer was downloaded from the US Census Cartographic Boundaries website. The ‘2024 1 : 500,000 (national) States’ shapefile was chosen.

How-To!

Part One: Geoprocessing in ArcGIS Pro

Two important tasks must be completed using ArcGIS Pro.

  1. Creating point data for all tornado occurrences.
  2. Creating tornado paths for all applicable tornado occurrences.

Here is how these tasks were completed.

Creating the tornado occurrence point data layer
  1. Add data to the map.

Open a new ArcGIS Pro Project and add the ‘1950-2024_all_tornadoes.csv’ and the ‘2024 1 : 500,000 (national) States’ shapefile to the current map.

2. Create point data.

Right-click on the ‘1950-2024_all_tornadoes.csv’ in the contents pane and navigate to “XY Table to Point”. Fill out the parameters as pictured below and run. This creates point data for all tornado occurrences using the start longitude and start latitude.
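For readers who prefer to script this step, here is a minimal ArcPy sketch of the same tool (the field names ‘slon’/‘slat’ follow the SPC data description; paths and the coordinate system are assumptions):

import arcpy

# Create points from the CSV's starting coordinates.
arcpy.management.XYTableToPoint(
    "1950-2024_all_tornadoes.csv", "All_Tornadoes",
    x_field="slon", y_field="slat",
    coordinate_system=arcpy.SpatialReference(4326),  # WGS 1984 (assumed)
)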

3. Creating a unique Tornado ID Field

Currently, there is no unique tornado ID field, as tornado numbers are reused annually. We will now create one. Right-click on the new All_Tornadoes layer and navigate to Data Design > Fields. Here, add a new field as pictured below.

Open the All_Tornadoes attribute table and navigate to the new t_id field. Right-click and choose Calculate Field. Here, we will create a unique tornado ID by concatenating the om field and the year field as pictured below.
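The pictured expression can also be written directly with the Calculate Field tool; a minimal ArcPy sketch (assuming, per the SPC data description, that the year field is named ‘yr’):

import arcpy

# Concatenate tornado number (om) and year (yr) into a unique ID,
# since tornado numbers restart every year.
arcpy.management.CalculateField(
    "All_Tornadoes", "t_id",
    'str(!om!) + "_" + str(!yr!)', "PYTHON3")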

4. Using Select by Location to remove point occurrences appearing outside the United States.

With the All_Tornadoes layer selected, navigate to Select by Location and fill the parameters out as pictured below.

Right-click on the All_Tornadoes layer and navigate to Data > Export Features. Ensure that you are exporting the selected records. Name this new feature layer “USA_Tornadoes”.

5. Symbolizing Point Occurrences

Navigate to the USA_Tornadoes layer symbology. Here, we will choose unique values and symbolize the Magnitude field (mag) as pictured below.

Enable scale-based sizing of points at appropriate ranges so that they do not crowd the map at its full extent. To begin, I used 3 pt, then 8 pt, and then 12 pt progressive sizing.

6. Labelling Point Occurrences

Right-click on USA_Tornadoes and navigate to labeling properties. We would like to label each occurrence with its magnitude. This Arcade expression leaves any points with a -9 value unlabelled, as they have unknown magnitudes.
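The expression itself appeared as a screenshot; as an illustrative sketch of the same logic, written here with ArcGIS Pro's Python label engine rather than Arcade (field handling is an assumption):

# ArcGIS Pro label expression (Python, with "Advanced" enabled).
# Returning an empty string suppresses the label for unknown (-9) magnitudes.
def FindLabel([mag]):
    if str([mag]) in ("-9", "-9.0"):
        return ""
    return str([mag])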

Under the new Label Class, navigate to symbol and fill out the settings as pictured below.

Under Visibility Range, fill out the minimum scale as pictured below. This will stop the labels from crowding the map when zoomed out. Save your project!

Creating tornado paths
  1. Use the XY to Line Tool to create paths

Launch the XY to Line tool. Use the ‘1950-2024_all_tornadoes.csv’ as the input table. Fill out the parameters as pictured below. This will create a line from each tornado’s start lat/long to its end lat/long. Running this will take a few minutes; be patient. Name this new layer “Tornado Paths”.
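A scripted equivalent of this step, sketched with ArcPy (the end-coordinate field names ‘elon’/‘elat’ follow the SPC data description and are assumptions here):

import arcpy

# Draw a line from each tornado's start point to its end point.
arcpy.management.XYToLine(
    "1950-2024_all_tornadoes.csv", "Tornado_Paths",
    startx_field="slon", starty_field="slat",
    endx_field="elon", endy_field="elat",
    line_type="GEODESIC",
    spatial_reference=arcpy.SpatialReference(4326),
)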

2. Select by Location all tornado paths in USA

Some tornadoes may only have a start lat/long recorded, and no end lat/long. In this case, the end of their path will appear at a 0,0 lat/long. To remove all of these inaccurate paths, we will perform a Select by Location. Fill out the parameters as pictured below.

Open the tornado paths attribute table and switch the selection. Visually confirm that the only highlighted paths are now outside of the USA. Delete these records.

3. Creating a Buffer

To appropriately visualize the width of the tornado’s path, we will create a buffer of the Tornado Paths layer. Start by adding a new field to the Tornado Paths layer as pictured below.

Open the attribute table and field calculate the new halfwidth field as pictured below.

Now we can create a buffer. Open the buffer tool and fill in the parameters like below.

Symbolize this buffer in red and change the transparency to 60%. Add a 1 pt grey outline. We have now created a layer showing the path of each tornado with accurate length and width.
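A sketch of the halfwidth calculation and buffer in ArcPy (assuming, per the SPC data description, that ‘wid’ records the path width in yards; layer names are illustrative):

import arcpy

# Half the recorded path width buffered on both sides of the line
# reproduces the tornado's full width. Buffering by a numeric field
# uses the layer's linear units, so the paths should first be in a
# projected CRS and the yards converted to match (metres here).
arcpy.management.AddField("Tornado_Paths", "halfwidth", "DOUBLE")
arcpy.management.CalculateField(
    "Tornado_Paths", "halfwidth", "!wid! * 0.9144 / 2", "PYTHON3")
arcpy.analysis.Buffer("Tornado_Paths", "Tornado_Path_Buffers", "halfwidth")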

Save the project. Navigate to the share toolbar and share as a web map. We can now open our web map in ArcGIS Online.

Part Two: Preparing the Web Map for Experience Builder

  1. Open the web map.

Open up the new web map in ArcGIS Online. The web map should have two layers: USA Tornadoes and Tornado Paths. Any other layers can be removed. The map appears just as it was saved in ArcGIS Pro.

2. Format Popups.

Click on the USA Tornadoes layer. In the right-hand pane, navigate to popups. Here, we will create some attribute expressions using Arcade to produce nicely formatted fields for our popups. Add the following attribute expressions:

Length:

Width:

Crop Loss:

Property Loss:

Number of Fatalities:

Magnitude:

State:

In the popups options, choose the fields pictured below to appear in the popups. Ensure you are using the attribute expressions you have made so that the popups have the data formatted as desired.

Repeat these steps on the Tornado Paths layer. Save the web map.

Part Three: Creating the Experience

  1. Choose a template.

In ArcGIS Online, select Create a new app > Experience Builder. This experience was generated from a template, which I then customized and published with all of the edits needed to create the Historical Tornado Events in the United States Experience.

2. Connect your web map.

Connect your Tornadoes web map to the map widget. Set a custom extent for the map to open at.

3. Configure filter widgets.

For each filter widget, configure a SQL expression using predefined unique values. For example, configure the Magnitude filter as pictured below.

This allows the user to select a value from a dropdown rather than typing in a value.

Additionally, configure a trigger action that zooms the map to the filtered records, as pictured below.

4. Configure Feature Info widget.

Connect the feature info widget to the map widget. This widget will show the same information that we formatted in the web map popups.

5. Configure the Select Features widget.

Connect the Select Features widget to the map widget. Enable select by rectangle and select by point.

6. Save and publish! All widgets have now been configured.

Result

The Experience Builder output is an interactive web app that allows users to explore historical tornado occurrences dating back to 1950, view their paths of destruction overlaid on aerial imagery to see the exact structures and cities they passed through, and compare the statistics of human and financial loss caused by each. Users can see where the most severe tornadoes tend to occur, or visualize severity temporally.

Sources

The Online Tornado FAQ (by Roger Edwards, SPC)

Storm Prediction Center Maps, Graphics, and Data Page

Cartographic Boundary Files

Investigating The Distribution of Crime by Type

Geo-Vis Project Assignment, TMU Geography, SA8905, Fall 2025


Hello everyone, and welcome to my blog!

Today’s topic addresses the distribution of crime in Toronto. I am seeking to provide the public and implicated stakeholders with a greater knowledge and understanding of how, where, and why different types of crime are distributed in relation to urban features like commercial buildings, public transit, restaurants, parks, open spaces, and more. We will also look at some of the socio-economic indicators of crime, and from there identify ways to implement relevant, context-specific crime mitigation and reduction strategies.

This project investigates how crime data analysis can better inform urban planning and the distribution of social services in Toronto, Ontario. Research across diverse global contexts highlights that crime is shaped by a mix of socioeconomic, environmental, and spatial factors, and that evidence-based planning can reduce harm while improving community well-being. The following review synthesizes findings from six key studies, alongside observed crime patterns within Toronto.


Accompanying a literature review, I created a 3D model that displays a range of information, including maps made in ArcGIS Pro. The data used were sourced from the Toronto Police Service Public Safety Data Portal and Toronto’s Neighbourhood Profiles from the 2021 Census. The objective is to draw insightful conclusions about which types of crime are clustering where in Toronto, which socio-economic and/or urban infrastructural indicators are contributing to this, and which solutions could be implemented to reduce overall crime rates across all of Toronto’s neighbourhoods, keeping equitability in mind.

The distribution of crime across Toronto’s neighbourhoods reflects a complex interplay of socioeconomic conditions, built environment characteristics, mobility patterns, and levels of community cohesion. Understanding these geographic and social patterns is essential to informing more effective city planning, targeted service delivery, and preventive interventions. Existing research emphasizes the need for long-term, multi-approach strategies that address both immediate safety concerns and the deeper structural inequities that shape crime outcomes. Mansourihanis et al. (2024) highlight that crime is closely linked to urban deprivation, noting that inequitable access to resources and persistent neighbourhood disadvantages influence where and how crime occurs. Their work stresses the importance of integrating crime prevention with broader social and economic development initiatives to create safer and more resilient urban environments (Mansourihanis et al., 2024).

Mansourihanis, O., Mohammad Javad, M. T., Sheikhfarshi, S., Mohseni, F., & Seyedebrahimi, E. (2024). Addressing Urban Management Challenges for Sustainable Development: Analyzing the Impact of Neighborhood Deprivation on Crime Distribution in Chicago. Societies, 14(8), 139. https://doi.org/10.3390/soc14080139

Click here to view the literature review I conducted on this topic.


Methods – Creating a 3D Interactive Crime Investigation Board

The purpose of this 3D map is to provide an interactive tool that can be regularly updated over time, allowing users to build upon research using various sources of information in varying formats (e.g. literature, images, news reports, raw data, and various map types presenting comparable socio-economic data); thread can be used to connect images and other information to associated areas on the map. The model has been designed for easy addition, removal, and connection of media items by using materials like tacks, clips, and cork board. Crime incidents can be tracked and recorded in real time. This allows for quick identification of where crime is clustering based on geography, socio-economic context, and proximity to different land use types and urban features like transportation networks. We can continue to record and analyze which urban features or amenities could be deterring or attracting/promoting criminal activity. This will allow for fast, context-specific crime management solutions that will ultimately help reduce overall crime rates in the city.

1. Conduct a detailed literature review. 
Here is the literature review I conducted to address this topic.

2. Download the following data from: Open Data | Toronto Police Service Public Safety Data Portal. Each dataset was filtered to show points only from 2025.

- Dataset: Shooting and Firearm Discharges
- Dataset: Homicides
- Dataset: Assault
- Dataset: Auto Theft
- Dataset: Break and Enter

Toronto Neighbourhood Profiles, 2021 Census from: Neighbourhood Profiles - City of Toronto Open Data Portal
- Average Total Household Income by Neighbourhood
- Unemployment Rates by Neighbourhood

3. After examining the full data sets by year, select a time period to map. In this case, July 2025, the month with the greatest number of crimes this year.

4. Map Setup
- Coordinate system: NAD 1983 UTM Zone 17N
- Rotation: -17
- Geography:
- City of Toronto, ON, Canada
- Neighbourhood boundaries from Toronto Open Data Portal

5. Add the crime incident data reports and Toronto’s Neighbourhood Boundary file.

Geospatial Analysis Tools Used
Tool - Select By Attribute: delete the data that we are not mapping. In this case, from the attribute table:
Select By Attribute [OCC_YEAR] [is less than] [2025]

Tool - Summarize Within
Count the number of crime incidents within each neighbourhood boundary polygon for the five selected crime types, for preliminary analysis and mapping (a scripted sketch of both tools follows below).
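A scripted sketch of these two tools (layer and field names are assumptions based on the datasets listed above):

import arcpy

# Keep only 2025 incidents by selecting and deleting everything earlier.
arcpy.management.MakeFeatureLayer("Assault", "assault_lyr")
arcpy.management.SelectLayerByAttribute(
    "assault_lyr", "NEW_SELECTION", "OCC_YEAR < 2025")
arcpy.management.DeleteRows("assault_lyr")

# Count the remaining incidents within each neighbourhood polygon.
arcpy.analysis.SummarizeWithin(
    "Neighbourhoods", "assault_lyr", "Assault_by_Neighbourhood")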

Design Tools and Map Types Used
- Dot Density
  - 2025 crime rates, by type, annual and for July 2025
- Heat Map
  - 2025 crime rates, by type, annual and for July 2025
- Choropleth
  - Average Total Household Income, City of Toronto by Neighbourhood
  - Unemployment Rates Across Toronto, 2021
- Design tools, e.g. convert to graphics
Based on the literature review and analysis of the presented maps, this model allows us to further analyze, visually display, and record the data and findings. It lets users see where points are clustering and examine the urban features, land use, and socio-economic context of cluster areas in order to identify potential solutions, with equity in mind.

Supplies
- Thread
- Painted toothpicks
- Mini clothes pins
- Highlighters, markers, etc.
- Scissors
- Hot glue
- Images of indicators
- Relevant/insightful literature research
- Socio-economic maps: population income, unemployment, and density
- Crime maps: dot density maps of crime by type and heat maps of crime distribution by type, for the five selected crime types, covering all incidents that occurred during July 2025

Process
1. Attach cork board to poster board;

2. Cut out and place down main maps that have been printed (maps created in ArcGIS Pro, some additional design edits made in Canva);

3. Outline the large or central base map with tacks; use string to connect the tacks outlining the City of Toronto's regional boundary line.

4. Using colour painted tooth picks (alternatively, tacks may be used depending on size limitations), crime incidents can be recorded in real time, using different colours to represent different crime types.

5. Additional data can be added on and joined to other map elements over time. This data could be: images and locations of crime indicators; new literature findings; news reports; raw data; different map types presenting comparable socio-economic data; community input via email, consultation meetings, 911 calls, or surveys; graphs; tables; land use types and features; and more.

6. Thread is used to connect images and other information to associated areas on the map. In this case, blue string and tacks were used to highlight preventative crime measures and red to represent an indicator of crime.

7. Sticky notes can be used to update the day and month (using a new poster/cork board for each year), under “Time Stamp”.

8. Google Earth was used, with its satellite imagery, terrestrial layer, and urban features layer, to further analyze land use, type, function, and significant features like Union Station, a major public transit connection point located within Toronto’s densest and overall largest crime hot spot.

9. A satellite imagery base map in ArcGIS was used to compare large green spaces (parks, ravines, golf courses, etc.) with the distribution of each incident point on the dot map created. Select each point field individually for optimal viewing and map analysis.

10. Video and photo content used to display the final results was created using an iPhone camera and the iMovie video editing app.

See photos and videos for reference!

Socioeconomic and Environmental Indicators of Crime

A consistent theme across the literature and my own findings is the strong connection between neighborhood deprivation and crime. Mansourihanis et al. (2024) emphasize that understanding the “relationship between urban deprivation and crime patterns” supports targeted, long-term strategies for urban safety. Concentrated poverty, population density, and low social cohesion are significant predictors of violence (Mejia & Romero, 2025; M. C. Kondo et al., 2018). Similarly, poverty and weak rule of law correlate more strongly with homicide rates than gun laws alone (Menezes & Kavita, 2025).

Environmental characteristics also influence crime distribution. Multiple studies link greater green space to reduced crime, higher social cohesion, and stronger perceptions of safety (Mejia & Romero, 2025). Exposure to green infrastructure can foster community pride and engagement, further reinforcing crime-preventive effects (Mejia & Romero, 2025). Relatedly, Stalker et al. (2020) show that community violence contributes to poor mental and physical health, with feelings of unsafety directly associated with decreased physical activity and weaker social connectedness.

Other urban form indicators—including land-use mix, connectivity, and residential density—shape mobility patterns that, in turn, affect where crime occurs. Liu, Zhao, and Wang (2025) find that property crimes concentrate in dense commercial districts and transit hubs, while violent crimes occur more often in crowded tourist areas. These patterns reflect the role of population mobility, economic activity, and social network complexity in structuring urban crime.

Crime Prevention and Community-Based Solutions

Several authors highlight the value of integrating built-environment design, green spaces, and community-driven interventions. Baran et al. (2014) show that larger parks, active recreation features, sidewalks, and intersection density all promote park use, while crime, poverty, and disorder decrease utilization. Parks and walkable environments also support psychological health and encourage social interactions that strengthen community safety. In addition, green micro-initiatives—such as community gardens or small landscaped interventions—have been found to enhance residents’ emotional connection to their neighborhoods while reducing local crime (Mejia & Romero, 2025).

At the policy level, optimizing the distribution of public facilities and tailoring safety interventions to local conditions are essential for sustainable crime prevention (Liu, Zhao, & Wang, 2025). For gun violence specifically, trauma-informed mental health care, early childhood interventions, and focused deterrence are recommended as multidimensional responses (Menezes & Kavita, 2025).

Spatial Crime Patterns in Toronto

When mapped across Toronto’s geography, the crime data revealed distinct clustering patterns that mirror many of the relationships described in the literature. Assault, shootings, and homicides form a broad U- or O-shaped distribution that aligns with neighborhoods exhibiting lower average incomes and higher unemployment rates. These patterns echo global findings on deprivation and violence.

Downtown Toronto—particularly the area surrounding Union Station—emerges as the city’s highest-density crime hotspot. This zone features extremely high connectivity, car-centric infrastructure, dense commercial and mixed land use, and limited green space. These conditions resemble those identified by Liu, Zhao, and Wang (2025), where transit hubs and high-traffic commercial districts generate elevated rates of property and violent crime. Google Earth imagery further highlights the concentration of major built-form features that attract large daily populations and mobility flows, reinforcing the clustering of assaults and break-and-enter incidents in the downtown core.

Auto theft is relatively evenly distributed across the city and shows weaker clustering around transit or commercial nodes. However, areas with lower incomes and higher unemployment still show modestly higher auto-theft levels. Break and enter incidents, by contrast, concentrate more strongly in high-income neighborhoods with lower unemployment—suggesting that offenders selectively target areas with greater material assets.

Across all crime categories, one consistent pattern is the notable absence of incidents within large green spaces such as High Park and Rouge National Urban Park. This supports the broader literature connecting green space with lower crime and improved perceptions of safety (Mejia & Romero, 2025; Baran et al., 2014). Furthermore, as described, different kinds of crime occur in low- versus high-income neighbourhoods, emphasizing a need for context-specific resolutions that take crime type and socio-economics into consideration.

Synthesis and Relevance for Toronto

Collectively, these findings indicate that crime in Toronto is shaped by intersecting socioeconomic factors, environmental features, and mobility patterns. Downtown crime clustering reflects high density, transit connectivity, and land-use complexity; outer-neighborhood violence aligns with deprivation; and green spaces consistently correspond with lower crime. These patterns mirror global research emphasizing the role of social cohesion, urban form, and economic inequality in shaping crime distribution.

Understanding these relationships is essential for planning decisions around green infrastructure investments, targeted social services, transit-area safety strategies, and neighbourhood-specific interventions. Ultimately, integrating environmental design, socioeconomic supports, and community-based programs can support safer, healthier, and more equitable outcomes for Toronto residents.

Full-Stack Geovisualization of Ontario’s Sanitation Infrastructure

Geovis Project Assignment, TMU Geography, SA8905, MSA Fall 2025

By Roseline Moseti

Critical decision-making requires access to real-time spatial data, but traditional GIS workflows often lag behind source updates, leading to delayed and sometimes inaccurate visualizations. This project explores an alternative: a Full-Stack Geovisualization Architecture that integrates the database, server, and client, forming a robust pipeline for low-latency spatial data handling. Unlike conventional systems, this architecture ensures every update in the SQL database is immediately visible through web services on the user’s display, preserving data integrity for time-sensitive decisions. By bridging the gap between static datasets and interactive mapping, the unified platform makes it easier and more intuitive for users to understand complex spatial relationships. Immediate synchronization and a user-friendly interface guarantee that decision-makers rely on the most accurate, up-to-date spatial information when it matters most.

The Full-Stack Pipeline

The term Full-Stack is key because it means that one has control over the integration of every layer of the application, ensuring seamless synchronization between the raw data and its interactive visualization.

Part 1: Establish the Database Connection

The foundation for this project is reliable data storage. For this project we use Microsoft SQL Server to host our key datasets: wastewater plants, lakes, and rivers, as well as the boundary of Ontario. In this initial step, the database connection is established between the local desktop GIS environment (ArcGIS) and the database through an ArcGIS Database Connection file (.sde).

Once the connection is verified, ArcGIS tools are used to import the shapefile data directly into the SQL Server structure. This transforms the SQL database into an Enterprise Geodatabase, which is a spatial data model capable of storing and managing complex geometry.

The result is a highly structured repository where all attribute data and spatial features are stored, managed and indexed by the SQL engine.

Part 2: The Backend: Data Access & APIs

With the spatial data centralized in the SQL Database, the next part is creating the connection to serve this data to the web. The backend is the intermediary between the database and the end-user display. The crucial first step is establishing the programmatic connection to the database using a secure connection string. The key aspects here are the server name and the name of the database itself; without the correct details, the data won’t be retrieved from the database.

To translate the raw database connection into usable web services, ASP.NET Core and Entity Framework are used for data management. With this set up, the API endpoints that the front-end application will call are defined. This structure ensures that the web application interacts only with the secure, controlled services provided by the backend, maintaining both security and data integrity.

When a user loads the site, the entire system is triggered: the user’s web browser sends a request to the server; the server, using C#, accesses the database, pulls the relevant spatial data, and transforms it. The server packages the processed data into a web-ready format and sends it back to the browser, which instantly reads that package and draws the interactive map on the screen.

Part 3: Data Modelling: Mapping the Shapefiles to the Code

Before spatial data can be accessed by the backend services, the application must understand how the data is structured and how it corresponds to the tables created in the SQL Database. This is handled by a data model for each shapefile, reflecting its individual columns.

The data context is where the C# model is explicitly mapped to the database structure using Entity Framework Core, ensuring data synchronization and proper interpretation.

This robust modelling process ensures that the data pulled from the SQL Database is always treated as the original spatial data, ready to be processed by the backend and visualized on the frontend.

Part 4: Data Translation Engine (C# and GeoJSON)

The SQL Database stores location data using accurate, specific data types that are native to the server environment (e.g., geometry or geography). A web browser doesn’t understand these data types; it needs data delivered in a simpler, lighter, universally readable format. This is where the translation engine comes into play, powered by C# back-end logic.

C# acts as the translator: it pulls the coordinate data from the SQL database and converts it into GeoJSON, a simple, standardized, text-based format that all modern web mapping libraries understand. By converting everything to GeoJSON, the C# back-end creates a single, clean package that ensures data consistency and allows the map to load quickly and without errors, regardless of the user’s device or operating system.
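The project’s translator is written in C# on ASP.NET Core; purely to illustrate the idea, here is the same translation step sketched in Python (names and the point-only geometry are assumptions):

import json

def rows_to_geojson(rows):
    # rows: (name, lon, lat) tuples pulled from the database.
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"name": name},
        }
        for name, lon, lat in rows
    ]
    return json.dumps({"type": "FeatureCollection", "features": features})

# e.g. a row from a hypothetical wastewater-plants table:
print(rows_to_geojson([("Example WWTP", -79.38, 43.65)]))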

Part 5: The Front-end: Interactive Visualization

The most visible piece of the puzzle is the interactive presentation. This is the key deliverable: embedding a powerful Geographic Information System (GIS) entirely within the user’s browser. Traditional desktop GIS software is bypassed; users don’t need to install anything, purchase licenses, or learn a complex interface, but simply open a web page.

The real power lies in the integration between the frontend and the backend services established. As described above, loading the site triggers a request to the C# API, which pulls the relevant spatial data from the database, transforms it, and returns a web-ready package that the browser draws as the interactive map.

Once the GeoJSON data arrives, a JavaScript mapping library is used to render the interactive map. The presentation is a live tool designed for exploring data. Users can click on any WWTP icon or river line to instantly pull up its database attributes. The map is displayed alongside all supporting elements that provide further details, without the user needing to leave the browser window.

Looking Ahead!

The Full-Stack architecture is not just about current visualization; it establishes a powerful, scalable platform for the future. As the server side is already robust and designed to process large geospatial datasets, it is well suited to integrating more advanced features. This structure allows for the seamless addition of modules for complex analysis, such as using Python for predictive simulations or proximity modelling. The main benefit is that when these complex server-side analyses are complete, the results can be immediately packaged into GeoJSON and visualized on the existing map interface, turning static data into dynamic, predictive insights.

This Geovis project is designed to be the foundational geospatial data hub for Ontario’s water management, built for both current display needs and future analytical challenges.

See you on the second edition!

The Intersection of Geography and Athletic Effort

SA8905 – Master of Spatial Analysis, Toronto Metropolitan University

A Geovisualization Project by Yulia Olexiuk.

Introduction

It is a commonly known fact that all marathons are the same length, but are they created equal? Long distance running performance depends on more than just fitness and training. The physical environment plays a significant role in how runners exert effort. Whether it be terrain, slope, humidity, or temperature, marathons around the world present distinct geographic challenges. In this case, three races on three continents are compared. Boston’s rolling topography often masks the difficulty of its course, such as its infamous Heartbreak Hill, and Singapore’s hot and humid climate has athletes start running before dawn to beat the sun.

Data

  • GPS data for the Boston, Berlin, and Singapore Marathons were sourced from publicly available Strava activities, limited to routes that runners had marked as public. Each route was checked for dense point resolution, clean timestamps, and minimal GPS noise, then downloaded as a .GPX file.

Figure 1. Getting .GPX data from Strava.

  • Using QGIS, the .GPX files were first inspected and cleaned, then converted to GeoPackage format and imported into ArcGIS Pro, where they were transformed into both point feature classes and polyline feature classes. The polyline class was then projected using appropriate city-specific coordinate systems (ETRS89 / UTM Zone 33N, NAD83 / Massachusetts Mainland, etc.). The DEMs were sourced from the Living Atlas database and are labelled Terrain3D.
  • I used the Open-Meteo API to make queries for each marathon’s specific race day, specifying the geographic coordinates, local timezone, and hourly variables including temperature (degC), humidity (%), wind speed (km/h), and precipitation (mm); a minimal query sketch follows this list. The DEM was integrated with ArcGIS Pro’s Add Surface Information and Extract Multi-Values to Points tools to derive slope, elevation range, and elevation gain per kilometre. The climate data returned by the API was in JSON format and was converted to .CSVs with Excel Power Query.
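A minimal sketch of such a query using Python’s requests library against Open-Meteo’s historical-weather endpoint (the coordinates here are illustrative; the endpoint and parameter names follow Open-Meteo’s public documentation, so verify them before relying on this):

import requests

# Hourly weather for the Berlin Marathon race day (illustrative values).
params = {
    "latitude": 52.52, "longitude": 13.40,
    "start_date": "2023-09-24", "end_date": "2023-09-24",
    "hourly": "temperature_2m,relative_humidity_2m,"
              "wind_speed_10m,precipitation",
    "timezone": "Europe/Berlin",
}
resp = requests.get("https://archive-api.open-meteo.com/v1/archive", params=params)
resp.raise_for_status()
hourly = resp.json()["hourly"]  # parallel lists, one per variable
print(hourly["temperature_2m"][:5])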

Software/Tools

  • ArcGIS Pro: Used to transform the data, make web layers, map routes, and calculate fields to derive valuable runner information.
  • QGIS: Used to clean and inspect the .gpx files imported from Strava.
  • Experience Builder: Used to create an interactive dashboard for the geospatial data.

Methodology

  • The workflow for this project began with extensive preprocessing of GPS track data sourced from public Strava activities. Each GPX file was inspected, cleaned, and converted into usable spatial geometry, with all layers re-projected into city-appropriate projected coordinate reference systems. Fields were then calculated for pace per kilometre, elevation gain per kilometre, maximum slope, and mean slope, using a combination of the Generate Points Along Lines, Split Line at Measure, and Add Surface Information tools.

Figure 2. GPX point layer undergoing a spatial join. 

  • Visualization design was the main cornerstone of the project’s approach. Race maps employed accessible, easy-to-comprehend gradients to represent sequential variables such as pace, slope, and elevation gain, while the dashboard created through Experience Builder enabled dynamic comparison across the three cities.

Figure 3. Slider showing the patterns and relationships between average pace and elevation of the Berlin marathon.

Results and Discussion

Relationship between Pace and Terrain

  • Berlin displays the most consistent and fastest pacing profile, with minimal variation in slope and only 27 metres of total elevation difference.
  • Boston, on the other hand, showed more variability at each consecutive marker due to its hilly terrain. The geovisualizations clearly highlight slowdowns on the climb leading to Heartbreak Hill, followed by pace recoveries on downhill segments.
  • Surprisingly, the Singapore marathon route had a different performance dynamic, but not in the way that was initially assumed. Despite having the same 135-metre elevation difference as Boston, participants also faced environmentally-centred constraints, not only terrain-based difficulty.
  • Pacing inconsistency can coincide with high humidity and hot overnight temperatures, showing viewers how tropical climate conditions can demand a different form of endurance.

Figure 4. Chart demonstrating the recorded temperature in degrees Celsius at the time of each race day. Note that the date was omitted due to the differing years, days, and months of each marathon, so the duration of the race is the primary focus.

Figure 5. Chart comparing the relative humidity (%) between the marathon cities during race day.

Environmental Conditions and Weather During Race Day

  • It’s interesting to note that each city hosts its marathon at a very different time of year. For example, the Boston marathon used in the case study was held on April 17th, 2023. Berlin hosted its race on September 24th, 2023, and Singapore hosted its annual marathon on December 2nd, 2012. Boston usually starts its race around 8:00 AM; Berlin starts an hour later, at 9:00 AM local time. Lastly, Singapore begins the marathon at 4:30 AM, presumably to avoid the midday heat, which reaches the high 30s (degrees Celsius) by noon.
  • This integration of hourly weather data highlights how climate interacts with geography to shape athletic effort. Berlin demonstrates ideal running conditions, with cool, stable temperatures and steady wind speeds, which explains the fast, consistent pacing. Boston shows slightly more variable weather, perhaps owing to its New England coastal location. Singapore saw the most influential weather impact, with humidity exceeding 80% for the majority of the race (Figure 5) and persistently hot temperatures even throughout the night before.

Limitations

  • I experienced many limitations making this geovisualization, including the fact that the project relies on public Strava .GPX data, which can vary in precision depending on the accuracy of each runner’s device, whether phone or watch, and even satellite reception.
  • Also, though it was a good idea to use the data of some top performers of the marathon to see where a well-conditioned athlete naturally takes more time and slows their pace, I wish more average-participant data had been available so that a more typical experience could be mapped.
  • Furthermore, I was unable to match the weather data directly to specific kilometres, so it instead serves as contextual aid rather than a precise environmental measurement.

Conclusion

I think this geovisualization project does an effective job demonstrating how terrain and climate distinctly shape marathon performance across Boston, Berlin, and Singapore, and I believe that visuals like these can be super fascinating, whether just to satisfy curiosity or to plan strategically for a future race. Happy Mapping!

The Globalization of Tim Hortons

Sughra Syeda Rizvi, Geovis Project Assignment, TMU Geography, SA8905, Fall 2025

Hey everyone! For my Geovisualization project, I wanted to explore something familiar, cultural, and globally recognizable: the international expansion of Tim Hortons. Although the brand is iconic in Canada, fewer people know just how far it has spread across the world. My goal was to create an interactive, chronological StoryMap that guides viewers through the first Tim Hortons location in every country where the brand operates.
To do this, I combined online research, geographic coordinates, and a storytelling platform that lets users “travel” from country to country using a next-slide navigation system. The result is an easy-to-follow visual timeline that maps the company’s journey from Hamilton to the Middle East, Asia, and beyond.

Step 1:
I began by compiling a list of all countries where Tim Hortons has opened at least one store. Since there is no single authoritative dataset, I had to build this list manually.

I used official Tim Hortons country pages and local news sources (e.g., CBC News, Bahrain News, Malaysia Lifestyle Asia).

These articles helped me confirm the country, the city and address of the first-ever location, and the year the branch opened.

Because accuracy matters in mapping, I double-checked every location to make sure my information was credible, date-verified, and sourced properly.

Step 2: Finding Coordinates for Each First Store

Once I confirmed each opening location, I searched for its latitude and longitude using Google Maps.

This ensured that the map zooms into the exact spot of each first store instead of a general city view. If a building no longer existed or the listing wasn’t clear, I used the most reliable archival address information available.

Step 3: Creating the StoryMapJS Project

After gathering all my data, I went to StoryMapJS (Knight Lab), signed in, and created a new project. I gave it a title that reflects the purpose of the visualization: showing Tim Hortons’ global growth through time.

StoryMapJS works slide-by-slide. Each slide represents a location on the map paired with text and an image. Because the platform automatically pans to your coordinates, it creates a smooth storytelling experience.

Step 4: Designing the Title Page

My first slide acts as a title/introduction page.

I left the location fields empty so that StoryMapJS wouldn’t zoom anywhere yet. Instead, I wrote a short explanation of the project: what it shows, why I chose the topic, and how to navigate the timeline.

This gives viewers context before they begin clicking through the countries.

Step 5: Adding Slides for Each Country

This is where the main work happens.

For every country, I clicked “Add Slide” in the left panel, entered the country’s name, put in the latitude and longitude, added a title, and wrote a short description with an in-text citation of the source. I also inserted an image that visually represents the branch.

StoryMapJS then automatically zooms into the exact coordinates and animates the movement from one country to the next.

This creates a chronological touring experience, allowing viewers to follow Tim Hortons’ global expansion in the order it happened.

Step 6: Adding a Final References Slide

Because this project relies heavily on online sources, I added a final references slide listing all of the sources I used, formatted as APA citations.

Step 7: Publishing the StoryMap

When all slides were complete, I clicked “Save”, then “Publish Changes”, then “Share”, and finally “Copy Public Link”.

This generated a shareable URL that can be posted online, embedded, or submitted for grading.

I hope this tutorial was helpful! Here is the link to check out my geovis project yourself, along with a QR code:

https://tinyurl.com/54w48bdd

Watt a Change: Toronto’s EV Adoption from 2022-2025

A Geovisualization Project by Rukiya Mohamed
SA8905 – Master of Spatial Analysis, Toronto Metropolitan University

As Toronto pushes toward a cleaner, electrified future, electric vehicles (EVs) are steadily becoming part of the urban landscape. But adoption isn’t uniform: some neighbourhoods surge ahead, others lag behind, and the geography of the transition tells an important story.

For this project, I created a time-series animation in ArcGIS Pro that maps how EV ownership has grown year by year across Toronto’s Forward Sortation Areas (FSAs) from 2022 to 2025. Instead of four static maps, the animation brings the data to life, showing how adoption spreads, accelerates, and concentrates across the city.

Background: Why Map EV Adoption?

EVs are central to Toronto’s climate goals, but adoption reflects deeper social and economic patterns. Mapping where EVs are growing helps us understand:

  • Which neighbourhoods lead the transition
  • Which areas may require more investment in charging infrastructure
  • How adoption relates to broader demographic and mobility trends
  • Where planning and policy may be falling out of sync with real demand

This project focuses on how the transition is happening across Toronto—not as a single snapshot but as a dynamic temporal process.

Data Sources

1. Ontario Data Portal – Electric Vehicles by Forward Sortation Area

  • Battery Electric Vehicles (BEVs)
  • Plug-in Hybrid Electric Vehicles (PHEVs)
  • Total EVs
  • Quarterly counts (Q1 2022 → Q1 2025)
  • Aggregated by FSA (e.g., M4C, M6P)

2. Toronto Open Data Portal – FSA Boundary File

  • Used to isolate FSAs located within the City of Toronto
  • Provides the spatial geometry needed for mapping

Methodology

This workflow combines Excel preprocessing with ArcGIS Pro spatial joins and animation tools. Below is the full process with a clear tutorial so readers can replicate it.

Step 1: Cleaning EV Data in Excel

The raw EV file arrived as a single unformatted line. To prepare it:

  1. Used TRIM() and Text to Columns to separate fields into:
    • FSA
    • BEVs
    • PHEVs
    • Total EVs
    • Year/Quarter
  2. Created four separate datasets for:
    • 2022
    • 2023
    • 2024
    • 2025
  3. Saved each file as a CSV for import into ArcGIS Pro (a pandas equivalent of this cleanup is sketched below).
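The cleanup above was done in Excel; an equivalent sketch in pandas (the file and column names are assumptions):

import pandas as pd

# Split the raw export into one CSV per year, mirroring the Excel steps.
raw = pd.read_csv("ev_by_fsa_raw.csv")
raw.columns = [c.strip() for c in raw.columns]  # equivalent of TRIM()
for year in (2022, 2023, 2024, 2025):
    mask = raw["Year_Quarter"].astype(str).str.startswith(str(year))
    raw[mask].to_csv(f"ev_fsa_{year}.csv", index=False)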

Step 2: Bringing Data into ArcGIS Pro

  1. Imported the Toronto FSA shapefile.
  2. Imported each annual EV CSV.
  3. Joined each annual EV table to the FSA boundary polygons using the common FSA code.
  4. Calculated EVs per 1000 persons using population from the Toronto boundary attributes.
  5. Created four clean layers, one per year, ready for mapping.

Step 3: Building Choropleth Maps (2022–2025)

For each year layer:

  • Applied graduated colours to map total EVs (or EVs per 1000 people).
  • Used a consistent classification scheme to ensure comparability.
  • Selected a warm-to-cool colour ramp to highlight hotspots vs. low-adoption areas.

This produced four individually interpretable annual maps.

But the real magic comes next.

Step 4: Creating the EV Time-Series Animation in ArcGIS Pro

1. Enable Animation

  • Go to the View tab → Add Animation
    A timeline appears at the bottom of the screen.

2. Set Keyframes

For each year layer:

  1. Turn ON only the layer for that year.
  2. Set symbology and map extent exactly as desired.
  3. Click Create Keyframe.
    Result:
  • Keyframe 1 = 2022
  • Keyframe 2 = 2023
  • Keyframe 3 = 2024
  • Keyframe 4 = 2025

3. Adjust Timing

  • Set transitions between keyframes to 2–3 seconds each.
  • Add fades if desired for smoother blending.

4. Export the Video

  • Go to Share → Export Animation
  • Output as MP4 at 1080p for clean blog-quality visuals
  • The final result is a smooth year-to-year playback of EV expansion across Toronto.

Results: What the Animation Reveals

The time-series visualization shows clear geographic patterns:

  • Downtown and midtown FSAs lead early adoption, reflecting higher incomes and better access to early charging infrastructure.
  • Inner suburbs show accelerating year-to-year growth, especially from 2023 onward.
  • By 2025, EV ownership forms a distinct corridor of high adoption stretching from the waterfront north through midtown and over to Etobicoke and North York.

Seeing these shifts unfold dynamically (rather than as four separate maps) reveals momentum, not just differences.

Mapping and Printing Toronto’s Socioeconomic Status Index in 3D

Menusan Anantharajah, Geovis Project Assignment, TMU Geography, SA8905, Fall 2025

Hello, this is my blog post!

My Geovis project will explore the realms of 3D mapping and printing through a multi-stage process that utilizes various tools. I have always had a small interest in 3D modelling and printing, so I selected this medium for the project. Although this is my first attempt, I was quite pleased with the process and the results.

I decided to map out a simplified Socioeconomic Status (SES) Index of Toronto’s neighbourhoods in 2021 using the following three variables:

  • Median household income
  • Percentage of population with a university degree
  • Employment rate

It should be noted that since these variables exist on different scales, they were standardized using z-scores and then scaled to a 0-100 range. The neighbourhoods will be extruded by the SES index value, meaning that high-scoring neighbourhoods will be taller. I chose SES since it would be interesting to physically visualize the disparities and differences between the neighbourhoods by height.

Data Sources

Software

A variety of tools were used for this project, including:

  • Excel (calculating the SES index and formatting the table for spatial analysis)
  • ArcGIS Pro (spatially joining the neighbourhood shapefile with the SES table)
  • shp2stl* (takes the spatially joined shapefile and converts it to a 3D model)
  • Blender (used to add other elements such as title, north arrow, legend, etc.)
  • Microsoft 3D Builder** (cleaning and fixing the 3D model)
  • Ultimaker Cura (preparing the model for printing)

* shp2stl would require an older node.js installation
** Microsoft 3D Builder is discontinued, though you can sideload it

Process

Step 1: Calculate the SES index values from the Neighbourhood Profiles

The three SES variables (median household income, percentage of population with a university degree, employment rate) were extracted from the Neighbourhood Profiles table. Using Microsoft Excel, these variables were standardized using z-scores, then combined into a single average score, and finally rescaled to a 0-100 range. I then prepared the final table for use in ArcGIS Pro, which included the identifiers (neighbourhood names) with their corresponding SES values. After this was done, the table was exported as a .csv file and brought over to ArcGIS Pro.
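As a sketch, the same index can be computed in pandas (file and column names are assumptions; ‘SES_z’ matches the field extruded later in shp2stl):

import pandas as pd

df = pd.read_csv("neighbourhood_profiles.csv")
cols = ["median_income", "pct_university", "employment_rate"]

z = (df[cols] - df[cols].mean()) / df[cols].std()   # z-score each variable
ses = z.mean(axis=1)                                # combine into one score
df["SES_z"] = 100 * (ses - ses.min()) / (ses.max() - ses.min())  # 0-100

df[["neighbourhood", "SES_z"]].to_csv("TO_SES_index.csv", index=False)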

Step 2: Create the Spatially Joined Shapefile using ArcGIS Pro

The neighbourhood boundary file and the newly created SES table were imported into ArcGIS Pro. Using the Add Join feature, the two data sets were combined into one unified shapefile, which was then exported as a .shp file.

The figure above shows what the SES map looks like in a two-dimensional view. The areas with lighter hues represent neighbourhoods with low SES values, while the ones in dark green represent neighbourhoods with high SES values.

Step 3: Convert the shapefile into a 3D model file using shp2stl

Before using shp2stl, make sure that you have an older version of node.js (v11.15.0) and npm (6.7.0) installed. I would also recommend placing your shapefile in a new directory, as it can later be utilized as a Node project folder. Once the shapefile is placed in a new folder, you can open the folder in Windows Terminal (or Command Prompt) and run the following:

npm install shp2stl

This will bring in all the necessary modules into the project folder. After that, the script can be written. I created the following script:

const fs = require('fs');
const shp2stl = require('shp2stl');

// Convert the joined shapefile to an STL model, extruding each
// neighbourhood polygon by its SES index value.
shp2stl.shp2stl('TO_SES.shp', {
  width: 150,           // target model width (output units)
  height: 25,           // maximum extrusion height
  extraBaseHeight: 3,   // solid base beneath the polygons
  extrudeBy: "SES_z",   // attribute field that drives the extrusion
  binary: true,         // write a binary (smaller) STL
  verbose: true         // log progress to the console
}, function(err, stl) {
  if (err) throw err;
  fs.writeFileSync('TO_NH_SES.stl', stl);
});

This script was written using Visual Studio Code; however, you can use any text editor (even Notepad works). The script was saved to a .js file in the project folder and then executed in Terminal using this:

node shapefile_convert.js

The result is a 3D model that looks like this:

Since we only have Toronto’s neighbourhoods, we have to import this into Blender and create the other elements.

Step 4: Add the Title, Legend, North Arrow and Scale Bar in Blender

The 3D model was brought into Blender, where the other map elements were created and added alongside the core model. To create the scale bar for the map, the 3D model was overlaid onto a 2D map that already contained a scale bar, as shown in the following image.

After creating the necessary elements, the model needs to be cleaned for printing.

Step 5: Cleaning the model using Microsoft 3D Builder

When importing the model into 3D Builder, you may encounter this:

Once you click to repair, the program should be able to fix various mesh errors like non-manifold edges, inverted faces or holes.

After running the repair tool, the model can be brought into Ultimaker Cura.

Step 6: Preparing the model for printing

The model was imported into Ultimaker Cura to determine the optimal printing settings. As I had to send this model to my local library to print, this step was crucial to see how the changes in the print settings (layer height, infill density, support structures) could impact the print time and quality. As the library had an 8-hour print limit, I had to ensure that the model was able to be printed out within that time limit.

With this tool, I was able to determine the best print settings (0.1 mm fine resolution, 10% infill density).

With everything finalized from my side, I sent the model over to be printed at the library; this was the result:

Overall, the print of the model was mostly successful. Most of the elements were printed out cleanly and as intended. However, the 3D text could not be printed with the same clarity, so I decided to print out the textual elements on paper and layer them on top of the 3D forms.

The following is the final resulting product:

Limitations

While I am still satisfied with the end result, there were some limitations to the model. The model still required further modifications and cleaning before printing; this was handled by the library staff at Burnhamthorpe and Central Library in Mississauga (huge shoutout to them). The text elements were also messy, which was expected given the size and width of the typeface used. One improvement to the model would be to print the elements separately and at a larger scale; this would ensure that each part is printed more clearly.

Closing Thoughts

This project was a great learning experience, especially for someone who had never tried 3D modelling and printing before. It was also interesting to see the 3D map highlighting the disparities between neighbourhoods; some neighbourhoods with high SES index values were literally towering over the disadvantaged bordering neighbourhoods. Although this project began as an experimental and exploratory endeavour, the process of 3D mapping revealed another dimension of data visualization.

References

City of Toronto. (2025). Neighbourhoods [Data set]. City of Toronto Open Data Portal. https://open.toronto.ca/dataset/neighbourhoods/ 

City of Toronto. (2023). Neighbourhood profiles [Data set]. City of Toronto Open Data Portal. https://open.toronto.ca/dataset/neighbourhood-profiles/

3D Visualization of Traffic Collision Hotspots in Toronto (2022)

Geovis Project Assignment, TMU Geography, SA8905, Fall 2025
By: Haneen Banat

Introduction

Traffic collisions are a major urban safety concern in large cities like Toronto, where dense road networks, high population, and multimodal movement create complex interactions between drivers, pedestrians, cyclists, and transit. Traditional 2D maps and tables can represent collision statistics, but they often fail to communicate spatial intensity or the “feel” of risk across neighbourhoods. For this project, I explore how GIS, 3D modeling, and architectural rendering tools can work together to reimagine collision data as a three-dimensional, design-driven geovisualization.

My project, 3D Visualization of Traffic Collision Hotspots in Toronto, transforms Toronto Police Service collision data into an immersive 3D map. The goal is to visualize where collisions are concentrated, how spatial patterns differ across neighbourhoods, and how 3D storytelling techniques can make urban safety data more intuitive and visually compelling for planners, designers, and the public. I use a multi-software workflow that spans ArcGIS Pro, the City of Toronto’s 3D massing data, SketchUp, and Lumion. This project demonstrates how cartographic tools can support modern spatial storytelling, blending urban analytics with design. 

Data Sources

Toronto Police Open Data Portal

Dataset: Traffic Collisions (ASR-T-TBL-001)
Link: https://data.torontopolice.on.ca

This dataset includes over 770,000 collision records across many years. Each record includes location, date, time, collision type, mode involved, and other attributes. Because the full dataset is extremely large and includes COVID-period anomalies, I filtered the dataset to only the year 2022. This produced roughly 50,000-60,000 collision records. For this project, only automobile collisions were used. I downloaded the geodatabase file as a CSV.

The second piece of data needed was the City of Toronto – Neighbourhood Boundaries file. Link: https://open.toronto.ca/dataset/neighbourhoods/

The third piece of data is the City of Toronto Planning 3D Massing Model. Link: https://cot-planning.maps.arcgis.com/apps/webappviewer/index.html?id=161511b3fd7943e39465f3d857389aab

This dataset includes 3D building footprints and massing geometry. I downloaded individual massing tiles in SketchUp format (.skp) for the neighbourhoods with the highest hotspot scores. Because each tile is extremely heavy, I imported them piece by piece.

Software Used:

  • ArcGIS Pro: filtering, spatial join, hotspot analysis
  • SketchUp: extrusion modeling and colour classification
  • Lumion: 3D rendering, lighting, and final visuals

Methodology

This project required a multi-stage workflow spanning GIS analysis, CAD conversion, 3D modeling, and rendering. The workflow is divided into the following steps.

Step 1: Data Cleaning & Hotspot Analysis using ArcGIS Pro

Filtering Collision Data:

The Police dataset originally contained 772,000 records.

  1. Applied a filter for OCC_DATE = 2022
  2. Removed non-automobile collisions
  3. Ensured that only records with valid geometry were included
  4. Downloaded as File Geodatabase (shapefile download was corrupt)

After filtering, the dataset was reduced to a manageable 50,000 records.
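A minimal ArcPy sketch of this filtering step, assuming hypothetical layer and field names (Traffic_Collisions, OCC_DATE, AUTOMOBILE; the actual Toronto Police schema may differ):

import arcpy

# Keep only 2022 automobile collisions (layer and field names are assumptions).
arcpy.management.SelectLayerByAttribute(
    "Traffic_Collisions", "NEW_SELECTION",
    "OCC_DATE >= timestamp '2022-01-01 00:00:00' AND "
    "OCC_DATE < timestamp '2023-01-01 00:00:00' AND "
    "AUTOMOBILE = 'YES'"
)

# Export the selection to a clean feature class in the project geodatabase.
arcpy.conversion.ExportFeatures("Traffic_Collisions", "Collisions_2022_Auto")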

Step 2: Spatial Join (Join collisions to neighbourhoods)

To understand the spatial distribution, I joined the collision points to Toronto’s 158 neighbourhood polygons.

Tool: Spatial Join

  • Target features: Neighbourhoods
  • Join features: Collisions
  • Join operation: JOIN_ONE_TO_ONE
  • Match option: INTERSECT
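A hedged arcpy sketch of this join (layer and output names are hypothetical):

import arcpy

# One output polygon per neighbourhood; the Join_Count field records
# how many collision points intersect each polygon.
arcpy.analysis.SpatialJoin(
    target_features="Neighbourhoods",
    join_features="Collisions_2022_Auto",
    out_feature_class="Neighbourhoods_CollisionCounts",
    join_operation="JOIN_ONE_TO_ONE",
    match_option="INTERSECT",
)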

Step 3: Hotspot Analysis (Optimized Hot Spot Analysis)

Tool: Optimized Hot Spot Analysis
Input: All collision points
This produced a statistically significant hotspot/coldspot map: white = not significant, red = high-risk clusters, blue = low-risk clusters.
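A minimal sketch of the tool call (output name hypothetical):

import arcpy

# Optimized Hot Spot Analysis aggregates the incident points and computes
# Getis-Ord Gi* statistics at an automatically selected scale of analysis.
arcpy.stats.OptimizedHotSpotAnalysis(
    Input_Features="Collisions_2022_Auto",
    Output_Features="Collision_HotSpots",
)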

Step 4: 3D Map in SketchUp

Importing the Massing Tiles into SketchUp

The SketchUp-format massing tiles downloaded earlier were used at this stage. Each tile must be downloaded separately, and each section was identified through the labels that were automatically applied.

Applying Colour Classification: To create a visually intuitive gradient, very high hotspots were coloured red and low-collision areas blue.

Step 5: Rendering & Visualization in Lumion

Importing the SketchUp Model: The extruded model was imported into Lumion for realistic visualization.

Adding Atmospheric Lighting: Because this visualization focuses on highlighting hotspot intensities, I chose a nighttime scene and reduced the sunlight intensity. Spotlights were added at various heights, with colours representing collision intensity: blue = lower-risk areas, red = high-collision zones.

Camera Movement & Composition: I created multiple camera angles to show nighttime lighting reflecting collision intensity, panoramic views of the 3D collision landscape, and close-ups of high-risk clusters.

Step 6: Exporting the Final Renders

Each neighbourhood was rendered at the chosen camera angle and exported.

Results

1. Downtown and Waterfront

Areas such as St. Lawrence–East Bayfront–Islands, Harbourfront–CityPlace, Fort York–Liberty Village, and West Queen West showed extremely high collision densities.

2. Inner Suburban Belt

Neighbourhoods like South Riverdale, Annex, Dufferin Grove, and Trinity-Bellwoods exhibited moderate-to-high collision intensity, correlating with high pedestrian and cyclist activity.

3. Lower-risk Zones

Coldspots appeared mainly in low-density residential neighbourhoods with fewer arterial roads.

3D Advantages

The extruded heights and nighttime lighting made it possible to see at a glance:

  • Which areas had the most collisions
  • How intensity changes across neighbourhoods
  • Where the city might focus safety interventions

Limitations

  • Massing data is extremely large: Importing all of Toronto was impossible due to memory and file size constraints; only selected hotspot tiles were used.
  • Temporal variation ignored: This project analyzed only 2022, not multi-year trends.
  • Hotspot analysis generalizes clustering: While statistically robust, it does not differentiate between collisions caused by traffic volume, infrastructure, or behavioural factors.
  • Rendering is interpretive: Height and colour were designed for visual storytelling rather than strict quantitative precision.
  • Limited interactivity: The 3D render isn't interactive unless you have access to the software used, either SketchUp or Lumion.

Conclusion

This project demonstrates how collision data can be transformed from static points into an immersive 3D visualization that highlights urban road safety patterns. By integrating ArcGIS analysis with architectural modeling tools like SketchUp and Lumion, I created a geographically accurate, data-driven 3D landscape of Toronto’s collision hotspots.

The final visualization shows where collisions cluster most intensely, provides intuitive spatial cues for understanding road safety risks, and showcases the potential of hybrid cartographic-design workflows. This form of neo-cartography offers a compelling way to communicate urban safety information to planners, designers, policymakers, and the public.

Geospatial Assessment of Solar Glare Hazard Potential on Urban Road Network (Mississauga, ON)

1. Introduction and Objectives
This report documents the methodology and execution of a geospatial analysis aimed at identifying specific segments of the road network with a high potential for dangerous solar glare during critical commute times.
The analysis focuses on the high-risk window for solar glare in the Greater Toronto Area (GTA), typically the winter months (January) around the afternoon commute (4:00 PM EST), when the sun is low on the horizon and positioned in the southwest.
The primary objectives were to:
1. Calculate the average solar position (Azimuth and Elevation) for the defined high-risk period.
2. Determine the orientation (Azimuth) of all road segments in the StreetCentreline layer.
3. Calculate the acute angle between the road and the sun (R_S_ANGLE).
4. Filter the results to identify segments where the road is both highly aligned with the sun and the driver is traveling into the solar direction, marking them as High Glare Hazard Potential.
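As a worked illustration of objective 3 (numbers are illustrative only, not results from the analysis), the road-sun angle is the absolute difference between the two azimuths, folded back into the 0-180° range:

# Angular difference between a road bearing and the solar azimuth,
# folded to the 0-180 range (mirrors R_S_ANGLE in the script below).
def road_sun_angle(road_az, solar_az):
    diff = abs(road_az - solar_az)
    return min(diff, 360 - diff)

print(road_sun_angle(250, 240))  # 10 -> strongly aligned with the sun
print(road_sun_angle(150, 240))  # 90 -> sun hits the side of the vehicle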
2. Phase I: ArcPy Scripting for Data Calculation
The first phase involved developing an ArcPy script to calculate the necessary astronomical and geometric values and append them to the input feature class. Due to database constraints (specifically the 10-character field name limit in certain geodatabase formats), field names were abbreviated.
2.1. Script Parameters and Solar Calculation
The script uses the approximate latitude and longitude of Mississauga, ON (43.59, -79.64), and calculates the average solar position for the first week of January 2025 at 4:00 PM EST.
2.2. Final ArcPy Script
The following Python code was executed in the ArcGIS Pro Python environment:
import arcpy
import datetime
import math

# --- User Inputs (ADJUST THESE VALUES AS NEEDED) ---
input_fc = "StreetCentreline"
MISSISSAUGA_LAT = 43.59
MISSISSAUGA_LON = -79.64
TARGET_TIME_HOUR = 16  # 4:00 PM local time
YEAR = 2025

# --- Field Names for Output (MAX 10 CHARACTERS FOR COMPLIANCE) ---
ROAD_AZIMUTH_FIELD = "R_AZIMUTH"      # Road segment's direction (calculated)
SOLAR_AZIMUTH_FIELD = "S_AZIMUTH"     # Average sun direction
SOLAR_ELEVATION_FIELD = "S_ELEV"      # Average sun altitude
ROAD_SOLAR_ANGLE_FIELD = "R_S_ANGLE"  # Angle difference (glare indicator: 0 = worst glare)

# --- Helper Functions (Solar Geometry and Segment Azimuth) ---

def calculate_solar_position(lat, lon, dt_local):
    """Calculates solar azimuth and elevation (simplified NOAA standard)."""
    TIMEZONE = -5  # EST, hours from UTC
    day_of_year = dt_local.timetuple().tm_yday
    gamma = (2 * math.pi / 365) * (day_of_year - 1 + (dt_local.hour - 12) / 24)

    # Equation of time (minutes)
    eqtime = 229.18 * (0.000075 + 0.001868 * math.cos(gamma) - 0.032077 * math.sin(gamma)
                       - 0.014615 * math.cos(2 * gamma) - 0.040849 * math.sin(2 * gamma))
    # Solar declination (the NOAA series already yields radians)
    decl = (0.006918 - 0.399912 * math.cos(gamma) + 0.070257 * math.sin(gamma)
            - 0.006758 * math.cos(2 * gamma) + 0.000907 * math.sin(2 * gamma)
            - 0.002697 * math.cos(3 * gamma) + 0.00148 * math.sin(3 * gamma))

    time_offset = eqtime + 4 * lon - 60 * TIMEZONE
    tst = dt_local.hour * 60 + dt_local.minute + dt_local.second / 60 + time_offset
    ha_deg = (tst / 4) - 180  # Hour angle (degrees)
    ha_rad = math.radians(ha_deg)
    lat_rad = math.radians(lat)

    cos_zenith = (math.sin(lat_rad) * math.sin(decl) +
                  math.cos(lat_rad) * math.cos(decl) * math.cos(ha_rad))
    zenith_rad = math.acos(min(max(cos_zenith, -1.0), 1.0))
    solar_elevation = 90 - math.degrees(zenith_rad)

    azimuth_num = -math.sin(ha_rad)
    azimuth_den = math.tan(decl) * math.cos(lat_rad) - math.sin(lat_rad) * math.cos(ha_rad)

    if azimuth_den == 0:
        solar_azimuth_deg = 180 if ha_deg > 0 else 0
    else:
        solar_azimuth_rad = math.atan2(azimuth_num, azimuth_den)
        solar_azimuth_deg = math.degrees(solar_azimuth_rad)

    solar_azimuth = (solar_azimuth_deg + 360) % 360

    return solar_azimuth, solar_elevation


def calculate_segment_azimuth(first_pt, last_pt):
    """Calculates the azimuth/bearing of a line segment (degrees from north)."""
    dx = last_pt.X - first_pt.X
    dy = last_pt.Y - first_pt.Y
    bearing_rad = math.atan2(dx, dy)
    bearing_deg = math.degrees(bearing_rad)
    azimuth = (bearing_deg + 360) % 360
    return azimuth

# --- Main Script Execution ---
arcpy.env.overwriteOutput = True

try:
    # 1. Calculate the average solar position for the first week of January
    start_date = datetime.date(YEAR, 1, 1)
    end_date = datetime.date(YEAR, 1, 7)
    total_azimuth, total_elevation, day_count = 0, 0, 0
    current_date = start_date

    while current_date <= end_date:
        local_dt = datetime.datetime(current_date.year, current_date.month,
                                     current_date.day, TARGET_TIME_HOUR, 0, 0)
        az, el = calculate_solar_position(MISSISSAUGA_LAT, MISSISSAUGA_LON, local_dt)
        if el > 0:  # Only average times when the sun is above the horizon
            total_azimuth += az
            total_elevation += el
            day_count += 1
        current_date += datetime.timedelta(days=1)

    if day_count == 0:
        raise ValueError("The sun is below the horizon for all calculated dates/times.")

    avg_solar_azimuth = total_azimuth / day_count
    avg_solar_elevation = total_elevation / day_count

    # 2. Add the required fields if they do not already exist
    for field_name in [ROAD_AZIMUTH_FIELD, SOLAR_AZIMUTH_FIELD,
                       SOLAR_ELEVATION_FIELD, ROAD_SOLAR_ANGLE_FIELD]:
        if not arcpy.ListFields(input_fc, field_name):
            arcpy.AddField_management(input_fc, field_name, "DOUBLE")

    # 3. Use an UpdateCursor to calculate and populate the fields
    fields = ["SHAPE@", ROAD_AZIMUTH_FIELD, SOLAR_AZIMUTH_FIELD,
              SOLAR_ELEVATION_FIELD, ROAD_SOLAR_ANGLE_FIELD]

    with arcpy.da.UpdateCursor(input_fc, fields) as cursor:
        for row in cursor:
            geometry = row[0]
            segment_azimuth = None
            if geometry and geometry.partCount > 0 and geometry.getPart(0).count > 1:
                segment_azimuth = calculate_segment_azimuth(geometry.firstPoint,
                                                            geometry.lastPoint)

            road_solar_angle = None
            if segment_azimuth is not None:
                angle_diff = abs(segment_azimuth - avg_solar_azimuth)
                road_solar_angle = min(angle_diff, 360 - angle_diff)  # Fold to 0-180

            row[1] = segment_azimuth
            row[2] = avg_solar_azimuth
            row[3] = avg_solar_elevation
            row[4] = road_solar_angle

            cursor.updateRow(row)

except arcpy.ExecuteError:
    arcpy.AddError(arcpy.GetMessages(2))
except Exception as e:
    print(f"An unexpected error occurred: {e}")
3. Phase II: Classification of True Hazard Potential (Arcade)
Calculating the R_S_ANGLE (0° to 90°) identifies road segments that are geometrically aligned with the sun. However, it does not distinguish between a driver traveling into the sun (High Hazard) versus traveling away from the sun (No Hazard).
To isolate the segments with a true hazard potential, a new field (HAZARD_DIR) was created and calculated using an Arcade Expression in ArcGIS Pro’s Calculate Field tool.
3.1. Classification Criteria
A segment is classified as having High Hazard Potential (HAZARD_DIR = 1) if both conditions are met:
1. Angle Alignment: The calculated R_S_ANGLE is 15° or less (indicating maximum glare).
2. Directional Alignment: The segment’s azimuth (R_AZIMUTH) is oriented within ±90° of the sun’s azimuth (S_AZIMUTH), meaning the driver is facing the sun.
3.2. Final Arcade Expression for Field Calculation
The following Arcade script was used to populate the HAZARD_DIR field (Short Integer type):
// HAZARD_DIR Field Calculation (Language: Arcade)

// Define required input fields
var solarAz = $feature.S_AZIMUTH;   // Average sun azimuth (e.g., 245 degrees)
var roadAz = $feature.R_AZIMUTH;    // Road segment azimuth (0-360)
var angleDiff = $feature.R_S_ANGLE; // Acute angle between road and sun

// 1. Check for a high glare angle (<= 15 degrees)
if (angleDiff <= 15) {

    // 2. Check if the road direction is facing INTO the solar direction
    // Calculate the acute difference between roadAz and solarAz (0-180 degrees)
    var directionDiff = Abs(roadAz - solarAz);
    var acuteDirDiff = Min(directionDiff, 360 - directionDiff);

    // If the difference is <= 90 degrees, the driver is generally facing the sun
    if (acuteDirDiff <= 90) {
        return 1; // TRUE: HIGH glare hazard potential
    }
}

return 0; // FALSE: NO glare hazard potential (angle too high or driving away from sun)
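To sanity-check the expression with illustrative numbers (hypothetical values, not taken from the final map): a segment with R_AZIMUTH = 250° against S_AZIMUTH = 240° gives R_S_ANGLE = 10°, which passes the 15° test, and a directional difference of 10°, which passes the ±90° test, so HAZARD_DIR = 1. A segment with R_AZIMUTH = 60° (heading away from the low sun) fails both tests, so HAZARD_DIR = 0.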
4. Results and Mapping of Hazard Potential
The final classification based on the HAZARD_DIR field (where 1 indicates a High Glare Hazard Potential) was used to generate a thematic map of the Mississauga road network. The map isolates the segments that will experience direct, high-intensity sun glare during the 4:00 PM EST winter commute.
4.1. Map Output Description
The map, titled “Solar Glare Hazard Map of City of Mississauga for the First Week of The Year,” clearly differentiates between segments with no glare hazard (yellow) and those with a high solar glare hazard (red).
• Yellow Segments (Street with no Solar Glare Hazard in first week of the year): These represent the vast majority of the network. They include roads running generally north-south (where the sun is primarily hitting the side of the vehicle) or segments where the driver is traveling away from the low sun angle (i.e., eastbound/northeast-bound traffic).
• Red Segments (Street with High Solar Glare Hazard in first week of the year): These are the critical segments for this analysis. They represent roads that are:
1. Oriented in the southwest-to-west direction (similar to the sun’s average azimuth).
2. Where a driver traveling along that segment would be facing directly into the low sun angle.
4.2. Analysis of Identified Hazard Corridors
The high-hazard (red) segments are predominantly clustered along major arterial roads that follow a strong east-west or northeast-southwest orientation, as shown in the following map.



• Major Corridors: A highly concentrated linear feature of red segments is visible running across the northern/central part of the city, strongly suggesting a major east-west highway or arterial road where the vast majority of segments are oriented to the west. This confirms that these major commuter corridors are the highest-risk areas for this specific time and season.
• Localized Hazards: Several smaller, isolated red segments are scattered throughout the map. These likely represent the east-west portions of minor residential streets or short segments of angled intersections where the road azimuth briefly aligns with the sun.
• Mitigation Focus: The results provide specific, actionable intelligence. Instead of deploying wide-scale mitigation efforts, the city can focus on the delineated red corridors for strategies such as:
  o Targeted message boards warning drivers during the specific 3:30 PM–5:00 PM time window in January.
  o Evaluating tree planting or physical barriers only along these identified segments to block the low western sun.
5. Conclusion and Next Steps
The integration of solar geometry (Python/ArcPy) and directional filtering (Arcade) successfully generated a definitive dataset of high-risk road segments. The final map, generated based on the HAZARD_DIR field, clearly highlights specific routes that pose a safety risk to westbound or southwest-bound drivers during the target time window.
Future steps for this analysis include:
• Expanding the calculation to include the morning commute period (e.g., 7:00 AM EST) when the sun is low in the East/Southeast.
• Integrating the analysis with collision data to validate the modeled hazard areas.
• Developing mitigation strategies, such as targeted placement of tree cover or glare-reducing signage, based on the identified high-hazard segments.

Unifying the “Megacity,” A Historical Interactive Animation of the Amalgamation of Toronto Using Canva and ArcGIS Pro

By Aria Brown

Geovis Project Assignment | TMU Geography | SA8905 | Fall 2025

Hello everyone! Welcome to my geovis blog post :)

Introduction & Context

As someone with an immense passion for geography, I have come across many opportunities to connect that passion with my other interests. Although geography is paramount among my interests, I am also quite an avid history buff. Thus, I wanted to see if I could capture both passions and merge them into one project.

Happily, I was able to produce a project that combines three very important and personal aspects of my interests: my passion for geography, my passion for history, and my appreciation for the City of Toronto, where my family’s roots in Canada first began. I decided to visualize the history of this great city, taking viewers through time to show how Toronto became what it is today, using the free-to-use design website Canva with its animation and interactive features.

Therefore, I present to you Unifying the “Megacity,” A Historical Interactive Animation of the Amalgamation of Toronto. My project takes us back to 1834, when the City of Toronto was first created, and progressively follows a timeline to bring viewers to the present.

Figure 1

Timeline that the project will follow

Data & Rationale

Recently, incorporating animation into the world of GIS has become quite a popular trend that many individuals, organizations, and companies have implemented in their work. However, animation tools can be hard to come by, and most require a fee. I therefore wanted to see whether GIS could be visualized using an easy-to-use tool that supports interactive features and animation. As some readers may know, ArcGIS Pro itself features an animation tool that uses a timeline of key frames, which the software then compiles into a rather static animation.


So you might be wondering: why not just use ArcGIS Pro? I found ArcGIS Pro’s animation and interactive features quite limiting. Users are restricted to key frames and cannot include extensive interactive features that can be played with or toggled on. I wanted to create a fun interactive animation that was almost seamless and easy to follow, without being tied to the constraints of ArcGIS Pro’s capabilities and without the fees that other animation tools and software may require. Also, Canva is widely used to create presentations, reports, and more, and I thought: why not showcase how this website that many know and love is capable of so much more?

Tools Used:

Canva- Free Graphic Design Tool

ArcGIS Pro- Desktop GIS Software

Data Used:

City of Toronto Historical Annexation Boundaries (1834-1967)- University of Toronto Dataverse

City of Toronto Community Council Boundaries- Toronto Open Data Portal

Municipal Boundaries as of 1996- Government of Canada

Methodology

Step 1: Upload Data into ArcGIS Pro and Examine the Data and its Attributes
First, I created a new map project in ArcGIS Pro and added my data. I then right-clicked my layers in the ‘Contents’ pane and opened the attribute table, where you can investigate your data and look for key information such as time frames or date fields. With this particular dataset, the attribute table featured key fields such as the name of the annexed community (Name) and the year it was annexed (DtAnxd).

Figure 2

Opened attribute table in ArcGIS Pro highlighting the Name and DtAnxd fields

Step 2: Separate the Data into Key Time Frames

To ensure that my animation was concise, I separated the data into key time frames instead of showing the progression of city boundaries one by one, as there are 51 records in total. I laid out a framework for grouping the data in Microsoft Excel, grouping records by decade or by significant time frame. I exported the attribute table to Excel using the ‘Table to Excel’ geoprocessing tool, then colour-coded my Excel document to keep my records organized and to easily visualize the beginning and end of each time frame.
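This export can also be scripted; a minimal arcpy sketch (input and output names are illustrative):

import arcpy

# Export the annexation attribute table to an Excel workbook for planning
# the time-frame groupings (layer name and path are hypothetical).
arcpy.conversion.TableToExcel("Toronto_Annexations", r"C:\temp\annexations.xlsx")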

Figure 3

Table to Excel geoprocessing tool

Figure 4

Microsoft Excel spreadsheet of my data and key information sorted by the order it will be presented in

Step 3: Use the ‘Export Features’ Tool to Export Selected Attributes

Once my data and time frames were organized, I selected the date-specific polygons using the ‘Select By Attributes’ tool and ran the expression:

Where YrAnxd (a text field containing just the year of annexation) includes the value(s) 1883 and 1884 (the years in this time frame)
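In SQL form, the expression for this time frame would look something like the following (the YrAnxd field name comes from the dataset; the listed years change for each time frame):

YrAnxd IN ('1883', '1884')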

Figure 5

Select By Attributes tool in ArcGIS Pro and the attribute table view after selecting specific attributes


After running this tool for each time frame, I exported the selected features by right-clicking the layer in the Contents pane and selecting Data → Export Features, exporting only the selected features for that time frame. Each time frame was exported to a separate shapefile; the picture below shows the export of the 1883-1884 time frame to a shapefile named ‘1883_1884’, which was the naming format I maintained for each shapefile.
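The same select-and-export cycle could also be scripted for every time frame; here is a hedged sketch (the layer name and time-frame groupings are illustrative, not the full framework):

import arcpy

# Each tuple is one time frame and the annexation years it groups together.
time_frames = [("1834", ["1834"]), ("1883_1884", ["1883", "1884"])]

for name, years in time_frames:
    values = ", ".join(f"'{y}'" for y in years)
    # Select the polygons annexed in these years...
    arcpy.management.SelectLayerByAttribute(
        "Toronto_Annexations", "NEW_SELECTION", f"YrAnxd IN ({values})"
    )
    # ...and export only the selection to a shapefile for this time frame.
    arcpy.conversion.ExportFeatures("Toronto_Annexations", rf"C:\temp\{name}.shp")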


Step 4: Customize the Map and Layout

After exporting each time frame to a separate shapefile, I edited the look of my map by changing the basemap to a more historical-looking one that fit the theme of my project, and symbolized the boundary lines, scale bar, and north arrow to match the theme as well. I made sure to bookmark the location of the map to ensure each time frame would have a seamless transition without any movement of the map itself.

To show the gradual progression of Toronto’s boundaries over time, I made sure to toggle on the shapefiles of the previous years. For example, for the 1883-1884 time frame, I kept the earlier 1834 shapefile visible to show this boundary progression.

Figure 6

Map in ArcGIS Pro with the layers of 1834 and 1883_1884 turned on

Step 5: Export Maps

I then exported each map to a PNG file by selecting the Share tab and choosing Export Layout.
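With one map per time frame, this export can also be automated; a minimal sketch, assuming each map is placed on its own layout in the project (paths hypothetical):

import arcpy

# Export every layout in the open ArcGIS Pro project to a PNG.
aprx = arcpy.mp.ArcGISProject("CURRENT")
for layout in aprx.listLayouts():
    layout.exportToPNG(rf"C:\temp\{layout.name}.png", resolution=300)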

Step 6: Upload to Canva

Each map was uploaded to Canva using the ‘Uploads’ tool on the left menu bar into a presentation-style template.

Figure 7

Uploads icon that is used to upload imagery files

Step 7: Use Presentation Template to Layout Maps and Customize

I customized each slide of my presentation by adding my maps, borders, images and icons.

Step 8: Enable Various Animation Tools Between Slides and Text

To create a rather seamless transition between time frames, I selected the ‘Dissolve’ animation by hovering my mouse over the space between each slide and selecting the ‘Add Transition’ option. Canva presents a variety of transitions to choose from; however, I selected ‘Dissolve’ as I felt it was the most seamless due to its fading animation style. I also used different animation types for the display of different text components within the slides.

Figure 8

Selecting the ‘Dissolve’ animation under the Transitions tab; after selecting the transition style, an icon will then appear between the slides (in this case, the ‘Dissolve’ transition is represented by an infinity symbol)

Step 9: Add Selectable Icons

I also wanted to make my animation more interactive, and came up with the idea of letting viewers see a historic map of Toronto for the particular time frame each slide was presenting. I got this idea from seeing historic maps used as basemaps to showcase the evolution of the city and its boundaries. I found historical maps on the City of Toronto’s Historical Maps and Atlases website and saved them, uploaded them to Canva, duplicated each slide with a blank map, and added the historic map to it. I then georeferenced the historic maps by lining up the borders between them and my map, temporarily making the historic maps slightly transparent so I could line up the borders accurately.

I added a ‘plus’ icon from Canva on the slide I wanted and configured the icon so that if it was selected, viewers would be able to see the historic map of Toronto at around the same time. This was done by selecting the plus icon and selecting the ellipsis and navigating to ‘Add Link.’ Then, I selected the particular slide I wanted in the ‘Enter a link or Search’ section. This configured the plus icon to allow viewers to select it, prompting Canva to present the map with the historic imagery overlaid on top. An additional selectable icon (back button) was created on the historic map slides to allow users to go back to the original slide they were viewing.

Figure 9

Under the ‘Elements’ tool on the left menu bar, I searched for ‘plus button’ and selected one that I liked

Figure 10

Adding a link to the plus icon and linking the desired slide

*Note: I kept all my historic map slides at the end of my presentation, as they are an optional aspect that can be viewed

Results

Link to Canva Presentation:

https://www.canva.com/design/DAG4ck_PXLQ/S60h1kAtqPjzCwqFgmYAVA/edit?utm_content=DAG4ck_PXLQ&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton

Presentation:

After the completion of the previous steps, the final product features an animated and interactive presentation of maps! The final product now has:

  • Select capabilities
  • Animated transitions between slides and text
  • Selectable pop up icons to present new information (historic map imagery)
  • Zoom in and out capabilities

I hope you are inspired by this and will try making your own interactive animation!

Thank you!

References

City of Toronto. (2018). Toronto Community Council Boundaries Options Paper. City of Toronto. https://www.toronto.ca/legdocs/mmis/2018/ex/bgrd/backgroundfile-116256.pdf

City of Toronto. (2025). Community Council Area Profiles. City of Toronto. https://www.toronto.ca/city-government/data-research-maps/neighbourhoods-communities/community-council-area-profiles/

City of Toronto. (2025). Historical Maps and Atlases. City of Toronto. https://www.toronto.ca/city-government/accountability-operations-customer-service/access-city-information-or-records/city-of-toronto-archives/whats-online/maps/historical-maps-and-atlases/

Government of Canada. (2005). Municipal boundaries as of 1996. Government of Canada. https://open.canada.ca/data/en/dataset/a3e19e02-36f0-4244-87ab-8f029c6846e2

Fortin, M. (2023). City of Toronto Historical Annexation Boundaries (1834-1967). University of Toronto. https://borealisdata.ca/dataset.xhtml?persistentId=doi:10.5683/SP3/XN2NRW

MacNamara, J. (2005). Toronto Chronology. Ontario Genealogical Society. https://web.archive.org/web/20070929044646/http://www.torontofamilyhistory.org/chronology.html