Investigating The Distribution of Crime by Type

Geo-Vis Project Assignment, TMU Geography, SA8905, Fall 2025


Hello everyone, and welcome to my blog!

Today’s topic addresses the distribution of crime in Toronto. I am seeking to provide the public and relevant stakeholders with a greater understanding of how, where, and why different types of crime are distributed in relation to urban features like commercial buildings, public transit, restaurants, parks, and open spaces. We will also look at some of the socio-economic indicators of crime and, from there, identify ways to implement relevant, context-specific crime mitigation and reduction strategies.

This project investigates how crime data analysis can better inform urban planning and the distribution of social services in Toronto, Ontario. Research across diverse global contexts highlights that crime is shaped by a mix of socioeconomic, environmental, and spatial factors, and that evidence-based planning can reduce harm while improving community well-being. The following review synthesizes findings from six key studies, alongside observed crime patterns within Toronto.


Accompanying a literature review, I created a 3D model that displays a range of information, including maps made in ArcGIS Pro. The data used was sourced from the Toronto Police Service Public Safety Data Portal and Toronto’s Neighbourhood Profiles from the 2021 Census. The objective is to draw insightful conclusions about which types of crime are clustering where in Toronto, what socio-economic and/or urban infrastructural indicators are contributing to this, and what solutions could be implemented to reduce overall crime rates across all of Toronto’s neighbourhoods – keeping equitability in mind.

The distribution of crime across Toronto’s neighbourhoods reflects a complex interplay of socioeconomic conditions, built environment characteristics, mobility patterns, and levels of community cohesion. Understanding these geographic and social patterns is essential to informing more effective city planning, targeted service delivery, and preventive interventions. Existing research emphasizes the need for long-term, multi-approach strategies that address both immediate safety concerns and the deeper structural inequities that shape crime outcomes. Mansourihanis et al. (2024) highlight that crime is closely linked to urban deprivation, noting that inequitable access to resources and persistent neighbourhood disadvantages influence where and how crime occurs. Their work stresses the importance of integrating crime prevention with broader social and economic development initiatives to create safer, and more resilient urban environments (Mansourihanis et al., 2024).

Mansourihanis, O., Mohammad Javad, M. T., Sheikhfarshi, S., Mohseni, F., & Seyedebrahimi, E. (2024). Addressing Urban Management Challenges for Sustainable Development: Analyzing the Impact of Neighborhood Deprivation on Crime Distribution in Chicago. Societies, 14(8), 139. https://doi.org/10.3390/soc14080139

Click here to view the literature review I conducted on this topic.


Methods – Creating a 3D Interactive Crime Investigation Board

The purpose of this 3D map is to provide an interactive tool that can be regularly updated over time, allowing users to build upon research using various sources of information in varying formats (e.g. literature, images, news reports, raw data, and various map types presenting comparable socio-economic data; thread can be used to connect images and other information to associated areas on the map). The model has been designed for easy addition, removal, and connection of media items using materials like tacks, clips, and cork board. Crime incidents can be tracked and recorded in real time. This allows for quick identification of where crime is clustering based on geography, socio-economic context, and proximity to different land use types and urban features like transportation networks. We can continue to record and analyze which urban features or amenities could be deterring or attracting criminal activity. This will allow for fast, context-specific crime management solutions that will ultimately help reduce overall crime rates in the city.

1. Conduct a detailed literature review. 
Here is the literature review I conducted to address this topic.

2. Download the following data from Open Data | Toronto Police Service Public Safety Data Portal. Each dataset was filtered to show points only from 2025.

- Dataset: Shooting and Firearm Discharges
- Dataset: Homicides
- Dataset: Assault
- Dataset: Auto Theft
- Dataset: Break and Enter

Toronto Neighbourhood Profiles, 2021 Census from: Neighbourhood Profiles - City of Toronto Open Data Portal
- Average Total Household Income by Neighbourhood
- Unemployment Rates by Neighbourhood

3. After examining the full datasets by year, select a time period to map. In this case, July 2025 was chosen because it was the month with the greatest number of recorded crimes this year.

4. Map Setup
- Coordinate system: NAD 1983 UTM Zone 17N
- Rotation: -17
- Geography:
- City of Toronto, ON, Canada
- Neighbourhood boundaries from Toronto Open Data Portal

5. Add the crime incident data reports and Toronto’s Neighbourhood Boundary file.

Geospatial Analysis Tools Used
Tool - Select by Attribute: select and delete the records that we are not mapping. In this case,
from the Attribute Table:
Select by Attribute [OCC_YEAR] [is less than] [2025], then delete the selected records.

Tool - Summarize within
Count the number of crime incidents within each neighbourhood's boundary polygon for the 5 selected crime types, for preliminary analysis and mapping.
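For readers working outside ArcGIS Pro, the same point-in-polygon counting step could be sketched in R with the sf and dplyr packages. This is only an illustrative sketch: the file names and the AREA_NAME field are assumptions, not the actual project files.

```r
# Hypothetical sketch of the "Summarize Within" step using sf/dplyr.
# File names and the AREA_NAME field are assumptions for illustration.
library(sf)
library(dplyr)

neighbourhoods <- read_sf("toronto_neighbourhoods.shp")
assaults <- read_sf("assault_incidents.shp") |>
  filter(OCC_YEAR == 2025)                    # keep only 2025 incidents

# Join each point to the polygon that contains it, then count per neighbourhood
assault_counts <- st_join(assaults, neighbourhoods) |>
  st_drop_geometry() |>
  count(AREA_NAME, name = "assault_count")
```

The resulting table can then be joined back to the neighbourhood polygons for choropleth mapping.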

Design Tools and Map Types Used
- Dot Density
- 2025 Crime rates, by type, annual and for July of 2025
- Heat Map
- 2025 Crime rates, by type, annual and for July of 2025
- Choropleth
- Average Total Household Income, City of Toronto by Neighbourhood
- Unemployment Rates Across Toronto, 2021
- Design tools (e.g. convert to graphics)

Based on the literature review and analysis of the presented maps, this model allows us to further analyze, visually display, and record the data and findings. It allows users to see where points are clustering, and to examine the urban features, land use, and socio-economic context of cluster areas in order to identify potential solutions, with equity in mind.

Supplies
- Thread
- Painted toothpicks
- Mini clothes pins
- Highlighters, markers, etc.
- Scissors
- Hot glue
- Images of indicators
- Relevant/insightful literature research
- Socio-economic maps: population, income, unemployment, and density
- Crime maps: dot density maps of crime by type and heat maps of crime distribution by type, for the five selected crime types, covering all incidents that occurred during July 2025

Process
1. Attach cork board to poster board;

2. Cut out and place down main maps that have been printed (maps created in ArcGIS Pro, some additional design edits made in Canva);

3. Outline the large or central base map with tacks; use string to connect the tacks outlining the City of Toronto's regional boundary line.

4. Using colour-painted toothpicks (alternatively, tacks may be used depending on size limitations), crime incidents can be recorded in real time, using different colours to represent different crime types.

5. Additional data can be added and joined to other map elements over time. This data could include: images and locations of crime indicators; new literature findings; news reports; raw data; different map types presenting comparable socio-economic data; community input via email, consultation meetings, 911 calls, or surveys; graphs; tables; land use types and features; and more.

6. Thread is used to connect images and other information to associated areas on the map. In this case, blue string and tacks were used to highlight preventative crime measures and red to represent an indicator of crime.

7. Sticky notes can be used to update the day and month under “Time Stamp” (using a new poster/cork board for each year).

8. Google Earth was used with satellite imagery, a terrain layer, and an urban features layer to further analyze land use, type, function, and significant features like Union Station – a major public transit connection point located within Toronto’s densest and largest overall crime hot spot.

9. A satellite imagery basemap in ArcGIS was used to compare large green spaces (parks, ravines, golf courses, etc.) with the distribution of each incident point on the dot map created. Select each point layer individually for optimal viewing and map analysis.

10. Video and photo content used to display the final results was created using an iPhone camera and the iMovie video editing app.

See photos and videos for reference!

Socioeconomic and Environmental Indicators of Crime

A consistent theme across the literature and my own findings is the strong connection between neighborhood deprivation and crime. Mansourihanis et al. (2024) emphasize that understanding the “relationship between urban deprivation and crime patterns” supports targeted, long-term strategies for urban safety. Concentrated poverty, population density, and low social cohesion are significant predictors of violence (Mejia & Romero, 2025; Kondo et al., 2018). Similarly, poverty and weak rule of law correlate more strongly with homicide rates than gun laws alone (Menezes & Kavita, 2025).

Environmental characteristics also influence crime distribution. Multiple studies link greater green space to reduced crime, higher social cohesion, and stronger perceptions of safety (Mejia & Romero, 2025). Exposure to green infrastructure can foster community pride and engagement, further reinforcing crime-preventive effects (Mejia & Romero, 2025). Relatedly, Stalker et al. (2020) show that community violence contributes to poor mental and physical health, with feelings of unsafety directly associated with decreased physical activity and weaker social connectedness.

Other urban form indicators—including land-use mix, connectivity, and residential density—shape mobility patterns that, in turn, affect where crime occurs. Liu, Zhao, and Wang (2025) find that property crimes concentrate in dense commercial districts and transit hubs, while violent crimes occur more often in crowded tourist areas. These patterns reflect the role of population mobility, economic activity, and social network complexity in structuring urban crime.

Crime Prevention and Community-Based Solutions

Several authors highlight the value of integrating built-environment design, green spaces, and community-driven interventions. Baran et al. (2014) show that larger parks, active recreation features, sidewalks, and intersection density all promote park use, while crime, poverty, and disorder decrease utilization. Parks and walkable environments also support psychological health and encourage social interactions that strengthen community safety. In addition, green micro-initiatives—such as community gardens or small landscaped interventions—have been found to enhance residents’ emotional connection to their neighborhoods while reducing local crime (Mejia & Romero, 2025).

At the policy level, optimizing the distribution of public facilities and tailoring safety interventions to local conditions are essential for sustainable crime prevention (Liu, Zhao, & Wang, 2025). For gun violence specifically, trauma-informed mental health care, early childhood interventions, and focused deterrence are recommended as multidimensional responses (Menezes & Kavita, 2025).

Spatial Crime Patterns in Toronto

When mapped across Toronto’s geography, the crime data revealed distinct clustering patterns that mirror many of the relationships described in the literature. Assault, shootings, and homicides form a broad U- or O-shaped distribution that aligns with neighborhoods exhibiting lower average incomes and higher unemployment rates. These patterns echo global findings on deprivation and violence.

Downtown Toronto—particularly the area surrounding Union Station—emerges as the city’s highest-density crime hotspot. This zone features extremely high connectivity, car-centric infrastructure, dense commercial and mixed land use, and limited green space. These conditions resemble those identified by Liu, Zhao, and Wang (2025), where transit hubs and high-traffic commercial districts generate elevated rates of property and violent crime. Google Earth imagery further highlights the concentration of major built-form features that attract large daily populations and mobility flows, reinforcing the clustering of assaults and break-and-enter incidents in the downtown core.

Auto theft is relatively evenly distributed across the city and shows weaker clustering around transit or commercial nodes. However, areas with lower incomes and higher unemployment still show modestly higher auto-theft levels. Break and enter incidents, by contrast, concentrate more strongly in high-income neighborhoods with lower unemployment—suggesting that offenders selectively target areas with greater material assets.

Across all crime categories, one consistent pattern is the notable absence of incidents within large green spaces such as High Park and Rouge National Urban Park. This supports the broader literature connecting green space with lower crime and improved perceptions of safety (Mejia & Romero, 2025; Baran et al., 2014). Furthermore, as described, different kinds of crime occur in low- versus high-income neighbourhoods, emphasizing a need for context-specific resolutions that take crime type and socio-economics into consideration.

Synthesis and Relevance for Toronto

Collectively, these findings indicate that crime in Toronto is shaped by intersecting socioeconomic factors, environmental features, and mobility patterns. Downtown crime clustering reflects high density, transit connectivity, and land-use complexity; outer-neighborhood violence aligns with deprivation; and green spaces consistently correspond with lower crime. These patterns mirror global research emphasizing the role of social cohesion, urban form, and economic inequality in shaping crime distribution.

Understanding these relationships is essential for planning decisions around green infrastructure investments, targeted social services, transit-area safety strategies, and neighborhood-specific interventions. Ultimately, integrating environmental design, socioeconomic supports, and community-based programs can support safer, healthier, and more equitable outcomes for Toronto residents.

Demographics of Chicago Neighbourhoods and Gang Boundaries in 2024

By: Ganesha Loree

Geovis Project Assignment, TMU Geography, SA8905, Fall 2025

INTRODUCTION

Chicago is considered the most gang-occupied city in the United States, with 150,000 gang-affiliated residents representing more than 100 gangs. In 2024, 46 gangs and their boundaries across Chicago were mapped by the City of Chicago. The factors behind the formation of gangs have been of interest and a topic of research for many years all over the world (Assari et al., 2020), but for the purpose of this project, these factors are examined through the demographics of Chicago. Chicago has deep roots in gang history and culture, and not only gangs but violent crimes are also dense there. Demographics such as income, education, housing, and race play a role within the neighbourhoods of Chicago and could be part of the cause of gang presence.

METHODOLOGY

Step 1: Data Preparation

Chicago Neighbourhood Census Data (2025): Over 200 socioeconomic and demographic variables for each neighbourhood were obtained from the Chicago Metropolitan Agency for Planning (CMAP) (Figure 1). In July 2025, their Community Data Snapshot portal released granular insights into population characteristics, income levels, housing, education, and employment metrics across Chicago’s neighbourhoods.

Figure 1: Census data for Chicago, 2024

Chicago Neighbourhood Boundary Files: Official geographic boundaries for Chicago neighbourhoods were downloaded from the City of Chicago’s open data portal (Figure 2). These shapefiles were used to spatially join census data and support geospatial visualization.

Figure 2: Chicago Data Portal – Neighborhood Boundaries

Chicago Gang Territory Boundaries (2024): Gang territory data from 2024 was sourced from the Chicago Police Department’s GIS portal (Figure 3). These boundaries depict areas of known gang influence and were integrated into the spatial database to support comparative analysis with neighbourhood-level census indicators.

Step 2: Technology

Once the data was downloaded, it was brought into software to visualize. A combination of technologies was used: ArcGIS Pro and SketchUp (Web). ArcGIS Pro was used to import all boundary files, where neighbourhood census data was joined to the Chicago boundary shapefile using a unique identifier, Neighbourhood Name (Figure 4).

Figure 4: ArcGIS Pro Data Join Table
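As a rough sketch, an attribute join like the one described above could also be reproduced in R with sf and dplyr. The file names and join columns below are assumptions for illustration only, not the actual project files.

```r
# Hypothetical sketch of joining CMAP census data to the neighbourhood
# boundaries; file names and column names are assumptions.
library(sf)
library(dplyr)

boundaries <- read_sf("chicago_neighbourhoods.shp")
census <- read.csv("cmap_community_snapshot.csv")

# Join on the shared neighbourhood-name column (field names assumed)
joined <- left_join(boundaries, census,
                    by = c("pri_neigh" = "Neighbourhood"))
```

A join like this only works if the name spellings match exactly in both tables, which is why a unique identifier is preferred.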

Gang territory boundary polygons were overlaid with neighborhood boundaries to enable spatial intersection and proximity analysis (Figure 5).

Figure 5: Shapefiles of Chicago’s Neighbourhoods and Gangs

Within ArcGIS Pro, the combined map of both boundary layers allowed for analysis of the neighbourhoods with the most gang boundaries. A rough sketch of these neighbourhoods was made by circling them on a clean map of Chicago, where bigger circles show the areas with more gang territory and stars indicate the neighborhoods with no gang boundaries (Figure 6). CMAP data was used to compare the demographics of the neighborhoods with the most gang territory against the areas with none (e.g. O’Hare).

Figure 6: Chicago neighborhood outlines with markers

SketchUp

SketchUp is a 3D modeling tool used to generate and manipulate 3D models, and is often used in architecture and interior design. Using this software for this project repurposed it: by importing the Chicago neighborhood outlines as an image, I was able to trace the neighborhoods.

Step 3: Visualization with 3D Extrusions (Sketch Up)

The maximum height of the 3D map models was based on the total number of neighborhoods (98) and the total number of gang records/areas (46). Determining which neighbourhoods had the most gang boundaries was based on the gang area measurements provided in the Gang Boundary file. The gang with the most area totaled a shape area of 587,893,900 m², while the smallest shape area is 217,949 m². A similar process was done with neighbourhood area measurements. Neighbourhoods were raised based on the number of gang areas present within that neighbourhood (as previously shown in Figure 5). The highest neighbourhood is 5′ (feet), and the lowest neighbourhood where gangs are present is 4″ (inches); neighbourhoods that do not have gangs are not elevated.
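One way to read the scaling described above is as a linear interpolation between the two endpoints: 4 inches for the lowest neighbourhood with gangs present and 5 feet (60 inches) for the highest. A minimal sketch of that idea in R follows; note that the count range used here is an assumption, since the post does not state the actual per-neighbourhood maximum.

```r
# Hypothetical linear height scaling for the extrusions described above.
# n_max = 46 (the total number of gang areas) is an assumption; the post
# does not give the actual per-neighbourhood maximum count.
scale_height <- function(n_gangs, n_min = 1, n_max = 46,
                         h_min_in = 4, h_max_in = 60) {
  ifelse(n_gangs == 0, 0,   # neighbourhoods with no gangs stay flat
         h_min_in + (n_gangs - n_min) / (n_max - n_min) * (h_max_in - h_min_in))
}

scale_height(0)    # 0  - not elevated
scale_height(46)   # 60 - the 5-foot maximum, in inches
```

Any monotone mapping would work here; a linear one simply keeps the visual comparison between neighbourhoods proportional.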

A different approach was applied to the top 3 gangs map model, where the height remains the same for each gang but is placed in the neighbourhoods where that gang is present. For instance, Gangster Disciples were set at approximately 5 feet (5′ 3/16″ or 1528.8 mm), Black P Stones at almost 4 feet (3′ 7/8″ or 936.6 mm), and Latin Kings at a little over 1 foot (1′ 8 1/4″ or 514.4 mm).

Map Design

I determined which demographic factors were going to be compared with gang areas: income, race, and the top 3 gangs (Gangster Disciples, Black P Stones, and Latin Kings). Two elements are present in the two demographic maps (height and colour), where colour indicates the demographic factor and height represents gang presence (Figure 7).

Figure 7: 3D map models of Chicago gangs based on Race and Income

There was limited information available about the gang areas, which consisted only of gang name, shape area, and length measurements. In terms of SketchUp’s limitations, the free web version has some restrictions: I had to manually draw the outline of the Chicago neighbourhoods, which was time consuming. In addition, SketchUp’s scale system was complex and was not consistent between maps. To address this, each corner of the map was measured with the Tape Measure Tool to ensure uniformity. Lastly, when the final product was viewed in augmented reality (AR), the map quality was limited: the neighbourhood outlines were gone, and the only parts visible were the coloured parts of the models.

The most visible pattern in the race map is that the areas with more gang activity have a large population of African Americans (Figure 7). For the income map, indicated in green, more gang areas have lower income, whereas the areas with higher income do not have gangs in those neighborhoods. Among the top three gangs, Gangster Disciples have the most gang boundaries across Chicago neighborhoods (Figure 8), taking up 33.6% of the area in km²; the gang was founded in 1964 in Englewood.

Figure 8: 3D map of the top 3 gangs in Chicago, 2024

FINAL PRODUCT

The final product is user interactive through a QR code that allows viewers to look at the map models in augmented reality (AR) just by pointing a mobile device camera at the QR code below.

Since the AR quality has its limits, the SketchUp map models can also be viewed using the Geovis Map Models button below.

Reference

Assari, S., Boyce, S., Caldwell, C. H., Bazargan, M., & Mincy, R. (2020). Family income and gang presence in the neighborhood: Diminished returns of Black families. Urban Science, 4(2), 29.

Parks and Their Association with Average Total Alcohol Expenditure (Alcohol in Parks Program in Toronto, ON)

Welcome to my Geovisualization Assignment!

Author: Gabyrel Calayan

Geovisualization Project Assignment

TMU Geography

SA8905 – FALL 2025

Today, we are going to be looking at Parks and Recreation Facilities and their possible association with average alcohol expenditure in census tracts (due to the Alcohol in Parks Program in the City of Toronto), using data acquired from the City of Toronto and Environics Analytics (City of Toronto, n.d.).

Context

Using R Studio’s expansive tool set for map creation and Quarto documentation, we are going to be creating a thematic map and an interactive map of parks and their association with Average Total Alcohol Expenditure in Toronto. The idea behind this topic was really out of the blue. I was just thinking of a fun, simple topic that I wanted to do that I haven’t done yet for my other assignments! And so I landed on this because of data availability, while learning some new skills in R Studio and trying out the Quarto documentation process.

Data

  • Environics Analytics – Average Alcohol Expenditure (shapefile for census tracts, in CAD $) (Environics Analytics, 2025)
  • City of Toronto – Parks and Recreation Facilities (Point data and filtered down to 40 parks that participate in the program) (City of Toronto, 2011).

Methodology

  • Using R Studio to map out my Average Alcohol Expenditure and the 55 Parks that are a part of the Alcohol in Parks Program by the City of Toronto
  • Utilize tmap functions to create both a static thematic and interactive maps
  • Utilize Quarto documentation to create a readme file of my assignment
  • Showcasing the mapping capabilities and potential of R Studio as a mapping tool

Example tmap code for viewing maps

This tmap code initializes what kind of view you want (there are only two view modes):

  • Static thematic map

## This is for viewing as a static map

## tmap_mode("plot")

## tm_shape(Alcohol_Expenditure) + tm_polygons()

  • Interactive map

## This is for viewing as an interactive map

## tmap_mode("view")

## tm_shape(Alcohol_Expenditure) + tm_polygons()

Visualization process

Step 1: Installing and loading the necessary packages so that R Studio can recognize our inputs

  • These inputs are kind of like puzzle pieces! Where you need the right puzzle piece (package) so that you can put the entire puzzle together.
  • So we would need a bunch of packages to visualize our project:
    • sf
    • tmap
    • dplyr
  • These three packages are important because “sf” lets us read the shapefiles into R Studio, “tmap” lets us actually create the maps, and “dplyr” lets us filter our shapefiles and the data inside them.
  • Also, it’s very likely that you already have some of these packages installed. In that case, you can just call library() to load the packages that you need. But I like installing them again in case I forgot.

## Code for installing the packages

## install.packages("sf")

## install.packages("tmap")

## install.packages("dplyr")

## Loading the packages

## library(sf)

## library(tmap)

## library(dplyr)

We can see in our console that it says “package ‘sf’ successfully unpacked and MD5 sums checked.” That basically means it’s done installing.

  • In addition, any warning messages in this console output indicate that we already have these packages installed.

After installing and loading these packages, we can begin loading and filtering the dataset so that we can move on to visualizing the data itself. The results of installing these packages can be seen in the “Console” section at the bottom left-hand side of R Studio (it may depend on the user; I have seen people move the “Console” section to the top right-hand side of the R Studio interface).

Step 2: Loading and filtering our data

  • We must first set the working directory of where our data is and where our outputs are going to go

## Setting work directory

## setwd()

  • This code sets the folder where R will look for your input files and where your outputs will go on your computer
  • Now that we set our working directory, we can load in the data and filter it

## Code for naming our variables in R Studio and loading it in the software

## Alcohol_Parks <- read_sf("Parks and Recreation Facilities - 4326.shp")

## Alcohol_Expenditure <- read_sf("SimplyAnalytics_Shapefiles_5efb411128da3727b8755e5533129cb52f4a027fc441d8b031fbfc517c24b975.shp")

  • As we can see in the code snippets above, we are using one of the functions that belongs to the sf package. read_sf() loads the data into R, recognized as a shapefile (an sf object).
  • It will appear on the right as part of the “Environment” section. This means it has read all the columns that are part of the dataset

Now we can see our data in the Environments Section. And there’s quite a lot. But no worries we only need to filter the Parks data!

Step 3: Filtering the data

  • Since we only need to filter the data for the parks in Toronto, we only need to grab the data that are a part of the 55 parks in the Alcohol in Parks Program
  • This follows a two-step approach:
    • Name your variable to match its filtered state
    • Then the actual filtering comes into play

## Code for running the filtering process

## Alcohol_Parks_Filtered <- filter(Alcohol_Parks, ASSET_NAME == "ASHTONBEE RESERVOIR PARK" | ASSET_NAME == "BERT ROBINSON PARK" | ASSET_NAME == "BOND PARK" | ASSET_NAME == "BOTANY HILL PARK" | ASSET_NAME == "BYNG PARK" | ...)

  • As we can see in the code above, before the filtering process we name the new variable to match its filtered state as “Alcohol_Parks_Filtered”
    • In addition, we are matching the column name that we type out in the code to the park names that are found in the Park data set!
    • For example: The filtering wouldn’t work if it was “Bond Park”. It must be all caps “BOND PARK”
  • Then we used the filter() function to filter the shapefile by ASSET_NAME to pick out the 40 parks
  • We can see in our filtered dataset that we have filtered it down to 53 parks with all the original columns attached. Most important being the geometry column so we can conduct visualizations!
  • Once we completed that, we can test out the tmap function to see how the data looks before we map it out.
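As a side note, the long chain of == comparisons can be written more compactly with the %in% operator. This hedged sketch produces the same filtered result; the vector shows only the first few park names from the post, and the full program list would continue in the same way.

```r
# Equivalent, more compact filter using %in% instead of chained == tests.
# Only the first few park names from the post are shown here; the full
# list of program parks would continue in the same vector.
program_parks <- c("ASHTONBEE RESERVOIR PARK", "BERT ROBINSON PARK",
                   "BOND PARK", "BOTANY HILL PARK", "BYNG PARK")

Alcohol_Parks_Filtered <- filter(Alcohol_Parks, ASSET_NAME %in% program_parks)
```

Keeping the names in a vector also makes it easier to update the list when parks are added to or removed from the program.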

Step 4: Do some test visualizations to see if there are any issues

  • Now, we can actually use some tmap functions to see if our data works
  • tm_shape is the function for recognizing which shapefile we are using to visualize the variable
  • tm_polygons and tm_dots are for visualizing the variables as either a polygon or a dot (point) shapefile
  • For tm_polygons, fill and the scale settings specify which column you are visualizing the variable on and which data classification method you would like to use

## Code for testing our visualizations

## tm_shape(Alcohol_Expenditure) + tm_polygons(fill = "VALUE0", fill.scale = tm_scale_intervals(style = "jenks"))

## tm_shape(Alcohol_Parks_Filtered) + tm_dots()

Now, we can see that it actually works! The map on top is our alcohol expenditure shapefile and the one on the bottom is our parks!

Step 5: Using tmap and its extensive functions to build our map

  • We can now fully visualize our map and add all the cartographic elements necessary to flesh it out and make it as professional as possible

## Building our thematic map

tmap_mode("plot")

tm_shape(Alcohol_Expenditure) +

tm_polygons(fill = "VALUE0", fill.legend = tm_legend ("Average Alcohol Expenditure ($ CAD)"), fill.scale = tm_scale_intervals(style = "jenks", values = "Greens")) +

tm_shape(Alcohol_Parks_Filtered) + tm_bubbles(fill = "TYPE", fill.legend = tm_legend("The 40 Parks in Alcohol in Parks Program"), size = 0.5, fill.scale = tm_scale_categorical(values = "black")) +

tm_borders(lwd = 1.25, lty = "solid") +

tm_layout(frame = TRUE, frame.lwd = 2, text.fontfamily = "serif", text.fontface = "bold", color_saturation = 0.5, component.autoscale = FALSE) +

tm_title(text = "Greenspaces and its association with Alcohol Expenditure in Toronto, CA", fontfamily = "serif", fontface = "bold", size = 1.5) +

tm_legend(text.size = 1.5, title.size = 1.2, frame = TRUE, frame.lwd = 1) +

tm_compass(position = c ("top", "left"), size = 4) +

tm_scalebar(text.size = 1, frame = TRUE, frame.lwd = 1) +

tm_credits("Source: Environics Analytics\nProjection: NAD83", frame = TRUE, frame.lwd = 1, size = 0.75)

  • Quite a lot of code!
  • Now this is where the puzzle piece analogy comes into play as well
    • First, we call tmap_mode("plot") to specify that we want a static map first
    • We add both our variables together because we want to see our point data and how it lies on top of our alcohol expenditure shapefile
    • Utilizing tm_polygons, tm_shape, and tm_bubbles to draw both our variables as polygons and as point data
      • tm_bubbles is dots and tm_polygons draws the polygons of our alcohol expenditure shapefile
    • The code that is in our brackets for those functions are additional details that we would like to have in our map
    • For example: fill.legend = tm_legend ("Average Alcohol Expenditure ($ CAD)")
      • This code snippet makes it so that our legend title is “Average Alcohol Expenditure ($ CAD) for our polygon shapefile
      • The same applies for our point data for our parks
    • Basically, we can divide our code into two sections:
      • The tm_polygons all the way to tm_bubbles is essentially drawing our shapefiles
      • The tm_borders all the way to the tm_credits are what goes on outside our shapefiles
        • For example:
    • tm_title() and the code inside it holds the details that can be modified for our map title. component.autoscale = FALSE turns off the automatic rescaling of map components so that I have more control over modifying the title part of the map to my liking

Now we have made our static thematic map! On to the next part which is the interactive visualization!

Since we built our puzzle parts for the thematic map, we just need to switch it over to the interactive map using tmap_mode("view")

This code chunk describes the process to create the interactive map

library(tmap)
library(sf)
library(dplyr)


##Loading in the data to check if it works
Alcohol_Parks <- read_sf("Parks and Recreation Facilities - 4326.shp")
Alcohol_Expenditure <- read_sf("SimplyAnalytics_Shapefiles_5efb411128da3727b8755e5533129cb52f4a027fc441d8b031fbfc517c24b975.shp")

#Filtering Alcohol_Parks to show only parks where you can drink alcohol
alcohol_park_names <- c(
  "ASHTONBEE RESERVOIR PARK", "BERT ROBINSON PARK", "BOND PARK", "BOTANY HILL PARK",
  "BYNG PARK", "CAMPBELL AVENUE PLAYGROUND AND PARK", "CEDARVALE PARK",
  "CHRISTIE PITS PARK", "CLOVERDALE PARK", "CONFEDERATION PARK", "CORKTOWN COMMON",
  "DIEPPE PARK", "DOVERCOURT PARK", "DUFFERIN GROVE PARK", "EARLSCOURT PARK",
  "EAST LYNN PARK", "EAST TORONTO ATHLETIC FIELD", "EDWARDS GARDENS", "EGLINTON PARK",
  "ETOBICOKE VALLEY PARK", "FAIRFIELD PARK", "GRAND AVENUE PARK",
  "GORD AND IRENE RISK PARK", "GREENWOOD PARK", "G. ROSS LORD PARK", "HILLCREST PARK",
  "HOME SMITH PARK", "HUMBERLINE PARK", "JUNE ROWLANDS PARK", "LA ROSE PARK",
  "LEE LIFESON ART PARK", "MCCLEARY PARK", "MCCORMICK PARK", "MILLIKEN PARK",
  "MONARCH PARK", "MORNINGSIDE PARK", "NEILSON PARK - SCARBOROUGH",
  "NORTH BENDALE PARK", "NORTH KEELESDALE PARK", "ORIOLE PARK - TORONTO",
  "QUEEN'S PARK", "RIVERDALE PARK EAST", "RIVERDALE PARK WEST", "ROUNDHOUSE PARK",
  "SCARBOROUGH VILLAGE PARK", "SCARDEN PARK", "SIR WINSTON CHURCHILL PARK",
  "SKYMARK PARK", "SORAREN AVENUE PARK", "STAN WADLOW PARK", "THOMSON MEMORIAL PARK",
  "TRINITY BELLWOODS PARK", "UNDERPASS PARK", "WALLACE EMERSON PARK", "WITHROW PARK")

Alcohol_Parks_Filtered <- filter(Alcohol_Parks, ASSET_NAME %in% alcohol_park_names)


##Now as an interactive map
tmap_mode("view")

tm_shape(Alcohol_Expenditure) + 
  
  tm_polygons(fill = "VALUE0", fill.legend = tm_legend ("Average Alcohol Expenditure ($ CAD)"), fill.scale = tm_scale_intervals(style = "jenks", values = "Greens")) +
  
  tm_shape(Alcohol_Parks_Filtered) + tm_bubbles(fill = "TYPE", fill.legend = tm_legend("The 55 Parks in Alcohol in Parks Program"), size = 0.5, fill.scale = tm_scale_categorical(values = "black")) + 
  
  tm_borders(lwd = 1.25, lty = "solid") + 
  
  tm_layout(frame = TRUE, frame.lwd = 2, text.fontfamily = "serif", text.fontface = "bold", color_saturation = 0.5, component.autoscale = FALSE) +
 
   tm_title(text = "Greenspaces and its association with Alcohol Expenditure in Toronto, CA", fontfamily = "serif", fontface = "bold", size = 1.5) +
  tm_legend(text.size = 1.5, title.size = 1.2, frame = TRUE, frame.lwd = 1) +
  
  tm_compass(position = c("top", "right"), size = 2.5) + 
  
  tm_scalebar(text.size = 1, frame = TRUE, frame.lwd = 1, position = c("bottom", "left")) +
  
  tm_credits("Source: Environics Analytics\nProjection: NAD83", frame = TRUE, frame.lwd = 1, size = 0.75)

Link to viewing the interactive map: https://rpubs.com/Gab_Cal/Geovis_Project

  • The only difference in this code chunk is that tmap_mode() is set not to "plot" but to "view"
    • For example: tmap_mode("view")

The map is now complete!

Results (Based on our interactive map)

  • Just based on the default settings for the interactive map, tmap includes a wide range of elements that make the map dynamic!
    • We have the zoom in and layer selection/basemap selection function on the top left
    • The compass that we created is shown in the top right
    • And the legend that we made is locked in at the bottom right
    • Our scalebar is also dynamic which changes scales when we zoom in and out
    • And our credits and projection section is also seen in the bottom right of our interactive map
    • We can also click on our layers to see the columns attached to the shapefiles
  • For example, we can click on the point data to see the id, LocationID, AssetID, Asset_Name, Type, Amenities, Address, Phone, and URL, while for our polygon shapefile we can see the spatial_id, the name of the CT, and the alcohol spending value in that CT
  • As we can see in our interactive map, the areas with the highest average alcohol expenditure lie near the upper part of the downtown core of Toronto
    • For example: the neighbourhoods that are dark green include Bridle Path-Sunnybrook-York Mills, Forest Hill North and South, and Rosedale, to name a few
  • However, only a few parks that are part of the program reside in these regions of high spending on alcohol
  • Most parks reside in census tracts where the alcohol expenditure is in the $500 to $3,000 range
  • While there doesn't seem to be much of an association, there are definitely more factors at play in where people buy their alcohol or where they decide to consume it
  • Based on just visual findings:
    • For example: it's possible that people simply do not drink in these parks even though it's allowed; they may find the comfort of their home a better place to consume alcohol
    • Or people may not want to drink at a park when they could be doing more active, group-oriented activities there


Spatial Accessibility and Ridership Analysis of Toronto Bike Share Using QGIS & Kepler.gl

Teresa Kao

Geovis Project Assignment, TMU Geography, SA8905, Fall 2025

Hi everyone, in this project, I explore how cycling infrastructure influences Bike Share ridership across Toronto. Specifically, I examine whether stations located within 50 meters of protected cycling lanes exhibit higher ridership than those near unprotected lanes, and identify areas where protected cycling lanes could be improved.

This tutorial walks through the full workflow using QGIS for spatial analysis and Kepler.gl for interactive mapping, filtering, and data exploration. By the end, you’ll be able to visualize ridership patterns, measure proximity to cycling lanes, and identify where additional stations or infrastructure could improve accessibility.

Preparing the Data in QGIS

Importing Cycling Network and Station Data

Import the cycling network shapefiles using Layer -> Add Layer -> Add Vector Layer, and load the Bike Share station CSV by assigning X = longitude, Y = latitude, and setting the CRS to EPSG:4326 (WGS84).

Reproject to UTM 17N for Distance Calculations

Because Kepler.gl only supports GeoJSON in EPSG:4326, all layers are first reprojected in QGIS to EPSG:26917 (Right click -> Export -> Save Features As…) so that distance calculations can be performed in metres; the processed results are then exported back to GeoJSON (EPSG:4326) for use in Kepler.gl.

Calculating Distance to the Nearest Cycling Lane

Use the Join Attributes by Nearest tool (Processing Toolbox -> Join Attributes by Nearest), setting the Input Layer to the stations dataset, the Join Layer to the cycling lane dataset, and Maximum neighbours to 1. This will generate an output layer with a new field (distance_to_lane_m) representing the distance in meters from each station to its nearest cycling lane.

Creating Distance Categories

Use the Field Calculator (∑) to create distance classifications using the following expression:

CASE
WHEN "distance_to_lane_m" <= 50 THEN '≤50m'
WHEN "distance_to_lane_m" <= 100 THEN '≤100m'
WHEN "distance_to_lane_m" <= 250 THEN '≤250m'
ELSE '>250m'
END
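For readers who prefer to see the classification logic as plain code, the Field Calculator expression maps onto a simple cascade of threshold checks. Here is a minimal JavaScript sketch of the same logic (the function name classifyDistance is mine, not part of the QGIS workflow):

```javascript
// Classify a station's distance (in metres) to the nearest cycling lane,
// mirroring the QGIS Field Calculator CASE expression above.
function classifyDistance(distanceToLaneM) {
  if (distanceToLaneM <= 50) return "≤50m";
  if (distanceToLaneM <= 100) return "≤100m";
  if (distanceToLaneM <= 250) return "≤250m";
  return ">250m";
}

console.log(classifyDistance(42));  // "≤50m"
console.log(classifyDistance(300)); // ">250m"
```

Note that the order of the WHEN clauses matters: each threshold is only reached if the smaller ones have already failed, so the bands do not overlap.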

Exporting to GeoJSON for Kepler.gl

Since Kepler.gl does not support shapefiles, export each layer as a GeoJSON (Right Click -> Export -> Save Features As -> Format: GeoJSON -> CRS: EPSG:4326). The distance values will remain correct because they were already calculated in UTM projection.

Building Interactive Visualizations in Kepler.gl

Import Data

Go to Layer -> Add Layer -> choose data

  1. For Bike Share Stations, use the point layer and symbolize it by the distance_to_lane_m field, selecting a colour scale and applying custom breaks to represent different distance ranges.
  2. For Protected Cycling Network, use the polygon layer and symbolize it by all the protected lane columns, applying a custom ordinal stroke colour scale such as light green.
  3. For Unprotected Cycling Network, use the polygon layer and symbolize it by all the unprotected columns, applying a custom ordinal stroke colour scale such as dark green.
  4. For Toronto Boundary, use the polygon layer and assign a simple stroke colour to outline the study area.

Add Filters

The filter slider is what makes this visualization powerful. Go to Add Filter -> Select a Dataset -> Choose the Field (for example, ridership or distance_to_lane_m).

Add Tooltips

Go to Tooltip -> Toggle ON -> Select fields to display. Enable tooltips so users can hover over a station to see details such as station name, ridership, distance to lane, and capacity.

Exporting Your Interactive Map

You can export an image, a table (CSV), or the map itself as a shareable link; the shareable link uses the Mapbox API to create an interactive online map that other people can explore.

How this interactive map helps answer the research question

This interactive map helps answer the research question in two ways.
First, by applying a filter on distance_to_lane_m, users can isolate stations located within 50 meters of a cycling lane and visually compare their ridership to stations farther away. Toggling between layers for protected and unprotected cycling lanes allows users to see whether higher ridership stations tend to cluster near protected infrastructure.

Based on the map, the majority of higher ridership stations are concentrated near protected cycling lanes, suggesting a positive relationship between ridership and proximity to safer cycling infrastructure.

Second, by applying a ridership filter (>30,000 trips), the map highlights high demand stations that lack nearby protected cycling lanes. These appear as busy stations located next to unprotected lanes or more than 50 meters away from any cycling facility.

Together, these filters highlight where cycling infrastructure is lacking, especially in the Yonge Church area and the Downtown East / Yonge Dundas area, making it clear where protected lanes may be needed.

Final Interactive Map

Thank you for taking the time to explore my blog. I hope it was informative and that you were able to learn something from it!

Evolution of Residential Real Estate in Toronto – 2014 to 2022

Shashank Prabhu, Geovis Project Assignment, TMU Geography, SA8905, Fall 2024 

Introduction
Toronto’s residential real estate market has experienced one of the most rapid price increases among major global cities. This surge has led to a significant affordability crisis, impacting the quality of life for residents. My goal with this project was to explore the key factors behind this rapid increase, while also analyzing the monetary and fiscal policies implemented to address housing affordability.

The Approach: Mapping Median House Prices
To ensure a more accurate depiction of the market, I used the median house price rather than the average. The median better accounts for outliers and provides a clearer view of housing trends. This analysis focused on all home types (detached, semi-detached, townhouses, and condos) between 2014 and 2022.

Although data for all years were analyzed, only pivotal years (2014, 2017, 2020, and 2022) were mapped to emphasize the factors driving significant changes during the period.

Data Source
The Toronto Regional Real Estate Board (TRREB) was the primary data source, offering comprehensive market watch reports. These reports provided median price data for Central Toronto, East Toronto, and West Toronto—TRREB’s three primary regions. These regions are distinct from the municipal wards used by the city.

Creating the Maps

Step 1: Data Preparation
The Year-to-Date (YTD) December figures were used to capture an accurate snapshot of annual performance. The median price data for each of the years across the different regions was organized in an Excel sheet, joined with TRREB’s boundary file (obtained through consultation with the Library’s GIS department), and imported into ArcGIS Pro. WGS 1984 Web Mercator projection was used for the maps.

Step 2: Visualization with 3D Extrusions
3D extrusions were used to represent price increases, with the height of each bar corresponding to the median price. A green gradient was selected for visual clarity, symbolizing growth and price.

Step 3: Overcoming Challenges

After creating the 3D extrusion maps for the respective years (2014, 2017, 2020, 2022), the next step was to export those maps to ArcGIS Online and then to Story Maps. The easiest way of doing so was to export each map as a Web Scene, from which it would show up under the Content section on ArcGIS Online.

  • Flattened 3D Shapes: Exporting directly as a Web Scene to add onto Story Maps caused extrusions to lose their 3D properties. This was resolved using the “Layer 3D to Feature Class” tool.

  • Lost Legends: However, after using the aforementioned tool, the Legends were erased during export. To address this, static images of the legends were added below each map in Story Maps.

Step 4: Finalizing the Story Map
After resolving these issues, the maps were successfully exported using the Export Web Scene option. They were then embedded into Story Maps alongside text to provide context and analysis for each year.

Key Insights
The project explored housing market dynamics primarily through an economic lens.

  • Interest Rates: The Bank of Canada’s overnight lending rate played a pivotal role, with historic lows (0.25%) during the COVID-19 pandemic fueling a housing boom, and sharp increases (up to 5% by 2023) leading to market cooling.
  • Immigration: Record-breaking immigration inflows also contributed to increased demand, exacerbating the affordability crisis.

While earlier periods like 2008 were critical in shaping the market, boundary changes in TRREB’s data made them difficult to include.

Conclusion
Analyzing real estate trends over nearly a decade and visualizing them through 3D extrusions offers a profound insight into the rapid rise of residential real estate prices in Toronto. This approach underscores the magnitude of the housing surge and highlights how policy measures, while impactful, have not fully addressed the affordability crisis.

The persistent rise in prices, even amidst various interventions, emphasizes the critical need for increased housing supply. Initiatives aimed at boosting the number of housing units in the city remain essential to alleviate the pressures of affordability and meet the demands of a growing population.

Link to Story Map (You will need to sign in through your TMU account to view it): https://arcg.is/WCSXG

Family Travel Survey

Marzieh Darabi, Geovis Project Assignment, TMU Geography, SA8905, Fall 2024

https://experience.arcgis.com/experience/638bb61c62b3450ab3133ff21f3826f2

This project is designed to help transportation planners understand how families travel to school and identify the most commonly used walking routes. The insights gained enable the City of Mississauga to make targeted improvements, such as adding new signage where it will have the greatest impact.

Project Workflow

Each school has its own dedicated page within the app, displaying both a map and a survey. The maps were prepared in ArcGIS Pro and then shared to ArcGIS Online. In the Map Viewer, I defined the symbology and set the desired zoom level for the final map. To identify key routes for the study, I used the Buffer tool in ArcGIS Pro to analyze routes in close proximity to schools. Next, I applied the Select by Location tool to identify routes located within a 400-meter radius of each school. These selected routes were then exported as a new street dataset. I further refined this dataset by customizing the streets to include only the most relevant options, reducing the number of choices presented in the survey.

Each route segment was labeled to correspond directly with the survey questions, making it easy for families to understand which options in the survey matched the map. To create these labels, a new field was added to the street dataset corresponding to the options in the survey. These maps were then integrated into ArcGIS Experience Builder using the Map Widget, which allows further customization of map content and styling via the application’s settings panel.

ArcGIS Experience Builder interface showing the process of adding a Map Widget and customizing the app layout

Why Experience Builder?

When designing the application, I chose ArcGIS Experience Builder because of its flexibility, modern interface, and wide range of features tailored to building interactive applications. Here are some of the specifications and advantages of using Experience Builder for this project:

  1. Widget-Based Design:
    Experience Builder operates on a widget-based framework, allowing users to drag and drop functional components onto the canvas. This flexibility made it easy to integrate maps, surveys, buttons, and text boxes into a cohesive application.
  2. Customizable Layouts:
    The platform offers tools for designing responsive layouts that adapt to different screen sizes. For this project, I configured the desktop layout to ensure that the application is accessible to families.
  3. Map Integration:
    The Map Widget provided options to display the walking routes and key streets interactively. I set specific map extents to align with the study’s goals. End-users could zoom in or out and interact with the map to see routes more clearly.
  4. Survey Integration:
    By embedding the survey using the Survey Widget, I was able to link survey questions directly to map visuals. The widget also allowed real-time updates, meaning survey responses are automatically stored and can be accessed or analyzed in ArcGIS Online.
  5. Dynamic User Navigation:
    The Button Widget enabled intuitive navigation between pages. Each button is configured to link directly to a school’s map and survey page, while a Back Button on each page ensures users can easily return to the introduction screen.
  6. Styling Options:
    Experience Builder offers extensive styling options to customize the look and feel of the application. I used the Style Panel to select fonts, colors, and layouts that are visually appealing and accessible.

App Design Features

The app is designed to accommodate surveys for seven schools. To ensure ease of navigation, I created an introductory page listing all the schools alongside a brief overview of the survey. From this page, users can navigate to individual school maps using a Button Widget, which links directly to the corresponding school pages. A Back Button on each map page allows users to return to the school list easily.

The survey is embedded within each page using the Survey Widget, allowing users to submit their responses directly. The submitted data is stored as survey records and can be accessed via ArcGIS Online.

Setting links between buttons and pages in ArcGIS Experience Builder

Customizing Surveys

The survey was created using the Survey123 app, which offers various question types to suit different needs. For my survey, I utilized multiple-choice and single-line text question types. Since some questions are specific to individual schools, I customized their visibility using visibility rules based on the school selected in Question 1. For example, Question 4, which asks families about the routes they use to reach school, only becomes visible once a school is selected in Question 1.

If the survey data varies significantly across different maps, separate surveys can be created for each school to ensure accuracy and relevance.

Setting visibility rules for survey questions based on user responses

Final Thoughts

Using ArcGIS Experience Builder provided the ideal platform for this project by combining powerful map visualizations with an intuitive interface for survey integration. Its customization options allowed me to create a user-centric app that meets the needs of both families and transportation planners.

Natural Disasters around the world from 1950-2018

By: Zahra H. Mohamed for SA8905 @RyersonGeo

You can download the code here!

Introduction

Natural disasters are major events that result from natural processes of the planet. With global warming and the changing of our climate, it’s rare to go through a week without mention of a flood, earthquake, or a bad storm happening somewhere in the world. I chose to make my web map on natural disasters because it is at the front of a lot of people’s minds lately, and because there is reliable, historical public data available on disasters around the world. My main goal is to make an informational and easy-to-use web page that is accessible to anyone from any educational level or background. The web page displays all of the recorded natural disasters around the world over the past 68 years, and allows you to see which parts of the world are more prone to certain types of disasters in a clear and understandable format.

Figure 1. Map displaying natural disaster data points, zoomed into Africa.

In order to make my web map I used:

  • Javascript – programming language
  • HTML/CSS – front-end programming language and stylesheets
  • Leaflet – a javascript library for interactive maps
  • JQuery – a javascript framework
  • JSCharting – a javascript charting library that creates charts using SVG (Scalable Vector Graphics)

Data & Map Creation

The data for this web map was taken from: Geocoded Disasters (GDIS) Dataset, v1 (1960-2018) from NASA’s Socioeconomic Data and Applications Centre (SEDAC). The data was originally downloaded as a Comma-separated values (CSV) file. CSV files are simple text files that allow you to easily share data, and generally take up less space.

A major hurdle in preparing this map was adding the data file onto the map, because the CSV file was so large (30,000+ records). I originally added the csv file to Mapbox Studio as a dataset, and then as tiles, but I ended up switching to Leaflet and accessing the csv file locally instead. Because the file was so large, I decided to use QGIS to sort the data by disaster type, and then loaded the resulting files in my javascript file using JQuery.
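As an aside, the split by disaster type could also have been done in the browser after parsing, rather than in QGIS. This is a sketch under the assumption that each parsed row carries a disaster-type field (the helper name groupByType and the field names here are mine, for illustration):

```javascript
// Group parsed CSV rows into one array per value of a given field,
// so each group can later be drawn as its own set of markers.
function groupByType(rows, field) {
  var groups = {};
  for (var i = 0; i < rows.length; i++) {
    var key = rows[i][field];
    if (!groups[key]) groups[key] = [];
    groups[key].push(rows[i]);
  }
  return groups;
}

// Example with the shape of data Papa Parse produces (array of objects):
var rows = [
  { id: 1, disastertype: "flood" },
  { id: 2, disastertype: "earthquake" },
  { id: 3, disastertype: "flood" }
];
var groups = groupByType(rows, "disastertype");
// groups.flood holds 2 rows, groups.earthquake holds 1
```

Pre-splitting in QGIS keeps each downloaded file small, while grouping in the browser keeps everything in one file; either way the map code ends up looping over one array per disaster type.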

Data can come in different types and formats, so it is important to convert data into a format that is useful for whatever it is you hope to extract or use it for. In order to display this data, the marker data is first read from the csv file, and then I used Papa Parse to convert the string to an array of objects. Papa Parse is a csv parsing library for javascript that allows you to parse large files on the local system or download them from the internet. Having the data in an array of objects allows you to loop through it, making it easier to access particular information. For example, when including text in the popup for the markers (Figure 2), I had to access particular fields from the disaster data, which was very easy to do as each row was an object.

Code snippet for extracting csv and creating marker and popup (I bolded the comments. Comments are just notes, they are not actually part of the code):

// Read markers data from extreme_temp.csv
$.get('./extreme_temp.csv', function (csvString) {

  // Use PapaParse to convert string to array of objects
  var data = Papa.parse(csvString, { header: true, dynamicTyping: true }).data;

  // For each row in data, create a marker and add it to the map
  for (var i in data) {
    var row = data[i];

        // create popup contents
        var customPopup = "<h1>" + row.year + " " + row.location + "<b> Extreme Temperature Event</b></h1><h2><br>Disaster Level: " + row.level + "<br>Country: " + row.country + ".</h2>"

        // specify popup options 
        var customOptions =
        {
          'maxWidth': '500',
          'className': 'custom'
        }

    var marker = L.circleMarker([row.latitude, row.longitude], {
      opacity: 0.5 // Leaflet opacity values range from 0 to 1
    }).bindPopup(customPopup, customOptions);

// show popup on hover
    marker.on('mouseover', function (e) {
      this.openPopup();
    });
    marker.on('mouseout', function (e) {
      this.closePopup();
    });

// style marker and add to map
    marker.setStyle({ fillColor: 'transparent', color: 'red' }).addTo(map);
  }

});
Figure 2. Marker Popup

I used L.circleMarker (a Leaflet vector layer) to assign a standard circular marker to each point. As you can see in Figures 1 and 3, the markers appear all over the map and are very clustered in certain areas. However, when you zoom in, as seen in Figure 3, the size of the markers adjusts and they become easier to see in the more clustered areas. The top left corner of the map contains a zoom component, as well as four square buttons vertically aligned, each assigned a continent (just 4 continents for now), which will navigate over to that continent when clicked.

Figure 3. Map zoomed in to display, marker size

The bottom left corner of the map contains the legend and toggle buttons to change the theme of the map from light to dark. Changing the theme doesn’t alter any of the data on the map; it just changes the style of the basemap. Nowadays almost every browser and web page seems to have a dark mode option, so I thought it would be neat to include one. The title, legend and theme toggles are all static, and their positions on the web page remain the same.

Another component on the web page is the ‘Disaster Fact’ box on the bottom right corner of the page. This textbox is meant to display random facts about natural disasters at a specified time interval. Ideally, I would have a variable containing an array of facts in string form, then use the setInterval() function together with a function that generates a random number between 0 and the length of the array minus 1, and use that number as an index to select one of the items from the array. For the moment, however, the map displays the first fact after the specified time interval when the page loads, and that fact then remains on the page; refreshing the page causes the function to generate another random fact.
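The rotating-fact behaviour described above can be sketched as follows. The facts array and the element id "disaster-fact" are placeholders of my own, not values from the actual page:

```javascript
// Placeholder list of facts; the real page would hold actual disaster facts.
var facts = [
  "Placeholder fact one.",
  "Placeholder fact two.",
  "Placeholder fact three."
];

// Pick a random fact: Math.random() gives [0, 1), so flooring the product
// yields an index from 0 to facts.length - 1.
function randomFact(factList) {
  var index = Math.floor(Math.random() * factList.length);
  return factList[index];
}

// In the browser, swap in a new random fact every 10 seconds:
// setInterval(function () {
//   document.getElementById("disaster-fact").textContent = randomFact(facts);
// }, 10000);
```

Because setInterval keeps calling its callback on every tick, the fact box would update continuously rather than freezing on the first fact.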

Figure 4. Pie Chart displaying Distribution of Natural Disasters

One of the components of my web map page that I will expand on is the chart. For now I added a simple pie chart using JSCharting to display the total number of disasters per disaster type over the last 68 years. Using JSCharting was fairly simple, as you can see if you take a look at the code in my GitHub. I calculated the total number of disasters for each disaster type by looking at the number of lines in each of my already divided csv files, and manually entered them as the y values. Normally, in order to calculate this data, especially if it were in one large csv file, I would use RStudio.

Something to keep in mind:

People view websites on different platforms nowadays, from laptops to tablets and iPhones. A challenge in creating web pages is that these different platforms have different screen sizes, so web pages need to be optimized to look good across screen sizes, and this is largely done using CSS.

Looking Ahead

Overall my web map is still in progress, and there are many components I need to improve upon and would like to add. I would like to add a bar chart along the bottom of the map showing the total number of disasters per year for each disaster type, with options to toggle between disaster types. I would also like to add a slider that lets you filter the markers on the map by year. A component of the map I had trouble adding was an option to hide/view marker layers: I was able to get it to work for a single marker of each disaster type, but it wouldn’t work for the entire layer, so looking ahead I will figure out how to fix that as well.

There was no major research question in making this web page; my goal was simply to make a web map that was appealing, interesting, and easy to use. I hope to expand on this map, add the components I’ve mentioned, and fix the issues I wasn’t able to figure out. Overall, making a web page can be frustrating, and there is a lot of googling and watching youtube videos involved, but making a dynamic web app is a useful skill to learn, as it allows you to convey information as specifically and creatively as you want.

Interactive Map and Border Travels

Given the chance to make a geovisualisation, I set out to bring in data at a scope that calls for adjustment and interaction to understand the geography in greater and greater detail, while still beginning the journey with an overview and general understanding of the topic at hand.

Introduction to the geovisualisation

This blog post doesn’t unveil a hidden-gem theme of border crossing, but demonstrates how an interactive map can share the insights a user might seek, without being limited to the publisher’s extents or to printed information. Border crossing was selected as the topic of interest to observe the navigation choices made at borders, putting the user in a point of view similar to that of those crossing at these points themselves, by allowing them to look at the crossing options and consider preferences.

To give the user this perspective, the first step was to locate and provide the crossing points. The borders selected were the US borders with Canada and with Mexico, a scope that could engage the viewer and provide detail, instead of limiting this surface transportation data to a single scale and extent determined by the creator rather than the user.

Border crossings are a matter largely determined by geography, and are best understood on a map rather than in any other data representation, unlike attributes such as sales data, which may still be suitable in an aspatial form, for example projected sales levels on a line graph.

To get specific, the data came from the U.S. Bureau of Transportation Statistics, and was cleaned to cover the period from the beginning of January 2010 until the end of September 2020. The data was geocoded with multiple providers, with results selected based on consistency; however, some crossing locations could not be identified.

Seal of the U.S. Bureau of Transportation Statistics

To start allowing insights for you, the viewer, the first data set appended to the map is the border locations. These are points, and they begin to identify the distribution of crossing opportunities between the North American countries. If a point could not be placed at the location of the particular office that processed the border entries, the record was assigned to the city in which the office was located. An appropriate base layer was imported from Mapbox to best display the background map information.

The range in border crossings is represented by shifts in colour gradient and symbol size. With all the points and their proportions plotted, patterns begin to emerge from the attached border attributes. These can illustrate the increases and decreases in entries, such as the California crossing points appearing larger than the entries in Montana.

Mapped Data

But is there a measure of how visited the state itself is, rather than each entry point? Yes, indeed there is. In addition to the crossing points themselves, the states they belong to have also been measured. Each state with a crossing is shown on the map with a gradient for its average crossing value. We knew that California had entry points with more crossings than the points shown in Montana, but now we can compare the states themselves, and see that California altogether still experienced more crossings at the border than Montana did, despite having fewer border entry points.
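The state-level gradient is just an aggregation of the point totals. A minimal pandas sketch of that roll-up, with illustrative numbers rather than the real BTS values:

```python
import pandas as pd

# Toy entry-point totals; averaging per state mirrors the state gradient layer.
points = pd.DataFrame({
    "State": ["California", "California", "Montana", "Montana", "Montana"],
    "Crossings": [900_000, 700_000, 50_000, 60_000, 40_000],
})
state_avg = points.groupby("State")["Crossings"].mean()
```

California's fewer but busier points still yield a far higher state average than Montana's many smaller ones.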

Could there be a way to milk just a bit more of this basic information? Yes. This is where the map begins to benefit from being interactive.

Each point and each state can be hovered over to show its calculated value, clarifying how much more or less one case had compared to another. A state may share a similar gradient and an entry point may appear the same size, but hovering over them reveals which place each location belongs to, as well as its specific crossing value. Montana has one of the most numerous sets of crossing points, with similar crossing frequencies across them; hovering over the points reveals that Sweetgrass, Montana is the most popular point along the Montana border.

Similar values along the Montana border

In fact, this is how we discover another dimension of the data. Hovering over these cases shows a list of transport modes that make up the total crossings: the sum comprises transport by trucks, trains, automobiles, buses, and pedestrians.

More available data should simply mean more to learn, and stating the transport numbers without their visuals would not share an engaging spatial understanding. With these five extra aspects of the border crossings available, the map can be made to display the distribution of each particular mode.
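Displaying one mode at a time amounts to re-keying each point by a single field of its mode breakdown. A small sketch of that selection step, using made-up ports and counts:

```python
# Hypothetical per-point mode breakdown; selecting one mode re-sizes the symbols.
crossings = [
    {"port": "Skagway, AK", "Train": 120_000, "Truck": 2_000, "Pedestrians": 30_000},
    {"port": "Calexico East, CA", "Train": 0, "Truck": 450_000, "Pedestrians": 80_000},
]

def by_mode(records, mode):
    """Return port -> crossing count for one transport mode."""
    return {r["port"]: r.get(mode, 0) for r in records}

train_layer = by_mode(crossings, "Train")
truck_layer = by_mode(crossings, "Truck")
```

Switching the `mode` argument is the data-side equivalent of toggling the map's mode filter.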

Although Alaska's points are typically among the least entered of the total border crossings, selecting the entries by train draws attention to Skagway, Alaska as one of the most used border points for crossing into the US, even though it is not connected to the mainland. The mapped display suggests a strong visual interpretation: the large entry volume at Skagway, Alaska appears related to the border crossings at Blaine, Washington, likely via the train connection between Alaska and the continental USA.

Mapping truck crossing levels (above), crossings are made going east, past the small city of Calexico. Calexico East shows a road connection between the two boundaries running in a single direction, suggesting little interaction intended along the way.

When mapping pedestrian crossings (above), these are much more popular in Calexico, an area likely dense enough to support the operation of the airport shown in its region, and one displaying an interweaving network of roads associated with everyday usage.

Overall, this is where interactive mapping applies. Borders and their entry points have relationships largely shaped by geography. Total pedestrian or personal-vehicle crossings do well to describe how attractive a region may be on one side rather than the other. Where these locations become attractive, and even the underlying causes for a crossing being selected, can be discovered in a map that is interactive for the user, exploring whatever grounds the user chooses.

While the thematic data layered on top highlights the topic, the base map can help explain the reasons behind it, and both are better understood when interactive. The aim isn’t to answer one particular question, as a static map might, but to help address a number of speculative thoughts, enabling your exploration.

Desperate Journeys

By Ibrahim T. Ghanem

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

Background:

Over the past 20 years, asylum seekers have invented many travel routes between Africa, Europe and the Middle East in order to reach a country of asylum. Many governmental and non-governmental organizations have provided information about those irregular travel routes used by asylum seekers. In this context, this geovisualization project aims at compiling and presenting two dimensions of this topic: (1) a comprehensive animated spider map presenting some of the travel routes between the above-mentioned three geographic areas; and (2) a dashboard that connects those routes to other statistics about refugees in a user-friendly interface. For these purposes, the software that best fits the project is Tableau.

Data and Technology

Tableau is well suited to creating spider maps, as it connects hubs to surrounding points and allows paths between many origins and destinations. Besides, it can accommodate multiple layers. Below is a description of the major steps in the creation of the animated map and dashboard.

Also, dashboards are very useful in combining different forms of data (i.e. pie charts, graphs, and maps); accordingly, they are used extensively in the non-profit world to present data about a certain cause. This geovisualization project applied a geocoding approach to produce the animated map and the dashboard.

The Data used to create the project included the following:

-Origins and Destinations of Refugees

-Number of Refugees hosted by each country

-Count of Refugees arriving by Sea (2010-2015)

-Demographics of Refugees arriving by Sea – 2015

Below is a brief description of the steps followed to create the project

Step 1: Data Sources:

The data was collected from the below sources.

United Nations High Commissioner for Refugees, Human Rights Watch, Vox, InfoMigrants, The Geographical Association of UK, RefWorld, Border Free Association for Human Rights, and Frontex Europa.

However, most of the data were not geocoded. Accordingly, Google Sheets was used to geocode 21 routes, and thereafter each route was given a distinguishing ID and a short description.

Step 2: Utilizing the Main Dataset:

Data is imported from an Excel sheet. In order to compute a route, Tableau requires data about origins and destinations with latitude and longitude. In that respect, the data contains the following categories:

A - Route ID: a unique path ID for each of the 21 routes;

B - Order of Points: the order of stations travelled by refugees from their country of origin to the country of asylum;

C - Year: the year in which the route was invented;

D - Latitude/Longitude: the coordinates of each station;

E - Country: the country hosting refugees;

F - Population: the number of refugees hosted in each country.
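The schema above can be illustrated with a tiny route table. The rows below are invented examples, not the project's data; sorting by route and point order reproduces the path that Tableau draws when Order of Points is placed on the Marks' Path:

```python
import pandas as pd

# Minimal route table with the fields a Tableau spider map needs;
# all values here are illustrative, not from the project's dataset.
routes = pd.DataFrame({
    "Route ID": [1, 1, 1],
    "Order of Points": [2, 1, 3],     # deliberately out of order
    "Year": [2015, 2015, 2015],
    "Latitude": [32.90, 9.00, 37.98],
    "Longitude": [13.20, 38.70, 23.73],
    "Country": ["Libya", "Ethiopia", "Greece"],
})

# Sorting by route and point order recovers the travel sequence.
path = routes.sort_values(["Route ID", "Order of Points"])["Country"].tolist()
```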

Step 3: Building the Map View:

The map view was built by putting Longitude in Columns, Latitude in Rows, and Route ID on Detail, and selecting the mark type as Line. To enhance the layout, Order of Points was added to the Marks' Path and changed to a dimension instead of SUM. Finally, to bring in the stations of travel, another layer was added by putting a second Longitude in Columns and changing it to Dual Axis. To create filtering by route and a timeline by year, Route was added to Filters while Year was added to Pages.

Step 4: Identifying Routes:

To differentiate the routes by distinct colours, the Route column was added to Colour and the default palette was changed to Tableau 20. The layer format was changed to dark to create contrast between the colours of the routes and the background.

Step 5: Editing the Map:

After finishing the map formation, a video was captured with QuickTime and edited in iMovie to be cropped and merged.

Step 6: Creating the Choropleth map and Symbology:

In another sheet, a set of Excel data (obtained from UNHCR) was uploaded to create a choropleth map displaying the number of refugees hosted by each country in 2018. Count of refugees was added to Columns while Country was added to Rows. An orange-gold colour ramp with 4 classes was applied to the Marks to indicate whether or not a country hosts a significant number of refugees. Hovering over each country displays the country's name and the number of refugees it hosts.
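The 4-class colour ramp is essentially a binning of the hosted-refugee counts. A hedged pandas sketch of that classification, using equal-interval bins (Tableau's default for a stepped colour ramp) and illustrative counts rather than the UNHCR figures:

```python
import pandas as pd

# Hypothetical refugee counts per host country; pd.cut mimics a 4-class ramp.
hosted = pd.Series({
    "Turkey": 3_700_000,
    "Jordan": 715_000,
    "Kenya": 420_000,
    "Brazil": 12_000,
})
classes = pd.cut(hosted, bins=4, labels=["low", "mid-low", "mid-high", "high"])
```

Each country falls into one of four equal-width intervals between the minimum and maximum counts, which is what the stepped gradient communicates visually.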

Step 7: Statistical Graphs:

A pie chart and a graph were added to display other statistics related to the count of refugees arriving by sea from Africa to Europe, and the demographics of those refugees. Demographics was added to Label to display the values on the charts.

Step 8: Creation of the Dashboard:

All four sheets were added to the dashboard section by dragging them into the layer view. To accommodate that amount of data and explanation, the size was set to Legal Landscape. The dashboard was titled Desperate Journeys.

Limitations

A - Tableau does not allow the map creator to change the projection of the maps; thus, the presentation of maps is limited. Below is a picture showing the final format of the dashboard:

B - Tableau has an online server that can host dashboards; nevertheless, it cannot publish animated maps. Thus, the animated map is uploaded here as a video. The link below leads the viewer to the dashboard:

https://prod-useast-a.online.tableau.com/t/desperatejourneysgeovis/views/DesperateJourneys_IbrahimGhanem_Geoviz/DesperateJourneys/ibrahim.ghanem@ryerson.ca/23c4337a-dd99-4a1b-af2e-c9f683eab62a?:display_count=n&:showVizHome=n&:origin=viz_share_link

C - Due to the unavailability of geocoded data, geocoding the refugees' migration routes took considerable time in order to find out the exact routes taken by refugees. These locations were based on the reports and maps released by the sources mentioned at the very beginning of the post.

A Shot in the Dark: Analyzing Mass Shootings in the United States, 2014-2019

By: Miranda Ramnarayan

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

The data gathered for this project was downloaded from the Gun Violence Archive (https://www.gunviolencearchive.org/), which is a not-for-profit corporation. The other dataset is the political affiliation per state, gathered by scraping this information from https://www.usa.gov/election-results. Since both of these datasets contain a “State Name” column, an inner join can be conducted to allow the two datasets to “talk” to each other.
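The inner join Tableau performs on “State Name” is equivalent to a pandas merge. A minimal sketch with invented rows (the real datasets are far larger, and the affiliation column name is an assumption):

```python
import pandas as pd

# Illustrative rows; the inner join on "State Name" links each shooting
# record to its state's political affiliation.
shootings = pd.DataFrame({
    "State Name": ["Texas", "California", "Texas"],
    "# Killed": [5, 3, 2],
})
party = pd.DataFrame({
    "State Name": ["Texas", "California"],
    "2016": ["Republican", "Democratic"],
})
joined = shootings.merge(party, on="State Name", how="inner")
```

Every shooting row keeps its state's party label; rows whose state appears in only one table would be dropped by the inner join.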

The first step is importing your excel files, and setting up that inner join.

There are four main components to this dashboard: States with Mass Shootings, States with Highest Death Count, Total Individuals Injured from Mass Shootings, and a scattergram displaying the number of individuals injured and killed. All of these components were created in Tableau worksheets and then combined on a dashboard upon completion. The following are steps on how to re-create each worksheet.

1. States with Mass Shootings

In order to create a map in Tableau, very basic geographic information is needed. In this case, drag and drop the “State” attribute under the “Dimensions” column into the empty frame. This will be the result:

In order to change the symbology from dots to polygons, select “Map” under the Marks section.

To assign the states with their correct political affiliation, simply drag and drop the associated year you want into the “Colour” box under Marks.

This map displays the states that have had mass shootings within them from 2014 to 2019. In order to automate this, simply drag and drop the “Incident Date” attribute under Pages. The custom date page is set to “Month / Year” since the dataset is so large.

This map is now complete, and when you press the play button on the right side of the window, the map will change, displaying only the states that had mass shootings within them for that month and year.

2. States with Highest Death Count

This is an automated chart that shows the Democratic and the Republican state with the highest number of individuals killed in mass shootings, as the mass-shootings map above it runs through its time series. Dragging and dropping “State” into the Text box under Marks will display all the states within the dataset. Dragging and dropping the desired year into Colour under Marks will assign each state its political party.

In order for this worksheet to display the state with the highest kill count, the following calculations have to be made once you drag and drop “# Killed” from Measures into Marks.

To link this count to each state, filter “State” to only display the one that has the maximum count for those killed.

This will automatically place “State” under Filters.
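The “highest death count” filter amounts to summing kills per state and keeping the maximum. A hedged pandas equivalent of that calculation, with invented numbers:

```python
import pandas as pd

# Hypothetical slice of incidents; mirrors the "state with the highest
# death count" calculation: sum per state, then take the maximum.
killed = pd.DataFrame({
    "State": ["Texas", "California", "Texas", "Florida"],
    "# Killed": [5, 3, 2, 6],
})
totals = killed.groupby("State")["# Killed"].sum()
top_state = totals.idxmax()
```

In Tableau the same logic is expressed as a calculated field compared against the window maximum; here Texas wins with a combined total of 7.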

Drag and drop “Incident Date” into Pages and set the filter to Month / Year, matching the format from section 1.

Format your title and font size. The result will look like:

3. Total Individuals Injured from Mass Shootings

In terms of behind the scenes editing, this graph is the easiest to replicate.

Making sure that “State Name” is above “2016” in this frame is very important, since this is telling Tableau to display each state individually in the bar graph, per year.

4. Scattergram

This graph displays the number of individuals killed and injured per month/year. It is linked to sections 1 and 2, since “Incident Date” under Pages is set to the same format. Dragging “SUM(# Killed)” into Rows and “SUM(# Injured)” into Columns sets the structure of the graph.
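The monthly sums that drive each dot of the scattergram can be sketched with a groupby on the incident month. The rows below are illustrative, not real Gun Violence Archive records:

```python
import pandas as pd

# Toy incidents; summing per month/year gives one scattergram dot per page.
inc = pd.DataFrame({
    "Incident Date": pd.to_datetime(["2015-01-04", "2015-01-20", "2015-02-02"]),
    "# Killed": [2, 3, 1],
    "# Injured": [4, 1, 7],
})
monthly = (
    inc.groupby(inc["Incident Date"].dt.to_period("M"))[["# Killed", "# Injured"]]
    .sum()
)
```

Each row of `monthly` corresponds to one position of the animated dot as the Month/Year page advances.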

In order for the dot to display the sum of individuals killed and injured, drag and drop “# Killed” into Filter and the following prompt will appear. Select “Sum” and repeat this process for “# Injured”.

Drag and drop “Incident Date” and format the date to match Section 1 and 2. This will be your output.

Dashboard Assembly

This is where Tableau allows you to be as customizable as you want. Launching a new Dashboard frame will allow you to drag and drop your worksheets into the frame. Borders, images and text boxes can be added at this point. From here, you can re-arrange/resize and adjust your inserted workbooks to make sure formatting is to your desire.  

Right clicking on the map on the dashboard and selecting “Highlight” will enable an interactive feature on the dashboard. In this case, users will be able to select a state of interest, and it will highlight that state across all workbooks on your dashboard. This will also highlight the selected state on the map, “muting” other states and only displaying that state when it fits the requirements based on the calculations set up prior.

Since all the Pages were all set to “Month/Year”, once you press “play” on the States with Mass Shootings map, the rest of the dashboard will adjust to display the filtered information.

It should be noted that Tableau does not allow the user to change the projection of any maps produced, resulting in a lack of projection customization. The final dashboard looks like this: