Investigating The Distribution of Crime by Type

Geo-Vis Project Assignment, TMU Geography, SA8905, Fall 2025


Hello everyone, and welcome to my blog!

Today’s topic addresses the distribution of crime in Toronto. I am seeking to provide the public and relevant stakeholders with a greater understanding of how, where, and why different types of crime are distributed in relation to urban features like commercial buildings, public transit, restaurants, parks, open spaces, and more. We will also look at some of the socio-economic indicators of crime, and from there identify ways to implement relevant, context-specific crime mitigation and reduction strategies.

This project investigates how crime data analysis can better inform urban planning and the distribution of social services in Toronto, Ontario. Research across diverse global contexts highlights that crime is shaped by a mix of socioeconomic, environmental, and spatial factors, and that evidence-based planning can reduce harm while improving community well-being. The following review synthesizes findings from six key studies, alongside observed crime patterns within Toronto.


To accompany the literature review, I created a 3D model that displays a range of information, including maps made in ArcGIS Pro. The data used was sourced from the Toronto Police Service Public Safety Data Portal and Toronto’s Neighbourhood Profiles from the 2021 Census. The objective is to draw insightful conclusions about which types of crime are clustering where in Toronto, which socio-economic and/or urban-infrastructure indicators are contributing to this clustering, and what solutions could be implemented to reduce overall crime rates across all of Toronto’s neighbourhoods, keeping equity in mind.

The distribution of crime across Toronto’s neighbourhoods reflects a complex interplay of socioeconomic conditions, built environment characteristics, mobility patterns, and levels of community cohesion. Understanding these geographic and social patterns is essential to informing more effective city planning, targeted service delivery, and preventive interventions. Existing research emphasizes the need for long-term, multi-approach strategies that address both immediate safety concerns and the deeper structural inequities that shape crime outcomes. Mansourihanis et al. (2024) highlight that crime is closely linked to urban deprivation, noting that inequitable access to resources and persistent neighbourhood disadvantages influence where and how crime occurs. Their work stresses the importance of integrating crime prevention with broader social and economic development initiatives to create safer, and more resilient urban environments (Mansourihanis et al., 2024).

Mansourihanis, O., Mohammad Javad, M. T., Sheikhfarshi, S., Mohseni, F., & Seyedebrahimi, E. (2024). Addressing Urban Management Challenges for Sustainable Development: Analyzing the Impact of Neighborhood Deprivation on Crime Distribution in Chicago. Societies, 14(8), 139. https://doi.org/10.3390/soc14080139

Click here to view the literature review I conducted on this topic.


Methods – Creating a 3D Interactive Crime Investigation Board

The purpose of this 3D map is to provide an interactive tool that can be regularly updated over time, allowing users to build upon research using various sources of information in varying formats (e.g., literature, images, news reports, raw data, and various map types presenting comparable socio-economic data); thread can be used to connect images and other information to associated areas on the map. The model has been designed for easy addition, removal, and connection of media items using materials like tacks, clips, and cork board. Crime incidents can be tracked and recorded in real time, allowing quick identification of where crime is clustering based on geography, socio-economic context, and proximity to different land use types and urban features like transportation networks. We can continue to record and analyze which urban features or amenities could be deterring or attracting criminal activity. This will allow for fast, context-specific crime management solutions that will ultimately help reduce overall crime rates in the city.

1. Conduct a detailed literature review. 
Here is the literature review I conducted to address this topic.

2. Download the following data from Open Data | Toronto Police Service Public Safety Data Portal. Each dataset was filtered to show points only from 2025.

- Dataset: Shooting and Firearm Discharges
- Dataset: Homicides
- Dataset: Assault
- Dataset: Auto Theft
- Dataset: Break and Enter

Toronto Neighbourhood Profiles, 2021 Census from: Neighbourhood Profiles - City of Toronto Open Data Portal
- Average Total Household Income by Neighbourhood
- Unemployment Rates by Neighbourhood

3. After examining the full data sets by year, select a time period to map. In this case, July 2025, the month with the greatest number of crimes to occur this year.
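As an illustrative sketch (not part of the original workflow), the peak month could also be identified programmatically from the exported attribute table. The `OCC_MONTH` field name and the sample records below are hypothetical:

```python
from collections import Counter

def peak_month(incidents):
    """Return the (month, count) pair with the most incidents.

    `incidents` is a list of dicts with an OCC_MONTH field, mirroring
    an attribute table exported from the crime datasets.
    """
    counts = Counter(rec["OCC_MONTH"] for rec in incidents)
    return counts.most_common(1)[0]

# Tiny illustrative sample (not real data):
sample = [
    {"OCC_MONTH": "July"}, {"OCC_MONTH": "July"},
    {"OCC_MONTH": "June"}, {"OCC_MONTH": "July"},
    {"OCC_MONTH": "May"},
]
print(peak_month(sample))  # ('July', 3)
```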

4. Map Setup
- Coordinate system: NAD 1983 UTM Zone 17N
- Rotation: -17
- Geography:
- City of Toronto, ON, Canada
- Neighbourhood boundaries from Toronto Open Data Portal

5. Add the crime incident data reports and Toronto’s Neighbourhood Boundary file.

Geospatial Analysis Tools Used
Tool - Select by attribute
Select and delete the data that we are not mapping. In this case, from the attribute table:
Select by Attribute [OCC_YEAR] [is less than] [2025], then delete the selected records.
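Outside of ArcGIS, the same selection-and-delete step can be sketched in plain Python. The record structure below is hypothetical; only the `OCC_YEAR` field mirrors the dataset:

```python
def keep_current_year(records, year=2025):
    """Mimic 'Select by Attribute: OCC_YEAR < year' followed by Delete:
    keep only the records from `year` onward."""
    return [rec for rec in records if rec["OCC_YEAR"] >= year]

rows = [{"OCC_YEAR": 2023}, {"OCC_YEAR": 2025}, {"OCC_YEAR": 2024}]
print(keep_current_year(rows))  # [{'OCC_YEAR': 2025}]
```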

Tool - Summarize within
Count the number of crime incidents within each of the neighbourhood's boundary polygons for the 5 selected crime types for preliminary analysis and mapping.
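As a rough sketch of what Summarize Within computes, the counting step can be illustrated with a simple ray-casting point-in-polygon test. This is a minimal pure-Python illustration, not the tool's actual implementation; the polygon, points, and neighbourhood name are made up:

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is point (x, y) inside polygon [(x, y), ...]?
    (Edge cases on boundaries are ignored in this sketch.)"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does the edge cross the horizontal ray extending right from pt?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def summarize_within(points, neighbourhoods):
    """Count incident points falling inside each neighbourhood polygon."""
    return {name: sum(point_in_polygon(p, poly) for p in points)
            for name, poly in neighbourhoods.items()}

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
counts = summarize_within([(1, 1), (2, 3), (9, 9)], {"Example": square})
print(counts)  # {'Example': 2}
```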

Design Tools and Map Types Used
- Dot Density
  - 2025 Crime rates, by type, annual and for July of 2025
- Heat Map
  - 2025 Crime rates, by type, annual and for July of 2025
- Choropleth
  - Average Total Household Income, City of Toronto by Neighbourhood
  - Unemployment Rates Across Toronto, 2021
- Design Tools (e.g., convert to graphics)
Based on the literature review and analysis of the presented maps, this model allows us to further analyze, visually display, and record the data and findings. It allows users to see where points are clustering and to examine the urban features, land use, and socio-economic context of cluster areas in order to address potential solutions, with equity in mind.

Supplies
- Thread,
- Painted tooth picks,
- Mini clothes pins,
- Highlighters, markers etc.
- Scissors,
- Hot glue
- Images of indicators
- Relevant/insightful literature research
- Socio-Economic Maps: Population Income, unemployment, and density
- Crime Maps: dot density of crime by type and heat maps of crime distribution by type, for the five selected crime types, covering all incidents that occurred during July 2025

Process
1. Attach cork board to poster board;

2. Cut out and place down main maps that have been printed (maps created in ArcGIS Pro, some additional design edits made in Canva);

3. Outline the large or central base map with tacks; use string to connect the tacks outlining the City of Toronto's regional boundary line.

4. Using colour painted tooth picks (alternatively, tacks may be used depending on size limitations), crime incidents can be recorded in real time, using different colours to represent different crime types.

5. Additional data can be added on and joined to other map elements over time. This data could include: images and locations of crime indicators; new literature findings; news reports; raw data; different map types presenting comparable socio-economic data; community input via email, consultation meetings, 911 calls, or surveys; graphs; tables; land use types and features; and more.

6. Thread is used to connect images and other information to associated areas on the map. In this case, blue string and tacks were used to highlight preventative crime measures and red to represent an indicator of crime.

7. Sticky notes can be used to update the day and month under “Time Stamp” (using a new poster/cork board for each year).

8. Google Earth was used, with satellite imagery, a terrain layer, and an urban features layer, to further analyze land use, type, function, and significant features like Union Station, a major public transit connection point located within Toronto’s densest and overall largest crime hot spot.

9. A satellite imagery base map in ArcGIS was used to compare large green spaces (parks, ravines, golf courses, etc.) with the distribution of each incident point on the dot map created. Select each point layer individually for optimal viewing and map analysis.

10. Video and photo content used to display the final results was created using an iPhone camera and the iMovie video editing app.

See photos and videos for reference!

Socioeconomic and Environmental Indicators of Crime

A consistent theme across the literature and my own findings is the strong connection between neighborhood deprivation and crime. Mansourihanis et al. (2024) emphasize that understanding the “relationship between urban deprivation and crime patterns” supports targeted, long-term strategies for urban safety. Concentrated poverty, population density, and low social cohesion are significant predictors of violence (Mejia & Romero, 2025; M. C. Kondo et al., 2018). Similarly, poverty and weak rule of law correlate more strongly with homicide rates than gun laws alone (Menezes & Kavita, 2025).

Environmental characteristics also influence crime distribution. Multiple studies link greater green space to reduced crime, higher social cohesion, and stronger perceptions of safety (Mejia & Romero, 2025). Exposure to green infrastructure can foster community pride and engagement, further reinforcing crime-preventive effects (Mejia & Romero, 2025). Relatedly, Stalker et al. (2020) show that community violence contributes to poor mental and physical health, with feelings of unsafety directly associated with decreased physical activity and weaker social connectedness.

Other urban form indicators—including land-use mix, connectivity, and residential density—shape mobility patterns that, in turn, affect where crime occurs. Liu, Zhao, and Wang (2025) find that property crimes concentrate in dense commercial districts and transit hubs, while violent crimes occur more often in crowded tourist areas. These patterns reflect the role of population mobility, economic activity, and social network complexity in structuring urban crime.

Crime Prevention and Community-Based Solutions

Several authors highlight the value of integrating built-environment design, green spaces, and community-driven interventions. Baran et al. (2014) show that larger parks, active recreation features, sidewalks, and intersection density all promote park use, while crime, poverty, and disorder decrease utilization. Parks and walkable environments also support psychological health and encourage social interactions that strengthen community safety. In addition, green micro-initiatives—such as community gardens or small landscaped interventions—have been found to enhance residents’ emotional connection to their neighborhoods while reducing local crime (Mejia & Romero, 2025).

At the policy level, optimizing the distribution of public facilities and tailoring safety interventions to local conditions are essential for sustainable crime prevention (Liu, Zhao, & Wang, 2025). For gun violence specifically, trauma-informed mental health care, early childhood interventions, and focused deterrence are recommended as multidimensional responses (Menezes & Kavita, 2025).

Spatial Crime Patterns in Toronto

When mapped across Toronto’s geography, the crime data revealed distinct clustering patterns that mirror many of the relationships described in the literature. Assault, shootings, and homicides form a broad U- or O-shaped distribution that aligns with neighborhoods exhibiting lower average incomes and higher unemployment rates. These patterns echo global findings on deprivation and violence.

Downtown Toronto—particularly the area surrounding Union Station—emerges as the city’s highest-density crime hotspot. This zone features extremely high connectivity, car-centric infrastructure, dense commercial and mixed land use, and limited green space. These conditions resemble those identified by Liu, Zhao, and Wang (2025), where transit hubs and high-traffic commercial districts generate elevated rates of property and violent crime. Google Earth imagery further highlights the concentration of major built-form features that attract large daily populations and mobility flows, reinforcing the clustering of assaults and break-and-enter incidents in the downtown core.

Auto theft is relatively evenly distributed across the city and shows weaker clustering around transit or commercial nodes. However, areas with lower incomes and higher unemployment still show modestly higher auto-theft levels. Break and enter incidents, by contrast, concentrate more strongly in high-income neighborhoods with lower unemployment—suggesting that offenders selectively target areas with greater material assets.

Across all crime categories, one consistent pattern is the notable absence of incidents within large green spaces such as High Park and Rouge National Urban Park. This supports the broader literature connecting green space with lower crime and improved perceptions of safety (Mejia & Romero, 2025; Baran et al., 2014). Furthermore, as described, different kinds of crime occur in low versus high income neighbourhoods emphasizing a need for context specific resolutions that take into consideration crime type and socio-economics.

Synthesis and Relevance for Toronto

Collectively, these findings indicate that crime in Toronto is shaped by intersecting socioeconomic factors, environmental features, and mobility patterns. Downtown crime clustering reflects high density, transit connectivity, and land-use complexity; outer-neighborhood violence aligns with deprivation; and green spaces consistently correspond with lower crime. These patterns mirror global research emphasizing the role of social cohesion, urban form, and economic inequality in shaping crime distribution.

Understanding these relationships is essential for planning decisions around green infrastructure investments, targeted social services, transit-area safety strategies, and neighbourhood-specific interventions. Ultimately, integrating environmental design, socioeconomic supports, and community-based programs can support safer, healthier, and more equitable outcomes for Toronto residents.

Mapping and Printing Toronto’s Socioeconomic Status Index in 3D

Menusan Anantharajah, Geovis Project Assignment, TMU Geography, SA8905, Fall 2025

Hello, this is my blog post!

My Geovis project will explore the realms of 3D mapping and printing through a multi-stage process that utilizes various tools. I have always had a small interest in 3D modelling and printing, so I selected this medium for the project. Although this is my first attempt, I was quite pleased with the process and the results.

I decided to map out a simplified Socioeconomic Status (SES) Index of Toronto’s neighbourhoods in 2021 using the following three variables:

  • Median household income
  • Percentage of population with a university degree
  • Employment rate

It should be noted that since these variables exist on different scales, they were standardized using z-scores and then scaled to a 0-100 range. The neighbourhoods are extruded by their SES index value, meaning that higher-scoring neighbourhoods are taller. I chose SES as my variable of choice since it would be interesting to physically visualize the disparities between the neighbourhoods by height.

Data Sources

Software

A variety of tools were used for this project, including:

  • Excel (calculating the SES index and formatting the table for spatial analysis)
  • ArcGIS Pro (spatially joining the neighbourhood shapefile with the SES table)
  • shp2stl* (takes the spatially joined shapefile and converts it to a 3D model)
  • Blender (used to add other elements such as title, north arrow, legend, etc.)
  • Microsoft 3D Builder** (cleaning and fixing the 3D model)
  • Ultimaker Cura (preparing the model for printing)

* shp2stl requires an older Node.js installation
** Microsoft 3D Builder is discontinued, though you can still sideload it

Process

Step 1: Calculate the SES index values from the Neighbourhood Profiles

The three SES variables (median household income, percentage of population with a university degree, employment rate) were extracted from the Neighbourhood Profiles table. Using Microsoft Excel, these variables were standardized using z-scores, then combined into a single average score, and finally rescaled to a 0-100 range. I then prepared the final table for use in ArcGIS Pro, which included the identifiers (neighbourhood names) with their corresponding SES values. After this was done, the table was exported as a .csv file and brought over to ArcGIS Pro.
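The Excel computation above can be sketched in Python. This is a minimal illustration of the same z-score-then-rescale steps; the three toy neighbourhoods and their values are made up:

```python
from statistics import mean, pstdev

def zscores(values):
    """Standardize a list of values to z-scores."""
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd for v in values]

def ses_index(income, degree_pct, employment):
    """Average the three z-scored variables per neighbourhood,
    then rescale the combined scores to a 0-100 range."""
    combined = [mean(triple) for triple in
                zip(zscores(income), zscores(degree_pct), zscores(employment))]
    lo, hi = min(combined), max(combined)
    return [100 * (c - lo) / (hi - lo) for c in combined]

# Three toy neighbourhoods (illustrative numbers only):
idx = ses_index([60_000, 90_000, 120_000], [20, 35, 50], [55, 60, 65])
print(idx)  # lowest neighbourhood scores 0.0, highest scores 100.0
```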

Step 2: Create the Spatially Joined Shapefile using ArcGIS Pro

The neighbourhood boundary file and the newly created SES table were imported into ArcGIS Pro. Using the Add Join feature, the two data sets were combined into one unified shapefile, which was then exported as a .shp file.
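Conceptually, the Add Join matches each boundary record to its row in the SES table by neighbourhood name. A minimal Python sketch of that matching (the `Neighbourhood` and `SES` field names are assumptions for illustration):

```python
import csv
from io import StringIO

def join_by_name(boundaries, ses_csv_text):
    """Mimic ArcGIS Pro's Add Join: attach each boundary record's SES
    value by matching on neighbourhood name (hypothetical field names)."""
    ses = {row["Neighbourhood"]: float(row["SES"])
           for row in csv.DictReader(StringIO(ses_csv_text))}
    # Unmatched neighbourhoods get None, like an outer join's null fields.
    return [{**b, "SES": ses.get(b["Neighbourhood"])} for b in boundaries]

csv_text = "Neighbourhood,SES\nAlderwood,62.5\nAnnex,81.0\n"
joined = join_by_name([{"Neighbourhood": "Annex"}], csv_text)
print(joined)  # [{'Neighbourhood': 'Annex', 'SES': 81.0}]
```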

The figure above shows what the SES map looks like in a two-dimensional view. The areas with lighter hues represent neighbourhoods with low SES values, while the ones in dark green represent neighbourhoods with high SES values.

Step 3: Convert the shapefile into a 3D model file using shp2stl

Before using shp2stl, make sure that you have an older version of Node.js (v11.15.0) and npm (6.7.0) installed. I would also recommend placing your shapefile in a new directory, as it can later be utilized as a Node project folder. Once the shapefile is placed in the new folder, you can open the folder in Windows Terminal (or Command Prompt) and run the following:

npm install shp2stl

This will bring in all the necessary modules into the project folder. After that, the script can be written. I created the following script:

const fs = require('fs');
const shp2stl = require('shp2stl');

shp2stl.shp2stl('TO_SES.shp', {
  width: 150,           // width of the output model
  height: 25,           // maximum extrusion height
  extraBaseHeight: 3,   // solid base added beneath the extrusions
  extrudeBy: "SES_z",   // attribute field that drives extrusion (the SES index)
  binary: true,         // write a binary (rather than ASCII) STL
  verbose: true         // log progress to the console
}, function(err, stl) {
  if (err) throw err;
  fs.writeFileSync('TO_NH_SES.stl', stl);  // save the generated 3D model
});

This script was written using Visual Studio Code; however, you can use any text editor (even Notepad works). The script was saved as a .js file in the project folder and then executed in Terminal using this:

node shapefile_convert.js

The result is a 3D model that looks like this:

Since we only have Toronto’s neighbourhoods, we have to import this into Blender and create the other elements.

Step 4: Add the Title, Legend, North Arrow and Scale Bar in Blender

The 3D model was brought into Blender, where the other map elements were created and added alongside the core model. To create the scale bar for the map, the 3D model was overlaid onto a 2D map that already contained a scale bar, as shown in the following image.

After creating the necessary elements, the model needs to be cleaned for printing.

Step 5: Cleaning the model using Microsoft 3D Builder

When importing the model into 3D Builder, you may encounter this:

Once you click to repair, the program should be able to fix various mesh errors like non-manifold edges, inverted faces or holes.

After running the repair tool, the model can be brought into Ultimaker Cura.

Step 6: Preparing the model for printing

The model was imported into Ultimaker Cura to determine the optimal printing settings. As I had to send this model to my local library to print, this step was crucial to see how the changes in the print settings (layer height, infill density, support structures) could impact the print time and quality. As the library had an 8-hour print limit, I had to ensure that the model was able to be printed out within that time limit.

With this tool, I was able to determine the best print settings (0.1 mm fine resolution, 10% infill density).

With everything finalized from my side, I sent the model over to be printed at the library; this was the result:

Overall, the print of the model was mostly successful. Most of the elements were printed out cleanly and as intended. However, the 3D text could not be printed with the same clarity, so I decided to print out the textual elements on paper and layer them on top of the 3D forms.

The following is the final resulting product:

Limitations

While I am still satisfied with the end result, there were some limitations to the model. The model still required further modifications and cleaning before printing; this was handled by the library staff at Burnhamthorpe and Central Library in Mississauga (huge shoutout to them). The text elements were also messy, which was expected given the size and width of the typeface used. One improvement to the model would be to print the elements separately and at a larger scale; this would ensure that each part is printed more clearly.

Closing Thoughts

This project was a great learning experience, especially for someone who had never tried 3D modelling and printing before. It was also interesting to see the 3D map highlighting the disparities between neighbourhoods; some neighbourhoods with high SES index values were literally towering over the disadvantaged bordering neighbourhoods. Although this project began as an experimental and exploratory endeavour, the process of 3D mapping revealed another dimension of data visualization.

References

City of Toronto. (2025). Neighbourhoods [Data set]. City of Toronto Open Data Portal. https://open.toronto.ca/dataset/neighbourhoods/ 

City of Toronto. (2023). Neighbourhood profiles [Data set]. City of Toronto Open Data Portal. https://open.toronto.ca/dataset/neighbourhood-profiles/

Demographics of Chicago Neighbourhoods and Gang Boundaries in 2024

By: Ganesha Loree

Geovis Project Assignment, TMU Geography, SA8905, Fall 2025

INTRODUCTION

Chicago is considered the most gang-occupied city in the United States, with 150,000 gang-affiliated residents representing more than 100 gangs. In 2024, 46 gangs and their boundaries across Chicago were mapped by the City of Chicago. The factors behind the formation of gangs have been of interest and a topic of research for many years all over the world (Assari et al., 2020), but for the purpose of this project, these factors are examined through the demographics of Chicago. Chicago has deep roots in gang history and culture, and not only gangs but also violent crimes are densely concentrated. Demographics such as income, education, housing, and race play a role within the neighbourhoods of Chicago and could be part of the cause of its gang history.

METHODOLOGY

Step 1: Data Preparation

Chicago Neighbourhood Census Data (2025): Over 200 socioeconomic and demographic variables for each neighbourhood were obtained from the Chicago Metropolitan Agency for Planning (CMAP) (Figure 1). In July 2025, their Community Data Snapshot portal released granular insights into population characteristics, income levels, housing, education, and employment metrics across Chicago’s neighbourhoods.

Figure 1: Census data for Chicago, 2024

Chicago Neighbourhood Boundary Files: Official geographic boundaries for Chicago neighbourhoods were downloaded from the City of Chicago’s open data portal (Figure 2). These shapefiles were used to spatially join census data and support geospatial visualization.

Figure 2: Chicago Data Portal – Neighborhood Boundaries

Chicago Gang Territory Boundaries (2024): Gang territory data from 2024 was sourced from the Chicago Police Department’s GIS portal (Figure 3). These boundaries depict areas of known gang influence and were integrated into the spatial database to support comparative analysis with neighbourhood-level census indicators.

Step 2: Technology

Once the data was downloaded, it was brought into software to visualize. A combination of technologies was used: ArcGIS Pro and SketchUp (Web). ArcGIS Pro was used to import all boundary files, where neighbourhood census data was joined to the Chicago boundary shapefile using a unique identifier, Neighbourhood Name (Figure 4).

Figure 4: ArcGIS Pro Data Join Table

Gang territory boundary polygons were overlaid with neighbourhood boundaries to enable spatial intersection and proximity analysis (Figure 5).

Figure 5: Shapefiles of Chicago’s Neighbourhoods and Gangs

Within ArcGIS Pro, the combined map of both boundary sets allowed for analysis of the neighbourhoods with the most gang boundaries. A rough sketch of these neighbourhoods was made by circling them on a clean map of Chicago, where the bigger circles show the areas with more gang territory and the stars indicate the neighbourhoods with no gang boundaries (Figure 6). The CMAP data was used to look at the demographics of the neighbourhoods with the largest gang areas and compare them to the areas with none (e.g., O’Hare).

Figure 6: Chicago neighborhood outlines with markers

SketchUp

SketchUp is a 3D modelling tool used to generate and manipulate 3D models, often in architecture and interior design. Using it here repurposed the software: by importing the Chicago neighbourhood outlines as an image, I was able to trace the neighbourhoods.

Step 3: Visualization with 3D Extrusions (Sketch Up)

The maximum height of the 3D map models was based on the total number of neighbourhoods (98) and the total number of gang records/areas (46). Determining which neighbourhoods had the most gang boundaries was based on the gang area values provided in the Gang Boundary file. The largest gang territory has a shape area of 587,893,900 m², while the smallest is 217,949 m². A similar process was applied to the neighbourhood area measurements. Neighbourhoods were raised based on the number of gang areas present within them (as previously shown in Figure 5): 5′ (feet) is the tallest neighbourhood and 4″ (inches) is the shortest neighbourhood where gangs are present; neighbourhoods that do not have gangs are not elevated.
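The elevation rule above can be sketched as a small function. Linear interpolation between 4″ and 5′ (60″) is my assumption for illustration; the post does not state the exact mapping from gang count to height:

```python
def extrusion_height(gang_count, max_count, lo_in=4.0, hi_in=60.0):
    """Map a neighbourhood's gang-area count to a model height in inches:
    0 gangs -> flat, 1 gang -> 4", the busiest neighbourhood -> 60" (5').
    Linear interpolation between the two extremes is an assumption."""
    if gang_count <= 0:
        return 0.0        # neighbourhoods without gangs are not elevated
    if max_count <= 1:
        return hi_in
    return lo_in + (hi_in - lo_in) * (gang_count - 1) / (max_count - 1)

print(extrusion_height(0, 10))   # 0.0
print(extrusion_height(1, 10))   # 4.0
print(extrusion_height(10, 10))  # 60.0
```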

A different approach was applied to the top 3 gangs map model, where the height remains the same for each gang but is placed in the neighbourhoods where that gang is present. For instance, Gangster Disciples were set at approximately 5 feet (5′ 3/16″ or 1528.8 mm), Black P Stones at almost 4 feet (3′ 7/8″ or 936.6 mm), and Latin Kings at a little over 1 foot (1′ 8 1/4″ or 514.4 mm).

Map Design

I determined which demographic factors were going to be compared with gang areas: for example, income, race, and the top 3 gangs (Gangster Disciples, Black P Stones, and Latin Kings). Two elements are present in the two demographic maps (height and colour), where colour indicates the demographic factor and height represents gang presence (Figure 7).

Figure 7: 3D map models of Chicago gangs based on Race and Income

There was limited information available about the gang areas, which consisted only of gang name, shape area, and length measurements. In terms of SketchUp’s limitations, the free web version has some restrictions: I had to manually draw the outline of Chicago’s neighbourhoods, which was time-consuming. In addition, SketchUp’s scale system was complex and was not consistent between maps. To address this, each corner of the map was measured with the Tape Measure Tool to ensure uniformity. Lastly, when the final product was viewed in augmented reality (AR), the map quality was limited: the neighbourhood outlines were gone, and the only visible parts were the coloured portions of the models.

The most visible pattern shown by the race map is that the areas with more gang activity have a large African American population (Figure 7). For the income map, indicated in green, areas with more gang territory have lower incomes, whereas the areas with higher income do not have gangs in those neighbourhoods. Among the top three gangs, Gangster Disciples have the most gang boundaries across Chicago neighbourhoods (Figure 8). The Gangster Disciples, founded in 1964 in Englewood, take up 33.6% of the gang territory area (in km²).

Figure 8: 3D map of the top 3 gangs in Chicago, 2024

FINAL PRODUCT

The final product is user-interactive through a QR code that allows viewers to look at the map models in augmented reality (AR) just by pointing their mobile device camera at the QR code below.

Bearing in mind that the AR quality has its limits, the SketchUp map models can also be viewed using the Geovis Map Models button below.

Reference

Assari, S., Boyce, S., Caldwell, C. H., Bazargan, M., & Mincy, R. (2020). Family income and gang presence in the neighborhood: Diminished returns of black families. Urban Science, 4(2), 29.

3D String Mapping and Textured Animation: An Exploration of Subway Networks in Toronto and Athens

BY: SARAH DELIMA

SA8905 – Geovis Project, MSA Fall 2024

INTRODUCTION:

Greetings everyone! For my geo-visualization project, I wanted to combine my creative skills of Do It Yourself (DIY) crafting with the technological applications utilized today. This project was an opportunity to be creative using resources I had from home as well as utilizing the awesome applications and features of Microsoft Excel, ArcGIS Online, ArcGIS Pro, and Clipchamp.

In this blog, I’ll be sharing my process for creating a 3D physical string map model. To mirror my physical model, I’ll be creating a textured animated series of maps. My models display the subway networks of two cities. The first being the City of Toronto, followed by the metropolitan area of Athens, Greece.

Follow along this tutorial to learn how I completed this project!

PROJECT BACKGROUND:

For some background, I am more familiar with Toronto’s subway network. Fortunately enough, I was able to visit Athens and explore the city by relying on their subway network. As of now, both of these cities have three subway lines, and are both undergoing construction of additional lines. My physical model displays the present subway networks to date for both cities, as the anticipated subway lines won’t be opening until 2030. Despite the hands-on creativity of the physical model, it cannot be modified or updated as easily as a virtual map. This is where I was inspired to add to my concept through a video animated map, as it visualizes the anticipated changes to both subway networks!

PHYSICAL MODEL:

Materials Used:

  • Paper (used for map tracing)
  • Pine wood slab
  • Hellman ½ inch nails
  • Small hammer
  • Assorted colour cotton string
  • Tweezers
  • Krazy glue

Methods and Process:

For the physical model, I wanted to rely on materials I had at home. I also required a blank piece of paper for tracing the boundary and subway network of both cities. This was done by acquiring open data and inputting it into ArcGIS Pro. The precise data sets used are discussed further in my virtual model making. Once the tracings were created, I taped them to a wooden base. Fortunately, I had a perfect base of pine wood. I opted for Hellman ½ inch nails, as the wood was not too thick and these nails wouldn’t split it. Using a hammer, each nail was carefully placed onto the tracing outline of the cities and subway networks.

I did have to purchase thread so that I could display each subway line to their corresponding colour. The process of placing the thread around the nails did require some patience. I cut the thread into smaller pieces to avoid knots. I then used tweezers to hold the thread to wrap around the nails. When a new thread was added, I knotted it tightly around a nail and applied krazy glue to ensure it was tightly secured. This same method was applied when securing the end of a string.

Images of threading process:

City of Toronto Map Boundary with Tracing

After threading the city boundary and subway network, the paper tracing was removed. I could then begin filling in the space within the boundary. I opted for black thread for the boundary and fill, to contrast with both the base and the colours of the subway lines. The City of Toronto thread map was completed before the Athens thread map, following the same steps. Each city sits on an opposite side of the wood base, for convenience and to avoid using a second base.

Of course, every map needs a title, legend, north arrow, projection, and scale. Once both 3D string maps were complete, the required titles and text were printed, laminated, and added to the wood base for both maps. I once again used nails, a hammer, and thread to create both legends. Below is an image of the final physical products of my maps!

FINAL PHYSICAL MODELS:

City of Toronto Subway Network Model:

Athens Metropolitan Area Metro Network Model:

VIRTUAL MODEL:

To create the virtual model, I used ArcGIS Pro to create my two maps and applied picture fill symbology for a thread-like texture. I’ll begin by discussing the open data acquired for the City of Toronto, followed by the Athens Metropolitan Area.

The City of Toronto:

Data Acquisition:

For Toronto, I relied on the City of Toronto open data portal to retrieve the Toronto Municipal Boundary and TTC Subway Network datasets. The most recent subway dataset still includes Line 3, which was kept for the purpose of the time series map. As for the anticipated Eglinton and Ontario lines, I could not find open data for these networks. However, Metrolinx has created interactive maps displaying the Ontario Line and Eglinton Crosstown (Line 5) stations and names. To note, the Eglinton Crosstown is identified as a light rail transit line, but is considered part of the TTC subway network.

To compile the coordinates of each station on both routes, I used Microsoft Excel to create two sheets, one for the Eglinton line and one for the Ontario line. To determine the location of each subway station, I used Google Maps to drop a pin in the correct location, referencing the map visual published by Metrolinx.

Ontario Line Excel Table:

Using ArcGIS Pro, I used the XY Table to Point tool to bring in the coordinates from each Excel sheet and establish points on the map. After successfully completing this, I had to connect the points into a continuous line, using the Points to Line tool, also in ArcGIS Pro.

XY Table to Point tool and Points to Line tool used to add coordinates to map as points and connect points into a continuous line to represent the subway route:
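Conceptually, the two tools take a table of ordered station coordinates, turn each row into a point, and join the points in order into a polyline. A minimal pure-Python sketch of that idea (the station names and coordinates below are invented for illustration, not taken from the actual Metrolinx data):

```python
# Sketch of what "XY Table to Point" and "Points to Line" do conceptually:
# rows of coordinates become ordered points, which become one continuous line.
# Station names and coordinates are made up for illustration.
from math import asin, cos, radians, sin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between two (lon, lat) points."""
    lon1, lat1, lon2, lat2 = map(radians, (p[0], p[1], q[0], q[1]))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Rows as they might appear in the Excel sheet: (station, longitude, latitude)
rows = [
    ("Station A", -79.39, 43.64),
    ("Station B", -79.37, 43.65),
    ("Station C", -79.35, 43.66),
]

points = [(lon, lat) for _, lon, lat in rows]         # "XY Table to Point"
line = list(zip(points, points[1:]))                  # "Points to Line": ordered segments
length_km = sum(haversine_km(a, b) for a, b in line)  # rough route length
print(f"{len(points)} points, {len(line)} segments, {length_km:.1f} km")
```

The key detail, as in ArcGIS Pro, is that the row order in the table determines the shape of the resulting line.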

After achieving this, I had to clip the subway routes to fall within the boundaries of the City of Toronto and the Athens Metropolitan Area. I used the Pairwise Clip tool in the Geoprocessing pane to achieve this.

Geoprocessing Pairwise Clip tool parameters used. Note: the input features were the subway lines, with the city boundary as the clip features.
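Clipping trims features to the portion that falls inside another feature’s boundary. As a rough illustration of the underlying idea (not the Pairwise Clip implementation itself, which handles arbitrary polygon boundaries), here is the classic Liang-Barsky algorithm clipping one line segment to a rectangular boundary:

```python
# Illustration of clipping: keep only the part of a segment inside a boundary.
# Liang-Barsky parametric clipping against an axis-aligned rectangle.
def clip_segment(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Return the clipped segment as ((x, y), (x, y)), or None if fully outside."""
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    # Each (p, q) pair tests the segment against one rectangle edge.
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:              # parallel to this edge and outside it
                return None
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)    # entering intersection
            else:
                t1 = min(t1, t)    # leaving intersection
    if t0 > t1:
        return None                # no overlap with the rectangle
    return ((x0 + t0 * dx, y0 + t0 * dy), (x0 + t1 * dx, y0 + t1 * dy))

# A segment crossing a unit square is trimmed to the part from (0, 0.5) to (1, 0.5):
print(clip_segment(-1, 0.5, 2, 0.5, 0, 0, 1, 1))
```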

Athens Metropolitan Area:

Data Acquisition:

For Athens, I was able to access open data from Athens GeoNode. I imported the following layers into ArcGIS Online: Athens Metropolitan Area, Athens Subway Network, and the proposed Athens Line 4 Network. I did have to make minor adjustments to the data, as the Athens Metropolitan Area dataset also displays the neighbourhood boundaries; for the purpose of this project, only the outer boundary was necessary. To overcome this, I used the Merge option under Modify Features to combine all the individual polygons within the metropolitan boundary into one. I also had to use the Pairwise Clip tool once again, as the Line 4 network extends beyond the metropolitan boundary, and thus beyond the area of study for this project.

Adding Texture Symbology:

ArcGIS has a variety of tools and features that can enhance a map’s creativity and visualization. For this project, I was inspired by an Esri Yarn Map Tutorial. Given that the physical model used thread, I wanted to create a textured map with thread. To achieve this, I utilized the public folder provided with the tutorial, which includes portable network graphics (.png) cutouts of several fabrics as well as pen and pencil textures. To best mirror my physical model, I used a thread .png.

ESRI yarn map tutorial public folder:

I added the thread .png images by replacing the solid fill of the boundaries and subway networks with a picture fill. This symbology works best with a .png image for lines, as it blends seamlessly with the base and surrounding features of the map. The thread .png uploads as white, so I was able to recolour it to match each boundary or subway line without distorting the texture it provides.

For both the Toronto and Athens maps, the picture fill for each subway line and boundary was set to a thread .png in its corresponding colour. The boundaries of both maps were set to black, as in the physical model, and the subway lines likewise mirror the physical model, which is inspired by the existing and future colours used for the subway routes. Below is the picture symbology with the thread .png selected and a tint applied for the subway lines.

City of Toronto subway network with picture fill of thread symbology applied:

The basemap was also altered, since the physical model sits on a wood base. To mirror that, I extracted a Global Background layer from ArcGIS Online and modified it using the picture fill to upload a high-resolution image of pine wood as the basemap for this model. For the city boundaries of both maps, the thread .png imagery was also applied with a black tint.

PUTTING IT ALL TOGETHER:

After creating both maps for Toronto and Athens, it was time to put them into an animation! The goal of the animation was to display each route and its opening year(s), visually conveying the evolution of the subway systems, as my physical model merely captures the current networks.

I did have to play around with the layers to capture each subway line individually. The current subway network data for both Toronto and Athens contain all three routes in one layer, so I had to isolate each route so it could be added to the time lapse according to its initial opening date and most recent expansion. To achieve this, I set a Definition Query for each current subway route while creating the animation.

Definition query tool accessed under layer properties:
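A definition query is simply a SQL where-clause evaluated against the layer’s attribute table, so each keyframe’s layer draws only the matching route. The field and value names below are hypothetical; the actual names depend on each dataset’s schema:

```sql
-- Show only one route per layer (hypothetical field/value names)
ROUTE_NAME = 'Line 1'

-- Or show everything open by a given keyframe's year
OPEN_YEAR <= 1978
```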

Once I had added each keyframe in order of the evolution of each subway route, I created a map layout for each map to add the required text and titles, as I did with the physical model. The layouts were then exported to Microsoft Clipchamp to create the video animation. I imported each map layout in .png format, then added transitions between my maps, as well as sound effects!

CITY OF TORONTO SUBWAY NETWORK TIMELINE:

Geovis Project, TMU Geography, SA8905 Sarah Delima

(@s1delima.bsky.social) 2024-11-19T15:05:37.007Z

ATHENS METROPOLITAN AREA METRO TIMELINE:

Geovis Project, TMU Geography, SA8905 Sarah Delima

(@s1delima.bsky.social) 2024-11-19T15:12:18.523Z

LIMITATIONS: 

While this project allowed me to be creative with both my physical and virtual models, it did present certain limitations. A notable limitation of the physical model is that it is a static visual representation of the subway networks: unlike the virtual map, it cannot easily be modified or updated as the networks change.

As for the virtual map, although open data was accessible for some of the subway routes, I did have to manually enter XY coordinates for the future subway networks, referencing reputable maps of the anticipated routes to ensure accuracy. Furthermore, given my limited timeline, I was unable to map the proposed extensions of current subway routes; rather, I focused on routes currently under construction with an anticipated completion date.

CONCLUSION: 

Although I grew up applying my creativity through creating homemade crafts, technology and applications such as ArcGIS allow for creativity to be expressed on a virtual level. Overall, the concept behind this project is an ode to the evolution of mapping, from physical carvings to the virtual cartographic and geo-visualization applications utilized today.

Visualizing Aerial Photogrammetry to Minecraft Java Edition 1.21.1

Andrea Santoso-Pardi
SA8905 Geovis project, Fall 2024

Introduction

Bringing aerial photogrammetry into Minecraft builds is an interesting way to combine real-world data with a video game that many people play. Adding aerial photogrammetry of a building or city is a way to get people interested in GIS technology, and it can be used for accessibility reasons, helping people understand where different buildings are in the world. This workflow will introduce the process of finding aerial building photogrammetry, processing the .obj file with Blender plugins (BlockBlender 1.41 and BlockBlender to Minecraft .Schem 1.42), exporting it as a .schem file for use in single-player Minecraft Java Edition 1.21.1, using Litematica to paste the schematic, converting the model from latitude and longitude coordinates to Minecraft coordinates, and editing the schematic.

List of things you will need for this

  • Photogrammetry – preferably one that is watertight with no holes. If holes are present, one will have to manually close the holes.
  • Blender 3.6.2 – a free 3D modelling software. This does not work with the latest release, 4.3, as of the time of writing
    • Addons to use:
      • BlockBlender 1.41 ($20 Version) – paid for by the TMU Library Collaboratory, used to convert the photogrammetry into Minecraft block textures
      • BlockBlender to Minecraft .Schem 1.42 – used to export the file into .schem file, a file which minecraft can read
  • Minecraft Java Edition ($29.99) – a video game played on a computer. This is different from Minecraft Bedrock Edition

Gathering Data: What is Aerial Photogrammetry & What is the best model to use?

Aerial photogrammetry is a technique that uses overlapping photographs captured from above and at various angles to create accurate, measurable 3D models or maps of real-world landscapes, structures, or objects. Photogrammetry is also becoming far more accessible; models can now be created using just a phone camera. The data processing for drone imagery of a building produces:
Point clouds, which are dense collections of points representing the object or terrain in 3D space, and 3D meshes, which are surfaces created by connecting points into a polygonal network. The polygonal network of aerial building photogrammetry usually consists of many triangles.

If you are going to search for a photogrammetry model to use, here is what made me choose this one of a government building, and how I knew it was photogrammetry.

  1. Large number of triangles and vertices. The model had 1.5 million triangles and 807.4k vertices. 3D models made in 3D modelling software will have lower counts of both, in the tens of thousands. This is how I knew it was photogrammetry.
  2. Minimal clean-up. There was little to no clean-up required for the model to be put into Minecraft. Of course, if you do not mind that a lot of clean-up needs to happen before converting the photogrammetry into blocks, you can choose such a model, but know it will take hours depending on how many holes it has.
    • I spent too many hours trying to clean up the Kerr Hall photogrammetry, and it still had all its holes. If you want to do Kerr Hall, please contact Facilities for campus data (floor plans and walls) showing what it is supposed to look like outside, to ensure the trees aren’t in the photogrammetry. Then use the Blender Architecture and BlenderGIS plugins to scale the building accordingly
  3. States the location/coordinates. If you want the elevation of the model, you will need to know where it is geolocated in the world. Having the coordinates makes this process easier in BlenderGIS.
  4. Minimal/zero objects around the walls of the building. When capturing photogrammetry, objects too close to a wall can merge with it. Things like trees make it very hard to get a clear view of the wall, to the point that there might not even be a wall in the photogrammetry.
    • The topology of trees tends to create many tiny holes instead. Making sure no objects are around the building ensures the walls will be visible in the final product. Do a quick 360 of the photogrammetry to confirm this is the case for the one you want
  5. Ensure the model can be downloaded as an .obj file. For BlockBlender to work, the building’s textures need to be image files so BlockBlender can assign a block to each pixel.
  6. Consistent lighting all around. If different areas of the building have different lighting, it does not make for a consistent model, as I didn’t want to change the brightness of the photos.

When exporting the model, I chose the OBJ format as I knew it was compatible with the BlockBlender add-on.

When exporting, ensure you know where the file downloads to. Extra steps like unzipping the file may be needed depending on how it is packaged.

Blender

Blender is a free 3D modelling software, chosen for its highly customizable editing options. If you haven’t used Blender before, I suggest learning the basic controls; this is a playlist to help understand each function.

Installing Addons

Download all the files you need as .zip files.
Go to Edit > Preferences > Install From Disk and import the .zip files of the add-ons. Make sure you save your preferences. As a reminder, the ones needed for this tutorial are BlockBlender 1.41 ($20 Version) and BlockBlender to Minecraft .Schem 1.42.

Import & Cleaning Up the .obj Model

To import the model, go to File > Import > Wavefront OBJ.
The file does not have to be an .obj to work, but it does need textures that are separate from the 3D model if you want to use the BlockBlender add-on.

Import the same model twice: one to turn into Minecraft blocks and the other to use as a reference. Put them into different collections; you can name them “Reference” and “Editing”. Press M to create two separate collections, one for each model.

To have the model ready for use in BlockBlender, it has to have a solid, watertight mesh. In short, this means the mesh needs to have closed edges. It’s not necessary to learn the details if your 3D model requires minimal clean-up, but if you want to understand more of what I mean, this resource might be helpful: https://davidstutz.de/a-formal-definition-of-watertight-meshes/

Go into Edit Mode. Click on the model (it should have an orange outline) and enter Edit Mode (see the top left corner). Alternatively, you can hit Tab to switch between Edit and Object Mode.

Press A to Select All

Go Above into Select > Select Loops > Select Boundary Loop

It should look like this afterwards, with only the boundary loops selected

Press Alt + F to fill in the faces.
If you look underneath the model, you can see how this makes the mesh watertight.

Before pressing Alt + F: model viewed from below, with boundary loops selected, in Blender 3.6.2
After pressing Alt + F: model viewed from below, with boundary loops selected, in Blender 3.6.2

You can now exit edit mode. You can see in Object mode how the hole in the model is now enclosed. This has created a watertight solid mesh.

Model Before Edits, viewed from below in Blender 3.6.2

Model After Edits, viewed from below in Blender 3.6.2


You can clean up models with holes the same way. For complex models, however, select the area around the hole instead of selecting all.
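Watertightness also has a simple combinatorial test: in a closed polygon mesh, every edge is shared by exactly two faces, and edges belonging to only one face are exactly the boundary-loop edges that Select Boundary Loop picks out. A small sketch of that check (the tetrahedron here is purely illustrative, not the building model):

```python
# Sketch of a watertightness check: in a closed (watertight) mesh, every edge
# belongs to exactly two faces. Edges owned by only one face form the
# boundary loops that Blender's Select Boundary Loop highlights.
from collections import Counter

def boundary_edges(faces):
    """Return edges that belong to exactly one face (empty if watertight)."""
    counts = Counter()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            counts[frozenset((a, b))] += 1          # edge direction doesn't matter
    return [tuple(sorted(e)) for e, n in counts.items() if n == 1]

closed = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]  # tetrahedron: watertight
open_mesh = closed[:-1]                                 # remove one face: a hole

print(boundary_edges(closed))              # [] -> watertight
print(sorted(boundary_edges(open_mesh)))   # the hole's three boundary edges
```

Filling the reported boundary edges with new faces (what Alt + F does) is what closes the hole.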

If you would like a purely visual explanation, here is a video. Do not switch over to Sculpt Mode or enable Dyntopo, as you will lose the textures, which are needed for BlockBlender. If you do accidentally enable Dyntopo, Ctrl + Z can be used to undo, or you can copy and paste your reference and redo this section.

BlockBlender

BlockBlender is an add-on for Blender created by Joey Carolino. If you want to see how BlockBlender is used, below is a YouTube video covering more of its functions. There are free and paid versions of BlockBlender, so if you cannot contact the Library Collaboratory to use the computer with the paid version, you can use the free version.

Using Blockblender

Before doing this step, save your work to ensure nothing is lost.
Select the model and press Turn Selected Into Blocks. This will take a while to fully load; when it does, the model will look like glass. If Blender becomes too laggy, exit Blender without saving. You can reduce the size of your model before doing this section to ensure you can add all the textures needed.

To find out the image IDs and what order to use them in, go to Material Properties (the icon looks like a red circle).

The names of the photos are shown; to ensure the model looks like the picture, you must apply them in that order, or it will not look like the reference model.

Here is what the Blockblender Model looks like

From here, BlockBlender has different tools to choose the block selection. Each block is categorized into these areas in the Collections area. You can select individual blocks and move them into the Unused collection by dragging and dropping; alternatively, hold Ctrl to select multiple blocks to drag and drop.

I also felt that the scale of 1 block = 1 m did not give enough detail, so the block size was changed to 0.5 m.

The final model I ended up going with is below. Although it is not perfect, I can manually edit it, or use Litematica or Minecraft commands afterwards. It is hard to show the workflow with just pictures, so I highly suggest the video above to see more of the functionality.

Government building when converted into Minecraft blocks using Blockblender 1.4.2. The N-Panel of blockblender is to the right of the screen

Blockblender to .Schem

This add-on was created by EpicSpartanRyan#8948 on Discord; special thanks to them. They are also available for hire if someone wants to put buildings into Minecraft to make a campus server, offering a 30-minute free consultation and aiming to respond within 12 hours.

Putting this into a .schem file allows it to be read in a format that Minecraft understands.

To quickly see how exporting and importing into Minecraft would work, but using WorldEdit on a multiplayer server, please see their video below. It also compares what the textures look like in Blender with how they appear inside Minecraft.

Using Blockblender to .Schem

To prepare the file for export, uncheck “Make Instances Real”.

Click the model and press Convert to Mesh in the N-panel to make the mesh look more like Minecraft blocks rather than triangles. You can check whether the mesh has changed by selecting the object and going into Edit Mode, or by looking at the viewport wireframe.

Click the model, press Ctrl + A, and apply All Transforms. This will ensure all the textures are preserved.

The model with the viewport wire frame and the menu to press

Next, go to File > Export > Minecraft (.schem), or press Export as schem in the N-panel BlockBlender options. The N-panel can be seen in the previous section.

Save the file under whatever name you want, but ensure the .schem file is saved to your schematics folder; this saves time trying to find the model later. The folder can be found by searching %appdata% in your file path bar. The file path should be:
C:\Users\[YourComputerProfileName]\AppData\Roaming\.minecraft\schematics

If a schematics folder is not present, make one inside the .minecraft folder

Minecraft

Installing Minecraft, Fabric Loader and Mods

If you need help downloading Minecraft, look at this article: https://www.minecraft.net/en-us/updates/instructions . I bought Minecraft in 2013, so I’m unsure what buying and downloading Minecraft is like now, as I refuse to buy something I already have. This video may also be helpful; I have not followed along with it, but I did watch it to ensure it makes sense.

Fabric Loader

Fabric Loader is used as a way to change the Minecraft experience from vanilla (default Minecraft) to whatever experience you want by loading other mods. It acts as a bridge between the game and the mods you want to use.

To download, choose the installer that works best for your device. For me that was Download for Windows x64, the latest version of Fabric Loader, named fabric-installer-1.0.1 (this may change in the future).
Run the installer until it opens to the screen shown here. Since I am not running Fabric on a server but on a client (single player, usually), I installed it for Minecraft 1.21.1 with the Latest Loader Version.

Mods: Litematica and MaLiLib

Before entering Minecraft, download the mods and add them to your mods folder. You do not need to do anything to the mods after they are downloaded except move them into the Minecraft mods folder.

The general pathway would be C:\Users\[YourComputerProfileName]\AppData\Roaming\.minecraft\mods
The files should be kept as the downloaded archives; do not extract them.

  • Litematica (litematica-fabric-1.21.1-0.19.50)
  • MaLiLib (malilib-fabric-1.21.1-0.21.0)
View of My Mods Folder

Launching Minecraft Java

The Minecraft Launcher should show the Fabric loader like this

Ensure the loader is changed to fabric-loader-1.21.1 so the mods will be attached. Once it is changed, press the big green button that says Play.

Create a New World

This world is just for importing the model into Minecraft Java 1.21.1 single-player, so I went into Singleplayer > Create New World. Here are the options chosen:
Game Tab
Game mode : Creative
Difficulty: Peaceful
Allow Commands On
World Tab
World Type : Superflat
Generate Structures : Off
Bonus Chest : Off

Once you have the options you like, you can create the new world.

Using Litematica

The building can be placed in any world using the Litematica mod. If you have any trouble using it, How To Use Litematica by @ryanthescion helped a lot in learning the basic commands.

In Litematica, a Minecraft stick is used to toggle between modes. To get a stick, press E to open the inventory/creative menu and search for Stick (it opens to the search automatically), or find it under the Ingredients tab.

Left-click and drag the stick into your hotbar (the area where you can see the multiple wooden sticks) and exit the inventory by pressing E.
Note that one stick is enough for the mod to work, as it has to be held in your hand to use; the multiple sticks shown are just to indicate where the hotbar is.

With the stick in your hand, you can toggle between the nine different modes by pressing Ctrl + Scroll Wheel.

Adding The Model

In short, I opened the Litematica menu by pressing M and went to the Configuration menu.

Hotkeys is a place to create custom keyboard and/or mouse shortcuts for different commands. Create a shortcut that has no existing binding. The tutorial used J + K for “executefunction” to paste the building, so I followed the tutorial and used those too; now I press J and K to execute the command. If there is a problem with the hotkeys used, they turn a yellow/orange colour instead of white.


Next, I went back to the Litematica menu, went to Load Schematics, and added the folder path where I keep the schematics. I selected the schematic file I wanted, then pressed Load Schematic at the bottom of the page. The government building was thus pasted into Minecraft.

Converting Latitude and Longitude to Minecraft Coordinates

In the Litematica menu, press the Loaded Schematics button, then go to Schematic Placements > Configure > Schematic placement, where you can change the building to sit at the same coordinates as in real life. Y is 18 because the “What is My Elevation” website states 9 m at these coordinates; since 1 block equals 0.5 m in our model, 9 m divided by 0.5 gives 18 blocks.

The X and Z coordinates come from converting Earth’s geographic coordinate system into Minecraft’s Cartesian coordinate system. The conversion uses the WGS84 coordinate system (World Geodetic System 1984), and assumes both origins start at (0, 0, 0) and that 1 block = 0.5 metres. If 1 degree of latitude (and 1 degree of longitude at the equator) is 111,320 metres for this projection2:
Blocks per degree of latitude = 111,320 / 0.5 = 222,640
Blocks per degree of longitude = [111,320 × cos(latitude in radians)] / 0.5

To align this with real-world geographic coordinates (latitude and longitude), one needs to define a reference point. Here, the real-world origin (0° latitude, 0° longitude) is set to correspond to X = 0 and Z = 0 in Minecraft. The formulas below are used to calculate the offsets in latitude and longitude from this origin.

The formulas to convert to Minecraft are:
Minecraft Z Coordinate = [ΔLatitude × 111,320] / [Scale (metres per block)]
Minecraft X Coordinate = [ΔLongitude × (111,320 × cos(Target Latitude in radians))] / [Scale (metres per block)]
Minecraft Y Coordinate = Elevation in metres / [Scale (metres per block)]

Where:
ΔLatitude = Target Latitude − Origin Latitude
ΔLongitude = Target Longitude − Origin Longitude
Target Latitude is 47.621474856679534°
Target Longitude is −65.65655551636287°
Origin is 0° latitude, 0° longitude
Scale (metres per block) = 0.5 metres

Using the cosine makes the conversion better reflect real-world distances, as Earth is a spheroid and Minecraft is flat.

Therefore the Minecraft coordinates are

Minecraft X Coordinates = −9,858,611
Minecraft Y Coordinates = 18
Minecraft Z Coordinates = 10,606,309
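The conversion can be sketched in a few lines of Python. This is an illustrative recomputation under the stated assumptions (origin at 0°/0°, 0.5 m per block, and the cosine taken at the target latitude, which best matches the figures given); it yields values close to, but not exactly matching, the coordinates listed above, so the exact numbers should be treated as approximate:

```python
# Sketch of the latitude/longitude -> Minecraft coordinate conversion,
# assuming origin (0° lat, 0° lon) maps to (X, Z) = (0, 0), 1 block = 0.5 m,
# and the longitude shrink factor cos(latitude) taken at the target latitude.
from math import cos, radians

METRES_PER_DEG = 111_320  # metres per degree of latitude (and longitude at the equator)

def to_minecraft(lat, lon, elev_m, scale=0.5, origin=(0.0, 0.0)):
    d_lat = lat - origin[0]
    d_lon = lon - origin[1]
    z = d_lat * METRES_PER_DEG / scale                      # north-south offset in blocks
    x = d_lon * METRES_PER_DEG * cos(radians(lat)) / scale  # east-west offset in blocks
    y = elev_m / scale                                      # elevation in blocks
    return round(x), round(y), round(z)

x, y, z = to_minecraft(47.621474856679534, -65.65655551636287, 9)
print(x, y, z)  # approximately (-9852755, 18, 10602445)
```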


Note: You will have to teleport to where the model is placed. Use /tp <playername> x y z to get to where the building is loaded.

Fixing The Model

There were many edits that needed to happen. I fixed the trees to actually have trunks, as the textures did not load them in properly, using what was generated as a guide for what the tree shapes should look like.

I also tried to change the pattern on the wall to more accurately reflect what it looks like in the photogrammetry

Blender Render of the 3D Model (before using Blockblender) compared with what I changed it to in Minecraft
Helpful Tips

/time set day
/effect give <targets> <effect> infinite [<amplifier>] [<hideParticles>]

To edit the schematic, Minecraft Litematica schematic editing by @waynestir on YouTube was the most helpful; it allowed me to replace blocks and keep them in the schematic.


Limitations

The core limitation of this approach (taking aerial building photogrammetry, converting it to Minecraft blocks in Blender, and then converting the latitude and longitude to Minecraft coordinates to place the building in exactly the right spot) is that Minecraft is a fixed-grid, cubic block representation, which will always lack the detail of the 3D aerial photogrammetry model. Finding a scale that preserves geolocation correctness and building height in Minecraft is a fine-detail task that has to balance artistry with reality.

In BlockBlender, fine details like the antennae at the top of the building don’t come through, as it only uses blocks for the representation, so railings, window frames, and more can be lost or require block substitutes.

Photogrammetry can be very complex and very noisy, with shadows that may make BlockBlender interpret the data incorrectly. BlockBlender as an add-on is limited to the default Minecraft block colours, which may not accurately reflect what real-world surfaces look like or are made of.

The Minecraft height limit can be an issue depending on how tall the building you want to convert is.

Geolocating the building from latitude and longitude to Minecraft coordinates will not work over much larger extents if the scale is kept at 1 block = 0.5 m, as the Minecraft world is only 30 million by 30 million blocks.

Litematica also has limited functionality; beyond a certain point, one has to do a lot more manually or use another plugin.

Conclusion

This workflow is an excellent way to bring real-world data into Minecraft, but it requires balancing the complexity of photogrammetry models with Minecraft’s block-based limitations. Understanding and addressing these challenges produces detailed, manageable builds that work well in Minecraft’s unique environment.

Footnotes

  1. “Canadian Government Building Photogrammetry” (https://skfb.ly/oLZyt) by Air Digital Historical Scanning Archive is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/) ↩︎
  2. https://www.esri.com/arcgis-blog/products/arcgis-desktop/defense/determining-a-z-factor-for-scaling-linear-elevation-units-to-match-geographic-coordinate-values/ ↩︎

Visualizing select waterfalls of Hamilton, Ontario through 3D modelling using Blender and BlenderGIS

By: Darith Tran|Geovisualization Project Assignment|TMU Geography|SA8905|Fall 2024

Introduction/Background

The city of Hamilton, Ontario is home to many trails and waterfalls and offers many scenic, nature-focused areas. The city is situated along the Niagara Escarpment, which creates unique topography and is the main reason for the high frequency of waterfalls across the city. Hamilton is dubbed the waterfall capital of the world, being home to over 100 waterfalls within the city’s boundaries. Despite this, Hamilton still flies under the radar for tourists, as it sits between two other major destinations that see higher tourist traffic: Niagara Falls (home to one of the world’s best-known waterfalls) and Toronto (popular for the CN Tower and its hustle-and-bustle city atmosphere).

The main purpose of this project was to increase awareness of the beauty of this Southern Ontario wonder and to provide prospective visitors, or even citizens of Hamilton, with an interactive story map offering general information on the trails connected to the waterfalls and details of the waterfalls themselves. The 3D modelling aspect of the project aims to provide a unique visualization of how the waterfalls look, offering a quick yet creative visual for those considering visiting the city to see the waterfalls in person.

Data, Processing and Workflow (Blender + OpenTopography DEMs)

The first step of this project was to obtain DEMs for the region of interest (Hamilton, Ontario) to be used as the foundation of the 3D model. The primary software used was Blender (a 3D modelling software), augmented by a GIS-oriented plugin called “BlenderGIS”, created by GitHub user domlysz, which allows users to import GIS files and elements such as shapefiles and basemaps directly into the Blender editing and modelling pane. The plugin also allows users to load and access DEMs, sourced through OpenTopography, straight into Blender to be extracted and edited.

The first step is to open Blender and navigate to the GIS tab in Object Mode:

Under the GIS tab, there are many options and hovering over “web geodata” prompts the following options:

In this case, we want to start with a basemap, and the plugin has many sources available, including the default Google Maps, Esri basemaps, and OpenStreetMap (Google Satellite was used for this project).

Once the basemap is loaded into the Blender plane, I zoomed into area of interest #1, the Dundas Peak region, which is home to both Tew’s Falls and Webster’s Falls. The screenshot below shows the 2D image of Tew’s Falls in the object plane:

Once an area of interest is defined and all information is loaded, the elevation model is requested to generate the 3D plane of the land region:

The screenshot above shows the general 3D plane created from a 30m DEM extracted from OpenTopography through the BlenderGIS plugin. The screenshot below showcases the modification of the 3D plane with the extrusion tool, which adds depth and edges to create the waterfall look. Below is the foundation used specifically for Tew’s Falls.
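To make the idea concrete, here is a minimal pure-Python sketch of what conceptually happens when a DEM grid becomes a 3D plane: each grid cell's elevation becomes a vertex Z value. This is an illustration only, not BlenderGIS's actual code.

```python
# Conceptual sketch only (NOT BlenderGIS internals): a DEM is a grid of
# elevations, and a terrain mesh sets each vertex's Z from its grid cell.
def dem_to_vertices(dem, cell_size_m, z_scale=1.0):
    """Turn a 2D elevation grid into (x, y, z) vertex tuples."""
    verts = []
    for row_i, row in enumerate(dem):
        for col_i, elev in enumerate(row):
            verts.append((col_i * cell_size_m,
                          row_i * cell_size_m,
                          elev * z_scale))
    return verts

# A tiny 2x2 "DEM" at the 30 m cell size used here:
verts = dem_to_vertices([[90.0, 95.0], [100.0, 110.0]], cell_size_m=30.0)
```

A `z_scale` above 1.0 would exaggerate the relief, which is often done for small-area terrain models like these.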

Following this, imagery from the basemap was merged with the 3D extruded plane to produce the 3D render of the waterfall plane. To add the waterfall animation, the Physics module was activated, allowing various types of motion to be added to the 3D plane. “Fluid” was selected with the outflow behaviour, and the simulation was overlaid onto the 3D plane to simulate water flowing down the waterfall.

These steps were then essentially repeated for Webster’s Falls and Devil’s Punchbowl waterfalls to produce 3D models with waterflow animations!

Link to ArcGIS Story Map: https://arcg.is/05Lr8T

Conclusion and Limitations

Overall, I found this to be a fun way to visualize the waterfalls of Hamilton, Ontario, and adding the rendered product directly into ArcGIS Story Maps makes for an immersive experience. The biggest learning curve was Blender itself, as I had never used the software before and had only briefly explored 3D modelling in the past. Originally, I planned to create renders and animations of 10 waterfalls in Hamilton; however, this became a daunting task once I realized the rendering and export times after completing the 3 models shown in the Story Map. Additionally, the render quality was rather low, since 2D imagery was interpolated onto a 3D plane, which caused some distortions and warped shapes that would require further processing.

Visualizing Population on a 3D-Printed Terrain of Ontario

Xingyu Zeng

Geovisual Project Assignment @RyersonGeo, SA8905, Fall 2022

Introduction

3D visualization is an essential and popular category of geovisualization. After a period of development, 3D printing technology has become readily available in daily life. As a result, a 3D-printable geovisualization project is relatively easy to implement at the individual level. Compared to electronic 3D models, physical 3D-printed models also have obvious advantages when explaining results to non-professional audiences.

Data and Software

3D model in Materialise Magics
  • Data Source: Open Topography – Global Multi-Resolution Topography (GMRT) Data Synthesis
  • DEM data to a 3D surface: AccuTrans 3D – provides translation of 3D geometry between the formats used by many 3D modelling programs.
  • Converting a 3D surface to a solid: Materialise Magics – converts the surface to a solid with thickness; the model is then cut according to the boundaries of the 5 Transitional Regions of Ontario. Different thicknesses represent the differences in total population between regions (e.g. the central region has a population of 5 million and a thickness of 10 mm; the west region has a population of 4 million and a thickness of 8 mm).
  • Slicing & printing: This step is indispensable for 3D printing, but because of the wide variety of printer brands on the market, most with their own manufacturer-developed slicing software, the specific process varies. One thing is common to all: after slicing, the file is transferred to the 3D printer, and a long wait follows.

Visualization

The 5 Transitional Regions are a reorganization of the 14 Local Health Integration Networks (LHINs); the corresponding population and model height (thickness) for each of the five regions of Ontario are:

  • West (Erie-St. Clair, South West, Hamilton Niagara Haldimand Brant, Waterloo Wellington): total population about 4 million; thickness 8 mm.
  • Central (Mississauga Halton, Central West, Central, North Simcoe Muskoka): total population about 5 million; thickness 10 mm.
  • Toronto (Toronto Central): total population about 1.4 million; thickness 2.8 mm.
  • East (Central East, South East, Champlain): total population about 3.7 million; thickness 7.4 mm.
  • North (North West, North East): total population about 1.6 million; thickness 3.2 mm.
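The thickness encoding above follows a simple linear rule of 2 mm per million residents. A small sketch restating that mapping (my own restatement, not the author's workflow):

```python
def thickness_mm(population_millions, mm_per_million=2.0):
    """Map a region's total population to a model thickness (2 mm per million)."""
    return population_millions * mm_per_million

# Populations (millions) for the five Transitional Regions listed above:
regions = {"West": 4.0, "Central": 5.0, "Toronto": 1.4, "East": 3.7, "North": 1.6}
thicknesses = {name: thickness_mm(pop) for name, pop in regions.items()}
```

Any linear scale would work; 2 mm per million keeps even the thinnest piece (Toronto, 2.8 mm) printable.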
Different thicknesses
Dimension Comparison
West region
Central region
Toronto
East region
North region

Limitations

The most unavoidable limitation of 3D printing is the accuracy of the printer itself. This depends not only on the mechanical performance of the printer, but also on the materials used, the operating environment (temperature, UV intensity), and other external factors. The result is that printed models do not match exactly, even though they are accurate on the computer. On the other hand, a 3D-printed terrain can only represent variables that reduce to a single value per region, such as the total population chosen here.

Visualizing Flow Regulation at the Shand Dam

Hannah Gordon

GeovisProject Assignment @RyersonGeo, SA8905, Fall 2022

Concept

When presented with this geovisualization opportunity, I knew I wanted my final deliverable to be interactive and novel. The idea I decided on was a 3D-printed topographic map with interactive elements: placing wooden dowels in holes of the 3D model above and below the Shand Dam shows how the dam regulates flow. This concept visualizes flow (cubic meters of water a second) in a way similar to a hydrograph, but brings in 3D elements and is novel and fun compared to a traditional chart. Shand Dam on the Grand River was chosen as the site to visualize flow regulation because the Grand River is the largest river system in Southern Ontario, Shand Dam is a Dam of Significance, and there are hydrometric stations that record river discharge above and below the dam for the same time periods (~1970-2022).

About Shand Dam

Dams and reservoirs like the Shand Dam are designed to provide maximum flood storage following peak flows. During high flows (often associated with spring snowmelt), water is held in the reservoir to reduce the amount of flow downstream, lowering flood peaks (Grand River Conservation Authority, 2014). Shand Dam (constructed in 1942 as Grand Valley Dam) is located just south of Belwood Lake (an artificial reservoir) in Southern Ontario, and provides significant flow regulation and low-flow augmentation that prevents flooding south of the dam (Baine, 2009). Shand Dam proved a valuable investment in 1954, when Hurricane Hazel struck and no lives were lost in the Grand River Watershed.

Shand Dam (at the time Grand Valley Dam) in 1942. Photographer: Walker, A., 1942

Today, the dam continues to prevent and lessen the devastation from flooding (especially spring high flows) through the use of four large gates and three ‘low-flow discharge tubes’ (Baine, 2009). Discharge from dams on the Grand River may continue for some time after a storm is over, to regain reservoir storage space and prepare for the next storm (Grand River Conservation Authority, 2014). This is illustrated in the hydrographs below, where the flow above and below the dam is plotted over a time series from one week before to one week after the peak flow; the dam delays and ‘flattens’ the peak discharge.

Data & Process

This project required two data sources: hydrometric data for river discharge and a DEM (digital elevation model) from which the 3D printed model would be created. Hydrometric data for the two stations (02GA014 and 02GA016) was downloaded from the Government of Canada, Environment and Natural Resources, as .csv (comma-separated value) tables. Two datasets were downloaded: the annual extreme peak data for both stations, and the daily discharge data for both stations in date-data format. The hydrometric data provided river discharge as daily averages in cubic meters a second. The DEM was downloaded from the Government of Canada’s Geospatial Data Extraction Tool, which makes it simple to download a DEM for a specific region of Canada at a variety of spatial resolutions. I chose to extract data for the area around Shand Dam that included the hydrometric stations, at a 20 metre resolution (the finest available).

3D Printing the DEM

The first step in creating the interactive 3D model was becoming 3D printer certified at Toronto Metropolitan University’s Digital Media Experience Lab (DME). While I already knew how to 3D print, this step was crucial as it gave me free access to the 3D printers in the DME. Becoming certified was a simple process of watching some videos, taking an online test, then booking an in-person test. Once I had passed, I was able to book my prints. The DME has two Prusa printers, which require a .gcode file to print models. Initially my data was in a .tiff file; creating a .gcode file would first involve creating an STL (standard triangle language) file, then generating the gcode from the STL. The gcode file acts as a set of ‘instructions’ for the 3D printer.

Exporting the STL with QGIS

First, the plugin ‘DEM to 3D print’ had to be installed in QGIS. This plugin creates an STL file from the DEM (.tiff). When exporting the digital elevation model to an STL file, a few constraints had to be enforced:

  • The final size of the STL had to be under 25 MB so it could be uploaded and edited in Tinkercad to add holes for the dowels.
  • The final STL had to be less than ~20 cm by ~20 cm to fit on the 3D printer’s bed.
  • The final .gcode file created from the STL had to print in under 6 hours to be printed at the DME. This created a size constraint on the model I would be able to 3D print.

It took several experiments with the QGIS DEM to 3D plugin to create two STL files that would each print in under 6 hours and be smaller than 25 MB. The DEM was exported as an STL using the plugin and the following settings:

  • The spacing was 0.6 mm. Spacing reflects the amount of detail in the STL; while a spacing of 0.2 mm would have been more suitable for the project, it would have created too large a file to import into Tinkercad.
  • The final model size is 6 cm by 25 cm, divided into two parts of 6 cm by 12.5 cm.
  • The model height of the STL was set to 400 m, as the lowest elevation to be printed was 401 m. This ensured an unnecessarily thick model would not be created; a thick model was to be avoided as it would waste precious 3D printing time.
  • The base height of the model was 2 mm, meaning an additional 2 mm of model is created below the lowest elevation.
  • The final scale of the model is approximately 1:90,000 (1:89,575), with a vertical exaggeration of 15 times.
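The scale figures above can be reproduced from the model and ground dimensions. A hedged sketch, where the ~22.4 km ground extent is an assumed value for illustration (it is not stated in the post):

```python
def map_scale(ground_m, model_cm):
    """Scale denominator = ground distance / model distance (same units)."""
    return ground_m / (model_cm / 100.0)

# A 25 cm model covering an assumed ~22.4 km of terrain gives roughly
# 1:89,600, in line with the stated ~1:90,000 (1:89,575).
scale = map_scale(22_400, 25)
vertical_scale = scale / 15   # the 15x vertical exaggeration divides the
                              # vertical scale denominator by 15
```

The same helper shows why vertical exaggeration matters: at a true 1:90,000 vertical scale, the full relief of this area would be well under a millimetre of plastic.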

Printing with the DME

The STL files exported from QGIS were opened in PrusaSlicer to create gcode files. The printer configuration of the DME printers was imported and the infill density was set to 10%. This is the lowest infill density the DME will permit, and it lowers the print time by printing a lattice on the interior of the print instead of solid fill. Both gcode files would print in just under 6 hours.

Part one of the 3D elevation model printing in the DME, the ‘holes’ seen in the top are the infill grid.

3D printing the files at the DME proved more challenging than initially expected. When I booked the slots on the website I made it clear that the two files were components of a larger project; however, when I arrived to print them, the two 3D printers had different colours of filament (one of which was a blue-yellow blend). As the two prints would be assembled together, I was not willing to create a model that was half white and half blue/yellow, so the second print unfortunately had to be pushed to the following week. At this point I was glad I had booked the slots early, otherwise I would have been forced to assemble an unattractive model. The DME staff were very understanding and found humour in the situation, immediately moving my second print to the following week so both files could use the same filament colour.

Modeling Hydrometric Data with Dowels

To choose the days used to display discharge in the interactive model, the csv file of annual extreme peak data was opened in Excel and maximum annual discharge was sorted in descending order. The top three discharge events at station 02GA014 (above the dam) that also had data for the same days below the dam were:

  • 1975-04-19 (average daily discharge of 306 cubic meters a second)
  • 1976-03-21 (average daily discharge of 289 cubic meters a second)
  • 2008-12-28 (average daily discharge of 283 cubic meters a second)

I also chose 2018’s peak discharge event (average daily discharge of 244 cubic meters a second on February 21st), as it was a significant, more recent flow event (top six).

Once the four peak flow events had been decided on, their corresponding data in the daily discharge dataset were found, and a scaling factor of 0.05 was applied in Excel so I would know the proportional length to cut the dowels. This meant that every 0.5 cm of dowel would indicate 10 cubic meters a second of discharge.

As the dowels sit within the 3D print, prior to cutting them I had to find the depth of the holes in the model. The hole for station 02GA014 (above the dam) was 15 mm deep and the holes for station 02GA016 (below the dam) were 75 mm deep. This meant adding 15 mm or 75 mm to the dowel length to ensure the dowels would accurately reflect discharge when viewed above the model. The dowels were then cut to size, painted to reflect the peak discharge event they correspond to, and labelled with the date of the data. Three dowels for the legend were also cut, reflecting discharges of 100, 200, and 300 cubic meters a second. Three pilot holes, then three 3/16” holes, were drilled into the project’s base (two finished 1x4s) for these dowels to sit in.
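The dowel arithmetic above can be written out explicitly. This sketch restates the stated 0.05 scaling and hole depths; it is not the author's spreadsheet:

```python
def dowel_length_cm(discharge_m3s, hole_depth_mm, scale=0.05):
    """Visible dowel length is discharge * 0.05 cm (0.5 cm per 10 m^3/s);
    the buried portion (hole depth) is added so the part showing above
    the model still reads true."""
    visible_cm = discharge_m3s * scale
    return visible_cm + hole_depth_mm / 10.0

# 1975 peak above the dam: 306 m^3/s, in a 15 mm deep hole ->
# 15.3 cm visible plus 1.5 cm buried.
length = dowel_length_cm(306, hole_depth_mm=15)
```

The legend dowels use the same rule with discharges of 100, 200, and 300 m³/s.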

Assembling the Model

Once all the parts were ready, the model could be assembled. The necessary information about the project and the legend was printed and carefully transferred to the wood with acetone. Then the base of the 3D print was aggressively sanded to provide better adhesion, glued onto the wood, and clamped in place. I had to be careful here: clamps that were too tight would crack the print, while clamps that were too loose wouldn’t hold it in place as the glue dried.

Final model showing 2018 peak flow
Final model showing 1976 peak flow
Final model showing 1975 peak flow
Final model showing 2008 peak flow

Applications

The finished interactive model allows the visualization of flow regulation by the Shand Dam for different peak flow events, and highlights the value of this particular dam. Broadly, this project was a way to visualize hydrographs, showing the differences in discharge over a spatial and temporal scale that resulted from the dam. The top dowel shows the flow above the dam for the peak flow event, and the three dowels below the dam show the flow below the dam on the day of the peak discharge, one day after, and two days after, illustrating the delayed and moderated hydrograph peak. The legend dowels are easily removable so they can be lined up with the dowels in the 3D print to get a better idea of how much flow there was on a given day at a given place. The idea behind this model can easily be modified for other dams (provided there is suitable hydrometric data). Beyond visualizing flow regulation, the same process could be used to create models that show discharge at different stations across a watershed, or over a continuous period of time, such as monthly averages over a year. These models could have a variety of uses, such as showing how river discharge changed in response to urbanization, or how climate change is causing more significant spring peak flows from snowmelt.

References

Baine, J. (2009). Shand Dam a First For Canada. Grand Actions: The Grand Strategy Newsletter. Vol. 14, Issue 2. https://www.grandriver.ca/en/learn-get-involved/resources/Documents/Grand_Actions/Publications_GA_2009_2_MarApr.pdf

Grand River Conservation Authority (2014). Grand River Watershed Water Management Plan. Prepared by the Project Team, Water Management Plan., Cambridge, ON. 137p. + appendices. Retrieved from https://www.grandriver.ca/en/our-watershed/resources/Documents/WMP/Water_WMP_Plan_Complete.pdf

Walker, A. (April 18th, 1942). The dam is 72 feet high, 300 feet wide at the base, and more than a third of a mile long [photograph]. Toronto Star Photograph Archive, Toronto Public Library Digital Archives. Retrieved from https://digitalarchive.tpl.ca/objects/228722/the-dam-is-72-feet-high-300-feet-wide-at-the-base-and-more

Drone Package Deployment Tutorial / Animation

Anugraha Udas

SA8905 – Cartography & Visualization
@RyersonGeo

Introduction

Automation is becoming normalized as corporations notice its benefits and use artificial intelligence to streamline everyday processes. Previously, this may have meant something as basic as organizing customer and product information; in the last decade, however, the automation of delivery and transportation has grown exponentially, and a utopian future of drone deliveries may soon become a reality. The purpose of this visualization project is to convey what automated drone deliveries might look like in a small city and what types of obstacles they may face. A step-by-step process is also provided so that users can learn how to create a 3D visualization of cities, import 3D objects into ArcGIS Pro, convert point data into 3D visualizations, and finally animate a drone flying through a city. This is extremely useful, as 3D visualization provides a different perspective that allows GIS users to perceive study areas from ground level instead of the conventional bird’s-eye view.

Area of Study

The focus area for this pilot study is Niagara Falls, Ontario, Canada. The city was chosen because it is a smaller city that nonetheless contains buildings over 120 metres in height. These building heights provide perfect obstructions for simulating drone flights, as Transport Canada has set a maximum altitude limit of 120 metres for safety reasons. Niagara Falls also contains a good distribution of Canada Post locations that will be used as potential drone deployment centres for package deliveries. Additionally, another hypothetical scenario is visualized in which all drones deploy from one large building; in this instance, London’s Gherkin is used as a potential drone hive (hypothetically owned by Amazon) that drones can deploy from (see https://youtu.be/mzhvR4wm__M). As this is a pilot study, the method may be expanded in the future to larger, denser areas; however, a computer with over 16 GB of RAM and a minimum of 8 GB of video memory is highly recommended for video rendering. In the video below, the city of Niagara Falls is rendered in ArcGIS Pro with the Gherkin represented as a blue cone shape; the Canada Post buildings are likewise represented in dark blue.

City of Niagara Falls (Rendered in ArcPro)

Data   

The data for this project was derived from numerous sources, as a variety of file types were required. Regarding data directly relating to the city of Niagara Falls: cellular towers, street lights, roads, property parcel lines, building footprints and the Niagara Falls municipal boundary shapefiles were all obtained from Niagara Open Data and imported into ArcGIS Pro. Similarly, the Canada Post locations shapefile came from Scholars GeoPortal. As for the 3D objects, London’s Gherkin was obtained from TurboSquid and the helipad from CGTrader, both as DAE files. The Gherkin was chosen because it serves as a hypothetical hive building that corporations such as Amazon could deploy in cities. The helipad 3D model will be distributed in numerous neighbourhoods around Niagara Falls as drop-off zones for the drones to deliver packages. In this hypothetical scenario, people would be alerted on their phones when their package is arriving, and would visit the loading zone to pick it up. It should be noted that all files were copyright-free and allowed personal use.

Process (Step by step)

Importing Files

Figure 1. TurboSquid 3D DAE Download

First, access the Niagara Open Data website and download all the aforementioned files via the search datasets box. Ensure that the files are downloaded in SHP format for recognition in ArcGIS Pro (names are listed at the end of this blog). Next, go to TurboSquid and search for the Gherkin, making sure the price drop-down has a minimum and maximum value of $0 (Figure 1). Additionally, search for ‘Simple helipad free 3D model’ on CGTrader. Ensure these files are downloaded in DAE format. Once all files are downloaded, open ArcGIS Pro and import the shapefiles (via Add Data) to first conduct some basic analysis.

Basic GIS Analysis

First, double-click the symbology box for each imported layer; a symbology dialog should open on the right-hand side of the screen. Click the symbol box and assign each layer a distinct yet subtle colour. Once this is finished, select the Canada Post Locations layer, go to the Analysis tab, and select the Buffer icon to create a buffer around the Canada Post locations: set the input features to the Canada Post Locations, provide a file location and name for the output feature class, enter a distance of 5 kilometres, and dissolve the buffers (Figure 2). A distance of 5 km was chosen because regular consumer drones have a battery that lasts up to about ten kilometres of travel (or 30 minutes of flight time), so travelling to the parcel destination and back would use up this allotted flight time.
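The 5 km radius follows directly from the round-trip constraint. A minimal sketch of that reasoning (the 10 km range figure is this post's assumption about typical consumer drones, not a measured value):

```python
def service_radius_km(max_range_km):
    """A delivery drone must fly out and back, so the serviceable
    buffer radius is half its maximum range."""
    return max_range_km / 2.0

# ~10 km of range (about 30 minutes of flight) -> a 5 km buffer radius
radius = service_radius_km(10.0)
```

If a longer-range drone were assumed, the same rule would simply widen the buffer used in the analysis.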

Figure 2. Buffer option on ArcPro
Figure 3. Extent of Drone Deployment

Once this buffer is created, the symbology is adjusted to a gradient fill within the layer tab of the symbol. This shows the groupings of clusters and visualizes increasing distance from the Canada Post locations. In this project we assume the Canada Post locations are where the drones deploy from, so this buffer shows the extent of the drones’ reach (Figure 3). As we can see, most residential areas are covered by the drone package service. Next, we give the Canada Post buildings a colour distinct from the other buildings. Go to the Map tab and click ‘Select by Location’. In this dialog box, create an intersect relationship where the input features are the buildings and the selecting features are the Canada Post location point data. Click OK, create a new layer from the selection, and name it Canada Post Buildings. Assign it a distinct colour to separate the Canada Post buildings from the rest.

3D Visualization – Buildings

Now we are going to extrude our buildings by their height in feet. Click the View tab in ArcGIS Pro and click ‘Convert to local scene’. This process essentially creates a 3D visual of your current map. You will notice that all of the layers are under 2D view; once we adjust the settings of the layers, we will drag them to the 3D layers section. To extrude the buildings, click the layer so that the Appearance tab appears under Feature Layer. Click the Type drop-down and select ‘Max Height’. Thereafter, select the field and choose ‘SHAPE_leng’, as this holds the vertical height of the buildings, and select feet as the unit. Give ArcGIS Pro some time and it should automatically move your buildings layer from the 2D to the 3D layers section. Perform this same process with the Canada Post Buildings layer.

Figure 4. Extruded Buildings

Now you should have a 3D view of the city of Niagara Falls. Feel free to move around with the small circle on the bottom left of the display page (Figure 4). You can even click the up arrow to show full control and move around the city. Furthermore, you can also add shadows to the buildings by right-clicking the map’s 3D layers tab and selecting ‘Display shadows in 3D’ under Illumination.

Converting Point Data into 3D Objects

In this step, we are going to convert our point data into 3D objects to visualize obstructions such as lamp posts and cell phone towers. First click the Street Lights symbol under 2D layers; the symbology pane should open on the right side of ArcGIS Pro. Click the current symbol box beside Symbol and, under the layer’s icon, change the type from ‘Shape Marker’ to ‘3D model marker’ (Figure 5).

Figure 5. 3D Shape Marker

Next, click style, search for ‘street-light’, and choose the overhanging streetlight. Drag the Street Light layer from the 2D layer to the 3D layer. Finally, right-click on the layer and navigate to display under properties. Enable ‘Display 3D symbols in real-world units’ and now the streetlamp point data should be replaced by 3D overhanging streetlights. Repeat this same process for the cellphone tower locations but use a different model.

Importing 3D objects & Texturing

Figure 6. Create Features Dialog

Finally, we are going to import the 3D DAE helipad and tower files, place them in our local scene, and apply textures from JPG files. First, go to the View tab and click Catalog Pane; a catalog should appear on the right side of the viewer. Expand the Databases folder and your saved project should show up as a GDB. Right-click the GDB and create a new feature class. Name it ‘Amazon Tower’, change the type from polygon to 3D object, and click Finish. Under Drawing Order there should now be a new 3D layer with the ‘Amazon Tower’ name. Select the layer, go to the Edit tab, and click Create to open the ‘Create Features’ dialog on the right side of the display panel (Figure 6). Click the Model File tab, click the blue arrow, and finally click the + button. Navigate to your DAE file, select it, and your model should show up in the view pane, allowing you to place it on a chosen spot. For our purposes, we’ll reduce the height to 30 feet and adjust the Z position to -40 to remove the square base under the tower. Click the location where you want to place the tower, close the Create Features box, apply the multipatch tool, and clear the selection. Finally, to texture the tower, select the tower 3D object, click the Edit tab, and this time click Modify. In the new Modify Features pane, select multipatch features under Reshape. Now go to Google and find a glass building texture JPG file that you like. Click ‘Load texture’, choose the file, check the ‘Apply to all’ box, and click Apply. The Amazon tower should now have the texture applied (Figure 7).

Figure 7. Textured Amazon Building

Animation

Finally, now that all of the obstructions are created, we are going to animate a drone flying through the city. Navigate to the Animation tab on the top pane and click Timeline. This is where individual keyframes will be combined to create the drone package delivery. Position your view so that it rests on a Canada Post building. Click ‘Create first key frame’ to capture this view, then click up on the full control view so that the drone rises in elevation, and click + to designate a new keyframe. Ensure the height does not exceed 120 metres, the maximum altitude for drones set by Transport Canada (bottom left box). Next, click and drag the hand on the viewer to move forward and back, and click + for a new keyframe. Repeat this process to navigate the proposed drone to a helipad (Figure 8). Finally, press the ‘Move down’ button to land the drone on the helipad and create a new keyframe. Congratulations, you have created your first animation in ArcGIS Pro!
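When setting keyframes by hand it is easy to drift above the regulatory ceiling. A hedged sketch of a validation pass over keyframe altitudes; the keyframe structure here is illustrative, and only the 120 m limit comes from Transport Canada:

```python
MAX_ALTITUDE_M = 120  # Transport Canada's ceiling for drone flight

def keyframes_over_limit(keyframes):
    """Return the indices of (x, y, altitude_m) keyframes whose
    altitude exceeds the regulatory limit."""
    return [i for i, (x, y, z) in enumerate(keyframes) if z > MAX_ALTITUDE_M]

# Four illustrative keyframes for a delivery path; the third is too high.
path = [(0, 0, 2), (0, 0, 80), (150, 40, 130), (300, 90, 5)]
too_high = keyframes_over_limit(path)
```

A check like this could be run against exported keyframe altitudes before rendering, rather than eyeballing the bottom-left box each time.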

Figure 8. Animation in ArcPro

Discussion

Through the process of extruding buildings, maintaining a height below 120 metres, adding proposed landing spaces, and turning point data into real-world 3D objects, we can visualize many obstructions that drones would face if drone delivery were implemented in the city of Niagara Falls. Although this is a basic example, animating a drone flying through certain neighbourhoods allows analysts to determine which areas are problematic for autonomous flying and which paths would be safer. Regarding the animation portion, two possible scenarios were created. The first is drone deployment from the aforementioned Canada Post locations. This scenario envisions Niagara Falls having drone package deployment directly from those locations. It would cover a larger area of Niagara Falls, as seen in the buffer; however, funding multiple locations may be difficult, and people may not want to live close to a Canada Post location due to the noise pollution from drones.

Scenario 1. Canada Post Delivery

The second scenario is to use a central building that drones pick up packages from, exemplified by the hive delivery building seen below. In sharp contrast to option 1, a central location may not be able to reach rural areas of Niagara Falls due to the distance limitations of current drones. However, two major benefits are that all drone deliveries could come from a central location and less noise pollution would result.

Scenario 2. Single HIVE Building

Conclusions & Future Research

Overall, it is evident that drone package deliveries are entirely possible within the city of Niagara Falls. Through 3D visualizations in ArcGIS Pro, we are able to place simple obstructions such as conventional street lights and cell phone towers along the roads. This analysis and animation suggest that such obstructions need not pose an issue to package delivery drones when communal landing zones are incorporated. Future studies could incorporate more obstructions into the map, such as electricity towers, wiring, and trees. Likewise, they could incorporate the fundamentals of drone weight capacity in relation to travel distance and overall delivery speed. In doing so, the feasibility of drone package deployment can be better assessed and hopefully implemented in future smart cities.

References

https://www.dji.com/ca/phantom-4/info

https://youtu.be/mzhvR4wm__M

3D Files

Gherkin Model DAE File – https://www.turbosquid.com/3d-models/free-30-st-mary-axe-3d-model/991165

Simple Helipad DAE File – https://cgtrader.com/items/212615/download-page

Shape Files

Postal Outlet Points (2020) – Scholar’s GeoPortal

Niagara Falls Building Footprints (2010) – Niagara Open Data

Road Segments (2021) – Niagara Open Data

Niagara Falls Cellular Tower Locations (2021) – Niagara Open Data

Street Lighting Pilot Project (2021) – Niagara Open Data

Niagara Falls Municipal Boundary (2021) – Niagara Open Data

Niagara Falls Property Parcels (2021) – Niagara Open Data

3D Approach to Visualizing Crime on Campus: Laser-Cut Acrylic Hexbins

By: Lindi Jahiu

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2021

INTRODUCTION

Crime on campus has long been at the forefront of discussions about the safety of community members occupying the space. Despite mitigation efforts, such as additional surveillance cameras and the hiring of more security personnel, crime continues to persist on X University’s campus. To quantify this phenomenon, the university’s website collates each security incident that takes place on campus, detailing its location, time (reported and occurrence), and crime type, and makes this information readily available to the public through a web browser or email notifications. Collating security incidents allows the university, first and foremost, to quickly notify students of potential harm, but it also serves as a means of understanding where incidents may be clustering. The latter is explored in this geo-visualization project, which visualizes three years’ worth of security incident data through the creation of a 3D laser-cut acrylic hexbin model. Hexbinning refers to the process of aggregating point data into predefined hexagons, each representing a given area; in this case, the vertex-to-vertex measurement is 200 metres. Through the tangibility, interchangeability, and gamified aspects of a physical 3D model, it is hoped that the project will re-conceptualize the phenomenon for the user and, in turn, stress the importance of the issue at hand.
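For readers curious what hexbinning does under the hood, the sketch below assigns projected (x, y) points to pointy-top hexagons with a 200 m vertex-to-vertex measurement (a circumradius of 100 m) and counts the points in each cell. The coordinate math follows standard axial/cube hex-grid conventions; ArcGIS Pro’s tessellation and summarize tools handle all of this internally, so this is only a conceptual illustration.

```python
import math
from collections import Counter

R = 100.0  # circumradius in metres (200 m vertex-to-vertex)

def _axial_round(q, r):
    """Round fractional axial coords via cube coordinates (x + y + z = 0)."""
    x, z = q, r
    y = -x - z
    rx, ry, rz = round(x), round(y), round(z)
    dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
    # Re-derive the coordinate with the largest rounding error.
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return int(rx), int(rz)

def point_to_hex(x, y, size=R):
    """Convert a projected (x, y) point to axial hex indices (pointy-top)."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    return _axial_round(q, r)

def hexbin(points, size=R):
    """Count how many points fall into each hexagonal cell."""
    return Counter(point_to_hex(x, y, size) for x, y in points)
```

For example, two points near the origin land in the same cell, while a point 500 m east falls into a different one; the resulting counts are what the symbology (and later the acrylic stack heights) would be driven by.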

DATA AND METHODS

The data collection and methodology can be divided into two main parts: 2D mapping and 3D modelling. For the 2D version, security incidents from July 2nd, 2018 to October 15th, 2021 were manually scraped from the university’s website (https://www.ryerson.ca/community-safety-security/security-incidents/list-of-security-incidents/) and parsed into the columns necessary for geocoding (see Figure 1). Once all the data was entered, the Excel file was converted to .csv and imported into the ArcGIS Pro environment. There, one simply right-clicks the .csv, selects “Geocode Table”, and follows the prompts to supply the required inputs (see inputs in Figure 2). Once run, the geocoding process showed a 100% match, meaning no alterations were needed, and produced a layer displaying the spatial distribution of every security incident (n = 455) (see Figure 3). To contextualize these points, a base map of the streets in and around the campus was extracted from the “Road Network File 2016 Census” from Scholars GeoPortal using the “Split Line Features” tool (see output in Figure 3).
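To give a sense of the parsing step, the minimal sketch below (with hypothetical column names, since the actual spreadsheet headers may differ from Figure 1) combines the scraped address parts into a single field that the “Geocode Table” prompts can then map to a full address.

```python
import csv
import io

# Hypothetical sample row; real headers/values come from the scraped spreadsheet.
RAW = """Location,City,Postal Code,Incident Date,Time,Crime Type
350 Victoria St,Toronto,M5B 2K3,2021-10-15,22:30,Theft
"""

def build_geocode_rows(raw_csv):
    """Add a combined 'Full Address' column for the geocoder to consume."""
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_csv)):
        rec["Full Address"] = ", ".join(
            [rec["Location"], rec["City"], rec["Postal Code"]]
        )
        rows.append(rec)
    return rows
```

Keeping the incident date, time, and crime type as separate columns preserves them as attributes on the geocoded points for later querying.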

Figure 1. Snippet of spreadsheet containing location, postal code, city, incident date, time of incident, and crime type, for each of the security incidents.

Figure 2. Inputs for the Geocoding table, which corresponds directly to the values seen in Figure 1.

Figure 3. Base map of streets in-and-around X University’s campus. Note that the geo-coded security incidents were not exported to .SVG – only visible here for demonstration purposes.

To aggregate these points into hexbins, a series of steps was followed. First, a hexagonal tessellation layer was produced using the “Generate Tessellation” tool, with the security incidents .shp serving as the extent (see snippet of inputs in Figure 4 and output in Figure 5). Second, the “Summarize Within” tool was used to count the number of security incidents falling within each polygon (see snippet of inputs in Figure 6 and output in Figure 7). Lastly, the classification method applied to the symbology (i.e. the hexbins) was “Natural Breaks”, with a total of 5 classes (see Figure 7). With the two necessary layers created, namely the campus base map (see Figure 3 – base map along with scale bar and north arrow) and the tessellation layer (see Figure 5 – hexagons only), each was exported as a separate image in .SVG format – a format compatible with the laser cutter. The classified hexbin layer simply serves as a reference for the 3D model and was not exported to .SVG (see Figure 7).
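For intuition about the classification step, the sketch below shows one way a Natural Breaks (Jenks-style) classification can be computed: it searches for class boundaries that minimize the within-class sum of squared deviations, which is broadly what ArcGIS Pro’s optimized implementation does. The brute-force search here is only practical for a small number of values, such as the per-hexbin counts in this project, and is a conceptual stand-in rather than the tool’s actual algorithm.

```python
from itertools import combinations

def natural_breaks(values, k=5):
    """Brute-force Jenks-style breaks: pick the k-class partition of the
    sorted values that minimizes total within-class squared deviation."""
    vals = sorted(values)
    n = len(vals)

    def ssd(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    best, best_breaks = float("inf"), None
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0, *cuts, n)
        cost = sum(ssd(vals[a:b]) for a, b in zip(bounds, bounds[1:]))
        if cost < best:
            best, best_breaks = cost, [vals[b - 1] for b in bounds[1:]]
    return best_breaks  # upper bound of each class

def classify(value, breaks):
    """Return the 1-based class, i.e. the stack height in acrylic hexagons."""
    for i, upper in enumerate(breaks, start=1):
        if value <= upper:
            return i
    return len(breaks)
```

With 5 classes, `classify` directly gives the number of hexagon pieces (1 to 5) to stack on a given cell of the physical model.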

Figure 4. Snippet of input when using the “Generate Tessellation” geoprocessing tool. Note that these were not the exact inputs, spatial reference left blank merely to allow the viewer to see what options were available.

Figure 5. Snippet of output when using the “Generate Tessellation” geoprocessing tool. Note that the geo-coded security incidents were not exported to .SVG – only visible here for demonstration purposes.

Figure 6. Snippet of input when using the “Summarize Within” geoprocessing tool.

Figure 7. Snippet of output when using the “Summarize Within” geoprocessing tool. Note that this image was not exported to .SVG but merely serves as a guide for the physical model.

When the project idea was first conceived, it was paramount that I familiarize myself with the resources available and necessary for the project. To do so, I applied for membership to the Library’s Collaboratory research space for graduate students and faculty members (https://library.ryerson.ca/collab/ – many thanks to them for making this such a pleasurable experience). Once accepted, I was invited to an orientation, followed by two virtual consultations with the Research Technology Officer, Dr. Jimmy Tran. After we fleshed out the idea through discussion, I was invited to the Collaboratory for mediated appointments. Once in the space, the aforementioned .SVG files were opened in an image editing program, where various features of each .SVG were coded Red, Green, or Blue so the laser cutter could distinguish them. The tessellation layer was also altered to include a 5 mm (diameter) circle at the centre of each hexagon to allow for the eventual insertion of magnets. The base map would be etched onto an 11×8.5 in sheet of clear acrylic (3 mm thick), whereas the hexagons would be cut out as individual pieces measuring 1.83 in vertex-to-vertex. In addition, an 11×8.5 in sheet of black acrylic would be cut out to serve as the background for the clear base map, increasing contrast and accentuating finer details. Once in hand, the hexagons were fitted with 5×3 mm magnets (inserted into the aforementioned circles) to allow for seamless stacking between pieces. Stacks of one to five hexagons represent the five classes in the 2D map, with height now replacing the graduated colour scheme (see Figure 7 and Figure 9 – the varying translucency of the stacked clear hexagons also communicates the classes quite effectively). The completed 3D model is captured in Figure 8, along with the legend in Figure 9, which was printed out and is always to be presented in tandem with the model.
The legend was not etched into the base map so that the base map could be reused for other projects that do not share the same classification schema, and in case any details changed at a later point.
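As a quick sanity check of the implied map scale (assuming the 200 m vertex-to-vertex hexbins map directly to the 1.83 in acrylic pieces), the physical model works out to roughly 1:4,300:

```python
# Ratio of ground distance to model distance for one hexagon.
IN_TO_M = 0.0254                  # metres per inch
model_m = 1.83 * IN_TO_M          # acrylic hexagon, vertex-to-vertex (~0.046 m)
ground_m = 200.0                  # hexbin size on the ground
scale_denominator = ground_m / model_m
print(round(scale_denominator))   # representative fraction, approx. 1:4,300
```

The same arithmetic would tell anyone reusing the base map (e.g. with 100 m hexbins) what piece size keeps their cut-outs consistent with the etched streets.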

Figure 8. 3D Laser-Cut Acrylic Hexbin Model depicting three-years worth of security incidents on campus. Multiple angles provided.

Figure 9. Legend corresponding to the physical model displayed in Figure 8. A physical version of the legend has also been created and will be shown in the presentation.

FUTURE RESEARCH DIRECTIONS AND LIMITATIONS

The geo-visualization project at hand serves as a foundation for a multitude of future research avenues, such as: exploring other 3D modalities for representing human geography phenomena; serving as a learning tool for those unfamiliar with cartography; and acting as a tool for collecting further data on perceived and experienced areas of crime. All of these expand on the tangibility, interchangeability, and gamification emphasized in the project at hand. On the latter point, imagine a booth set up on campus where passersby are asked, “using these hexagon pieces, tell us where you feel the most security incidents on campus would occur.” The answers would be invaluable, yielding insight into which areas of campus community members feel are most unsafe and what factors may be contributing to that perception (e.g. built environment features such as poor lighting, lack of cameras, narrowness, etc.), resulting in a synthesis of the qualitative and the quantitative. On the point of interchangeability, someone wanting to explore the distribution of trees on campus, for instance, could laser-cut their own hexbins out of green acrylic at a size of their choosing (e.g. 100 m) and simply reuse the same base map.

Despite the fairly robust nature of the project, some limitations became apparent: issues with how a few security incidents’ data were collected and displayed on the university’s website (e.g. non-existent street names, non-existent intersections, missing street suffixes, etc.); an issue where exporting a layer to .SVG created repeated overlapping copies of the same image that had to be deleted before laser cutting; and lastly, future iterations may consider exaggerating finer features (e.g. street names) to make the physical model even more legible.