3D String Mapping and Textured Animation: An Exploration of Subway Networks in Toronto and Athens

BY: SARAH DELIMA

SA8905 – Geovis Project, MSA Fall 2024

INTRODUCTION:

Greetings everyone! For my geovisualization project, I wanted to combine my do-it-yourself (DIY) crafting skills with the geospatial technology used today. This project was an opportunity to be creative with resources I had at home while making use of the awesome features of Microsoft Excel, ArcGIS Online, ArcGIS Pro, and Clipchamp.

In this blog, I’ll be sharing my process for creating a 3D physical string map model and, to mirror the physical model, a textured animated series of maps. My models display the subway networks of two cities: the City of Toronto and the metropolitan area of Athens, Greece.

Follow along with this tutorial to learn how I completed this project!

PROJECT BACKGROUND:

For some background, I am more familiar with Toronto’s subway network, but fortunately I was also able to visit Athens and explore the city using its subway network. Both cities currently have three subway lines, and both are undergoing construction of additional lines. My physical model displays the present-day subway networks for both cities, as the anticipated subway lines won’t be opening until around 2030. Despite the hands-on creativity of the physical model, it cannot be modified or updated as easily as a virtual map. This is what inspired me to extend the concept with an animated video map that visualizes the anticipated changes to both subway networks!

PHYSICAL MODEL:

Materials Used:

  • Paper (used for map tracing)
  • Pine wood slab
  • Hellman ½ inch nails
  • Small hammer
  • Assorted colour cotton string
  • Tweezers
  • Krazy glue

Methods and Process:

For the physical model, I wanted to rely on materials I had at home. I also required blank paper for tracing the boundary and subway network of each city. The tracings were produced by acquiring open data and bringing it into ArcGIS Pro; the precise datasets used are discussed further in the virtual model section. Once the tracings were created, I taped them to a wooden base. Fortunately, I had a perfect base on hand: a slab of pine wood. I opted for Hellman 1/2 inch nails, as the wood was not too thick and these nails wouldn’t split it. Using a hammer, each nail was carefully placed along the traced outline of the city boundaries and subway networks.

I did have to purchase thread so that I could display each subway line in its corresponding colour. Placing the thread around the nails required some patience. I cut the thread into smaller pieces to avoid knots, then used tweezers to hold the thread while wrapping it around the nails. When a new thread was added, I knotted it tightly around a nail and applied Krazy Glue to ensure it was secured. The same method was applied when securing the end of a string.

Images of threading process:

City of Toronto Map Boundary with Tracing

After threading the city boundary and subway network, the paper tracing was removed. I could then begin filling in the space inside the boundary. I opted to use black thread for the boundary and fill, to contrast with both the base and the colours of the subway lines. The City of Toronto thread map was completed before the Athens thread map, following the same steps. Each city is on an opposite side of the wood base for convenience and to avoid using an additional base.

Of course, every map needs a title, legend, north arrow, projection, and scale. Once both 3D string maps were complete, the required titles and text were printed, laminated, and added to the wood base. I once again used nails, a hammer, and thread to create both legends. Below is an image of the final physical products of my maps!

FINAL PHYSICAL MODELS:

City of Toronto Subway Network Model:

Athens Metropolitan Area Metro Network Model:

VIRTUAL MODEL:

To create the virtual model, I used ArcGIS Pro to create my two maps and applied picture fill symbology to give them a thread-like texture. I’ll begin by discussing the open data acquired for the City of Toronto, followed by the metropolitan area of Athens.

The City of Toronto:

Data Acquisition:

For Toronto, I relied on the City of Toronto Open Data Portal to retrieve the Toronto Municipal Boundary and TTC Subway Network datasets. The most recent subway dataset still includes the decommissioned Line 3, which I kept for the purpose of the time series map. As for the anticipated Eglinton and Ontario lines, I could not find open data for these routes. However, Metrolinx has created interactive maps displaying the Ontario Line and Eglinton Crosstown (Line 5) stations and names. Note that the Eglinton Crosstown is identified as a light rail transit line but is considered part of the TTC subway network.

To compile the coordinates of each station along both routes, I used Microsoft Excel to create two sheets: one for the Eglinton line and one for the Ontario line. To determine the location of each station, I dropped pins in Google Maps at the correct locations by referencing the map visuals published by Metrolinx.

Ontario Line Excel table:

In ArcGIS Pro, I used the XY Table to Point tool to convert the coordinates from each Excel sheet into points on the map. After successfully completing this, I had to connect the points into a continuous line, using the Points to Line tool, also in ArcGIS Pro.

XY Table to Point and Points to Line tools used to add the coordinates as points and connect them into a continuous line representing the subway route:

After achieving this, I had to clip the subway routes to the boundaries of the City of Toronto and the Athens metropolitan area. I used the Pairwise Clip tool in the Geoprocessing pane to achieve this; a scripted version of these steps is sketched below the figure.

Geoprocessing Pairwise Clip tool parameters used. Note: the input features were the subway lines, with the city boundary as the clip features.
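For readers who prefer scripting these steps, below is a minimal arcpy sketch of the same workflow. The file names, layer names, and field names are hypothetical placeholders, not the ones from my project.

    import arcpy

    arcpy.env.overwriteOutput = True

    # Convert the station coordinates from the Excel export (saved as CSV)
    # into a point feature class. Coordinates are assumed to be WGS84.
    arcpy.management.XYTableToPoint(
        r"C:\data\ontario_line_stations.csv", "ontario_line_points",
        x_field="Longitude", y_field="Latitude",
        coordinate_system=arcpy.SpatialReference(4326))

    # Connect the points (in table order) into a single continuous route.
    arcpy.management.PointsToLine("ontario_line_points", "ontario_line_route")

    # Clip the route to the city boundary so it stays within the study area.
    arcpy.analysis.PairwiseClip("ontario_line_route", "toronto_boundary",
                                "ontario_line_clipped")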

Athens Metropolitan Area:

Data Acquisition:

For Athens, I was able to access open data from the Athens GeoNode. I imported the following layers into ArcGIS Online: the Athens Metropolitan Area, the Athens Subway Network, and the proposed Athens Line 4 Network. I did have to make minor adjustments to the data, as the Athens metropolitan area layer also displays the neighbourhood boundaries, and only the outer boundary was necessary for this project. To overcome this, I used the Merge tool in the Modify Features pane to merge the individual polygons within the metropolitan area into one. I also had to use the Pairwise Clip tool once again, as the Line 4 network extends beyond the metropolitan boundary and thus beyond the study area for this project.

Adding Texture Symbology:

ArcGIS has a variety of tools and features that can enhance a map’s creativity and visualization. For this project, I was inspired by an Esri yarn map tutorial. Given that the physical model used thread, I wanted to create a textured map with thread as well. To achieve this, I utilized the public folder provided with the tutorial, which includes portable network graphics (.png) cutouts of several fabrics as well as pen and pencil textures. To best mirror my physical model, I utilized a thread .png.

ESRI yarn map tutorial public folder:

I added the thread .png images by replacing the solid fill of the boundaries and subway networks with a picture fill. This symbology works best with a .png image, as it blends seamlessly with the basemap and surrounding features. The thread .png uploads as white, which meant I could tint it to match the boundary or a particular subway line without distorting the texture it provides.

For both the Toronto and Athens maps, the picture fill for each subway line and boundary was set to the thread .png with its corresponding colour. The boundaries of both maps were set to black, as in the physical model, and the subway lines also mirror the physical model, which is in turn inspired by the existing and future colours used for the subway routes. Below is the picture symbology with the thread .png selected and a tint applied for the subway lines.

City of Toronto subway networks with picture fill of thread symbology applied:

The basemap was also altered, since the physical model sits on a wood base. To mirror that, I extracted a Global Background layer from ArcGIS Online and used the picture fill to upload a high-resolution image of pine wood as the basemap for this model. For the city boundaries of both maps, the thread .png was likewise applied with a black tint.

PUTTING IT ALL TOGETHER:

After creating both maps for Toronto and Athens, it was time to put them into an animation! The goal of the animation was to display each route and its opening year(s) to visually convey the evolution of the subway systems, since my physical model only captures the current networks.

I did have to play around with the layers to capture each subway line individually. The current subway network data for both Toronto and Athens contain all three routes in one layer, so I had to isolate each route for the time lapse, adding them in order of their initial opening date and the year of their most recent expansion. To achieve this, I set a Definition Query for each current subway route while creating the animation.

Definition query tool accessed under layer properties:
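A definition query is simply a SQL expression stored with the layer, and it can also be set from the Python window. As a sketch (the layer and field names here are hypothetical and will depend on the dataset):

    import arcpy

    # Isolate one route in the combined subway layer for a keyframe.
    aprx = arcpy.mp.ArcGISProject("CURRENT")
    layer = aprx.activeMap.listLayers("TTC Subway Network")[0]
    layer.definitionQuery = "ROUTE_NAME = 'Line 1 (Yonge-University)'"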

Once I had added each keyframe in order of the evolution of each subway route, I created a map layout for each map to add the required text and titles, as I did with the physical model. The layouts were exported as .png images and imported into Microsoft Clipchamp to create the video animation. From there, I added transitions between my maps as well as sound effects!

CITY OF TORONTO SUBWAY NETWORK TIMELINE:

Geovis Project, TMU Geography, SA8905 Sarah Delima


ATHENS METROPOLITAN AREA METRO TIMELINE:

Geovis Project, TMU Geography, SA8905 Sarah Delima


LIMITATIONS: 

While this project allowed me to be creative with both my physical and virtual models, it did present certain limitations. A notable limitation of the physical model is that it is purely a visual representation of the subway networks: unlike the virtual maps, it cannot easily be modified as the networks evolve.

As for the virtual maps, although open data was accessible for some of the subway routes, I had to manually enter XY coordinates for the future networks. I referenced reputable maps of the anticipated routes to ensure accuracy. Furthermore, given my limited timeline, I was unable to map the proposed extensions of current subway routes; instead, I focused on routes currently under construction with an anticipated completion date.

CONCLUSION: 

Although I grew up applying my creativity through homemade crafts, technology and applications such as ArcGIS allow creativity to be expressed on a virtual level. Overall, the concept behind this project is an ode to the evolution of mapping, from physical carvings to the virtual cartographic and geovisualization applications used today.

Visualizing Aerial Photogrammetry to Minecraft Java Edition 1.21.1

Andrea Santoso-Pardi
SA8905 Geovis project, Fall 2024

Introduction

Turning aerial photogrammetry into Minecraft builds is an interesting way to combine real-world data with a video game that many people play. Adding aerial photogrammetry of a building or city is a way to get people interested in GIS technology, and it can also serve accessibility purposes by helping people understand where different buildings are in the world. This workflow covers the process of finding aerial building photogrammetry; processing the .obj file with Blender plugins (BlockBlender 1.41 and BlockBlender to Minecraft .Schem 1.42); exporting it as a .schem file for use in single-player Minecraft Java Edition 1.21.1; using Litematica to paste the schematic; converting the model from latitude and longitude coordinates to Minecraft coordinates; and editing the schematic.

List of things you will need for this

  • Photogrammetry – preferably a model that is watertight with no holes. If holes are present, you will have to close them manually.
  • Blender 3.6.2 – a free 3D modelling software. This does not work with the latest release (4.3) as of this writing.
    • Add-ons to use:
      • BlockBlender 1.41 ($20 version) – paid for by the TMU Library Collaboratory, used to convert the photogrammetry into Minecraft block textures
      • BlockBlender to Minecraft .Schem 1.42 – used to export the file into a .schem file, a format which Minecraft can read
  • Minecraft Java Edition ($29.99) – a video game played on a computer. This is different from Minecraft Bedrock Edition.

Gathering Data: What is Aerial Photogrammetry & What is the best model to use?

Aerial photogrammetry is a technique that uses overlapping photographs captured from above at various angles to create accurate, measurable 3D models or maps of real-world landscapes, structures, or objects. Photogrammetry is also becoming far more accessible: models can now be created with just a phone camera. The data processing for drone imagery of a building produces:

  • Point clouds – a dense collection of points representing the object or terrain in 3D space.
  • 3D meshes – surfaces created by connecting those points into a polygonal network. For aerial building photogrammetry, the network is usually made of many triangles.

If you are going to search for a photogrammetry model to use, here is what made me choose this model of a government building [1], and how I knew it was photogrammetry.

  1. Large number of triangles and vertices. The model had 1.5 million triangles and 807.4k vertices. 3D models made in 3D modelling software will have far lower counts of both, in the tens of thousands. This is how I knew it was photogrammetry.
  2. Minimal clean-up. There was little to no clean-up required for the model to be put into Minecraft. If you do not mind doing a lot of clean-up before converting the photogrammetry into blocks, you can choose a messier model, but know it will take hours depending on how many holes the model has.
    • I spent too many hours trying to clean up the Kerr Hall photogrammetry and it still had all of its holes. If you want to do Kerr Hall, please contact Facilities for campus data (floor plans and exterior walls) to check what it is supposed to look like and to ensure the trees aren’t baked into the photogrammetry. Then use the Blender Architecture and BlenderGIS plugins to scale the building accordingly.
  3. States the location/coordinates. If you want the elevation of the model, you will need to know where it is geolocated in the world. Having the coordinates makes this process easier in BlenderGIS.
  4. Minimal/zero objects around the walls of the building. When capturing photogrammetry, objects too close to a wall can merge with it. Things like trees make it very hard to get a clear view of the wall, to the point that there might not even be a wall in the photogrammetry.
    • The topology of trees tends to produce many tiny holes instead. Making sure no objects are around the building ensures the walls are and will remain visible in the final product. Do a quick 360 of the photogrammetry to confirm this is the case for the model you want.
  5. Downloadable as an .obj file. For BlockBlender to work, the building textures need to include photos so BlockBlender can assign a block to each photo pixel.
  6. Consistent lighting all around. If different areas of the building have different lighting, it does not make for a consistent model, as I did not want to change the brightness of the photos.

When exporting the model, I chose the OBJ format as I knew it was compatible with the BlockBlender add-on.

When downloading, make sure you know where the file saves to. Extra steps like unzipping may be required depending on how it is packaged.

Blender

Blender is a free 3D modelling software that was chosen for its highly customizable editing options. If you haven’t used Blender before, I suggest learning the basic controls; this playlist helps explain each function.

Installing Addons

Download all the add-ons you need as .zip files.
Go to Edit > Preferences > Install From Disk and import the .zip files of the add-ons. Make sure you save your preferences. As a reminder, the ones needed for this tutorial are BlockBlender 1.41 ($20 version) and BlockBlender to Minecraft .Schem 1.42.

Import & Cleaning Up the .obj Model

To import the model, go to File > Import > Wavefront OBJ.
The file does not have to be an .obj to work, but it does have to have textures that are separate from the 3D model if you want to use the BlockBlender add-on.

Import the same model twice: one copy to turn into Minecraft blocks and the other to use as a reference. Put them into different collections, which you can name “Reference” and “Editing”. Press M to create a separate collection for each model.

To be ready for use in BlockBlender, the model has to have a solid, watertight mesh. In short, this means the mesh needs closed edges with no holes. It’s not necessary to learn the formal definition if your 3D model requires minimal clean-up, but if you want to understand more of what I mean, this resource might be helpful: https://davidstutz.de/a-formal-definition-of-watertight-meshes/

Go into Edit Mode: click on the model (it should have an orange outline) and switch modes in the top left corner. Alternatively, you can hit Tab to switch between Edit and Object Mode.

Press A to Select All

Go up to Select > Select Loops > Select Boundary Loop

It should look like this afterwards, with only the boundary loops selected

Press Alt + F to fill in the faces
If you look underneath the model, you can see how it makes the mesh watertight

Before pressing Alt + F: model viewed from below, with boundary loops selected, in Blender 3.6.2
After pressing Alt + F: model viewed from below, with the boundary faces filled, in Blender 3.6.2

You can now exit Edit Mode. In Object Mode you can see how the hole in the model is now enclosed. This has created a watertight, solid mesh.

Model before edits, viewed from below in Blender 3.6.2

Model after edits, viewed from below in Blender 3.6.2
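If you prefer to script this clean-up, the same operations are available through Blender’s Python console. This is a minimal sketch of the sequence described above (select all, select the boundary loops, fill), assuming the imported model is the active object:

    import bpy

    bpy.ops.object.mode_set(mode='EDIT')      # enter Edit Mode (Tab)
    bpy.ops.mesh.select_all(action='SELECT')  # A: select all
    bpy.ops.mesh.region_to_loop()             # Select > Select Loops > Select Boundary Loop
    bpy.ops.mesh.fill()                       # Alt + F: fill the open boundary with faces
    bpy.ops.object.mode_set(mode='OBJECT')    # back to Object Mode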


You can also clean up models with holes the same way. For complex models, however, select the area around the hole instead of using Select All.

If you would like a purely visual explanation, here is a video. Don’t switch over to Sculpt Mode and don’t enable Dyntopo, as you will lose the textures, which are needed for BlockBlender. If you do accidentally use Dyntopo, Ctrl + Z can undo it, or you can copy and paste your reference model and do this section over again.

BlockBlender

BlockBlender is an add-on for Blender created by Joey Carolino. If you want to see visually how BlockBlender is used, below is a YouTube video covering more of its functions. There is a free version and a paid version of BlockBlender, so if you cannot contact the Library Collaboratory to use the computer with the paid version, you can use the free one.

Using Blockblender

Before doing this step, save your work so nothing is lost.
Select the model and press Turn Selected Into Blocks. This will take a while to fully load; when it does, the model will look like glass. If Blender becomes too laggy, exit Blender without saving. You can reduce the size of your model before doing this section to ensure you can add all the textures needed.

To find the image IDs and the order to use them in, go to Material Properties (it should look like a red circle).

The names of the photos are shown; you must apply them in that order, or the model will not look like the reference.

Here is what the BlockBlender model looks like:

From here, BlockBlender has different tools to refine the block selection. Each block is categorized in the Collections area. You can move individual blocks into the Unused collection by dragging and dropping, or hold Ctrl to select multiple blocks to drag and drop.

I also felt that a scale of 1 block = 1 m did not give enough detail, so the block size was changed to 0.5 m.

The final model I ended up going with is below. Although it is not perfect, I can manually edit it afterwards using Litematica or Minecraft commands. It is hard to show the workflow with just pictures, so I highly suggest the video above to see more of the functionality.

Government building converted into Minecraft blocks using BlockBlender 1.4.2. The BlockBlender N-panel is on the right of the screen.

Blockblender to .Schem

This add-on was created by EpicSpartanRyan#8948 on Discord; special thanks to them. They are also available for hire if someone wants to put buildings into Minecraft (for example, to make a campus server), offering a 30-minute free consultation and aiming to respond within 12 hours.

Exporting to a .schem file allows the model to be read in a format that Minecraft understands.

To quickly see how exporting and importing into Minecraft works (using WorldEdit on a multiplayer server), please see their video below. It also compares the textures in Blender with what they look like inside Minecraft.

Using Blockblender to .Schem

To prepare the file for export:
Uncheck “Make Instances Real”.

Click the model. Press Convert to Mesh in the N-panel to make the mesh look like Minecraft blocks rather than triangles. You can check whether the mesh has changed by selecting the object and going into Edit Mode, or by looking at the viewport wireframe.

Click the model. Press Ctrl + A and apply All Transforms; this ensures all the textures will be there.

The model with the viewport wire frame and the menu to press

Next, go to File > Export > Minecraft (.schem), or press Export as schem in the BlockBlender N-panel options. The N-panel can be seen in the previous section.

Save the file under whatever name you want, but ensure the .schem file is saved to your schematics folder; this saves time trying to find the model later. The folder can be found by entering %appdata% in the file path bar. The file path should be:
C:\Users\[YourComputerProfileName]\AppData\Roaming\.minecraft\schematics

If a schematics folder is not present, make one inside the .minecraft folder

Minecraft

Installing Minecraft, Fabric Loader and Mods

If you need help downloading Minecraft, look at this article: https://www.minecraft.net/en-us/updates/instructions . I bought Minecraft in 2013, so I’m unsure what buying and downloading Minecraft is like now, as I refuse to buy something I already own. This video may also be helpful; I have not followed along with it, but I did watch it to make sure it makes sense.

Fabric Loader

Fabric Loader is used to change the Minecraft experience from vanilla (default Minecraft) to whatever experience you want by loading other mods. It acts as a bridge between the game and the mods you want to use.

To download it, choose the installer that works best for your device. For me that was Download for Windows x64, the latest version of Fabric Loader, named fabric-installer-1.0.1 (this may change in the future).
Run the installer until it opens to the screen shown here. Since I am not running Fabric on a server but on a client (single player), I installed it for Minecraft 1.21.1 with the latest Loader version.

Mods: Litematica and MaLiLib

Before entering Minecraft, download the mods and add them to your mods folder. You do not need to do anything to a mod after it is downloaded except move it into the Minecraft mods folder.

The general pathway would be C:\Users\[YourComputerProfileName]\AppData\Roaming\.minecraft\mods
Keep the files as the downloaded archives (they may show up as WinRAR archives); do not extract them.

  • Litematica (litematica-fabric-1.21.1-0.19.50)
  • MaLiLib (malilib-fabric-1.21.1-0.21.0)
View of My Mods Folder

Launching Minecraft Java

The Minecraft Launcher should show the Fabric loader like this:

Make sure to change the loader to fabric-loader-1.21.1 so the mods will be attached. Once it is changed, press the big green Play button.

Create a New World

This is just to import the model into Minecraft Java 1.21.1 single player, so I went to Singleplayer > Create New World. Here are the options chosen:

Game tab
  • Game Mode: Creative
  • Difficulty: Peaceful
  • Allow Commands: On

World tab
  • World Type: Superflat
  • Generate Structures: Off
  • Bonus Chest: Off

Once you have set the options you like, you can create the new world.

Using Litematica

The building can be placed in any world using the Litematica mod. If you have any trouble using it, How To Use Litematica by @ryanthescion helped a lot in learning the basic commands.

A Minecraft stick is used in Litematica to toggle between modes. To get a stick, press E to open the inventory/creative menu and search for Stick (the menu opens to the search automatically), or find it under the Ingredients tab.

Left-click and drag the stick into your hotbar (the area where you can see the multiple wooden sticks) and exit the inventory by pressing E.
Note that one stick is enough for the mod to work, as it has to be held in your hand to use. The multiple sticks shown are only there to indicate where the hotbar is.

With the stick in your hand, you can toggle between Litematica’s nine modes by pressing Ctrl + Scroll Wheel.

Adding The Model

In short, I opened the Litematica menu by pressing M and went to the Configuration menu.

Hotkeys is where you create custom keyboard and/or mouse shortcuts for different commands. Create a shortcut that has no existing binding. The tutorial used J + K for “executefunction” to paste the building, so I followed along and used those too; now pressing J and K together executes the paste. If there is a problem with a hotkey, it turns a yellow/orange colour instead of white.


Next, I went back to the Litematica menu, went to Load Schematics, and added the folder pathway where I keep the schematics. I pressed the schematic file I wanted, then pressed Load Schematic at the bottom of the page. The government building was thus pasted into Minecraft.

Converting Latitude and Longitude to Minecraft Coordinates

In the Litematica menu, press the Loaded Schematics button, then go to Schematic Placements > Configure > Schematic Placement, where you can change the building to the same coordinates as in real life. Y is 18 because the “What is My Elevation” website reports 9 m at these coordinates; since 1 block equals 0.5 m in our model, 9 m divided by 0.5 is 18 blocks.

The X and Z coordinates come from converting Earth’s geographic coordinate system into Minecraft’s Cartesian coordinate system. The conversion assumes WGS84 (World Geodetic System 1984) on the geographic side, that both origins start at (0, 0, 0), and that 1 block = 0.5 metres. If 1 degree of latitude and 1 degree of longitude are both 111,320 metres (for this projection) [2]:
Blocks per degree of latitude = 111,320 / 0.5 = 222,640
Blocks per degree of longitude = [111,320 × cos(latitude in radians)] / 0.5

To align this with real-world geographic coordinates (latitude and longitude), one needs to define a reference point. Here, the real-world origin (0° latitude, 0° longitude) is set to correspond to X = 0 and Z = 0 in Minecraft. The formulas below calculate the offsets in latitude and longitude based on this.

The formulas to convert to Minecraft coordinates are:
Minecraft Z = [ΔLatitude × 111,320] / [Scale (metres per block)]
Minecraft X = [ΔLongitude × 111,320 × cos(Target Latitude in radians)] / [Scale (metres per block)]
Minecraft Y = Elevation in metres / [Scale (metres per block)]

Where:
ΔLatitude = Target Latitude − Origin Latitude
ΔLongitude = Target Longitude − Origin Longitude
Target Latitude = 47.621474856679534°
Target Longitude = −65.65655551636287°
Origin = 0° latitude, 0° longitude
Scale (metres per block) = 0.5 metres

Using cosine makes the conversion better reflect real-world distances, as Earth is a spheroid and Minecraft is flat.

Therefore the Minecraft coordinates are

Minecraft X Coordinates = −9,858,611
Minecraft Y Coordinates = 18
Minecraft Z Coordinates = 10,606,309
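As an illustration (my own sketch, not part of the original workflow), the whole conversion fits in a few lines of Python. It uses the target latitude for the cosine term, which is what reproduces the numbers above; the rounded outputs land close to the stated coordinates, with small differences attributable to rounding in the manual calculation.

    import math

    METRES_PER_DEGREE = 111_320  # assumed for this flat projection

    def latlon_to_minecraft(target_lat, target_lon, elevation_m,
                            origin_lat=0.0, origin_lon=0.0, scale=0.5):
        """Convert WGS84 latitude/longitude to Minecraft X/Y/Z.

        scale is metres per block (0.5 here). The cosine of the target
        latitude shrinks east-west distances because meridians converge.
        """
        d_lat = target_lat - origin_lat
        d_lon = target_lon - origin_lon
        z = (d_lat * METRES_PER_DEGREE) / scale
        x = (d_lon * METRES_PER_DEGREE * math.cos(math.radians(target_lat))) / scale
        y = elevation_m / scale
        return round(x), round(y), round(z)

    # The government building: elevation 9 m, scale 0.5 m per block.
    print(latlon_to_minecraft(47.621474856679534, -65.65655551636287, 9.0))
    # -> approximately (-9852761, 18, 10602445)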


Note: you will have to teleport to where the model is placed. Run /tp <playername> x y z to get to where the building is loaded.

Fixing The Model

There were many edits that needed to happen. I fixed the trees to actually have trunks, as the textures did not load in properly; I used what had generated as a guide for what the shapes of the trees should look like.

I also tried to change the pattern on the wall to more accurately reflect what it looks like in the photogrammetry

Blender Render of the 3D Model (before using Blockblender) compared with what I changed it to in Minecraft
Helpful Tips

/time set day – keeps the world in daylight while you work
/effect give <targets> <effect> infinite [<amplifier>] [<hideParticles>] – applies a status effect indefinitely
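For example (my own illustration of the generic syntax above), /effect give @s minecraft:night_vision infinite 0 true grants yourself permanent Night Vision with the particle effects hidden, which makes inspecting the build easier.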

To edit the schematic, Minecraft Litematica schematic editing by @waynestir on YouTube was the most helpful; it allowed me to replace blocks and keep them as part of the schematic.


Limitations

The limitation of this approach – taking aerial building photogrammetry, using Blender to turn it into Minecraft blocks, and converting the latitude and longitude to Minecraft coordinates to place the building in exactly the right spot – is that Minecraft is a fixed-grid, cubic block representation that will always lack the detail of the 3D aerial building photogrammetry model. Finding a scale that preserves both geolocation correctness and building height in Minecraft is a fine-detail task that has to balance artistry with reality.

In BlockBlender, fine details like the antennae at the top of the building don’t come through, as it only uses blocks for the representation, so railings, window frames, and more can be lost or require block substitutes.

Photogrammetry can be very complex and very noisy, with shadows that may make BlockBlender interpret the data incorrectly. As an add-on, BlockBlender is limited to the default Minecraft block palette, which may not accurately reflect what real-world surfaces look like or are made of.

The Minecraft height limit can be an issue depending on how tall the building you want to convert is.

Geolocating the building from latitude and longitude to Minecraft coordinates will not work at much larger scales (i.e. keeping the scale at 1 block = 0.5 m), as the Minecraft world is only 30 million by 30 million blocks.

Litematica also has limited functionality; beyond it, one has to do a lot more manually or use another plugin.

Conclusion

This workflow is an excellent way to bring real-world data into Minecraft, but it requires balancing the complexity of photogrammetry models with Minecraft’s block-based limitations. Understanding and addressing these challenges produce detailed, manageable builds that work well in Minecraft’s unique environment.

Footnotes

  1. “Canadian Government Building Photogrammetry” (https://skfb.ly/oLZyt) by Air Digital Historical Scanning Archive is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/) ↩︎
  2. Esri ArcGIS Blog, “Determining a Z-factor for scaling linear elevation units to match geographic coordinate values”: https://www.esri.com/arcgis-blog/products/arcgis-desktop/defense/determining-a-z-factor-for-scaling-linear-elevation-units-to-match-geographic-coordinate-values/ ↩︎

Visualizing select waterfalls of Hamilton, Ontario through 3D modelling using Blender and BlenderGIS

By: Darith Tran|Geovisualization Project Assignment|TMU Geography|SA8905|Fall 2024

Introduction/Background

The city of Hamilton, Ontario is home to many trails and waterfalls and offers many scenic, nature-focused areas. The city is situated along the Niagara Escarpment, which creates unique topography and is the main reason for the high frequency of waterfalls across the city. Hamilton is dubbed the waterfall capital of the world, being home to over 100 waterfalls within the city’s boundaries. Despite this, Hamilton still flies under the radar for tourists, as it sits between two other major destinations that see higher tourist traffic: Niagara Falls (home to one of the world’s best-known waterfalls) and Toronto (popular for the CN Tower and its bustling city atmosphere).

The main purpose of this project was to increase awareness of this Southern Ontario wonder and to provide prospective visitors, or even citizens of Hamilton, with an interactive story map offering general information on the trails connected to the waterfalls and details of the waterfalls themselves. The 3D modelling aspect of the project aims to provide a unique visualization of how the waterfalls look, giving a quick yet creative visual for those considering visiting the city to see the waterfalls in person.

Data, Processing and Workflow (Blender + OpenTopography DEMs)

The first step of this project was to obtain DEMs for the region of interest (Hamilton, Ontario) to be used as the foundation of the 3D model. The primary software used was Blender (a 3D modelling application) extended by a GIS-oriented plugin called BlenderGIS, created by GitHub user domlysz, which allows users to import GIS files and elements such as shapefiles and basemaps directly into the Blender editing and modelling pane. The plugin also allows users to load DEMs sourced from OpenTopography straight into Blender for extraction and editing.

The first step is to open Blender and navigate to the GIS tab in Object Mode:

Under the GIS tab, there are many options and hovering over “web geodata” prompts the following options:

In this case, we want to start with a basemap; the plugin has many sources available, including the default Google Maps, Esri basemaps, and OpenStreetMap (Google Satellite was used for this project).

Once the basemap is loaded into the Blender plane, I zoomed into area of interest #1, the Dundas Peak region, which is home to both Tew’s Falls and Webster’s Falls. The screenshot below shows the 2D image of Tew’s Falls in the object plane:

Once an area of interest is defined and all information is loaded, the elevation model is requested to generate the 3D plane of the land region:

The screenshot above shows the general 3D plane created from a 30 m DEM extracted from OpenTopography through the BlenderGIS plugin. The screenshot below showcases the modification of the 3D plane with the extrusion tool, which adds depth and edges to create the waterfall look. Below is the foundation used specifically for Tew’s Falls.

Following this, imagery from the basemap was merged with the 3D extruded plane to produce the 3D render of the waterfall plane. To add the waterfall animation, the Physics module was activated, allowing various types of motion to be added to the 3D plane. Fluid was selected with the Outflow behaviour to simulate the movement of water coming down from a waterfall. This was then overlaid onto the 3D plane of the waterfall to simulate water flowing down from it.
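For those who want to script it, here is a minimal sketch of attaching the fluid behaviour through Blender’s Python API, assuming the waterfall plane is the active object and a separate fluid domain object has already been configured; the settings mirror the ones described above.

    import bpy

    obj = bpy.context.active_object
    mod = obj.modifiers.new(name="Fluid", type='FLUID')
    mod.fluid_type = 'FLOW'                      # this object participates as flow
    mod.flow_settings.flow_type = 'LIQUID'       # water rather than smoke/fire
    mod.flow_settings.flow_behavior = 'OUTFLOW'  # the Outflow behaviour used above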

These steps were then essentially repeated for Webster’s Falls and the Devil’s Punchbowl to produce 3D models with water-flow animations!

Link to ArcGIS Story Map: https://arcg.is/05Lr8T

Conclusion and Limitations

Overall, I found this to be a cool and fun way to visualize the waterfalls of Hamilton, Ontario, and adding the rendered product directly into ArcGIS Story Maps makes for an immersive experience. The biggest learning curve was Blender itself, as I had never used the software before and had only briefly explored 3D modelling in the past. Originally, I planned to create renders and animations of 10 waterfalls in Hamilton; however, this became a daunting task after seeing the rendering and export times for the 3 models shown in the Story Map. Additionally, the render quality was rather low, since 2D imagery was interpolated onto a 3D plane, which caused some distortions and warped shapes that would require further processing.

Visualizing Population on a 3D-Printed Terrain of Ontario

Xingyu Zeng

Geovisual Project Assignment @RyersonGeo, SA8905, Fall 2022

Introduction

3D visualization is an essential and popular category of geovisualization. After a period of development, 3D printing technology has become readily available in daily life. As a result, a 3D-printable geovisualization project is relatively easy to implement at the individual level. Also, compared to on-screen 3D models, physical 3D-printed models have obvious advantages when explaining results to non-professional audiences.

Data and Software

3D model in Materialise Magics
  • Data Source: Open Topography – Global Multi-Resolution Topography (GMRT) Data Synthesis
  • DEM Data to a 3D Surface: AccuTrans 3D – provides translation of 3D geometry between the formats used by many 3D modelling programs.
  • Converting a 3D Surface to a Solid: Materialise Magics – converts the surface to a solid with thickness; the model is then cut according to the boundaries of the 5 transitional regions of Ontario. Different thicknesses represent the differences in total population between regions (e.g. the central region has a population of 5 million and a thickness of 10 mm; the west region has a population of 4 million and a thickness of 8 mm).
  • Slicing & Printing: an indispensable step for 3D printing. Because of the wide variety of printer brands on the market, most with slicing software developed by their own manufacturers, the specific process varies. One thing is common, though: after this step, the file is transferred to the 3D printer, and what follows is a long wait.

Visualization

The 5 transitional regions are aggregations of the 14 Local Health Integration Networks (LHINs); the corresponding population and model height (thickness) for each of the five regions of Ontario are listed below, with the implied scale noted after the list:

  • West, clustering of: Erie-St. Clair, South West, Hamilton Niagara Haldimand Brant, Waterloo Wellington; total population of about 4 million, thickness 8 mm.
  • Central, clustering of: Mississauga Halton, Central West, Central, North Simcoe Muskoka; total population of about 5 million, thickness 10 mm.
  • Toronto, clustering of: Toronto Central; total population of about 1.4 million, thickness 2.8 mm.
  • East, clustering of: Central East, South East, Champlain; total population of about 3.7 million, thickness 7.4 mm.
  • North, clustering of: North West, North East; total population of about 1.6 million, thickness 3.2 mm.
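Implicit in these numbers is a simple linear scale: thickness (mm) = population (millions) × 2, i.e. every millimetre of material represents half a million residents.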
Different thicknesses
Dimension Comparison
West region
Central region
Toronto
East region
North region

Limitations

The most unavoidable limitation of 3D printing is the accuracy of the printer itself. This depends not only on the mechanical performance of the printer but also on the materials used, the operating environment (temperature, UV intensity), and other external factors. The result is that printed models do not match each other exactly, even though they are accurate on the computer. On the other hand, a 3D-printed terrain can only represent variables that can be expressed as a single value per region, such as the total population I chose.

Visualizing Flow Regulation at the Shand Dam

Hannah Gordon

GeovisProject Assignment @RyersonGeo, SA8905, Fall 2022

Concept

When presented with this geovisualization opportunity, I knew I wanted my final deliverable to be interactive and novel. The idea I decided on was a 3D-printed topographic map with interactive elements: wooden dowels placed in holes of the model above and below the Shand Dam to visualize how the dam regulates flow. This concept visualizes flow (cubic meters of water a second) in a way similar to a hydrograph, but brings in 3D elements and is novel and fun as opposed to a traditional chart. Shand Dam on the Grand River was chosen as the site to visualize flow regulation because the Grand River is the largest river system in Southern Ontario, Shand Dam is a Dam of Significance, and there are hydrometric stations that record river discharge above and below the dam for the same time periods (~1970-2022).

About Shand Dam

Dams and reservoirs like the Shand Dam are designed to provide maximum flood storage following peak flows. During high flows (often associated with spring snowmelt), water is held in the reservoir to reduce the amount of flow downstream, lowering flood peaks (Grand River Conservation Authority, 2014). Shand Dam (constructed in 1942 as Grand Valley Dam) is located just south of Belwood Lake (an artificial reservoir) in Southern Ontario and provides significant flow regulation and low-flow augmentation that prevents flooding south of the dam (Baine, 2009). Shand Dam proved a valuable investment in 1954, when no lives were lost in the Grand River Watershed from Hurricane Hazel.

Shand Dam (at the time Grand Valley Dam) in 1942. Photographer: Walker, A., 1942

Today, the dam continues to prevent and lessen the devastation from flooding (especially spring high flows) through the use of four large gates and three ‘low-flow discharge tubes’ (Baine, 2009). Discharge from dams on the Grand River may continue for some time after a storm is over to regain reservoir storage space and prepare for the next storm (Grand River Conservation Authority, 2014). This is illustrated in the hydrographs below, where flow above and below the dam is plotted over a time series from one week before to one week after the peak flow, showing how the dam delays and ‘flattens’ the peak discharge.

Data & Process

This project required two data sources: hydrometric data for river discharge, and a DEM (digital elevation model) from which the 3D-printed model would be created. Hydrometric data for the two stations (02GA014 and 02GA016) was downloaded from the Government of Canada, Environment and Natural Resources, as .csv (comma separated value) tables. Two hydrometric datasets were downloaded for both stations: the annual extreme peak data and the daily discharge data in date-data format. The hydrometric data provided river discharge as daily averages in cubic meters a second. The DEM was downloaded from the Government of Canada’s Geospatial Data Extraction Tool, a website that makes it simple to download a DEM for a specific region of Canada at a variety of spatial resolutions. I chose to extract my data for the area around Shand Dam that included the hydrometric stations, at a 20 meter resolution (the finest available).

3D Printing the DEM

The first step in creating the interactive 3D model was becoming 3D-printer certified at Toronto Metropolitan University’s Digital Media Experience Lab (DME). While I already knew how to 3D print, this step was crucial as it gave me free access to the printers in the DME. Becoming certified was a simple process of watching some videos, taking an online test, then booking an in-person test. Once I passed, I was able to book my prints. The DME has two PRUSA printers, which require a .gcode file to print models. Initially my data was a .tiff file, so creating a .gcode file would first involve creating an STL (standard triangle language) file, then creating the gcode from the STL. The gcode file acts as a set of ‘instructions’ for the 3D printer.

Exporting the STL with QGIS

First, the ‘DEM to 3D print’ plugin had to be installed in QGIS. This plugin creates an STL file from the DEM (.tiff). When exporting the digital elevation model to an STL file, a few constraints had to be respected:

  • The final STL had to be under 25 MB so it could be uploaded and edited in Tinkercad to add holes for the dowels.
  • The final STL had to be less than ~20 cm by ~20 cm to fit on the 3D printer’s bed.
  • The final .gcode file created from the STL had to print in under 6 hours to be printed at the DME. This put a size constraint on the model I would be able to print.

It took multiple experiments with the QGIS DEM to 3D plugin to create two STL files that would each print in under 6 hours and be smaller than 25 MB. The DEM was exported as an STL using the following settings:

  • The spacing was 0.6 mm. Spacing reflects the amount of detail in the STL; while a spacing of 0.2 mm would have been more suitable for the project, it would have created too large a file to import into Tinkercad.
  • The final model size is 6 cm by 25 cm, divided into two parts of 6 cm by 12.5 cm.
  • The model height of the STL was set to 400 m, as the lowest elevation to be printed was 401 m. This ensured an unnecessarily thick model would not be created; a thick model was to be avoided as it would waste precious 3D printing time.
  • The base height of the model was 2 mm, meaning an additional 2 mm of model is created below the lowest elevation.
  • The final scale of the model is approximately 1:90,000 (1:89,575), with a vertical exaggeration of 15 times (see the sanity check below).
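As a rough sanity check on that scale (my own arithmetic, not from the original settings): the 25 cm long model at 1:89,575 corresponds to 0.25 m × 89,575 ≈ 22.4 km of real-world terrain.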

Printing with the DME

The STL files exported from QGIS were opened in PRUSA slicer to create gcode files. The 3D printer configuration for the DME printers was imported, and the infill density was set to 10%. This is the lowest infill density the DME permits; it helps lower the print time by printing a lattice on the interior of the print instead of solid fill. Both gcode files would print in just under 6 hours.

Part one of the 3D elevation model printing in the DME; the ‘holes’ seen in the top are the infill grid.

3D printing the files at the DME proved more challenging than initially expected. When I booked the slots on the website, I made it clear that the two files were components of a larger project; however, when I arrived to print, the two 3D printers had different colours of filament (one of which was a blue-yellow blend). As the two prints would be assembled together, I was not willing to create a model that was half white and half blue/yellow, so the second print unfortunately had to be pushed to the following week. At this point I was glad I had been proactive and booked the slots early, otherwise I would have been forced to assemble an unattractive model. The DME staff were very understanding and found humour in the situation, immediately moving my second print to the following week so the two files could use the same filament colour.

Modeling Hydrometric Data with Dowels

To choose the days used to display discharge in the interactive model, the csv file of annual extreme peak data was opened in Excel and maximum annual discharge was sorted in descending order. The top three discharge events at station 02GA014 (above the dam) that also had data on the same days below the dam were:

  • 1975-04-19 (average daily discharge of 306 cubic meters a second)
  • 1976-03-21 (average daily discharge of 289 cubic meters a second)
  • 2008-12-28 (average daily discharge of 283 cubic meters a second)

I also chose 2018’s peak discharge event (average daily discharge of 244 cubic meters a second on February 21st), as it was a significant, more recent flow event (a top-six event).

Once the four peak flow events had been decided on, their corresponding daily discharge data were found, and a scaling factor of 0.05 was applied in Excel so I would know the proportional length to cut the dowels. This meant that every 0.5 cm of dowel indicates 10 cubic meters a second of discharge.

As the dowels sit within the 3D print, prior to cutting I had to find the depth of the holes in the model. The hole for station 02GA014 (above the dam) was 15 mm deep and the holes for station 02GA016 (below the dam) were 75 mm deep. This meant I had to add 15 mm or 75 mm to each dowel length so the dowels would accurately reflect discharge when viewed above the model, as sketched below. The dowels were then cut to size, painted to match the peak discharge event they correspond to, and labelled with the date of the data. Three legend dowels were also cut, reflecting discharge of 100, 200, and 300 cubic meters a second. Three pilot holes, then three 3/16” holes, were drilled into the base for the project (two finished 1x4’s) for these dowels to sit in.
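The dowel-length arithmetic is simple enough to sketch in a few lines of Python (my own illustration; the scaling factor and hole depths are the ones given above):

    SCALE_CM_PER_CMS = 0.05  # 0.05 cm of dowel per cubic meter/second

    def dowel_length_cm(discharge_cms, hole_depth_mm):
        """Visible length scales with discharge; the buried portion
        (the hole depth) is added on so the exposed length stays true."""
        return discharge_cms * SCALE_CM_PER_CMS + hole_depth_mm / 10

    # Peak flow above the dam on 1975-04-19: 306 cubic meters a second,
    # sitting in the 15 mm deep hole at station 02GA014.
    print(dowel_length_cm(306, 15))  # -> 16.8 cm of dowel to cut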

Assembling the Model

Once all the parts were ready, the model could be assembled. The necessary project information and legend were printed and carefully transferred to the wood with acetone. The base of the 3D print was then aggressively sanded for better adhesion, glued onto the wood, and clamped in place. I had to be careful here: clamps that were too tight would crack the print, but clamps that were too loose would let the print shift as the glue dried.

Final model showing 2018 peak flow
Final model showing 1976 peak flow
Final model showing 1975 peak flow
Final model showing 2008 peak flow

Applications

The finished interactive model allows the visualization of flow regulation from the Shand Dam for different peak flow events, and highlights the value of this particular dam. Broadly, this project was a way to visualize hydrographs, showing the differences in discharge over a spatial and temporal scale that result from the dam. The top dowel shows the flow above the dam for the peak flow event, and the three dowels below the dam show the flow below the dam on the day of the peak discharge, one day after, and two days after, illustrating the delayed and moderated hydrograph peak. The legend dowels are easily removable so they can be lined up with the dowels in the 3D print to get a better idea of how much flow there was on a given day at a given place. The idea behind this model can easily be adapted to other dams (provided there is suitable hydrometric data). Beyond visualizing flow regulation, the same process could be used to create models showing discharge at different stations across a watershed, or over a continuous period of time, such as monthly averages over a year. These models could have a variety of uses, such as showing how river discharge changed in response to urbanization, or how climate change is causing more significant spring peak flows from snowmelt.

References

Baine, J. (2009). Shand Dam a First For Canada. Grand Actions: The Grand Strategy Newsletter. Vol. 14, Issue 2. https://www.grandriver.ca/en/learn-get-involved/resources/Documents/Grand_Actions/Publications_GA_2009_2_MarApr.pdf

Grand River Conservation Authority (2014). Grand River Watershed Water Management Plan. Prepared by the Project Team, Water Management Plan., Cambridge, ON. 137p. + appendices. Retrieved from https://www.grandriver.ca/en/our-watershed/resources/Documents/WMP/Water_WMP_Plan_Complete.pdf

Walker, A. (April 18th, 1942). The dam is 72 feet high, 300 feet wide at the base, and more than a third of a mile long [photograph]. Toronto Star Photograph Archive, Toronto Public Library Digital Archives. Retrieved from https://digitalarchive.tpl.ca/objects/228722/the-dam-is-72-feet-high-300-feet-wide-at-the-base-and-more

Drone Package Deployment Tutorial / Animation

Anugraha Udas

SA8905 – Cartography & Visualization
@RyersonGeo

Introduction

Automation is becoming normalized in society as corporations notice its benefits and use artificial intelligence to streamline everyday processes. Previously this may have meant something as basic as organizing customer and product information; in the last decade, however, the automation of delivery and transportation has grown exponentially, and a utopian future of drone deliveries may soon become a reality. The purpose of this visualization project is to convey what automated drone deliveries might look like in a small city and what types of obstacles they may face. A step-by-step process is also provided so that users can learn how to create a 3D visualization of a city, import 3D objects into ArcGIS Pro, convert point data into 3D visualizations, and finally animate a drone flying through the city. This is extremely useful, as 3D visualization provides a different perspective that allows GIS users to perceive study areas from ground level instead of the conventional bird's-eye view.

Area of Study

The focus area for this pilot study is Niagara Falls in Ontario, Canada. The city was chosen because it is a smaller city that nonetheless contains buildings over 120 meters in height. These buildings provide a perfect obstruction for simulating drone flights, as Transport Canada has set a maximum drone altitude of 120 meters for safety reasons. Niagara Falls also contains a good distribution of Canada Post locations, which are used as potential drone deployment centres for the package deliveries. Additionally, another hypothetical scenario is visualized where all drones deploy from one large building; in this instance, London's Gherkin is used as a potential drone hive (hypothetically owned by Amazon) that drones can deploy from (see https://youtu.be/mzhvR4wm__M). As this is a pilot study, the method can be expanded in the future to larger, denser areas; however, a computer with over 16 GB of RAM and a minimum of 8 GB of video memory is highly recommended for video rendering. In the video below, the city of Niagara Falls is rendered in ArcGIS Pro with the Gherkin represented as a blue cone shape; the Canada Post buildings are likewise represented in dark blue.

City of Niagara Falls (rendered in ArcGIS Pro)

Data   

The data for this project was derived from numerous sources, as a variety of file types were required. Data relating directly to the city of Niagara Falls – cellular towers, street lights, roads, property parcel lines, building footprints, and the Niagara Falls municipal boundary shapefiles – was obtained from Niagara Open Data and imported into ArcGIS Pro. Similarly, the Canada Post locations shapefile was derived from Scholars GeoPortal. As for the 3D objects, London's Gherkin was obtained from TurboSquid and the helipad from CGTrader, both as DAE files. The Gherkin was chosen because it serves as a hypothetical hive building that corporations such as Amazon could employ in cities. The helipad 3D model is distributed in numerous neighbourhoods around Niagara Falls as drop-off zones for the drones to deliver packages: in this hypothetical scenario, people would be alerted on their phones when their package was arriving and would visit the loading zone to pick it up. It should be noted that all files were copyright-free and allowed personal use.

Process (Step by step)

Importing Files

Figure 1. TurboSquid 3D DAE Download

First, access the Niagara Open Data website and download all the aforementioned files via the search datasets box. Ensure that the files are downloaded in SHP format for recognition in ArcGIS Pro (names are listed at the end of this blog). Next, go to TurboSquid, search for the Gherkin, and make sure the price drop-down has a minimum and maximum value of $0 (Figure 1). Additionally, search for ‘Simple helipad free 3D model’ on CGTrader; ensure these files are downloaded in DAE format. Once all files are downloaded, open ArcGIS Pro and import the shapefiles (via Add Data) to first conduct some basic analysis.

Basic GIS Analysis

First, double-click the symbology box for each imported layer; a symbology pane should open on the right-hand side of the screen. Click the symbol box and assign each layer a distinct yet subtle colour. Once this is finished, select the Canada Post Locations layer, go to the Analysis tab, and select the Buffer icon to create a buffer around the Canada Post locations. Set the input features to the Canada Post Locations, provide a file location and name for the output feature class, enter a distance of 5 kilometers, and dissolve the buffers (Figure 2). A distance of 5 km was chosen because a regular consumer drone's battery lasts up to ten kilometers (or roughly 30 minutes of flight time), so travelling to the parcel destination and back would use up this range.

Figure 2. Buffer option on ArcPro
Figure 3. Extent of Drone Deployment
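
If you prefer to script this step, the same dissolved buffer can be produced with arcpy in the ArcGIS Pro Python window. This is a minimal sketch rather than the exact tool run used above; the layer name and geodatabase path are assumptions.

# Minimal arcpy sketch of the 5 km dissolved buffer (names and paths are hypothetical)
import arcpy

arcpy.env.workspace = r"C:\Projects\NiagaraDrones\NiagaraDrones.gdb"  # assumed project GDB

arcpy.analysis.Buffer(
    in_features="Canada_Post_Locations",       # imported Canada Post points layer
    out_feature_class="CanadaPost_Buffer5km",  # output buffer feature class
    buffer_distance_or_field="5 Kilometers",   # one-way range of a consumer drone
    dissolve_option="ALL"                      # dissolve overlapping buffers into one
)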

Once the buffer is created, adjust its symbology to a gradient fill within the layer tab of the symbol. This shows the groupings of clusters and visualizes increasing distance from the Canada Post locations. In this project we assume the Canada Post locations are where the drones deploy from, so the buffer shows the drones' range from each location (Figure 3). As we can see, most residential areas are covered by the drone package service. Next, we will give the Canada Post buildings a colour distinct from the other buildings. In the Map tab, click 'Select by Location'. In this dialog box, create an intersect relationship where the input features are the buildings and the selecting features are the Canada Post location points. Hit OK, create a new layer from the selection, and name it Canada Post Buildings. Assign it a distinct colour to separate the Canada Post buildings from the rest of the buildings.
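
The same selection can be sketched in arcpy: Select Layer By Location returns the selection, which can then be copied out as its own feature class. Layer names here are assumptions.

# Sketch: pull out the buildings that intersect Canada Post points (names hypothetical)
import arcpy

selected = arcpy.management.SelectLayerByLocation(
    in_layer="Building_Footprints",         # all building footprints
    overlap_type="INTERSECT",               # intersect relationship
    select_features="Canada_Post_Locations"
)
# Persist the selected buildings as their own feature class for separate symbology
arcpy.management.CopyFeatures(selected, "Canada_Post_Buildings")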

3D Visualization – Buildings

Now we are going to extrude the buildings by their height. Click the View tab in ArcPro and click 'Convert to Local Scene'. This creates a 3D version of your current map. You will notice that all of the layers sit under the 2D Layers group; once we adjust their settings, they will move to the 3D Layers group. To extrude the buildings, click the layer so the Appearance tab appears under Feature Layer. Click the extrusion type drop-down and select 'Max Height'. Then select the field and choose 'SHAPE_leng', as this field holds the vertical height of the buildings, and select feet as the unit. Give ArcPro some time and it should automatically move the buildings layer from the 2D to the 3D Layers section. Perform this same process with the Canada Post Buildings layer.

Figure 4. Extruded Buildings

Now you should have a 3D view of the city of Niagara Falls. Feel free to move around with the small navigator circle at the bottom left of the display (Figure 4). You can even click the up arrow to show full control and fly around the city. Furthermore, you can add shadows to the buildings by right-clicking the map's 3D Layers tab and selecting 'Display shadows in 3D' under Illumination.

Converting Point Data into 3D Objects

In this step, we are going to convert our point data into 3D objects to visualize obstructions such as lamp posts and cell phone towers. First, click the Street Lights symbol under 2D Layers; the Symbology pane should open on the right side of ArcPro. Click the current symbol box beside Symbol and, under the Layers icon, change the type from 'Shape Marker' to '3D Model Marker' (Figure 5).

Figure 5. 3D Shape Marker

Next, click Style, search for 'street-light', and choose the overhanging streetlight. Drag the Street Lights layer from the 2D Layers to the 3D Layers group. Finally, right-click the layer, navigate to Display under Properties, and enable 'Display 3D symbols in real-world units'; the streetlight point data should now be replaced by 3D overhanging streetlights. Repeat this same process for the cellphone tower locations, but use a different model.

Importing 3D objects & Texturing

Figure 6. Create Features Dialog

Finally, we are going to import the 3D DAE helipad and tower files, place them in our local scene, and apply textures from JPG files. First, go to the View tab and click Catalog Pane; a catalog should appear on the right side of the viewer. Expand the Databases folder and your saved project should show up as a GDB. Right-click the GDB, create a new feature class, name it 'Amazon Tower', change the type from Polygon to 3D Object, and click Finish. Under Drawing Order there should now be a new 3D layer with the 'Amazon Tower' name. Select the layer, go to the Edit tab, and click Create to open the Create Features pane on the right side of the display (Figure 6). Click the Model File tab, click the blue arrow, and finally click the + button. Navigate to your DAE file and select it; the model will appear in the view pane, ready to be placed. For our purposes, reduce the height to 30 feet and adjust the Z position to -40 to remove the square base under the tower. Click where you want to place the tower, close the Create Features box, apply the multipatch tool, and clear the selection. Finally, to texture the tower, select the tower 3D object, click the Edit tab, and this time hit Modify. In the new Modify Features pane, select the multipatch features option under Reshape. Find a glass building texture JPG online that you like, click Load Texture, choose the file, check the 'Apply to all' box, and click Apply. The Amazon tower should now have the texture applied to it (Figure 7).

Figure 7. Textured Amazon Building
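
If you would rather script the import than place the model interactively, 3D Analyst's Import 3D Files tool can load a DAE straight into a multipatch feature class. This is a rough sketch under assumed paths; note it will not give you the interactive height and Z-offset adjustments described above.

# Sketch: import the DAE into a multipatch feature class (paths are hypothetical)
import arcpy
arcpy.CheckOutExtension("3D")  # Import3DFiles requires the 3D Analyst extension

arcpy.ddd.Import3DFiles(
    r"C:\Projects\NiagaraDrones\downloads\gherkin.dae",          # downloaded DAE model
    r"C:\Projects\NiagaraDrones\NiagaraDrones.gdb\Amazon_Tower"  # multipatch output
)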

Animation

Finally, now that all of the obstructions are created, we are going to animate a drone flying through the city. Navigate to the Animation tab on the top pane and click Timeline. This is where individual keyframes are combined to create the drone package delivery. Navigate your view so that it rests on a Canada Post building at your desired angle. Click 'Create first key frame' to capture this first view; next, click up on the full control navigator so the 'drone' gains elevation, and click + to designate this as a new keyframe. Ensure the height does not exceed 120 meters, the maximum drone altitude set by Transport Canada (shown in the bottom-left box). Next, click and drag the hand on the viewer to move forward, and click + for a new keyframe. Repeat this process and navigate the proposed drone to a helipad (Figure 8). Finally, press the 'Move down' button to land the drone on the helipad and create a new keyframe. Congratulations, you have created your first animation in ArcPro!

Figure 8. Animation in ArcPro

Discussion

By extruding buildings, keeping flight height below 120 meters, adding proposed landing spaces, and turning point data into real-world 3D objects, we can visualize many obstructions that drones would face if drone delivery were implemented in the city of Niagara Falls. Although this is a basic example, animating a drone flying through certain neighbourhoods allows analysts to determine which areas are problematic for autonomous flying and which paths offer a safer option. For the animation portion, two possible scenarios were created. The first is drone deployment from the aforementioned Canada Post locations. This scenario envisions drone package delivery launching directly from those locations. It would cover a larger area of Niagara Falls, as seen in the buffer; however, securing funding for multiple deployment locations may be difficult, and people may not want to live close to a Canada Post outlet because of the noise pollution drones produce.

Scenario 1. Canada Post Delivery

The second scenario uses a central building from which drones pick up packages, exemplified by the hive delivery building seen below. In sharp contrast to the first option, a central location may not reach the rural areas of Niagara Falls because of the distance limitations of current drones. However, two major benefits are that all drone deliveries would come from a single location and that less noise pollution would result.

Scenario 2. Single HIVE Building

Conclusions & Future Research

Overall, it is evident that drone package deliveries are entirely possible within the city of Niagara Falls. Through 3D visualization in ArcPro, we are able to place simple obstructions such as conventional street lights and cell phone towers along the roads. This analysis and animation suggest that such obstructions need not pose an issue to package delivery drones when communal landing zones are incorporated. Future studies could extend this research by incorporating more obstructions into the map, such as electricity towers, wiring, and trees. Likewise, they could incorporate drone weight capacity in relation to travel range and overall delivery speed. In doing so, the feasibility of drone package deployment can be better assessed and, hopefully, implemented in future smart cities.

References

DJI Phantom 4 specifications – https://www.dji.com/ca/phantom-4/info

Drone hive concept video – https://youtu.be/mzhvR4wm__M

3D Files

Gherkin Model DAE File – https://www.turbosquid.com/3d-models/free-30-st-mary-axe-3d-model/991165

Simple Helipad DAE File – https://cgtrader.com/items/212615/download-page

Shape Files

Postal Outlet Points (2020) – Scholars GeoPortal

Niagara Falls Building Footprints (2010) – Niagara Open Data

Road Segments (2021) – Niagara Open Data

Niagara Falls Cellular Tower Locations (2021) – Niagara Open Data

Street Lighting Pilot Project (2021) – Niagara Open Data

Niagara Falls Municipal Boundary (2021) – Niagara Open Data

Niagara Falls Property Parcels (2021) – Niagara Open Data

3D Approach to Visualizing Crime on Campus: Laser-Cut Acrylic Hexbins

By: Lindi Jahiu

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2021

INTRODUCTION

Crime on campus has long been at the forefront of discussions about the safety of the community members occupying the space. Despite efforts to mitigate the issue (increased surveillance cameras, additional security personnel, and so on), it continues to persist on X University's campus. In an effort to quantify this phenomenon, the university's website collates each security incident that takes place on campus, details its location, time (reported and occurrence), and crime type, and makes the record publicly available through a web browser or email notifications. This collation can be seen first and foremost as a way for the university to quickly notify students of potential harm, but also as a means of understanding where incidents cluster. The latter is explored in this geo-visualization project, which visualizes three years' worth of security incident data through the creation of a 3D laser-cut acrylic hexbin model. Hexbinning refers to the process of aggregating point data into predefined hexagons, each representing a given area; in this case, the vertex-to-vertex measurement is 200 metres. By creating a 3D model, it is hoped that the tangibility, interchangeability, and gamified aspects of the project will effectively re-conceptualize the phenomenon for the user and, in turn, stress the importance of the issue at hand.

DATA AND METHODS

The data collection and methodology can be divided into two main parts: 2D mapping and 3D modelling. For the 2D version, security incidents from July 2nd, 2018 to October 15th, 2021 were manually scraped from the university's website (https://www.ryerson.ca/community-safety-security/security-incidents/list-of-security-incidents/) and parsed into the columns necessary for geocoding (see Figure 1). Once all the data was placed into the Excel file, it was converted to a .csv file and imported into the ArcGIS Pro environment. There, one simply right-clicks the .csv, clicks "Geocode Table", and follows the prompts for inputting the necessary data (see inputs in Figure 2). Once run, the geocoding process showed a 100% match, meaning no alterations were needed, and produced a layer displaying the spatial distribution of every security incident (n = 455) (see Figure 3). To contextualize these points, a base map of the streets in and around the campus was extracted from the "Road Network File 2016 Census" from Scholars GeoPortal using the "Split Line Features" tool (see output in Figure 3).

Figure 1. Snippet of spreadsheet containing location, postal code, city, incident date, time of incident, and crime type, for each of the security incidents.

Figure 2. Inputs for the Geocoding table, which corresponds directly to the values seen in Figure 1.

Figure 3. Base map of streets in-and-around X University’s campus. Note that the geo-coded security incidents were not exported to .SVG – only visible here for demonstration purposes.
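
For anyone who prefers to script the geocoding rather than use the context menu, the equivalent arcpy call is sketched below. The locator and the field-map string are assumptions; the exact mapping depends on the locator you use and the prompts shown in Figure 2.

# Sketch of the Geocode Table step in arcpy (locator and field map are assumed)
import arcpy

arcpy.geocoding.GeocodeAddresses(
    in_table=r"C:\Projects\CampusIncidents\incidents.csv",          # scraped incident table
    address_locator="ArcGIS World Geocoding Service",               # assumed locator
    in_address_fields="Address Location; City City; Postal Postal", # illustrative field map
    out_feature_class=r"C:\Projects\CampusIncidents\CampusIncidents.gdb\incidents_geocoded"
)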

To aggregate these points into hexbins, a series of steps was followed. First, a hexagonal tessellation layer was produced using the "Generate Tessellation" tool, with the security incidents .shp serving as the extent (see snippet of inputs in Figure 4 and output in Figure 5). Second, the "Summarize Within" tool was used to count the number of security incidents that fell within each polygon (see snippet of inputs in Figure 6 and output in Figure 7). Lastly, the classification method applied to the hexbin symbology was "Natural Breaks", with a total of 5 classes (see Figure 7). With the two necessary layers created, namely the campus base map (see Figure 3, base map along with scale bar and north arrow) and the tessellation layer (see Figure 5, hexagons only), both were exported as separate images in .SVG format, a format compatible with the laser cutter. The classified hexbin layer simply serves as a reference for the 3D model and was not exported to .SVG (see Figure 7).

Figure 4. Snippet of input when using the “Generate Tessellation” geoprocessing tool. Note that these were not the exact inputs, spatial reference left blank merely to allow the viewer to see what options were available.

Figure 5. Snippet of output when using the “Generate Tessellation” geoprocessing tool. Note that the geo-coded security incidents were not exported to .SVG – only visible here for demonstration purposes.

Figure 6. Snippet of input when using the “Summarize Within” geoprocessing tool.

Figure 7. Snippet of output when using the “Summarize Within” geoprocessing tool. Note that this image was not exported to .SVG but merely serves as a guide for the physical model.
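
Both aggregation steps can also be scripted; a minimal sketch follows. Generate Tessellation takes the hexagon size as an area: a regular hexagon measuring 200 metres vertex-to-vertex has an area of roughly 26,000 square metres. Feature class names here are assumptions.

# Sketch: generate ~200 m (vertex-to-vertex) hexbins and count incidents per hexagon
import arcpy

incidents = "incidents_geocoded"  # geocoded security incidents (name assumed)

# Hexagons covering the extent of the incident points;
# ~26,000 sq m corresponds to about 200 m vertex-to-vertex
arcpy.management.GenerateTessellation(
    "hex_tessellation",
    arcpy.Describe(incidents).extent,
    "HEXAGON",
    "26000 SquareMeters"
)

# Count the incidents falling within each hexagon
arcpy.analysis.SummarizeWithin("hex_tessellation", incidents, "hex_counts")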

When the project idea was first conceived, it was paramount that I familiarize myself with the resources available and necessary for this project. To do so, I applied for membership to the Library's Collaboratory research space for graduate students and faculty members (https://library.ryerson.ca/collab/ – many thanks to them for making this such a pleasurable experience). Once accepted, I was invited to an orientation, followed by two virtual consultations with the Research Technology Officer, Dr. Jimmy Tran. Once we fleshed out the idea through discussion, I was invited to the Collaboratory for mediated appointments. Once in the space, the aforementioned .SVG files were opened in an image-editing program, where various aspects of each .SVG were segmented into red, green, or blue so the laser cutter could distinguish different features. The tessellation layer was also altered to include a 5mm (diameter) circle in the centre of each hexagon to allow for the eventual insertion of magnets.

The base map was etched onto an 11×8.5 sheet of clear acrylic (3mm thick), whereas the hexagons were cut into individual pieces at a size of 1.83in vertex-to-vertex. A black 11×8.5 sheet of acrylic was also cut to serve as the background for the clear base map, increasing the contrast to accentuate finer details. Once in hand, the hexagons were fitted with 5x3mm magnets (in the aforementioned circles) to allow for seamless stacking between pieces. Stacks of one to five hexagons represent the five classes in the 2D map, with height now replacing the graduated colour scheme (see Figure 7 and Figure 9; the varying translucency of the stacked clear hexagons also communicates the classes quite well). The completed 3D model is captured in Figure 8, along with the legend in Figure 9, which was printed out and is always to be presented in tandem with the model. The legend was not etched into the base map so that the base map can be reused for other projects that do not share the same classification scheme, and in case I changed my mind about a detail at some point.

Figure 8. 3D Laser-Cut Acrylic Hexbin Model depicting three-years worth of security incidents on campus. Multiple angles provided.

Figure 9. Legend which corresponds the physical model displayed in Figure 8. Physical version has been created as well and will be shown in presentation.

FUTURE RESEARCH DIRECTIONS AND LIMITATIONS

The geo-visualization project at hand serves as a foundation for a multitude of future research avenues: exploring other 3D modalities for representing human geography phenomena; serving as a learning tool for those not versed in cartography; and acting as a tool for collecting further data on perceived and experienced areas of crime. All of these expand on the tangibility, interchangeability, and gamification emphasized in this project. On the latter point, imagine a booth set up on campus where one simply asks, "using these hexagon pieces, tell us where you feel the most security incidents on campus would occur." The answers would be invaluable, yielding insight into which areas of campus community members feel are most unsafe and what factors may contribute to that feeling (e.g. built-environment features such as poor lighting, lack of cameras, narrowness, etc.), resulting in a synthesis of the qualitative and the quantitative. On the point of interchangeability, someone wanting to explore the distribution of trees on campus, for instance, could laser-cut their own hexbins out of green acrylic at their own desired size (e.g. 100m) and simply reuse the same base map.

Despite the fairly robust nature of the project, some limitations became apparent: issues with the way a few security incidents' data were collected and displayed on the university's website (e.g. non-existent street names, non-existent intersections, missing street suffixes); an issue where exporting a layer to .SVG created repeated overlapping copies of the same image that had to be deleted before laser cutting; and lastly, future iterations might exaggerate finer features (e.g. street names) to make the physical model even more legible.

Creating a 3D Holographic Map Display for Real-World Driving and Flight Navigation

By: Dylan Oldfield

Geovis Class Project @RyersonGeo, SA8905, Fall 2018

Introduction:

The inspiration for this project came from the visual utility and futuristic look of the holographic maps in James Cameron's 2009 film Avatar, in which holograms appear in several unique scenarios: inside aerial vehicles, on conference tables, and on air traffic control desks. From this came the concept to create, visualize, and present a present-day version of this technology: a hologram that shows the user where they are, geographically, while operating a vehicle. For instance, a hologram in a car could display navigation through the city, guiding the driver to their destination. Imagine a real-time 3D hologram replacing the 2D screen of Google Maps or any dashboard-mounted navigation in a car. The same application extends to aerial vehicles: imagine planes landing at airports close to urban areas where fog or other weather conditions make safe landing and take-off difficult. With a 3D hologram, visualizing where to go and how to navigate the difficult weather would be significantly easier and safer. For these two scenarios, two maps were recorded as videos and made into 3D holograms as a proof of concept for the use of this technology in cars and planes.

Data:

The data for this project was taken from the City of Toronto Open Data Portal and consisted of the 3D massing and street .shp files. It is important to note that for the hologram to appear properly, the background within the video, and in the real world, has to be as dark as possible; otherwise the video will not display fully. To create this effect, features were made in ArcGIS Pro to ensure the background, base, and ceiling of the 3D scene were black: a simple polygon for the ceiling, given a raised base height, and 'walls' created from a line surrounding the scene, extruded up to the ceiling. The base of the scene was an imported night-time base map.

Methodology:

  1. Map / Scene Creation Within ArcGIS-Pro

Within ArcGIS Pro, the 3D visualization functionality was used to extrude the aforementioned .shp files for the scene. All features were extruded in 3D from the base height, measured in meters. The buildings were extruded to their real-world dimensions and given a fluorescent blue colour scheme to provide contrast in the video. The roads were extruded so as to give the impression that sidewalks existed: the roads were buffered by 6 meters, dissolved to make them seamless, and extruded from the base. The inverse polygon of the newly created roads was then created and extruded slightly higher than the roads. The roads were given differing shades of grey, dark enough to suit the scene while still contrasting with each other. This effect is seen in the picture below.
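
Before the picture, here is a rough arcpy sketch of the buffer-and-erase construction just described; layer names are assumptions, and the Erase tool used for the inverse polygon requires an Advanced license.

# Sketch: 6 m dissolved road buffer, then the inverse 'sidewalk' polygon (names hypothetical)
import arcpy

# Buffer the street centrelines and dissolve into one seamless road polygon
arcpy.analysis.Buffer("streets", "roads_poly", "6 Meters", dissolve_option="ALL")

# Everything inside the scene boundary that is not road becomes the sidewalk polygon,
# to be extruded slightly higher than the roads in the scene
arcpy.analysis.Erase("scene_boundary", "roads_poly", "sidewalk_poly")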

 

  2. Animation Videos Creation and Export

Following the creation of the scene, the animations, videos of "driving" through the city and "flying" into Billy Bishop Airport, were created. ArcGIS Pro's Animation function creates videos through the consecutive placement of key frames, allowing a seamless fly-through of any 3D scene. The key frames are essentially checkpoints in the video, and the program fills the time and space between them by travelling smoothly from one frame to the next. The key frames are the boxes at the bottom of the image below.

Additionally, as seen in the image above, ArcGIS Pro offers several exporting options. The video can be exported at differing qualities to YouTube, Vimeo, Twitter, MP4, and GIF, among other formats. The two videos created for this project were exported at 1080p and 60 frames per second in MP4 format. Due to the large file sizes that result from these options, the export took over 2 hours per video.

  3. PowerPoint Video Transposition and Formatting

The hologram works by refracting the videos through each of the four lenses into the centre, creating the floating-image effect. For this to work, the video exported from ArcGIS Pro was inserted into PowerPoint and transposed 3 times into the format seen in the image below. Once the placements were equal and exact, the background, as mentioned previously, was turned black. The videos were set to play at the same time, and the slide was then exported a second time as an MP4 to create the final products.

  4. Hologram Lenses Template Creation

The hologram lenses were created from 4 clear CD cases. The lens templates needed to be physically compatible with the screen used to display the video, a 5th-generation iPad. After the template was defined, the lenses were cut out of the 4 CD cases with a box cutter and lightly sanded at all cut edges, both so they would not cut anyone and so the surfaces in contact with the epoxy would bond without issue. An epoxy resin was then used to glue the 4 lenses into their final shape. While the epoxy had a 10-minute setting time, it was left for 3 hours to ensure it had fully set. The lens was then complete and ready for use. The final lens and the iPad used for the display are seen in the image below.

Finally, here is a screen shot of the City of Toronto “Driving Navigation” video:

Using LiDAR to create a 3D Basemap

By: Jessie Smith
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

INTRO

My geovisualization project focused on using LiDAR to create a 3D basemap. LiDAR, which stands for Light Detection and Ranging, is a form of active remote sensing: pulses of light are sent from a laser towards the ground, and the time it takes for each pulse to return is measured, which determines the distance between the laser and the surface the light touched. Measuring all of the light returns yields millions of x,y,z points, allowing a 3D representation of the ground, whether just the surface topography or elements such as vegetation and buildings. The LiDAR points can then be assembled into a dataset to create DEMs or TINs, over which imagery is draped to create a 3D representation. The DEMs can also be used in ArcGIS Pro to create 3D buildings and vegetation, as seen in this project.

ArcGIS SOLUTIONS

ArcGIS Solutions are a series of resources made available by Esri, marketed for industry and government use. I used the Local Government Solutions, a series of focused maps and applications that help local governments maximize their GIS efficiency, improve their workflows, and enhance services to the public. Specifically, I looked at the Local Government 3D Basemaps solution. This solution includes an ArcGIS Pro package with various files and an add-in to deploy the solution. Once the add-in is deployed, a series of tasks become available that include built-in tools and information on how to use them. A sample dataset is also included, so all tasks can be run as a way to explore the process with appropriate working data.

IMPLEMENTATION

The tasks provided come at three different levels: basic, schematic, and realistic. Each task requires only 2 data sources: a LAS (LiDAR) dataset and building footprints. Based on the task chosen, a different degree of detail in the basemap is produced. For my project I used a mix of the realistic and schematic tasks. Each task begins with the same steps: classifying the LiDAR by returns, creating a DTM and DSM, and writing building heights and elevations into the building footprints attribute table. From there the tasks diverge. The schematic task extracts roof forms to determine the shape of the roofs, such as a gabled type, whereas in the basic task the roofs remain flat and uniform. The DEMs are then used in conjunction with the building footprints and the rooftop types to 3D-enable the buildings. The realistic task creates vegetation point data with z values using the DEMs; a map preset is then added to assign a realistic 3D tree shape that corresponds with the tree heights.
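
For readers curious what the DTM/DSM step looks like outside the packaged tasks, here is a rough arcpy sketch: a ground-only layer (LAS class code 2) rasterized for the DTM, and a first-return layer for the DSM. Paths, cell size, and filter values are assumptions; the solution's tasks handle these details for you.

# Sketch: DTM and DSM from a LAS dataset (paths and parameters are hypothetical)
import arcpy
arcpy.CheckOutExtension("3D")

lasd = r"C:\Projects\Basemap\city.lasd"

# DTM: ground-classified returns only (LAS class code 2)
ground = arcpy.management.MakeLasDatasetLayer(lasd, "ground_lyr", class_code=[2])
arcpy.conversion.LasDatasetToRaster(ground, r"C:\Projects\Basemap\dtm.tif",
                                    "ELEVATION", "BINNING AVERAGE LINEAR",
                                    sampling_type="CELLSIZE", sampling_value=1)

# DSM: first returns, capturing rooftops and canopy
first = arcpy.management.MakeLasDatasetLayer(lasd, "first_lyr", return_values=["1"])
arcpy.conversion.LasDatasetToRaster(first, r"C:\Projects\Basemap\dsm.tif",
                                    "ELEVATION", "BINNING MAXIMUM LINEAR",
                                    sampling_type="CELLSIZE", sampling_value=1)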

DEMs Created

DSM

DTM

Basic Scene Example

Realistic Scene

 

ArcGIS ONLINE

The newly created 3D basemap, which can be viewed and used in ArcGIS Pro, can also be used on AGOL with the newly available Web Scene. 3D data cannot be added to ArcGIS Online directly the way 2D data would be. Instead, a package was created for each scene and published directly to ArcGIS Online. The next step is to open each package on AGOL and create a hosted layer. This was done for both the 3D trees and the buildings, and the hosted layers were then added to a Web Scene. In the Scene Viewer, colours and basemaps can be edited, and additional contextual layers can be added. As a final step, the scene was used to create a web mapping application with the Story Map template. The Story Map can be viewed on ArcGIS Online, where the data can be rotated and explored.

Scene Viewer

Story Map

You can find my story map here:
http://ryerson.maps.arcgis.com/apps/Styler/index.html?appid=a3bb0e27688b4769a6629644ea817d94

APPLICATIONS

This type of project would be very doable for many organizations, especially local governments. All that is needed is LiDAR data and building footprints. This type of 3D map is often outsourced to planners or consulting companies when a 3D model is needed; now, government GIS employees could create the 3D model themselves. The tasks can either be followed exactly with your own data, or the general workflow can be recreated. The tasks are mostly clear about the required steps and processes, but more reasoning could be provided for setting values or parameters specific to the data being used inside each tool. This would make it easier to create a better model with less trial and error.


Surfer 15 Whistler-Blackcomb Geovisualization Using Data Retrieved From Google Earth

By: Ryan Wilkinson

Geovisualization Project Assignment @RyersonGeo, SA8905, Fall 2018

 

In this project, a 3D surface map of Whistler-Blackcomb in British Columbia was created using XYZ data retrieved from Google Earth and the geovisualization software program Surfer 15. Surfer is an excellent geovisualization program capable of creating 2D contour maps and 3D surface maps from XYZ and DEM data. The following method works for any terrain location in the world that can be viewed in Google Earth and is certainly not limited to my chosen location.

Collection of Data from Google Earth:

  

The Path tool in Google Earth was used to drop points on the Whistler-Blackcomb area; each red square represents a point with corresponding latitude, longitude, and elevation values.

The image above shows the trace of the path that was drawn to collect the XYZ data from the Whistler area, which is needed to create an accurate 3D surface map in Surfer.

Once the desired path was drawn, it was saved under "My Places" in Google Earth as a .kml file.

Data Conversion:

The .kml file was then uploaded into TCX Converter. Altitude values are commonly missing at this stage, so TCX Converter's "update altitude" tool was used to add them. Once the altitudes were successfully calculated, TCX Converter was used to convert the file from .KML to .CSV in preparation for visualization in Surfer.
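
If you prefer scripting the format conversion, the KML coordinates can be pulled into a CSV with a short Python script. This is a sketch assuming a standard Google Earth path KML; unlike TCX Converter's update-altitude tool, it does not fetch missing altitudes, so points saved with zero altitude stay zero.

# Sketch: extract lon,lat,alt triples from a Google Earth path KML into a CSV
import csv
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"
tree = ET.parse("whistler_path.kml")  # assumed filename

with open("whistler_path.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["longitude", "latitude", "altitude"])
    # each <coordinates> element holds whitespace-separated lon,lat[,alt] triples
    for coords in tree.getroot().iter(KML_NS + "coordinates"):
        for triple in coords.text.split():
            parts = triple.split(",")
            lon, lat = parts[0], parts[1]
            alt = parts[2] if len(parts) > 2 else "0"  # Google Earth often stores 0
            writer.writerow([lon, lat, alt])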

 

Grid File and 3D Surface Map Creation:

The .CSV file was then loaded into Surfer's Grid Data tool, which creates grid files (.grd) from XYZ and DEM data. Grid files can be used to create 2D contour maps and 3D surface maps in Surfer.

The grid file was then used by the 3D Surface tool to create a 3D surface map of the Whistler area. Colour scales and variations can easily be changed in Surfer to achieve the desired effect and convey information the way the user chooses. The colour scheme above is called "terrain" and effectively visualizes elevation change. The model can also be rotated and viewed from any desired angle in Surfer using the "trackball" tool; multiple angles of the 3D surface map can be seen in the finished product at the beginning of this blog post.