Visualizing Aerial Photogrammetry to Minecraft Java Edition 1.21.1

Andrea Santoso-Pardi
SA8905 Geovis project, Fall 2024

Introduction

Bringing aerial photogrammetry into Minecraft builds is an interesting way to combine real-world data with a video game that many people play. Adding aerial photogrammetry of a building or city is a way to get people interested in GIS technology, and it can also serve accessibility purposes by helping people understand where different buildings are in the world. This workflow will introduce the process of finding aerial building photogrammetry, processing the .obj file with Blender plugins (BlockBlender 1.41 and BlockBlender to Minecraft .Schem 1.42), exporting it as a .schem file for use in single-player Minecraft Java Edition 1.21.1, using the Litematica mod to paste the schematic, converting the model from latitude and longitude coordinates to Minecraft coordinates, and editing the schematic.

List of things you will need for this

  • Photogrammetry – preferably one that is watertight with no holes. If holes are present, one will have to manually close the holes.
  • Blender 3.6.2 – a free 3D modelling software. This does not work with the latest release, 4.3, as of this writing
    • Addons to use:
      • BlockBlender 1.41 ($20 Version) – paid for by the TMU Library Collaboratory; used to convert the photogrammetry into Minecraft block textures
      • BlockBlender to Minecraft .Schem 1.42 – used to export the file as a .schem file, a format that Minecraft can read
  • Minecraft Java Edition ($29.99) – a video game played on a computer. This is different from Minecraft Bedrock Edition

Gathering Data: What is Aerial Photogrammetry & What is the best model to use?

Aerial photogrammetry is a technique that uses overlapping photographs captured from above and from various angles to create accurate, measurable 3D models or maps of real-world landscapes, structures, or objects. Photogrammetry is also becoming much more accessible; models can now be created with just a phone camera. The data products from drone imagery of a building include:
Point clouds, which are a dense collection of points representing the object or terrain in 3D space, and 3D meshes, which are surfaces created by connecting points into a polygonal network. The polygonal network of aerial photogrammetry of a building is usually made up of many triangles.

If you are searching for a photogrammetry model to use, here is what made me choose this model of a government building and how I knew it was photogrammetry.

  1. Large number of triangles and vertices. The model had 1.5 million triangles and 807.4k vertices. 3D models made using 3D modelling software will have lower counts of both, in the tens of thousands; this is how I knew it was photogrammetry.
  2. Minimal clean-up. There was little to no clean-up required on the model for it to be put into Minecraft. Of course, if you do not mind doing a lot of clean-up before converting the photogrammetry into blocks, you can still do so, but know it will take hours depending on how many holes the model has.
    • I spent too many hours trying to clean up Kerr Hall photogrammetry and it still had all of its holes. If you want to do Kerr Hall, please contact Facilities for campus data such as floor plans and wall layouts showing what the exterior is supposed to look like, to ensure the trees aren’t merged into the photogrammetry. Then use the Blender Architecture and BlenderGIS plugins to scale the building accordingly.
  3. States the location/coordinates. If you want the elevation of the model, you will need to know where it is geolocated in the world. Having the coordinates makes this process easier in BlenderGIS.
  4. Minimal/zero objects around the walls of the building. When capturing photogrammetry, objects too close to a wall can merge with it. Things like trees make it very hard to get a clear view of the wall, to the point that there might not even be a wall in the photogrammetry.
    • The topology of trees also tends to create many tiny holes. Making sure no objects are around the building ensures that the walls are, and will remain, visible in the final product. Do a quick 360 of the photogrammetry to confirm this is the case for the model you want.
  5. Can be downloaded as an .obj file. For BlockBlender to work, the building textures need to be image files so BlockBlender can assign a block to each pixel of the texture.
  6. Consistent lighting all around. If different areas of the building have different lighting, the model will not be consistent, and I did not want to change the brightness of the photos.

When exporting the model, I chose the OBJ format as I knew it was compatible with the BlockBlender add-on.

When exporting, ensure you know where it downloads to. Extra steps like unzipping the file may occur depending on how it is formatted.

Blender

Blender is a free 3D modelling software that was chosen for its highly customizable editing options. If you haven’t used Blender before, I suggest learning the basic controls; this playlist helps explain each function.

Installing Addons

Download all the add-ons you need as .zip files.
Go to Edit > Preferences > Install From Disk and import the .zip files of the add-ons. Make sure you save your preferences. As a reminder, the add-ons needed for this tutorial are BlockBlender 1.41 ($20 Version) and BlockBlender to Minecraft .Schem 1.42.

Import & Cleaning Up the .obj Model

To import the model, go to File > Import > Wavefront OBJ.
The file does not have to be an .obj to work, but it does have to have textures that are separate from the 3D model if you want to use the BlockBlender add-on.

Import the same model twice: one copy to turn into Minecraft blocks and the other to use as a reference. Put them into different collections; you can name them “Reference” and “Editing”. Press M to create two separate collections, one for each model.

To clean up the model and have it ready for BlockBlender, the model has to have a solid, watertight mesh. In short, this means the mesh needs to have closed edges. It’s a bit hard to explain, and it’s not necessary to learn if your 3D model requires minimal clean-up, but if you want to understand more of what I mean, this resource might be helpful: https://davidstutz.de/a-formal-definition-of-watertight-meshes/

Go into Edit Mode. Click on the model (it should have an orange outline) and go into edit mode (see top left corner). Alternatively you can hit Tab to switch between Edit and Object Mode

Press A to Select All

In the menu bar above, go to Select > Select Loops > Select Boundary Loop.

It should look like this afterwards, with only the boundary loops selected.

Press Alt + F to fill in the faces
If you look underneath the model, you can see how it makes the mesh watertight

Before pressing Alt + F: model viewed from below, with boundary loops selected, in Blender 3.6.2
After pressing Alt + F: model viewed from below, with boundary loops selected, in Blender 3.6.2

You can now exit edit mode. You can see in Object mode how the hole in the model is now enclosed. This has created a watertight solid mesh.

Model Before Edits, viewed from below in Blender 3.6.2

Model After Edits, viewed from below in Blender 3.6.2


You can also clean up models with holes the same way. For complex models, however, select the area around the hole instead of selecting all.

If you would like a purely visual explanation, here is a video. Don’t switch over to Sculpt Mode, and don’t enable Dyntopo and start sculpting, as you will lose the textures, which are needed for BlockBlender. If you do accidentally use Dyntopo, press Ctrl + Z to undo, or copy and paste your reference model and do this section over again.

BlockBlender

BlockBlender is an add-on for Blender created by Joey Carolino. If you want to see visually how BlockBlender is used, below is a YouTube video showing more of its functions. There is a free version and a paid version of BlockBlender, so if you cannot contact the Library Collaboratory to use the computer with the paid BlockBlender, you can use the free version.

Using Blockblender

Before doing this step, save your work so that nothing is lost.
Select the model and press Turn Selected Into Blocks. This will take a while to fully load; when it does, the model will look like glass. If Blender becomes too laggy, exit Blender without saving. You can reduce the size of your model before doing this section to ensure you can add all the textures needed.

To find the image IDs and what order to use them in, go to Material Properties; the icon looks like a red circle.

The names of the photos are shown there. To make the model look like the picture, you must assign the images in that same order, or it will not look like the reference model.

Here is what the Blockblender Model looks like

From here, BlockBlender has different tools to choose the block selection. Each block is categorized into these areas in the Collections panel. You can select individual blocks and move them into the Unused collection by dragging and dropping; alternatively, hold Ctrl to select multiple blocks to drag and drop.

I also felt that the scale of 1 Block = 1m did not give enough detail so the block size was changed to 0.5m

The final model I ended up going with is below. Although it is not perfect, I can manually edit it or use Litematica or Minecraft commands afterwards. It is hard to show the workflow with just pictures, so I highly suggest the video above to see more of the functionality.

Government building when converted into Minecraft blocks using Blockblender 1.4.2. The N-Panel of blockblender is to the right of the screen

Blockblender to .Schem

This add-on was created by EpicSpartanRyan#8948 on Discord; special thanks to them. They are also available for hire if someone wants to put buildings into Minecraft to make a campus server, offering a free 30-minute consultation and aiming to respond within 12 hours.

Exporting to a .schem file puts the model into a format that Minecraft can read.

To quickly see how exporting and importing into Minecraft works using WorldEdit on a multiplayer server, please see their video below. It also compares what the textures look like in Blender versus inside Minecraft.

Using Blockblender to .Schem

To prepare the file for export, uncheck “Make Instances Real”.

Click the model. Press Convert to Mesh in the N-panel to make the mesh look more like minecraft blocks rather than triangles. You can see if the mesh has changed by selecting the object and going into Edit Mode or by looking at the viewport wireframe

Click the model, press Ctrl + A, and apply All Transforms. This ensures all the textures carry over.

The model with the viewport wire frame and the menu to press

Next, you want to go into File > Export > Minecraft (.schem) or press Export as schem on the N-panel Blockblender options. The N-panel can be seen in the previous section

Save the file under whatever name you want, but ensure the .schem file is saved to your schematics folder; this saves time trying to find the model later. The folder can be found by entering %appdata% in the file path bar. The file path should be
C:\Users\[YourComputerProfileName]\AppData\Roaming\.minecraft\schematics

If a schematics folder is not present, make one inside the .minecraft folder

Minecraft

Installing Minecraft, Fabric Loader and Mods

If you need help downloading Minecraft, look at this article: https://www.minecraft.net/en-us/updates/instructions . I bought Minecraft in 2013, so I’m unsure what buying and downloading Minecraft is like now, as I refuse to buy something I already have. This video may also be helpful; I have not followed along with it, but I did watch it to make sure it makes sense.

Fabric Loader

Fabric Loader is used to change the Minecraft experience from vanilla (default Minecraft) to whatever experience you want by loading other mods. It acts as a bridge between the game and the mods you want to use.

To download it, choose the installer that works best for your device. For me that was Download for Windows x64, the latest version of the Fabric installer, which is named fabric-installer-1.0.1 (this may change in the future).
Run the installer until it opens to the window shown here. Since I am not running Fabric on a server but on a client (single player), I installed it for Minecraft 1.21.1 with the latest Loader version.

Mods: Litematica and MaLiLib

Before launching Minecraft, download the mods and add them to your mods folder. You do not need to do anything to the mods after they are downloaded except move them into the Minecraft mods folder.

The general pathway would be C:\Users\[YourComputerProfileName]\AppData\Roaming\.minecraft\mods
Leave the downloaded files as they are (they may show up as WinRAR archives); they do not need to be extracted.

  • Litematica (litematica-fabric-1.21.1-0.19.50)
  • MaLiLib (malilib-fabric-1.21.1-0.21.0)
View of My Mods Folder

Launching Minecraft Java

Minecraft Launcher should show the fabric loader like this

Ensure the loader is changed to fabric-loader-1.21.1 so the mods will be attached. Once it is changed, press the big green Play button.

Create a New World

Since this is just to import the model into Minecraft Java 1.21.1 single player, I went to Singleplayer > Create New World. Here are the options I chose:
Game Tab
Game Mode: Creative
Difficulty: Peaceful
Allow Commands: On
World Tab
World Type: Superflat
Generate Structures: Off
Bonus Chest: Off

Once you have the options you like, you can create the new world.

Using Litematica

The building can be placed down in any world using the Litematica mod. If you have any trouble using it, How To Use Litematica by @ryanthescion helped a lot in learning how to use the different commands.

In Litematica, a Minecraft stick is used to toggle between modes. To get a stick, press E to open the inventory / creative menu and search for Stick (the menu opens to the search automatically), or find it under the Ingredients tab.

Left-click and drag the stick into your hotbar (the row where the multiple wooden sticks are visible) and exit the inventory by pressing E.
Note that one stick is enough for the mod to work, as it only has to be held in your hand. The multiple sticks shown are just to indicate where the hotbar is.

With the stick in your hand, you can toggle between the nine different modes by holding Ctrl and using the scroll wheel.

Adding The Model

In short, I opened the Litematica menu by pressing M and went to the Configuration menu.

Hotkeys is where you create custom keyboard and/or mouse shortcuts for different commands. Create a shortcut that does not already have an existing binding. The tutorial used J + K for “executefunction” to paste the building, so I followed along and used those as well; now I press J and K together to execute the command. If there is a problem with the hotkeys used, the entry turns a yellow/orange colour instead of white.


Next, I went back to the Litematica menu, went to Load Schematics, and added the folder path where I keep the schematics. I selected the schematic file I wanted to load, then pressed Load Schematic at the bottom of the page. The government building was then pasted into Minecraft.

Converting Latitude and Longitude to Minecraft Coordinates

In the Litematica menu, press the Loaded Schematics button, then go to Schematic Placements > Configure > Schematic placement, where you can change the building’s position to match its real-life coordinates. Y is 18 because the “What is My Elevation” website states 9 m at those coordinates; since 1 block equals 0.5 m in our model, 9 m divided by 0.5 is 18 blocks.

The X and Z coordinates come from converting Earth’s geographic coordinate system into Minecraft’s coordinate system (Cartesian coordinates). The conversion uses the WGS84 coordinate system (World Geodetic System 1984), assumes both origins start at 0, 0, 0, and uses a scale of 1 block = 0.5 metres. If 1 degree of latitude and 1 degree of longitude are both 111,320 metres (for this projection)²:
Blocks per degree of latitude = 111,320 / 0.5 = 222,640 blocks per degree
Blocks per degree of longitude = [111,320 × cos(latitude in radians)] / 0.5

To align this with real-world geographic coordinates (latitude and longitude), one needs to define a reference point. Here the real-world origin (0° latitude, 0° longitude) is set to correspond to X = 0 and Z = 0 in Minecraft. The formulas below use the difference in latitude and longitude from this origin.

The Formulas to Convert to Minecraft are:
Minecraft Z Coordinate = [ΔLatitude × 111,320] / [Scale (metres per block)]
Minecraft X Coordinate = [ΔLongitude × 111,320 × cos(Target Latitude in radians)] / [Scale (metres per block)]
Minecraft Y Coordinate = Elevation in metres / Scale (metres per block)

Where:
ΔLatitude = Target Latitude − Origin Latitude
ΔLongitude = Target Longitude − Origin Longitude
Target Latitude is 47.621474856679534°
Target Longitude is −65.65655551636287°
If the origin is 0° latitude and 0° longitude
Scale (metres per block) = 0.5 metres

Using cosine means the conversion better reflects real-world distances, as the Earth is a spheroid and Minecraft is flat.

Therefore the Minecraft coordinates are

Minecraft X Coordinates = −9,858,611
Minecraft Y Coordinates = 18
Minecraft Z Coordinates = 10,606,309
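If you would rather script this conversion than do it by hand, here is a minimal Python sketch of the formulas above (the function name and defaults are my own). The output should come out close to the coordinates listed above; small differences can occur depending on the exact metres-per-degree and latitude values used.

import math

def latlon_to_minecraft(target_lat, target_lon, elevation_m,
                        origin_lat=0.0, origin_lon=0.0,
                        metres_per_block=0.5, metres_per_degree=111_320):
    # Z from the latitude difference, X from the longitude difference scaled
    # by the cosine of the target latitude, Y from elevation / scale
    d_lat = target_lat - origin_lat
    d_lon = target_lon - origin_lon
    z = (d_lat * metres_per_degree) / metres_per_block
    x = (d_lon * metres_per_degree * math.cos(math.radians(target_lat))) / metres_per_block
    y = elevation_m / metres_per_block
    return round(x), round(y), round(z)

# Government building used in this tutorial
print(latlon_to_minecraft(47.621474856679534, -65.65655551636287, 9.0))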


Note: you will have to teleport to where the model is placed. Use /tp <playername> <x> <y> <z> to travel to where the building is loaded.

Fixing The Model

There were many edits that needed to happen. I fixed the trees to actually have trunks, as the textures did not load them in properly; I used what was generated as a guide for the shapes the trees should take.

I also tried to change the pattern on the wall to more accurately reflect what it looks like in the photogrammetry

Blender Render of the 3D Model (before using Blockblender) compared with what I changed it to in Minecraft
Helpful Tips

/time set day
/effect give <targets> <effect> infinite [<amplifier>] [<hideParticles>]
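For example, one way to fill in those placeholders (my own choice of effect, not from the original list) is to give yourself permanent night vision with no particles:
/effect give @s minecraft:night_vision infinite 0 true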

To edit the schematic, Minecraft Litematica schematic editing by @waynestir on YouTube was the most helpful; it showed me how to replace blocks and keep those changes in the schematic.


Limitations

The limitation of this approach (taking aerial building photogrammetry, using Blender to convert it into Minecraft blocks, and then converting the latitude and longitude coordinates to Minecraft coordinates to put the building in exactly the right spot) is that Minecraft is a fixed-grid, cubic block representation, which will always lack the detail of the 3D aerial building photogrammetry model. Choosing a scale that preserves both geolocation correctness and building height in Minecraft is a fine-detail task that has to balance artistry with reality.

In BlockBlender, fine details like the antennae at the top of the building don’t come through, as it only uses blocks for the representation, so railings, window frames, and more can be lost or require block substitutes.

The photogrammetry can be very complex and very noisy, with shadows that may cause BlockBlender to interpret the data incorrectly. BlockBlender as an add-on is also limited to the default Minecraft block colours, which may not accurately reflect what real-world surfaces look like or are made of.

The Minecraft height limit can be an issue depending on how tall the building you want to convert is.

Geolocating the building from latitude and longitude to Minecraft coordinates will not work at a much larger scale (i.e. keeping the scale at 1 block = 0.5 m), as the Minecraft world is only 30 million by 30 million blocks.

Litematica also has limited functionality; beyond a certain point one has to do a lot more manually or use another plug-in.

Conclusion

This workflow is an excellent way to bring real-world data into Minecraft, but it requires balancing the complexity of photogrammetry models with Minecraft’s block-based limitations. Understanding and addressing these challenges produces detailed, manageable builds that work well in Minecraft’s unique environment.

Footnotes

  1. “Canadian Government Building Photogrammetry” (https://skfb.ly/oLZyt) by Air Digital Historical Scanning Archive is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/) ↩︎
  2. https://www.esri.com/arcgis-blog/products/arcgis-desktop/defense/determining-a-z-factor-for-scaling-linear-elevation-units-to-match-geographic-coordinate-values/ ↩︎

Visualizing the Influence of Afghanistan’s Geography on Its History and Culture Using 3D Animation in ArcGIS Pro

Hello everyone! I’m excited to share my tutorial on how to use the animation capabilities in ArcGIS Pro to visualize 3D data and create an animated video.

My inspiration for this project was learning more about my ancestral homeland, Afghanistan, whose history and culture are known to have been heavily influenced by its location and topography.

Since I also wanted to gain experience working with the 3D layers and animation tools available in ArcGIS Pro, I decided to create a 3D animation of how geography has influenced Afghanistan’s history and culture.

My end product was an educational video that I narrated and posted on Youtube.

The GIS software I used in this project was ArcGIS Pro 3.3.1. I also used the Voice Memos app to record my narration, and iMovie to compile the audio recordings and the exported ArcGIS Pro movie into one video.

For my data sources, I derived the historical information presented in the animation from a textbook by Jalali (2021), the political administrative boundary of Afghanistan from geoBoundaries (Runfola et al., 2020), and the World Elevation 3D/Terrain 3D and World Imagery basemap layers from ArcGIS Pro (Esri et al., 2024; Maxar et al., 2024).

For this tutorial, I will only be providing a broad overview of the steps I took to create my end product. For additional details on how to use the animation capabilities in ArcGIS Pro, please refer to Esri’s free online Help documentation.

Now, without further ado, let’s get started!

The objective of this project was to design and create a geographic-based animation involving 3D data using ArcGIS Pro.

The following convention was used to represent the process of navigating the ArcGIS Pro ribbon: Tab (Group) > Command

Since I wanted to create a narrated video as my end product, I first had to research my topic and decide what kind of story I wanted to tell by writing the script that would go along with each keyframe.

The next step was to record the narration using the script I wrote so that I could have a reference point for my keyframe transitions.

This process was as simple as hitting record on Voice Memos, then uploading each audio file to a new iMovie project.

The audio files were trimmed and aligned until a seamless transition between each clip was achieved.

To create the animation, the following steps were taken:

In my case, the Terrain 3D layer was automatically loaded as the elevation surface. To load the World Imagery layer, I had to navigate to Map (Layer) > Basemap and select “Imagery”.

I then added and symbolized the political administrative boundary shapefile I downloaded for Afghanistan.

To mark the locations of the three cities I included in some of the keyframes, I also created my own point geometry using the graphical notation layer available through Insert (Layer Templates) > Point Map Notes. The Create tool under Edit (Features) was used to digitize the points.

Finally, I downloaded two PNG images to insert into the animation at a later time (Anonymous, 2014; Khan, 2010).


Although an animation can be created regardless, bookmarking the view you intend to use for each keyframe is a good way of planning out your animation. The Scene’s view can be adjusted and updated at a later time, but this allows you to have an initial framework to start with.

ArcGIS Pro also allows you to import your bookmarks to automatically create keyframes using preconfigured playback styles.

Creating a Bookmark

To open the Bookmarks pane, click on “Manage Bookmarks” under Map (Navigate) > Bookmarks. Zoom to your desired keyframe location and create a bookmark using the New Bookmark subcommand.

The Locate command under Map (Inquiry) can be used to quickly search for and zoom to any geocoded location on the Earth’s surface.

Adjusting the View

To change the camera angle of your current view, use the on-screen navigator in the lower left corner of the Scene window. Click on the chevron to access full control.

By clicking and holding down on the bottom of the arrow on the outer ring of the on-screen navigator, you can rotate the view around the current target by 360°.

Clicking and holding down on the outer ring only will allow you to pan the Scene towards the selected heading.

To change the pitch of the camera angle or rotate the view around the current target, click and hold down on the inner ring around the globe, then drag your mouse in the desired direction.

Finally, clicking and holding down on the globe allows you to change your current target.

If your current Scene has never been initialized for an animation, the Animation tab can be activated through View (Animation) > Add.

To ensure you design the animation to fit the resolution you intend to export to, click on Animation (Export) > Movie.

In the Export Movie pane, under “Advanced Movie Export Settings”, select your desired “Resolution”. You could also use one of the “Movie Export Presets”  if desired. I chose “1080p HD Letterbox (1920 x 1080)” to produce a good quality video.

This step is very important, as the view of your keyframes and the placement of any overlays you add are directly affected by the aspect ratio of your export, which is directly tied to your selected resolution.


Start off by opening the Animation Timeline pane through Animation (Playback) > Timeline.

In the Bookmarks pane, click on your first bookmark. With your view set, click “Create first keyframe” in the Animation Timeline pane to add a keyframe.

Repeat this process until all of your keyframes are added.

Alternatively, as mentioned before, the Import command in Animation (Create) can be used to automatically load all of the bookmarks in your project as keyframes using a preconfigured playback style.


If you need to adjust the view of a keyframe, adjust your current view in the Scene window, then select the keyframe in the Animation Timeline pane and hit Update in Animation (Edit).

To configure the transition, time, and layer visibility of each keyframe, open the Animation Properties pane through Animation (Edit) > Properties and click on the Keyframe tab in this pane.

Choose one of the five transition types to animate the camera path: “Fixed”, “Adjustable”, “Linear”, “Hop”, or “Stepped”.

To create a tour animation that pans between geographic locations, a combination of “Hold” and “Hop” can be used. “Fixed” can be used to create a fly-through that navigates along a topographic feature.

Hit the play button in the Animation Timeline pane to view your animation and adjust accordingly.

Although the Terrain 3D and World Imagery layers may not draw well in ArcGIS Pro due to their sheer size, they should appear fine in the exported video.

Text, images, and other graphics can be added using the commands available in Animation (Overlay). Acceptable image file formats are JPG, TIFF, PNG, and BMP.

The position and timing of an overlay can be adjusted in the Overlays tab in the Animation Properties pane.


Once you’re satisfied with your animation, you can export by clicking on Animation (Export) > Movie again.

Name the file and select your desired “Media Format” and “Frames Per Second” settings.

Your resolution should already be set, but you can adjust the “Quality” to determine the size of your file.

Hit “Export” once you’re ready. Depending on the size of your animation, it can take several hours for the video to export. Mine took over 10 hours.

You can also export a subsection of your animation by specifying a “Start Time” and “End Time”. This can be useful to preview the end result of your animation bit by bit without having to export the entire video, which can take a lot of time.

With my animation exported, I added the video to my project in iMovie. Since I timed the animation according to my narration, the two files aligned perfectly at the zero mark and no further editing had to be done.

To export the final video, I used File > Share > Youtube & Facebook and made sure to match the resolution to the one I selected in ArcGIS Pro (1920 x 1080). iMovie will notify you once the .mov file is exported.

The final step was uploading the video on Youtube.

Create and/or log in to your Youtube account. On the Youtube homepage, click on You > Your videos > Content > Create > Upload videos to add the .mov file. A wizard will pop up.

Under the Details tab, fill out the “Title” and provide a “Description” for your video. Timestamps marking different chapters in the video can also be added here.

Select a thumbnail and fill out the remaining fields, including those under “Show more”, such as “Video language”. Selecting a “Video language” is necessary to add subtitles, which can be done through the Video elements tab.

Once your video is set up, hit “Publish”. Youtube will supply you with the link to your published video.

You just visualized 3D data and created a geographic-based animation using ArcGIS Pro!

Anonymous. (2014, September 18). Ahmad Shah Durrani [Artwork]. https://history-of-pashtuns.blogspot.com/2014/09/ahmed-shah-durrani.html

Esri, Maxar, Earthstar Geographics, & GIS User Community. (2024, November 19). World Imagery (November 26, 2024) [Tile layer]. Esri. https://services.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer

Jalali, A. A. (2021). Afghanistan: A Military History From the Ancient Empires to the Great Game. University Press of Kansas.

Khan, M. (2010, December 11). Horse [Artwork]. https://www.foundmyself.com/Momin+khan/art/horse/66007

Maxar, Airbus DS, USGS, NGA, NASA, CGIAR, GEBCO, N Robinson, NCEAS, NLS, OS, NMA, Geodatastyrelsen, & GIS User Community. (2024, June 12). World Elevation 3D/Terrain 3D (November 26, 2024) [Image service layer]. Esri. https://services.arcgisonline.com/arcgis/rest/services/WorldElevation3D/Terrain3D/ImageServer

Runfola, D., Anderson, A., Baier, H., Crittenden, M., Dowker, E., Fuhrig, S., Goodman, S., Grimsley, G., Layko, R., Melville, G., Mulder, M., Oberman, R., Panganiban, J., Peck, A., Seitz, L., Shea, S., Slevin, H., Youngerman, R., & Hobbs, L. (2020). GeoBoundaries: A Global Database of Political Administrative Boundaries (September 21, 2024) [Shapefile]. GeoBoundaries. https://www.geoboundaries.org

Mapping the Elevation of different Mount Kilimanjaro Climbing Routes in a 3D Scene-based Story Map

By Gabriel Dunk-Gifford

November 27th, 2024

SA 8905

Background

In 2023, I climbed Mount Kilimanjaro. Mount Kilimanjaro is located in the northern region of Tanzania, straddling the border with Kenya, and at 5,895 metres it is the tallest mountain in Africa. My sister was doing a work term in Tanzania, so I thought it was a great opportunity to complete a physical and mental challenge that had been a bucket list item for me. The other major driver for me to climb this mountain is that it is one of the tallest mountains in the world that does not require a large amount of technical climbing and can be done mostly by walking. Despite this, the freezing temperatures, the altitude, and the long distances being covered meant that it was still an immensely difficult challenge for me to complete.

We chose to climb the 7-day Machame Route, which was recommended for people who wanted a long enough route to have a relatively high chance of reaching the summit, but did not want to spend an excessive amount for the longest routes. This is just one of many routes that climbing companies use when leading trips to the summit, and they vary a great deal in length. The shortest route, Marangu, which takes place over 5 days, is the least expensive, since the 10-20 people required to lead a group of climbers (guides, assistant guides, porters, and cooks) are paid for fewer days. However, the flip side is that 5 days does not provide very much time to acclimatize to the altitude, which means that over 50% of climbers on this route do not reach the summit due to prevalent altitude sickness. The 7-day Machame Route is much more manageable with the extra days, giving climbers more time to make sorties into the higher elevation zones and back down to acclimatize more comfortably. The third route, called the Northern Circuit as it traverses all the way around the north side of the mountain, takes place over 10 days. It is the most scenic, giving climbers time to see all the different vegetation zones that the mountain has to offer, and it also causes the least altitude-related stress, as it ascends into the high elevations much more slowly and gives more time to acclimatize once the climbers have reached that zone.

Altitude sickness varies a great deal between people, both in severity and in symptoms. For instance, one person in my group, an experienced triathlete, began experiencing symptoms of altitude sickness on the 2nd day of the climb and was ultimately unable to reach the summit, whereas my symptoms were less severe. Even so, by the time we reached the summit in the early hours of the morning on Day 6, I had begun to feel the effects of the altitude, with persistent headaches, exhaustion, and vertigo. These symptoms are all consequences of the reduced amount of oxygen available at such a high elevation, and they were compounded by the extremely low temperatures at night (between -15 and -25 degrees), which made it very difficult to sleep. Despite these setbacks, reaching the summit was a very interesting and rewarding experience that I wanted to share with this project.

Scope of the project

For the purposes of this geovisualization project, I chose to create a 3D scene in ArcGIS Pro that displays the elevation of the different parts of the mountain and shows how three different route lengths (5 days, 7 days, and 10 days) differ in how they traverse the different elevation zones. I also drew dashed lines on my 3D model to mark the elevations at which different levels of altitude sickness typically occur. Because of my own personal experience and that of other people, I thought it was important to analyze altitude sickness and how prevalent it can be on a climb as common as Mount Kilimanjaro.

Generally, there are two levels of altitude sickness that can occur on a climb such as this one. The first, Acute Mountain Sickness or AMS, is extremely common. The symptoms are not particularly severe, usually showing in most people as fatigue, shortness of breath, headaches, and sometimes nausea. The risk of this illness usually begins in the 2,000 to 2,500 metre range, becoming extremely common by the time a person ascends to around 4,000 metres. The second, much more severe form of altitude sickness comes in two forms: High Altitude Pulmonary Edema and High Altitude Cerebral Edema. As is probably evident from the names of these illnesses, HAPE primarily affects the lungs, while HACE mostly affects the brain, though most people that contract them experience symptoms of both. HAPE/HACE begins to occur in people in the 4,500 m range (with a 2-6% risk at that elevation), but becomes much more prevalent at elevations above 5,000 m. The risk of this illness continues to increase as the elevation increases, which is why it is so difficult to reach the summit of 8,000 m+ mountains like Everest or K2. To counteract the effects of these illnesses, acclimatization to the elevation is extremely important. This is why mountain guides constantly stress the need to keep a very slow pace of climbing, and why longer routes have a much higher success rate, as they allow more time for the body to acclimatize to the altitude.

Format of the Project

To complete this project, I began by downloading an elevation raster dataset from NASA Earthdata to display the elevation on the mountain. I then added that to an ArcGIS project and drew a boundary around the mountain to use as my study area. From there, I clipped the raster to only show the elevation in that area and to limit the size of the file. The dataset was classified at 1 metre intervals, which meant that the differences in elevation between classes were extremely difficult to see, so I used the Reclassify analysis tool to classify the raster at 500 metre intervals. I then assigned colours to each class, with green representing the lowest elevations, then yellow, orange, red, and finally blue and white for the very high elevations around the summit. I then started a project in Google Earth to draw out the different climbing routes. While Google Earth has limited functionality in terms of mapping, I find that its 3D terrain is detailed and easy to see, so it provided a more accurate depiction of the routes than if I had used ArcGIS Pro. I used point placemarks to mark the different campsites on each of the routes and connected them with line features. For knowledge of the routes and campsites, I used the itineraries for each route on popular Kilimanjaro guide companies’ websites. Once I had finished drawing out the routes and campsites in Google Earth, I exported the map as a KML file and converted it to ArcGIS layer files using an analysis tool. Finally, I drew polygons around the elevation borders that corresponded with the risks of altitude sickness that I outlined above. I used dashed lines as my symbology for that layer in order to differentiate it from the solid-line routes layer.

The next step for the project was converting the map to a 3D scene, in order to display the elevation more accurately. I increased the vertical exaggeration of my ground terrain base layer in order to differentiate the elevation zones more. From there, I explored the scene and added labels to make sure that all the different map elements could be seen. I created an animation that flew around the mountain to display all angles at the beginning of my Story Maps project. I then created still maps that covered the different areas of the mountain that are traversed by the different routes. Since the 5 day route basically ascends and descends on the same path, it only needed one map to show its elevation changes and different campsites. However, the 7 day route needed two different maps to capture all the different parts of the route, and the 10 day one needed four, as it travels all the way around the less commonly climbed north side of the mountain. Finally, I created an ArcGIS Story Maps project to display the different maps that I created. I think that Story Maps is an excellent tool for displaying the results of projects such as this one. Its interactive and engaging interface allows the user to understand what can be a complicated project in a simple and intriguing manner. I added pictures of my own climb to the project to add context to the topic, along with text explaining the different maps. The project can be viewed here: https://arcg.is/1Sinnf0

Conclusions

This project is very beneficial: it gives people who have climbed the mountain the opportunity to see the different elevation zones they traversed and perhaps connect that with some of their own experiences, and it gives prospective climbers the chance to see the progression through the elevation levels that each route takes and to make an informed choice of route based on that.

Using ReactJS and OpenLayers to make a Madawaska River web map

By: Garrett Holmes  | SA8903 – Fall 2024

The Madawaska is a river and provincial park located in the Central Ottawa watershed in Southern Ontario.

The section of river inside the Madawaska Provincial Park is a popular camping and water-sport location for paddlers across the province. The river includes numerous sets of rapids that present a fun and exciting challenge for paddlers. However, as the water level and discharge rates fluctuate throughout the year from rainfall, snowmelt, and other factors, the conditions of the whitewater rapids change, so it’s important for paddlers to understand what state the river is in in order to prepare for a trip. My web app visually symbolizes what these different water levels mean for paddlers at different times of the year, while providing other information about rapids, campsites, and access points.

The final web app repository can be viewed here

Requirements

Creating a React App

Install React

Follow this tutorial to create a basic ReactJS app, call it ‘map-app’ and navigate to it in a text editor like VSCode. You will notice a few important files and folders in here. ‘README.md’ includes some information and important commands for your app. The ‘public’ folder includes any files that you’ll want to access in your app, like images or metadata. This is where you will put your GIS data once we have the react app assembled.

Basic React App file structure

React is designed to be modular and organized, and essentially lets us manipulate HTML components using javascript. A react app is made up of components, which are sections of code that are modular and re-usable. Components also have props and states. Props are passed into a component and can represent things like text, style options, files, and more to change the look and behaviour of components. Hooks are functions that allow us to change the state of a component on the fly, and are what makes react interactive and mutable.

Setting up OpenLayers

Before we start our React app, install OpenLayers, a library that allows us to easily display and work with geographic vector data using JavaScript and HTML, which can therefore be used with React. Run the command “npm install ol” to install OpenLayers.

Now that we have a React app set up and OpenLayers installed, we can start the app with npm start. This will open a page in your default browser that links to the local server on your machine that’s running your application.

Making a base map

Now let’s make a component for our map. Right click on the ‘src’ folder in the left pane and click ‘New Folder’; we will call it ‘Components’. Now right click on that folder and click ‘New File’, and call it ‘BaseMap.js’. If you have the extension ‘ES7+ React/Redux/React-Native snippets’ installed (in the extensions tab on the left), you can go to your new file and type ‘rfce’ then press enter to create the basic shell of a component with the same name as the filename. Otherwise you can copy the code below into your ‘BaseMap.js’ file:

Now let’s populate the component with everything we need from OpenLayers. We will create a map that displays OpenStreetMap, an open-source basemap. I won’t explain everything about how React works since it would take too long, but see the [OpenLayers guide](https://openlayers.org/doc/tutorials/concepts.html) for details on what each of the components is doing. This should be your component once you have added everything:

This will fill the entire page with the Open Street Map basemap. To render our component on the page, navigate to ‘App.js’ and delete all the default items inside the <div> in the return statement. At the top of the page import our BaseMap component: import BaseMap from './Components/BaseMap';. Then, add the component inside the <div> in the return statement.

Hit Ctrl+S to save, and you should see your map on the webpage! You will be able to zoom and navigate the same as if it were Google Maps.

Adding vector data to the map

Now, let’s create a generalized component that we can use to add vector data to the web app. OpenLayers is capable of supporting a variety of filetypes for displaying vector data, but for now we’ll use GeoJSON because of its widespread compatibility.

Inside the ‘Components’ folder, create a new file called ‘MapLayers.js’, then use rfce to populate the component, or copy the following code:

In React, components communicate with each other using ‘props’. We’ll use these to add our layers.

Add a ‘layers’ prop and a ‘map’ prop to the component definition:

Now we can access the data that’s passed into the component. Layers will represent a list of objects containing the filenames for our data as well as symbology information. Map will be the same map we created in the ‘BaseMap’ component.

For react to run code, we need to use a function called a useEffect, that will run automatically when the props that we specify are changed. Inside this function is where we will load the vector data into the ‘map’ prop.

Since the ‘layers’ prop is a list of objects, we can iterate through it with the ‘forEach’ command. For every layer in the list, we’ll make a new VectorSource, which is an OpenLayers object that keeps track of geometry information. We’ll then add each VectorSource to a VectorLayer, which keeps track of how we display the geometry. Finally, the loop adds each new layer to the map. The list at the very bottom of the ‘useEffect()’ tells the program to run the contained code every time the ‘map’ or ‘layers’ props change.

For now, our component will return ‘null’, because everything is going to be rendered on the map in the BaseMap component.

Here’s what your final ‘MapLayers’ component should look like:

Adding Data

A map with nothing on it is no use to anyone. For this project, the goal was to build a web tool for looking at how the water level affects the river’s edge in the Madawaska River Provincial Park in Ontario.

In order to represent the elevation of the river and calculate metrics at different locations along it, I used the Ontario Imagery-Derived DEM, which is offered at a 2 m resolution. The Madawaska River spans two sections: DRAPE B and DRAPE C. Since these are very large image files, I needed to convert each file to TIF format and generate pyramids for display in ArcGIS or QGIS.

Then, I downloaded the Ontario Hydrographic Line dataset to get the locations of rapids and other features like dams.

I also needed shape data to represent the river itself from the Ontario Open Data portal.

Then, I loaded the ‘.vrt’ file I made from the DEM images into QGIS, and clipped it by the extent of the river polygon. I chose to clip the raster to a buffer of 1km to leave room to represent the surrounding area as well.

Preparing the data

Then, I had to format the data properly to be used in the web app.

When the water level of a river rises, the width of the river expands, and the bank recedes up the shore. I represented the change in water level by adding a dynamic buffer to the river polygon as an approximation of water level rise. It should be noted that this approximation assumes the water rises uniformly along the course of the river, which may not be true; however, for the purpose of simplifying the app I used that assumption. The actual distance on land that the river expands to at any given section will depend on the slope of the embankment. This is where the DEM comes into play. I calculated the buffer distance to be applied to the river based on sampled points representing the slope along the river’s edge. Then I used the average slope to come up with the buffer distance per unit of water level rise.

To keep things simple, and since the slope of the river bank does not vary much over its course, we will use the average slope along the edge of the river as our Slope value.

To do this, I used the following QGIS tools:

  • Polygon to Lines (Madawaska River)
  • Points Along Geometry (Madawaska River Lines, for every 50m)
  • Sample Raster Values (Slope)
  • Field Calculator: mean(“SAMPLE_1”) = 9.6%
Points generated every 100m along the water line, overlaid with the slope raster

Here’s the equation for calculating buffer distance:

Buffer Distance = water level change / tan(Slope)

(Where slope is represented as a percentage)

The tangent of the slope here represents the ratio of the water level rise to the distance it will travel over land. Therefore the constant we’ll divide the water level change with will be tan(slope) = 0.17
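As a quick sanity check of that arithmetic, here is a small Python sketch (Python rather than the app’s JavaScript, and the 9.6 mean slope is treated as degrees here so that tan(slope) works out to roughly 0.17, matching the constant above):

import math

MEAN_SLOPE_DEG = 9.6                                # mean slope sampled along the river edge
TAN_SLOPE = math.tan(math.radians(MEAN_SLOPE_DEG))  # ~0.17

def buffer_distance(water_level_change_m):
    # Horizontal distance the river edge moves for a given change in water level,
    # assuming a uniform bank slope along the whole river
    return water_level_change_m / TAN_SLOPE

print(round(buffer_distance(0.5), 1))  # a 0.5 m rise widens the bank by ~3.0 m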

Before adding my shape data to the map, I had to do a fair amount of cleaning in QGIS. First, every layer is clipped to be within 1km of the river. All the rapids were named manually based on topographic maps, then Aggregated by their name. I also generated a file containing the centroids for each set of rapids for easier interpretation on the map.

Campsite and Access Point data was taken from the Recreation Point dataset by the Ministry of Natural Resources. Campsites and Access points were split into separate layers for easier symbolization.

Each file was then exported from QGIS as a GeoJSON file, then saved in the ‘public’ folder of my react app under ‘layers’. This will make it possible to access the layers from the code.

Adding the data to the web app

Now that all the data is ready, we can put all the pieces together. Inside ‘BaseMap.js’, create a new list at the top of the page called ‘jsonLayers’. Each item in the list will have the following format:

Where the filename is the path to your GeoJSON layer, the style is an OpenLayers Style instance (which I won’t explain here, but you can learn more from the OpenLayers documentation), and zIndex represents which layers will appear on top of others (For example, zIndex = 1 is below zIndex = 10).

Next, at the bottom of the component where we ‘return’ what to display, we will add an instance of our ‘MapLayers’ component, and pass in the required props.

Now in your web app, you should see your layers on screen! You may need to zoom in to find them.

I added a few other features and tools that make it so that the map automatically zooms to the extent of the largest layer, and so that the user can select features to see their name.

Geo-visualization

Once the basic structure of the app was set up, I could start to add extra features to represent the water level change. I created a new component called ‘BufferLayer’, which takes in a single GeoJSON file as well as a map to display the vector on. This component makes use of a library called turf.js that allows you to perform geospatial operations in javascript. I used turf.js to apply the buffer described above using a function that takes the geometry from the VectorSource for the layer, and directly applies a turf.js buffer operation to it. The buffer is always applied to the ‘original’ river polygon, meaning that a 10m buffer won’t ‘stack’ on top of another 10m buffer. This also prevents issues with broken geometry caused by the buffer operation when applying a negative buffer.

To control my buffer, I created one more component called ‘LevelSlider’, which adds a simple slider and a button that, when pressed, runs the ‘handleBufferChange’ function. The math for calculating the buffer distance based on the slope is done in the LevelSlider component with the static values I calculated earlier. The minimum and maximum values are also customizable. Here’s a snippet of that component:

The LevelSlider component is added in the ‘return’ section of ‘BufferLayer’, with CSS styling to make sure it appears neatly in the bottom left corner of the map.

The example minimum and maximum values are based on the minimum and maximum water level changes (from average) in the river based on real hydro-metric data from Environment Canada.

Conclusion

With a bit of extra styling, and by making use of other OpenLayers features like ‘Select’, and ‘Overlay’, I was able to build this functional, portable web app that can be added to any react website with ease.

However, lots more can be done to improve it! A chart that tracks hydro-metric data over time could help give context to the water levels on the river. With a little more math, you could even make use of discharge information to estimate the speed of the river at different times of year.

Using the campsite data and a centreline of the river course, you could calculate the distance between campsites, rapids, access points, etc., making the tool functional for planning trips. Also, given more information about individual whitewater sets, such as classes (C2, C3, etc.), descriptions, or images, you could better represent the river in all its detail.

The final layout of the web app

Visualizing Earthquakes with Pydeck: A Geospatial Exploration

Mapping data in an interactive and visually compelling way is a powerful approach to uncovering spatial patterns and trends. Pydeck, a Python library for large-scale geospatial visualization, is an exceptional tool that makes this possible. Leveraging the robust capabilities of Uber’s Deck.gl, Pydeck enables users to create layered, interactive maps with ease. In this tutorial, we delve into Pydeck’s potential by visualizing earthquake data, exploring how it allows us to reveal patterns and relationships in raw datasets.

This project focuses on mapping earthquakes, analyzing their spatial distribution, and gaining insights into seismic activity. By layering visual elements like scatterplots and heatmaps, Pydeck provides an intuitive, user-friendly platform for understanding complex datasets. Throughout this tutorial, we explore how Pydeck brings earthquake data to life, offering a clear picture of patterns that emerge when we consider time, location, magnitude, and depth.


Why Pydeck?

Pydeck stands out as a tool designed to simplify geospatial data visualization. Unlike traditional map-plotting libraries, Pydeck goes beyond static visualizations, enabling interactive maps with 3D features. Users can pan, zoom, and rotate the maps while interacting with individual data points. Whether you’re working in Jupyter Notebooks, Python scripts, or web applications, Pydeck makes integration seamless and accessible.

One of Pydeck’s strengths lies in its support for multiple visualization layers. Each layer represents a distinct aspect of the dataset, which can be customized with parameters like color, size, and height to highlight key attributes. For instance, in our earthquake visualization project, scatterplot layers are used to display individual earthquake locations, while heatmaps emphasize regions of frequent seismic activity. The ability to combine such layers allows for a nuanced exploration of spatial phenomena.

What makes Pydeck ideal for projects like this one is its balance of simplicity and power. With just a few lines of code, users can create maps that would otherwise require advanced software or extensive programming expertise. Its ability to handle large datasets ensures that even global-scale visualizations, like mapping thousands of earthquakes, remain efficient and responsive.

Furthermore, Pydeck’s layered architecture allows users to experiment with different ways of presenting data. By combining scatterplots, heatmaps, and other visual layers, users can craft a visualization that is both aesthetically pleasing and scientifically robust. This flexibility makes Pydeck a go-to tool for not only earthquake mapping but any project requiring geospatial analysis.


Creating Interactive Earthquake Maps: A Pydeck Tutorial

Before diving into the visualization process, the notebook begins by setting up the necessary environment. It imports essential libraries such as pandas for data handling, pydeck for geospatial visualization, and other utilities for data manipulation and visualization control. To ensure the libraries are available for usage they must be installed using pip.

! pip install pydeck pandas ipywidgets h3
import pydeck as pdk
import pandas as pd
import h3
import ipywidgets as widgets
from IPython.display import display, clear_output

Step 1: Data Preparation and Loading

Earthquake datasets typically include information such as the location (latitude and longitude), magnitude, and depth of each event. The notebook begins by loading the earthquake data from a CSV file using the Pandas library.

The data is then cleaned and filtered, ensuring that only relevant columns—such as latitude, longitude, magnitude, and depth—are retained. This preparation step is critical as it allows the user to focus on the most important attributes needed for visualization.

Once the dataset is ready, a preview of the data is displayed to confirm its structure. This typically involves displaying a few rows of the dataset to check the format and ensure that values such as the coordinates, magnitude, and depth are correctly loaded.

# Read in dataset
earthquakes = pd.read_csv("Earthquakes-1990-2023.csv")

# Drop rows with missing data
earthquakes = earthquakes.dropna(subset=["latitude", "longitude", "magnitude", "depth"])

# Convert time column to datetime
earthquakes["time"] = pd.to_datetime(earthquakes["time"], unit="ms")

Step 2: Initializing the Pydeck Visualization

With the dataset cleaned and ready, the next step is to initialize the Pydeck visualization. Pydeck provides a high-level interface to create interactive maps by defining various layers that represent different aspects of the data.

The notebook sets up the base map using Pydeck’s Deck class. This involves defining an initial view state that centers the map on the geographical region of interest. The center of the map is determined by calculating the average latitude and longitude of the earthquakes in the dataset, and the zoom level is adjusted to provide an appropriate level of detail.
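Below is a minimal sketch of that view-state setup; the exact zoom and pitch values used in the notebook are assumptions here. (Note that heatmap_layer, referenced in the render call that follows, is defined in Step 3.)

# Centre the initial camera on the mean earthquake location (zoom and pitch values are assumed)
view_state = pdk.ViewState(
    latitude=earthquakes["latitude"].mean(),
    longitude=earthquakes["longitude"].mean(),
    zoom=1,    # global view of the dataset
    pitch=40,  # slight tilt so the 3D columns added later are visible
)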

# Render map
pdk.Deck(
    layers=[heatmap_layer],
    initial_view_state=view_state,
    tooltip={"text": "Magnitude: {magnitude}\nDepth: {depth} km"},
).show()

Step 3: Creating the Heatmap Layer

The primary visualization in the notebook is a heatmap layer that displays the density of earthquake events. This layer aggregates the data into a continuous color gradient, with warmer colors indicating areas with higher concentrations of seismic activity.

The heatmap layer helps to identify regions where earthquakes are clustered, providing a broader view of global or regional seismic activity. For instance, high-density areas—such as the Pacific Ring of Fire—become more prominent, making it easier to identify active seismic zones.
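The layer definition below reads from a filtered_earthquakes DataFrame. The notebook derives this subset interactively (the ipywidgets import hints at slider-based filtering); as a stand-in, a simple magnitude filter could look like the following, where the 4.5 threshold is purely an assumption.

# Stand-in for the notebook's interactive filtering step (threshold is an assumption)
filtered_earthquakes = earthquakes[earthquakes["magnitude"] >= 4.5]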

# Define HeatmapLayer
heatmap_layer = pdk.Layer(
    "HeatmapLayer",
    data=filtered_earthquakes,
    get_position=["longitude", "latitude"],
    get_weight="magnitude",  # Higher magnitude contributes more to heatmap
    radius_pixels=50,  # Radius of influence for each point
    opacity=0.7,
)

Step 4: Adding the 3D Layer

To enhance the visualization, the notebook adds a column layer, which maps individual earthquake events and their depths as extruded columns on the map. Each earthquake is represented by a column, where:

  • Height: The height of each column corresponds to the depth of the earthquake. Tall columns represent deeper earthquakes, making it easy to identify significant seismic events at a glance.
  • Color: The color of the column also emphasizes the depth of the earthquake, with a yellow-to-red gradient representing varying depths. Deeper earthquakes are shown in redder colors, while shallower earthquakes are displayed in yellow.

This 3D column layer provides an effective way to visualize the distribution of earthquakes across geographic space while also conveying important information about their depth.
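The ColumnLayer below reads from a sampled_earthquakes DataFrame rather than the full dataset, since extruding a column for every event would be slow to render. One plausible way to produce that sample is shown here; the sample size and random seed are assumptions.

# Down-sample the filtered data for rendering performance (sample size and seed are assumed)
sampled_earthquakes = filtered_earthquakes.sample(
    n=min(5000, len(filtered_earthquakes)), random_state=42
)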

# Define a ColumnLayer to visualize earthquake depth
column_layer = pdk.Layer(
    "ColumnLayer",
    data=sampled_earthquakes,
    get_position=["longitude", "latitude"],
    get_elevation="depth",  # Column height represents depth
    elevation_scale=100,
    get_fill_color="[255,  255 - depth * 2, 0]",  # yellow to red
    radius=15000,
    pickable=True,
    auto_highlight=True,
)

Step 5: Refining the Visualization

Once the base map and layers are in place, the notebook provides additional customization options to refine the visualization. Pydeck’s interactive capabilities allow the user to:

  • Zoom in and out: Users can zoom in to explore smaller regions in greater detail or zoom out to get a global view of seismic activity.
  • Hover for details: When hovering over an earthquake event on the map, a tooltip appears, providing additional information such as the exact magnitude, depth, and location. This interaction enhances the user experience, making it easier to explore the data in a hands-on way.

The notebook also ensures that the map’s appearance and behavior are tailored to the dataset, adjusting parameters like zoom level and pitch to create a visually compelling and informative display.
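Putting the pieces together, a combined render with both layers and a tilted camera might look like the sketch below; the zoom and pitch values are illustrative assumptions, not the notebook's exact settings.

# Combine the heatmap and column layers in a single interactive deck (values are assumed)
combined_view = pdk.ViewState(
    latitude=earthquakes["latitude"].mean(),
    longitude=earthquakes["longitude"].mean(),
    zoom=3,
    pitch=45,  # tilt the camera so the extruded columns read as 3D
)
pdk.Deck(
    layers=[heatmap_layer, column_layer],
    initial_view_state=combined_view,
    tooltip={"text": "Magnitude: {magnitude}\nDepth: {depth} km"},
).show()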

Step 6: Analyzing the Results

After rendering the map with all layers and interactive features, the notebook transitions into an analysis phase. With the interactive map in front of them, users can explore the patterns revealed by the visualization:

  • Clusters of seismic activity: By zooming into regions with high earthquake density, users can visually identify clusters of activity along tectonic plate boundaries, such as the Pacific Ring of Fire. These clusters highlight regions prone to more frequent and intense earthquakes.
  • Magnitude distribution: The varying sizes of the circles (representing different earthquake magnitudes) reveal patterns of high-magnitude events. Users can quickly spot large earthquakes in specific regions, offering insight into areas that may need heightened attention for preparedness or mitigation efforts.
  • Depth-related trends: The color gradient used to represent depth provides insights into the relationship between earthquake depth and location. Deeper earthquakes often correspond to subduction zones, where one tectonic plate is forced beneath another. This spatial relationship is critical for understanding the dynamics of earthquake behavior and associated risks.

By interacting with the map, users gain a deeper understanding of the data and can draw meaningful conclusions about seismic trends.


Limitations of Pydeck

While Pydeck is a powerful tool for geospatial visualization, it does have some limitations that users should be aware of:

  • Dependency on web-based technologies: Pydeck relies heavily on Deck.gl and the underlying JavaScript frameworks for rendering visualizations. It excels at creating interactive, browser-based maps, but it may not be the best choice for large-scale offline applications or for complex, non-map-based visualizations.
  • Documentation and community support: Although growing, these are not as extensive as for more established libraries like Matplotlib or Folium, which can make troubleshooting more challenging for beginners.
  • Performance with extremely large datasets: While Pydeck is designed to handle large-scale data, rendering thousands of points or complex layers may lead to slower performance depending on the user’s hardware or the complexity of the visualization.
  • Advanced features: Certain highly specialized geospatial visualizations (such as full-featured GIS analysis) may require supplementary tools or libraries beyond what Pydeck offers.

Despite these limitations, Pydeck remains a valuable tool for interactive and engaging geospatial visualization, especially for tasks like real-time data visualization and web-based interactive maps.


Conclusion

Pydeck transforms geospatial data into an interactive experience, empowering users to explore and analyze spatial phenomena with ease. Through this earthquake mapping project, we’ve seen how Pydeck highlights patterns in seismic activity, offering valuable insights into the magnitude, depth, and distribution of earthquakes. Its intuitive interface and powerful visualization capabilities make it a vital tool for geospatial analysis in academia, research, and beyond. Whether you’re studying earthquakes, urban development, or environmental changes, Pydeck provides a platform to bring your data to life. By leveraging its features, you can turn complex datasets into accessible stories, enabling better decision-making and deeper understanding of the world around us. While it is a powerful tool for creating visually compelling maps, it is important to consider its limitations, such as performance issues with very large datasets and the need for web-based technology for rendering. For users seeking similar features in a less code-based environment, Kepler.gl, an open-source geospatial analysis tool, offers even greater flexibility and performance. To explore the notebook and try out the visualization yourself, you can access it here. Pydeck opens up new possibilities for anyone looking to dive into geospatial analysis and create interactive maps that bring data to life.

Tracking Green: A Time Series Animation App in GEE

Asvini Patel

Geovis Project Assignment, TMU Geography, SA8905, Fall 2024

Introduction

Mapping indices like NDVI and NDBI is an essential approach for visualizing and understanding environmental changes, as these indices help us monitor vegetation health and urban expansion over time. NDVI (Normalized Difference Vegetation Index) is a crucial metric for assessing changes in vegetation health, while NDBI (Normalized Difference Built-Up Index) is used to measure the extent of built-up areas. In this blog post, we will explore data from 2019 to 2024, focusing on the single-tier and lower-tier municipalities of Ontario. By analyzing this five-year time series, we can gain insights into how urban development has influenced greenery in these regions. The web page leverages Google Earth Engine (GEE) to process and visualize NDVI and NDBI data derived from Sentinel-2 imagery. With 414 municipalities to choose from, users can select specific areas and track NDVI and NDBI trends. The goal was to create an intuitive and informative platform that allows users to easily explore NDVI changes across Ontario’s municipalities, highlighting significant shifts and pinpointing where they are most evident.

Data and Map Creation

In this section, we will walk through the process of creating a dynamic map visualization and exporting time-series data using Google Earth Engine (GEE). The provided code utilizes Sentinel-2 imagery to calculate vegetation and built-up area indices, such as NDVI and NDBI for a defined range of years. The application was developed using the GEE Code Editor and published as a GEE app, ensuring accessibility through an intuitive interface. Keep in mind that the blog post includes only key snippets of the code to walk you through the steps involved in creating the app. To try it out for yourself, simply click the ‘Explore App’ button at the top of the page.

Setting Up the Environment

First, we define global variables that control the years of interest, the area of interest (municipal boundaries), and the months we will focus on for analysis. In this case, we analyze data from 2019 to 2024, but the range can be modified. The code uses the municipality boundary table to filter and display the boundaries of specific municipalities.
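The app itself runs in the GEE Code Editor (JavaScript). The sketch below uses the Earth Engine Python API purely to illustrate this kind of global configuration; the variable names, month window, and asset path are all illustrative assumptions.

import ee
ee.Initialize()

# Illustrative global configuration (names, months, and asset path are assumptions)
START_YEAR, END_YEAR = 2019, 2024
MONTH_START, MONTH_END = 5, 9  # assumed growing-season window
municipalities = ee.FeatureCollection("projects/your-project/assets/ontario_municipalities")  # hypothetical asset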

Visualizing Sentinel-2 Imagery

Sentinel-2 imagery is first filtered by the date range (2019-2024 in our case) and by the bounds of a specific municipality. We then mask clouds in every image using a cloud quality assessment dataset called Cloud Score+. This step helps generate clean composite images and reduces errors during index calculations. A specific set of Sentinel-2 bands is used to calculate the key indices, NDVI and NDBI, and the imagery is visualized in true colour or with dedicated palettes for enhanced contrast. To make this easier, the bands of the Sentinel-2 images (S2_BANDS) are renamed to human-readable names (STD_S2_NAMES).
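As a rough illustration of this filtering and Cloud Score+ masking step, here is a sketch using the Earth Engine Python API (the app itself is written in the JavaScript Code Editor); the 0.6 clear-sky threshold and the municipality variable are assumptions.

# Filter Sentinel-2 and mask cloudy pixels with Cloud Score+ (threshold is an assumption)
cs_plus = ee.ImageCollection("GOOGLE/CLOUD_SCORE_PLUS/V1/S2_HARMONIZED")
s2 = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterDate("2019-01-01", "2024-12-31")
    .filterBounds(municipality.geometry())  # 'municipality' is the selected boundary feature (assumed)
    .linkCollection(cs_plus, ["cs_cdf"])    # attach the Cloud Score+ quality band
    .map(lambda img: img.updateMask(img.select("cs_cdf").gte(0.6)))
)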

Index Calculations

The key indices are calculated for each year within the selected municipality boundaries. They are computed as normalized differences between relevant bands: NDVI uses the NIR and Red bands, while NDBI uses the SWIR and NIR bands. After calculating the indices, the results are added to the map for visualization. Typically, for NDVI, green represents healthy vegetation, while purple indicates unhealthy vegetation, often corresponding to developed areas such as cities. In the case of NDBI, red pixels signify higher levels of built-up areas, whereas lighter colors, such as white, indicate minimal to no built-up areas, suggesting more vegetation. Together, NDVI and NDBI provide complementary insights, enabling a better understanding of the relationship between vegetation and built-up areas.
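In formula terms, NDVI = (NIR - Red) / (NIR + Red) and NDBI = (SWIR - NIR) / (SWIR + NIR). A minimal Earth Engine Python API sketch of the per-year calculation follows; the band numbers are standard Sentinel-2 bands, and the median compositing and season window are assumptions about how each yearly image is built.

# Yearly composite and index calculation (compositing method and dates are assumed)
composite = s2.filterDate("2019-05-01", "2019-09-30").median()
ndvi = composite.normalizedDifference(["B8", "B4"]).rename("NDVI")   # (NIR - Red) / (NIR + Red)
ndbi = composite.normalizedDifference(["B11", "B8"]).rename("NDBI")  # (SWIR - NIR) / (SWIR + NIR)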

For each year, the calculated index is visualized, and users can see how vegetation and built-up areas have changed over time.

Generating Time-Series Animations

To provide a clearer view of changes over time, the code generates a time-series animation for the selected indices (e.g., NDVI). The animation visualizes the change in land cover over multiple years and is generated as a GIF, which is displayed within the map interface. The animation creation function combines each year’s imagery and overlays relevant text and other symbology, such as the year, municipality name, and legend.
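Earth Engine can export an image collection directly as an animated GIF, which is the core of this step. The heavily simplified Python API sketch below omits the text and legend overlays the app adds in the Code Editor; all parameter values and variable names are assumptions.

# One visualized NDVI image per year, exported as an animated GIF (all values are assumed)
annual_ndvi = ee.ImageCollection([
    s2.filterDate(f"{year}-05-01", f"{year}-09-30")
      .median()
      .normalizedDifference(["B8", "B4"])
      .visualize(min=-0.2, max=0.8, palette=["purple", "white", "green"])
    for year in range(2019, 2025)
])
gif_url = annual_ndvi.getVideoThumbURL({
    "dimensions": 512,
    "region": municipality.geometry(),
    "framesPerSecond": 1,
})
print(gif_url)  # link to the rendered GIF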

Map Interaction

A key feature of this code is the interactive map interface, which allows users to select a municipality from a dropdown menu. Once a municipality is selected, the map zooms into that area and overlays the municipality boundaries. You can then submit that municipality to calculate the indices and render the time series GIF on the panel. You can also explore the various years on the map by selecting the specific layers you want to visualize.

To start, we set up the UI components (the side panel, municipality dropdown, and submit button) and replace the default Code Editor UI with our new interface.

Dedicated functions handle the interactive parts of the UI, such as zooming to the selected municipality, triggering the index calculations, and rendering the time-series GIF in the panel.

Future Additions

Looking ahead, the workflow can be enhanced by calculating the mean NDVI or NDBI for each municipality over longer periods of time and displaying it on a graph. The workflow can also incorporate Sen’s Slope, a statistical method used to assess the rate of change in vegetation or built-up areas. This method is valuable at both pixel and neighbourhood levels, enabling a more detailed assessment of land cover changes. Future additions could also include the application of machine learning models to predict future changes and expanding the workflow to other regions for broader use.

Visualizing select waterfalls of Hamilton, Ontario through 3D modelling using Blender and BlenderGIS

By: Darith Tran|Geovisualization Project Assignment|TMU Geography|SA8905|Fall 2024

Introduction/Background

The city of Hamilton, Ontario is home to numerous trails and waterfalls and offers many scenic, nature-focused areas. The city is situated along the Niagara Escarpment, which creates unique topography and is the main reason for the high frequency of waterfalls across the city. Hamilton has been dubbed the waterfall capital of the world, being home to over 100 waterfalls within the city’s boundaries. Despite this, Hamilton still flies under the radar for tourists, as it sits between two other major destinations that see higher tourist traffic: Niagara Falls (home to one of the world’s best-known waterfalls) and Toronto (popular for the CN Tower and its hustle-and-bustle city atmosphere).

The main purpose of this project was to raise awareness of the beauty of this Southern Ontario wonder and to provide prospective visitors, or even citizens of Hamilton, with an interactive story map offering general information on the trails connected to the waterfalls and details about the waterfalls themselves. The 3D modelling aspect of the project aims to provide a unique visualization of how the waterfalls look, offering a quick yet creative visual for those considering visiting the city to see the waterfalls in person.

Data, Processing and Workflow (Blender + OpenTopography DEMs)

The first step of this project was to obtain DEMs for the region of interest (Hamilton, Ontario) to be used as the foundation of the 3D model. The primary software used for this project was Blender (a 3D modelling application), extended with a GIS-oriented plugin called “BlenderGIS.” Created by GitHub user domlysz, the plugin allows users to import GIS files and elements such as shapefiles and base maps directly into the Blender editing and modelling pane. It also lets users load DEMs sourced from OpenTopography straight into Blender, where they can be extracted and edited.

The first step is to open Blender and navigate to the GIS tab in Object Mode:

Under the GIS tab, there are many options and hovering over “web geodata” prompts the following options:

In this case, we want to start with a base map. The plugin has many sources available, including the default Google Maps, ESRI base maps, and OpenStreetMap (Google Satellite was used for this project).

Once the base map is loaded into the Blender plane, I zoomed into the first area of interest, the Dundas Peak region, which is home to both Tew’s Falls and Webster’s Falls. The screenshot below shows the 2D image of Tew’s Falls in the object plane:

Once an area of interest is defined and all information is loaded, the elevation model is requested to generate the 3D plane of the land region:

The screenshot above shows the general 3D plane being created from a 30m DEM extracted from OpenTopography through the BlenderGIS plugin. The screenshot below showcases the modification of the 3D plane through the extrusion tool which adds depth and edges to create the waterfall look. Below is the foundation used specifically for Tew’s Falls.

Following this, imagery from the basemap was merged with the extruded 3D plane to produce the 3D render of the waterfall surface. To add the waterfall animation, the physics module was activated, allowing various types of motion to be applied to the 3D plane. Fluid was selected with the outflow behavior to simulate water coming down from a waterfall; this was then overlaid onto the 3D plane of the waterfall to simulate the flow.

These steps were then essentially repeated for Webster’s Falls and Devil’s Punchbowl waterfalls to produce 3D models with waterflow animations!

Link to ArcGIS Story Map: https://arcg.is/05Lr8T

Conclusion and Limitations

Overall, I found this to be a cool and fun way to visualize the waterfalls of Hamilton, Ontario, and adding the rendered product directly onto ArcGIS Story Maps makes for an immersive experience. The biggest learning curve for this project was the use of Blender, as I had never used the software before and had only briefly explored 3D modelling in the past. Originally, I planned to create renders and animations of 10 waterfalls in Hamilton; however, this became a daunting task once I realized the rendering and export times after completing the 3 models shown in the Story Map. Additionally, the render quality was rather low since 2D imagery was interpolated onto a 3D plane, which caused some distortions and warped shapes that would require further processing.

Explore Flood Resilience in Toronto: An Interactive Mapping Tool

Author: Shantelle Miller
Geovisualization Project Assignment @TMUGeography, SA8905, Fall 2024

Introduction: Why Flood Resilience Matters

Urban flooding is a growing concern, especially in cities like Toronto, where increasing urbanization has disrupted the natural water cycle. Greenspaces, impervious surfaces, and stormwater infrastructure all play vital roles in reducing flood risks, but understanding how these factors interact can be challenging.

To address this, I created an interactive mapping tool using ArcGIS Experience Builder that visualizes flood resilience in Toronto. By combining multiple datasets, including Topographic Wetness Index (TWI), greenspaces, and stormwater infrastructure, this map highlights areas prone to flooding and identifies zones where natural mitigation occurs.

One of the tool’s standout features is the TWI-Greenspace Overlay, which pinpoints “Natural Absorption Zones.” These are areas where greenspaces overlap with high TWI values, demonstrating how natural environments help absorb runoff and reduce flooding.

Why Experience Builder?

I chose ArcGIS Experience Builder for this project because it offers a user-friendly, highly customizable platform for creating dynamic, interactive web maps. Unlike static maps, Experience Builder allows users to explore data in real-time with widgets like toggleable layers, dynamic legends, and interactive pop-ups.

  • Multi-Dataset Integration: It supports the combination of multiple datasets like TWI, greenspaces, and stormwater infrastructure.
  • Widgets and Tools: Users can filter data, view attributes, and toggle layers seamlessly.
  • No Code Required: Although customizable, the platform doesn’t require coding, making it accessible for users of all technical backgrounds.

The Importance of Data Normalization and Standardization

Before diving into the data, it’s essential to understand the critical role that data normalization and standardization played in this project:

  • Ensuring Comparability: Different datasets often come in various formats and scales. Standardizing these allows for meaningful comparisons across layers, such as correlating TWI values with greenspace coverage.
  • Improving Accuracy: Normalization adjusts values measured on different scales to a common scale, reducing potential biases and errors in data interpretation.
  • Facilitating Integration: Harmonized data enables seamless integration within the mapping tool, enhancing user experience and interaction.

Data: The Foundation of the Project

The project uses data from the Toronto Open Data Portal and Ontario Data Catalogue, processed in ArcGIS Pro, and published to ArcGIS Online.

Layers

Topographic Wetness Index (TWI):

  • Derived from DEM
  • TWI identifies areas prone to water accumulation.
  • It was categorized into four levels (low, medium, high, and very high flood risk), with only the highest-risk areas displayed for focus.

Greenspaces:

  • Includes parks, forests, and other natural areas that act as natural buffers against flooding.

Impervious Surfaces and Pervious Surfaces:

  • Pervious Surfaces: Represent natural areas like soil, grass, and forests that allow water to infiltrate.
  • Impervious Surfaces: Represent roads, buildings, and other hard surfaces that contribute to runoff.

Stormwater Infrastructure:

  • Displays critical infrastructure like catch basins and sewer drainage points, which manage water flow.

TWI-Greenspace Overlay:

  • Combines high-risk TWI zones with greenspaces to identify “Natural Absorption Zones”, where natural mitigation occurs.

Creating the Map: From Data to Visualization

Step 1: Data Preparation in ArcGIS Pro

  1. Imported raw data and clipped layers to Toronto’s boundaries.
  2. Processed TWI using terrain analysis and classified it into intuitive flood risk levels.
  3. Combined pervious and impervious surface data into a single dataset for easy comparison.
  4. Created the TWI-Greenspace Overlay, merging greenspaces and TWI data to show natural flood mitigation zones.
  5. Normalized and standardized all layers.

Step 2: Publishing to ArcGIS Online

  1. Uploaded processed layers as hosted feature layers with customized symbology.
  2. Configured pop-ups to include detailed attributes, such as TWI levels, land cover types, and drainage capacities, as well as a direct Google Maps link for each point feature.

Step 3: Building the Experience in ArcGIS Experience Builder

  1. Imported the web map into Experience Builder to design the user interface.
  2. Added widgets like the Map, Interactive Layer List, Filters, Legend, Search etc., for user interaction.
  3. Customized layouts and legends to emphasize the relationship between TWI, greenspaces, and surface types.

Interactive Features

The map offers several interactive features to make flood resilience data accessible:

Layer List:

  • Users can toggle between TWI, pervious surfaces, impervious surfaces, greenspaces, and infrastructure layers.

Dynamic Legend:

  • Updates automatically to reflect visible layers, helping users interpret the map.

Pop-Ups:

  • Provide detailed information for each feature, such as:
    • TWI levels and their implications for flood risk.
    • Land cover types, distinguishing between pervious and impervious surfaces.
    • Greenspace types and their flood mitigation potential.

TWI-Greenspace Overlay Layer:

  • Highlights areas where greenspaces naturally mitigate flooding, called “Natural Absorption Zones.”

Filters:

  • Enable users to focus on specific attributes, such as high-risk TWI areas or zones dominated by impervious surfaces.

Applications and Insights

The interactive map provides actionable insights for multiple audiences:

Urban Planners:

  • Identify areas lacking greenspace or dominated by impervious surfaces where flooding risks are highest.
  • Plan infrastructure improvements to mitigate runoff, such as adding bioswales or permeable pavement.

Developers:

  • Assess development sites to ensure they align with flood mitigation goals and avoid high-risk areas.

Homeowners:

  • Evaluate flood risks and identify natural mitigation features in their neighborhoods.
  • For example, the map can reveal neighborhoods with high TWI and limited greenspace, showing where additional stormwater infrastructure might be necessary.

Limitations and Future Work

Limitations

  1. Incomplete Data: Some areas lack detailed data on stormwater infrastructure or land cover, leading to gaps in analysis.
  2. Dynamic Changes: The static nature of the datasets means the map doesn’t reflect recent urban development or climate events.

Future Work

  1. Add real-time data on precipitation and runoff to make the tool more dynamic.
  2. Expand the analysis to include socioeconomic factors, highlighting vulnerable populations.
  3. Enhance accessibility features to ensure compliance with AODA standards for users with disabilities.

Conclusion: A Tool for Flood Resilience

Flood resilience is a complex issue requiring a nuanced understanding of natural and built environments. This interactive mapping tool simplifies these relationships by visualizing critical datasets like TWI, greenspaces, and pervious versus impervious surfaces.

By highlighting areas of natural flood mitigation and zones at risk, the map provides actionable insights for planners, developers, and homeowners. The TWI-Greenspace Overlay layer, in particular, underscores the importance of greenspaces in managing stormwater and reducing flood risks in Toronto.

I hope this project inspires further exploration of flood resilience strategies and serves as a resource for building a more sustainable and resilient city.

Thank you for reading, and feel free to explore the map experience using the link below!

Project Link: Explore Flood Resilience in Toronto
Data Source: Toronto Open Data Portal, Ontario Open Data Catalogue
Built Using: ArcGIS Pro, ArcGIS Online, and ArcGIS Experience Builder

Family Travel Survey

Marzieh Darabi, Geovis Project Assignment, TMU Geography, SA8905, Fall 2024

https://experience.arcgis.com/experience/638bb61c62b3450ab3133ff21f3826f2

This project is designed to help transportation planners understand how families travel to school and identify the most commonly used walking routes. The insights gained enable the City of Mississauga to make targeted improvements, such as adding new signage where it will have the greatest impact.

Project Workflow

Each school has its own dedicated page within the app, displaying both a map and a survey. The maps were prepared in ArcGIS Pro and then shared to ArcGIS Online. In the Map Viewer, I defined the symbology and set the desired zoom level for the final map. To identify key routes for the study, I used the Buffer tool in ArcGIS Pro to analyze routes in close proximity to schools. Next, I applied the Select by Location tool to identify routes located within a 400-meter radius of each school. These selected routes were then exported as a new street dataset. I further refined this dataset by customizing the streets to include only the most relevant options, reducing the number of choices presented in the survey.

Each route segment was labeled to correspond directly with the survey questions, making it easy for families to understand which options in the survey matched the map. To create these labels, a new field was added to the street dataset corresponding to the options in the survey. These maps were then integrated into ArcGIS Experience Builder using the Map Widget, which allows further customization of map content and styling via the application’s settings panel.

ArcGIS Experience Builder interface showing the process of adding a Map Widget and customizing the app layout

Why Experience Builder?

When designing the application, I chose ArcGIS Experience Builder because of its flexibility, modern interface, and wide range of features tailored to building interactive applications. Here are some of the specifications and advantages of using Experience Builder for this project:

  1. Widget-Based Design:
    Experience Builder operates on a widget-based framework, allowing users to drag and drop functional components onto the canvas. This flexibility made it easy to integrate maps, surveys, buttons, and text boxes into a cohesive application.
  2. Customizable Layouts:
    The platform offers tools for designing responsive layouts that adapt to different screen sizes. For this project, I configured the desktop layout to ensure that the application is accessible to families.
  3. Map Integration:
    The Map Widget provided options to display the walking routes and key streets interactively. I set specific map extents to align with the study’s goals. End-users could zoom in or out and interact with the map to see routes more clearly.
  4. Survey Integration:
    By embedding the survey using the Survey Widget, I was able to link survey questions directly to map visuals. The widget also allowed real-time updates, meaning survey responses are automatically stored and can be accessed or analyzed in ArcGIS Online.
  5. Dynamic User Navigation:
    The Button Widget enabled intuitive navigation between pages. Each button is configured to link directly to a school’s map and survey page, while a Back Button on each page ensures users can easily return to the introduction screen.
  6. Styling Options:
    Experience Builder offers extensive styling options to customize the look and feel of the application. I used the Style Panel to select fonts, colors, and layouts that are visually appealing and accessible.

App Design Features

The app is designed to accommodate surveys for seven schools. To ensure ease of navigation, I created an introductory page listing all the schools alongside a brief overview of the survey. From this page, users can navigate to individual school maps using a Button Widget, which links directly to the corresponding school pages. A Back Button on each map page allows users to return to the school list easily.

The survey is embedded within each page using the Survey Widget, allowing users to submit their responses directly. The submitted data is stored as survey records and can be accessed via ArcGIS Online.

Setting links between buttons and pages in ArcGIS Experience Builder

Customizing Surveys

The survey was created using the Survey123 app, which offers various question types to suit different needs. For my survey, I utilized multiple-choice and single-line text question types. Since some questions are specific to individual schools, I customized their visibility using visibility rules based on the school selected in Question 1. For example, Question 4, which asks families about the routes they use to reach school, only becomes visible once a school is selected in Question 1.

If the survey data varies significantly across different maps, separate surveys can be created for each school to ensure accuracy and relevance.

Setting visibility rules for survey questions based on user responses

Final Thoughts

Using ArcGIS Experience Builder provided the ideal platform for this project by combining powerful map visualizations with an intuitive interface for survey integration. Its customization options allowed me to create a user-centric app that meets the needs of both families and transportation planners.