Please read on below or use the search function, categories list, or tag cloud to find posts of interest. Keep in mind that most posts reflect student work summarizing one of two projects that had to be completed within a 12-week term. Happy reading!
Shashank Prabhu, Geovis Project Assignment, TMU Geography, SA8905, Fall 2024
Introduction
Toronto’s residential real estate market has experienced one of the most rapid price increases among major global cities. This surge has led to a significant affordability crisis, impacting the quality of life for residents. My goal with this project was to explore the key factors behind this rapid increase, while also analyzing the monetary and fiscal policies implemented to address housing affordability.
The Approach: Mapping Median House Prices
To ensure a more accurate depiction of the market, I used the median house price rather than the average. The median better accounts for outliers and provides a clearer view of housing trends. This analysis focused on all home types (detached, semi-detached, townhouses, and condos) between 2014 and 2022.
Although data for all years were analyzed, only pivotal years (2014, 2017, 2020, and 2022) were mapped to emphasize the factors driving significant changes during the period.
Data Source
The Toronto Regional Real Estate Board (TRREB) was the primary data source, offering comprehensive market watch reports. These reports provided median price data for Central Toronto, East Toronto, and West Toronto—TRREB’s three primary regions. These regions are distinct from the municipal wards used by the city.
Creating the Maps
Step 1: Data Preparation
The Year-to-Date (YTD) December figures were used to capture an accurate snapshot of annual performance. The median price data for each of the years across the different regions was organized in an Excel sheet, joined with TRREB’s boundary file (obtained through consultation with the Library’s GIS department), and imported into ArcGIS Pro. WGS 1984 Web Mercator projection was used for the maps.
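For readers who prefer to script this step, here is a rough arcpy sketch of the same preparation. The file, sheet, and field names are hypothetical; the post’s join and projection were done interactively in ArcGIS Pro:

import arcpy

arcpy.env.workspace = r"C:\Projects\TorontoHousing\TorontoHousing.gdb"

# Bring the YTD December medians from the Excel sheet into a geodatabase table
arcpy.conversion.ExcelToTable("median_prices.xlsx", "median_prices", Sheet="YTD_December")

# Join the prices to the TRREB region boundaries and save a joined copy
regions = arcpy.management.MakeFeatureLayer("trreb_regions", "trreb_regions_lyr")
arcpy.management.AddJoin(regions, "RegionName", "median_prices", "RegionName")
arcpy.management.CopyFeatures(regions, "trreb_regions_prices")

# Project the result to WGS 1984 Web Mercator (WKID 3857)
arcpy.management.Project("trreb_regions_prices", "trreb_regions_prices_wm", arcpy.SpatialReference(3857))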
Step 2: Visualization with 3D Extrusions
3D extrusions were used to represent price increases, with the height of each bar corresponding to the median price. A green gradient was selected for visual clarity, symbolizing growth and price.
Step 3: Overcoming Challenges
After creating the 3D extrusion maps for the respective years (2014, 2017, 2020, 2022), the next step was to export them to ArcGIS Online and then to Story Maps. The easiest way of doing so was to export the scene as a Web Scene, which then appears under the Content section in ArcGIS Online.
Flattened 3D Shapes: Exporting directly as a Web Scene to add onto Story Maps caused extrusions to lose their 3D properties. This was resolved using the “Layer 3D to Feature Class” tool.
Lost Legends: However, after using the aforementioned tool, the Legends were erased during export. To address this, static images of the legends were added below each map in Story Maps.
Step 4: Finalizing the Story Map
After resolving these issues, the maps were successfully exported using the Export Web Scene option. They were then embedded into Story Maps alongside text to provide context and analysis for each year.
Key Insights
The project explored housing market dynamics primarily through an economic lens.
Interest Rates: The Bank of Canada’s overnight lending rate played a pivotal role, with historic lows (0.25%) during the COVID-19 pandemic fueling a housing boom, and sharp increases (up to 5% by 2023) leading to market cooling.
Immigration: Record-breaking immigration inflows also contributed to increased demand, exacerbating the affordability crisis.
While earlier periods like 2008 were critical in shaping the market, boundary changes in TRREB’s data made them difficult to include.
Conclusion
Analyzing real estate trends over nearly a decade and visualizing them through 3D extrusions offers insight into the rapid rise of residential real estate prices in Toronto. This approach underscores the magnitude of the housing surge and highlights how policy measures, while impactful, have not fully addressed the affordability crisis.
The persistent rise in prices, even amidst various interventions, emphasizes the critical need for increased housing supply. Initiatives aimed at boosting the number of housing units in the city remain essential to alleviate the pressures of affordability and meet the demands of a growing population.
Link to Story Map (You will need to sign in through your TMU account to view it): https://arcg.is/WCSXG
Greetings everyone! For my geo-visualization project, I wanted to combine my creative skills of Do It Yourself (DIY) crafting with the technological applications utilized today. This project was an opportunity to be creative using resources I had from home as well as utilizing the awesome applications and features of Microsoft Excel, ArcGIS Online, ArcGIS Pro, and Clipchamp.
In this blog, I’ll be sharing my process for creating a 3D physical string map model. To mirror my physical model, I’ll also be creating a textured, animated series of maps. My models display the subway networks of two cities: the City of Toronto, followed by the metropolitan area of Athens, Greece.
Follow along this tutorial to learn how I completed this project!
PROJECT BACKGROUND:
For some background, I am more familiar with Toronto’s subway network. Fortunately, I was also able to visit Athens and explore the city by relying on its subway network. At present, both of these cities have three subway lines, and both are undergoing construction of additional lines. My physical model displays the current subway networks for both cities, as the anticipated subway lines won’t be opening until 2030. Despite the hands-on creativity of the physical model, it cannot be modified or updated as easily as a virtual map. This is where I was inspired to add to my concept with a video-animated map, as it visualizes the anticipated changes to both subway networks!
PHYSICAL MODEL:
Materials Used:
Paper (used for map tracing)
Pine wood slab
Hellman ½ inch nails
Small hammer
Assorted colour cotton string
Tweezers
Krazy glue
Methods and Process:
For the physical model, I wanted to rely on materials I had at home. I also required a blank piece of paper for tracing the boundary and subway network of both cities. This was done by acquiring open data and inputting it into ArcGIS Pro; the precise datasets used are discussed further in the virtual model section. Once the tracings were created, I taped them to a wooden base. Fortunately, I had a perfect base, a slab of pine wood. I opted for Hellman 1/2 inch nails, as the wood was not too thick and these nails wouldn’t split it. Using a hammer, each nail was carefully placed along the traced outline of the cities and subway networks.
I did have to purchase thread so that I could display each subway line in its corresponding colour. The process of placing the thread around the nails did require some patience. I cut the thread into smaller pieces to avoid knots. I then used tweezers to hold the thread while wrapping it around the nails. When a new thread was added, I knotted it tightly around a nail and applied Krazy Glue to ensure it was tightly secured. The same method was applied when securing the end of a string.
Images of threading process:
City of Toronto Map Boundary with Tracing
After threading the city boundary and subway network, the paper tracing was removed. I could then begin filling in the space of the boundary. I opted to use black thread for the boundary and fill, to contrast both the base and colours of the subway lines. The City of Toronto thread map was completed prior to the Athens thread map. The same steps were followed. Each city is on opposite sides of the wood base for convenience and to minimize the use of an additional wood base.
Of course, every map needs a title, legend, north arrow, projection, and scale. Once both of the 3D string maps were complete, the required titles and text were printed, laminated, and added to the wood base for both maps. I once again used the nails, hammer, and thread to create both legends. Below is an image of the final physical products of my maps!
FINAL PHYSICAL MODELS:
City of Toronto Subway Network Model:
Athens Metropolitan Area Metro Network Model:
VIRTUAL MODEL:
To create the virtual model, I used ArcGIS Pro to create my two maps and applied picture fill symbology to create a thread-like texture. I’ll begin by discussing the open data acquired for the City of Toronto, followed by the Athens Metropolitan Area, to achieve these models.
The City of Toronto:
Data Acquisition:
For Toronto, I relied on the City of Toronto Open Data Portal to retrieve the Toronto Municipal Boundary as well as the TTC Subway Network dataset. The most recent dataset still includes Line 3, which was kept for the purpose of the time series map. As for the anticipated Eglinton and Ontario lines, I could not find open data for these networks. However, Metrolinx has created interactive maps displaying the Ontario Line and Eglinton Crosstown (Line 5) stations and names. To note, the Eglinton Crosstown is identified as a light rail transit line, but it is considered part of the TTC subway network.
To compile the coordinates of each station for both routes, I used Microsoft Excel to create two sheets, one for the Eglinton line and one for the Ontario line. To determine the location of each subway station, I used Google Maps to drop a pin in the correct location by referencing the map visuals published by Metrolinx.
Ontario Line Excel Table:
Using ArcGIS Pro, I used the XY Table To Point tool to insert the coordinates from each separate Excel sheet and establish points on the map. After successfully completing this, I had to connect each point to create a continuous line. For this, I used the Points To Line tool, also in ArcGIS Pro.
XY Table to Point tool and Points to Line tool used to add coordinates to map as points and connect points into a continuous line to represent the subway route:
After achieving this, I did have to adjust the subway routes to be clipped within the boundaries of the City of Toronto and the Athens Metropolitan Area. I used the Pairwise Clip tool in the Geoprocessing pane to achieve this (a scripted sketch of these steps follows the screenshot below).
Geoprocessing Pairwise Clip tool parameters used. Note: The input features were the subway lines, with the city boundary as the clip features.
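For reference, here is a rough arcpy sketch of these three geoprocessing steps (XY Table To Point, Points To Line, and Pairwise Clip). The file and field names are hypothetical; the post ran the tools from the ArcGIS Pro interface:

import arcpy

arcpy.env.workspace = r"C:\Projects\SubwayMaps\SubwayMaps.gdb"

# Station coordinates (one table per future line) to point features
arcpy.management.XYTableToPoint("ontario_line_stations.csv", "ontario_line_points",
                                x_field="Longitude", y_field="Latitude",
                                coordinate_system=arcpy.SpatialReference(4326))

# Connect the station points into a continuous route, ordered along the line
arcpy.management.PointsToLine("ontario_line_points", "ontario_line_route",
                              Sort_Field="StationOrder")

# Clip the route to the City of Toronto boundary
arcpy.analysis.PairwiseClip("ontario_line_route", "toronto_boundary", "ontario_line_clip")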
Athens Metropolitan Area:
Data Acquisition:
For retrieving data for Athens, I was able to access open data from Athens GeoNode. I imported the following layers to ArcGIS Online: the Athens Metropolitan Area, the Athens Subway Network, and the proposed Athens Line 4 network. I did have to make minor adjustments to the data, as the Athens Metropolitan Area layer also displays the neighbourhood boundaries, and for the purpose of this project only the outer boundary was necessary. To overcome this, I used the Merge modify feature to merge all the individual polygons within the metropolitan area boundary into one. I also had to use the Pairwise Clip tool once again, as the Line 4 network exceeds the metropolitan boundary and is therefore beyond the area of study for this project.
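As a scripted alternative (an assumption on my part; the post used the interactive Merge edit tool and the Geoprocessing pane), the same result can be sketched with Dissolve and Pairwise Clip, using hypothetical layer names:

import arcpy

# Collapse the neighbourhood polygons into a single outer metropolitan boundary
arcpy.management.Dissolve("athens_neighbourhoods", "athens_metro_boundary")

# Clip the proposed Line 4 network to that boundary
arcpy.analysis.PairwiseClip("athens_line4_proposed", "athens_metro_boundary", "athens_line4_clip")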
Adding Texture Symbology:
ArcGIS has a variety of tools and features that can enhance a map’s creativity and visualization. For this project, I was inspired by an Esri yarn map tutorial. Given that the physical model used thread, I wanted to create a textured map with thread. To achieve this, I utilized the public folder provided with the tutorial, which included portable network graphics (.png) cutouts of several fabrics as well as pen and pencil textures. To best mirror my physical model, I utilized a thread .png.
ESRI yarn map tutorial public folder:
I added the thread .png images by replacing the solid fill of the boundaries and subway networks with a picture fill. This symbology works best with a .png image for lines, as it seamlessly blends with the base and surrounding features of the map. The thread .png uploaded as white, and I was able to modify its colour to match the boundary or particular subway line without distorting the texture it provides.
For both the Toronto and Athens maps, the picture fill for each subway line and boundary was set to the thread .png with its corresponding colour. The boundaries for both maps were set to black, as in the physical model, and the subway lines also mirror the physical model, which is inspired by the existing and future colours used for the subway routes. Below is the picture symbology with the thread .png selected and a tint applied for the subway lines.
City of Toronto subway Networks with picture fill of thread symbology applied:
The basemap was also altered, as the physical model sits on a wood base. To mirror that, I added a Global Background layer from ArcGIS Online and used the picture fill to upload a high-resolution image of pine wood as the basemap for this model. For the city boundaries of both maps, the thread .png imagery was also applied with a black tint.
PUTTING IT ALL TOGETHER:
After creating both maps for Toronto and Athens, it was time to put them into an animation! The goal of the animation was to display each route and its opening year(s) to visualize the evolution of the subway system, as my physical model only captures the current subway networks.
I did have to play around with the layers to individually capture each subway line. The current subway network data for both Toronto and Athens contains all three routes in one layer, so I had to isolate each one for the purposes of the time lapse, in which each route had to be added according to its initial opening date and year of most recent expansion. To achieve this, I set a Definition Query for each current subway route I was mapping while creating the animation (a scripted equivalent is sketched below the screenshot).
Definition query tool accessed under layer properties:
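As a rough scripted equivalent (the map, layer, and field names are hypothetical; the post set the query through the Layer Properties dialog), the same definition query can be applied with arcpy:

import arcpy

aprx = arcpy.mp.ArcGISProject("CURRENT")
m = aprx.listMaps("Toronto Subway")[0]
lyr = m.listLayers("TTC Subway Network")[0]

# Show only one route at a time while building the animation keyframes
lyr.definitionQuery = "ROUTE_NAME = 'Line 1'"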
Once I added each keyframe in order of the evolution of each subway route, I created a map layout for each map to add the required text and titles, as I did with the physical model. The layouts were then exported as .png images and imported into Microsoft Clipchamp to create the video animation. From there, I added transitions between my maps, as well as sound effects!
CITY OF TORONTO SUBWAY NETWORK TIMELINE:
ATHENS METROPOLITAN AREA METRO TIMELINE:
LIMITATIONS:
While this project allowed me to be creative with both my physical and virtual models, it did present certain limitations. A notable limitation of the physical model is that it is meant to be a simple visual representation of the subway networks; unlike the virtual maps, it cannot easily be modified or updated.
As for the virtual map, although open data was accessible for some of the subway routes, I did have to manually enter XY coordinates for future subway networks. I did reference reputable maps of the anticipated future subway routes to ensure accuracy. Furthermore, given my limited timeline, I was unable to map the proposed extensions of current subway routes. Rather, I focused on routes currently under construction with an anticipated completion date.
CONCLUSION:
Although I grew up applying my creativity through creating homemade crafts, technology and applications such as ArcGIS allow for creativity to be expressed on a virtual level. Overall, the concept behind this project is an ode to the evolution of mapping, from physical carvings to the virtual cartographic and geo-visualization applications utilized today.
Andrea Santoso-Pardi SA8905 Geovis project, Fall 2024
Introduction
Turning aerial photogrammetry into Minecraft builds is an interesting way to combine real-world data with a video game that many people play. Adding aerial photogrammetry of a building or city is a way to get people interested in GIS technology, and it can be used for accessibility reasons to help people understand where different buildings are in the world. This workflow will introduce the process of finding aerial building photogrammetry, processing the .obj file with Blender plugins (BlockBlender 1.41 and BlockBlender to Minecraft .Schem 1.42), exporting it as a .schem file for use in single-player Minecraft Java Edition 1.21.1, using Litematica to paste the schematic, converting the model from latitude and longitude coordinates to Minecraft coordinates, and editing the schematic.
List of things you will need for this
Photogrammetry – preferably one that is watertight with no holes. If holes are present, one will have to manually close the holes.
You can either collect your own photogrammetry, create one using a video, or get one from a website
If you want to clean up your photogrammetry, I suggest using MeshLab, as it is free to use and, personally, I find it easier than Blender for cleaning up a model. Just remember there is no typical undo button, and you will need a desktop to run MeshLab. A laptop will not work.
Gathering Data: What is Aerial Photogrammetry & What is the best model to use?
Aerial photogrammetry is a technique that uses overlapping photographs captured from above and from various angles to create accurate, measurable 3D models or maps of real-world landscapes, structures, or objects. Photogrammetry is also becoming much more accessible; it can now be created using just a phone camera. The data processing for drone imagery of a building produces point clouds, which are dense collections of points representing the object or terrain in 3D space, and 3D meshes, which are surfaces created by connecting points into a polygonal network. The polygonal network of aerial photogrammetry of a building is usually made up of many triangles.
If you are going to search for a photogrammetry model to use, here is what made me choose this one of a government building, and how I knew it was photogrammetry.
Large number of triangles and vertices. The model had 1.5 million triangles and 807.4k vertices. 3D models made using 3D modelling software will have lower counts of both, often in the tens of thousands. This is how I knew it was photogrammetry.
Minimal clean-up. There was little to no clean-up required for the model to be put into Minecraft. Of course, if you do not mind that a lot of clean-up needs to happen before converting the photogrammetry into blocks, then you can pick such a model, but know it can take hours depending on how many holes the model has.
I spent too many hours trying to clean up the Kerr Hall photogrammetry and it still had all the holes associated with it. If you want to do Kerr Hall, please contact Facilities for campus data such as floor plans and wall layouts showing what it is supposed to look like outside, and ensure the trees aren’t in the photogrammetry. Then use the Blender Architecture and BlenderGIS plugins to scale the building accordingly.
States the location/coordinates. If you want the elevation of the model, you will need to know where it is geolocated in the world. Having the coordinates makes this process easier in BlenderGIS.
Minimal/Zero Objects around the wall of the building. When getting photogrammetry, objects too close to the wall can merge with the building wall. Things like trees make it very hard to get clear viewing of the wall to the point that there might not even be a wall in the photogrammetry.
The topology of trees means many tiny holes may appear instead. Making sure no objects are around the building ensures that the walls are, and will remain, visible in the final product. Do a quick 360 of the photogrammetry to ensure this is the case for the one you want.
Ensure the model can be downloaded as an .obj file. For BlockBlender to work, the building textures need to be separate image files so that BlockBlender can assign a block to each photo pixel.
Consistent lighting all around. If different areas of the building have different lighting, it does not make for a consistent model, as I don’t want to change the brightness of the photos.
When exporting the model, I chose the OBJ format as I knew it was compatible with the BlockBlender add-on.
When exporting, ensure you know where the file downloads to. Extra steps like unzipping the file may be needed depending on how it is formatted.
Blender
Blender is a free 3D modelling program that was chosen due to its highly customizable editing options. If you haven’t used Blender before, I suggest learning the basic controls; this playlist helps explain each function.
Installing Addons
Download all the files you need as .zip files. Go to Edit > Preferences > Install From Disk and import the .zip files of the add-ons. Make sure you save your preferences. As a reminder, the ones needed for this tutorial are BlockBlender 1.41 ($20 version) and Minecraft .Schem 1.42.
Import & Cleaning Up the .obj Model
To import the model, go to File > Import > Wavefront OBJ. The file does not have to be an .obj to work, but it does have to have textures that are separate from the 3D model if you want to use the BlockBlender add-on.
Import the same model twice: one copy to turn into Minecraft blocks and the other to use as a reference. Put them into different collections; you can name them “Reference” and “Editing”. Press M to create two separate collections for the models.
To clean up the model and have it ready for use in BlockBlender, the model has to have a solid, watertight mesh. In short, this means the mesh needs to have closed edges with no holes. It’s not necessary to learn the formal definition if your 3D model requires minimal clean-up, but if you want to understand more of what I mean, this resource might be helpful: https://davidstutz.de/a-formal-definition-of-watertight-meshes/
Go into Edit Mode. Click on the model (it should have an orange outline) and go into Edit Mode (see the top left corner). Alternatively, you can hit Tab to switch between Edit and Object Mode.
Press A to Select All
Go Above into Select > Select Loops > Select Boundary Loop
It should look like this afterwards, with only the boundary loops selected.
Press Alt + F to fill in the faces. If you look underneath the model, you can see how this makes the mesh watertight.
You can now exit edit mode. You can see in Object mode how the hole in the model is now enclosed. This has created a watertight solid mesh.
You can also clean up models with holes the same way. For complex models however, select the area around where the hole in the model is instead of select all.
If you would like a purely visual explanation, here is a video. Don’t switch over to Sculpt Mode and don’t enable Dyntopo, as you will lose textures, and the textures are needed for BlockBlender. If you do accidentally use Dyntopo, Ctrl + Z can be used to undo, or you can copy and paste your reference and do this section over again.
BlockBlender
BlockBlender is an add-on for Blender created by Joey Carolino. If you want to see visually how BlockBlender is used, below is a YouTube video showing more of its functions. There is a free version and a paid version of BlockBlender, so if you cannot contact the Library Collaboratory to use the computer with the paid BlockBlender, you can use the free version.
Using Blockblender
Before doing this step, save your work to ensure nothing is lost. Select the model and press Turn Selected Into Blocks. This will take a while to fully load; when it does, the model will look like glass. If Blender becomes too laggy, exit Blender and don’t save. You can reduce the size of your model before doing this section to ensure you can add all the textures needed.
To find out the image IDs and what order to use them in, go to the Material Properties tab. Its icon looks like a red circle.
The names of the photos are shown there, and you must add them in that order, or else the model will not look like the reference.
Here is what the Blockblender Model looks like
From here, BlockBlender has different tools to choose the block selection. Each block is categorized into these areas in the Collections area. You can select individual blocks and move them into the Unused collection by dragging and dropping. Alternatively, hold CTRL to select multiple blocks to drag and drop.
I also felt that the scale of 1 block = 1 m did not give enough detail, so the block size was changed to 0.5 m.
The final model I ended up going with is below. Although it is not perfect, I can manually edit it, or use Litematica or Minecraft commands afterwards. It is hard to show the workflow with just pictures, so I highly suggest the video above to see more of the functionality.
Blockblender to .Schem
This add-on was created by EpicSpartanRyan#8948 on Discord; special thanks to him. He is also available for hire if someone wanted to put buildings into Minecraft to make a campus server, offers a 30-minute free consultation, and aims to respond within 12 hours.
Putting this into a .schem file allows it to be read in a format that Minecraft understands.
To quickly see how exporting and placing a build into Minecraft works (using WorldEdit on a multiplayer server instead), please see his video below. It also compares what the textures look like in Blender versus inside Minecraft.
Using Blockblender to .Schem
To prepare the file for export, uncheck “Make Instances Real”.
Click the model. Press Convert to Mesh in the N-panel to make the mesh look more like Minecraft blocks rather than triangles. You can see if the mesh has changed by selecting the object and going into Edit Mode, or by looking at the viewport wireframe.
Click the model. Press Ctrl + A and apply All Transforms. This will ensure all the textures will be there.
Next, you want to go into File > Export > Minecraft (.schem) or press Export as schem on the N-panel Blockblender options. The N-panel can be seen in the previous section
Save the file with whatever name you want, but ensure the .schem file is saved to your schematics folder. This saves time trying to find where you put the model later. The folder can be found by typing %appdata% in the file path bar. The file path should be C:\Users\[YourComputerProfileName]\AppData\Roaming\.minecraft\schematics
If a schematics folder is not present, make one inside the .minecraft folder
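If you prefer to script it, here is a small sketch (assuming the standard Windows .minecraft location described above) that finds or creates the schematics folder:

import os

schem_dir = os.path.join(os.environ["APPDATA"], ".minecraft", "schematics")
os.makedirs(schem_dir, exist_ok=True)  # creates the folder only if it is missing
print(schem_dir)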
Minecraft
Installing Minecraft, Fabric Loader and Mods
If you need help downloading Minecraft, look at this article: https://www.minecraft.net/en-us/updates/instructions . I bought Minecraft in 2013, so I’m unsure what buying and downloading Minecraft is like now, as I refuse to buy something that I already have. This video may also be helpful; I have not followed along with it, but I did watch it to ensure it makes sense.
Fabric Loader is used as a way to change the Minecraft experience from vanilla (default Minecraft) to whatever experience you want by downloading other mods. It acts as a bridge between the game and the mods you want to use.
To download, choose the installer that works best for the device you are on. For me that was Download for Windows x64, the latest version of Fabric Loader, which is named fabric-installer-1.0.1 (this may change in the future). Run the installer until it opens up to the screen shown here. Since I am not running Fabric on a server but on a client (single player, usually), I installed it for Minecraft 1.21.1 with the latest loader version.
Mods: Litematica and MaLiLib
Before entering Minecraft, download the mods and add them to your mods folder. You do not need to do anything to the mods after they are downloaded except move them into the Minecraft mods folder.
The general pathway would be C:\Users\[YourComputerProfileName]\AppData\Roaming\.minecraft\mods. Keep the files as the downloaded archives; do not extract them.
Make sure to change the loader to fabric-loader-1.21.1 so the mods will be attached. Once it is changed, press the big green button that says Play.
Create a New World
This is just to import the model into Minecraft Java 1.21.1 single player, so I went into Singleplayer > Create New World. Here are the options chosen:
Game tab: Game Mode: Creative; Difficulty: Peaceful; Allow Commands: On
World tab: World Type: Superflat; Generate Structures: Off; Bonus Chest: Off
Once you have the options you like, you can create a New World.
Using Litematica
The building can be placed down in any world using the Litematica mod. If you have any trouble using it, the video How To Use Litematica by @ryanthescion helped a lot in learning the basic commands.
A Minecraft stick is used in Litematica to toggle between modes. To get a stick, press E to open the inventory/creative menu and search for Stick (the menu opens to the search automatically), or find it under the Ingredients tab.
Left click and drag the stick into your hotbar (the area where the multiple wooden sticks are shown) and exit the inventory by pressing E. Note that one stick is enough for the mod to work, as it only has to be held in your hand; the multiple sticks are there just to show where the hotbar is.
With the stick in your hand, you can toggle between the nine different modes by pressing CTRL + Scroll Wheel.
Adding The Model
In short, I opened the Litematica menu by pressing M and went to the Configuration menu.
Hotkeys is a place to create custom keyboard and/or mouse shortcuts for different commands. Create a shortcut that is not already assigned to anything. The tutorial used J + K for “executefunction” to paste the building, so I followed the tutorial and used those as well; now I press J and K to execute the command. If there is a problem with the hotkeys used, they turn a yellow/orange colour instead of white.
Next, I went back to the Litematica menu, went to Load Schematics, and added the folder pathway where I keep the schematics. I pressed the schematic build file I wanted to load, then pressed Load Schematic at the bottom of the page. The government building was then pasted into Minecraft.
Converting Latitude and Longitude to Minecraft Coordinates
In the Litematica menu, press the Loaded Schematics button, then go to Schematic Placements > Configure > Schematic Placement, and you can change the building to sit at the same coordinates as in real life. Y is 18 because the “What is My Elevation” website gives 9 m at those coordinates. Since 1 block is equal to 0.5 m in this model, 9 m divided by 0.5 is 18 blocks.
The X and Z coordinates come from converting Earth’s geographic coordinate system into Minecraft’s Cartesian coordinate system. The conversion uses the WGS84 coordinate system (World Geodetic System 1984), assumes both origins start at (0, 0, 0), and uses 1 block = 0.5 metres. If 1 degree of latitude is taken as 111,320 metres (for this approximation):
Blocks per degree of latitude = 111,320 / 0.5 = 222,640
Blocks per degree of longitude = (111,320 × cos(latitude in radians)) / 0.5
To align this with real-world geographic coordinates (latitude and longitude), one needs to define a reference point. The real-world origin (0° latitude, 0° longitude) is set to correspond to X = 0 and Z = 0 in Minecraft, and the formulas below calculate the offsets from that origin.
The formulas to convert to Minecraft coordinates are:
Minecraft Z coordinate = (ΔLatitude × 111,320) / Scale (metres per block)
Minecraft X coordinate = (ΔLongitude × 111,320 × cos(Target Latitude in radians)) / Scale (metres per block)
Minecraft Y coordinate = Elevation in metres / Scale (metres per block)
Where:
ΔLatitude = Target Latitude − Origin Latitude
ΔLongitude = Target Longitude − Origin Longitude
Target Latitude = 47.621474856679534°
Target Longitude = −65.65655551636287°
Origin = 0° latitude, 0° longitude
Scale (metres per block) = 0.5 metres
Using cosine makes the conversion better reflect real-world distances, since the Earth is a spheroid and Minecraft is flat.
Therefore, the Minecraft coordinates are:
Minecraft X coordinate = −9,858,611
Minecraft Y coordinate = 18
Minecraft Z coordinate = 10,606,309
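To make the arithmetic easier to repeat for other buildings, here is a minimal Python sketch of the conversion above, using the post’s constants; exact outputs may differ slightly from the figures above depending on rounding and the precision of the cosine term:

import math

METRES_PER_DEGREE = 111_320  # approximate length of one degree of latitude

def latlon_to_minecraft(target_lat, target_lon, elevation_m,
                        origin_lat=0.0, origin_lon=0.0, scale=0.5):
    # Offsets from the chosen origin (0 degrees, 0 degrees here)
    d_lat = target_lat - origin_lat
    d_lon = target_lon - origin_lon
    # North-south offset maps to Z; east-west offset (shrunk by cos of latitude) maps to X
    z = d_lat * METRES_PER_DEGREE / scale
    x = d_lon * METRES_PER_DEGREE * math.cos(math.radians(target_lat)) / scale
    y = elevation_m / scale
    return round(x), round(y), round(z)

# The government building from this post: 9 m elevation, 0.5 m per block
print(latlon_to_minecraft(47.621474856679534, -65.65655551636287, 9))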
Note: You will have to teleport to where the model is placed. Use /tp <playername> x y z to get to where the building is loaded.
Fixing The Model
There were many edits that needed to happen. I fixed the trees to actually have trunks, as the textures did not load them in properly; I used what was generated as a guide for what the tree shapes should look like.
I also tried to change the pattern on the wall to more accurately reflect what it looks like in the photogrammetry
Helpful Tips
/time set day
/effect give <targets> <effect> infinite [<amplifier>] [<hideParticles>]
A limitation of this approach (taking aerial building photogrammetry, using Blender to turn it into Minecraft blocks, and then converting the latitude and longitude coordinates to Minecraft to put the building in the exact right spot) is that Minecraft is a fixed-grid, cubic block representation, which will always lack some of the detail of the 3D aerial building photogrammetry model. Defining a scale that carries both geolocation correctness and building height over to Minecraft is a fine-detail task that has to balance artistry with reality.
In BlockBlender, fine details like the antennae at the top of the building don’t come through, as it only uses blocks for the representation, so railings, window frames, and more can be lost or require block substitutes.
The photogrammetry can be very complex and very noisy, with shadows that may make BlockBlender interpret the data incorrectly. As an add-on, BlockBlender is limited to the default Minecraft block colours, which may not accurately reflect what real-world surfaces look like or are made of.
The Minecraft height limit can be an issue depending on how tall the building you want to convert is.
Geolocating the building from latitude and longitude to Minecraft coordinates will not work at a much larger scale (i.e. keeping the scale at 0.5 m per block), as the Minecraft world is only 30 million by 30 million blocks.
Litematica also has limited functionality, beyond which one has to do a lot more manually or use another plug-in.
Conclusion
This workflow is an excellent way to bring real-world data into Minecraft, but it requires balancing the complexity of photogrammetry models with Minecraft’s block-based limitations. Understanding and addressing these challenges produce detailed, manageable builds that work well in Minecraft’s unique environment.
Footnotes
“Canadian Government Building Photogrammetry” (https://skfb.ly/oLZyt) by Air Digital Historical Scanning Archive is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/)
Geovis Project Assignment, TMU Geography, SA8905, Fall 2024 by Khadija Safi
Hello everyone! I’m excited to share my tutorial on how to use the animation capabilities in ArcGIS Pro to visualize 3D data and create an animated video.
My inspiration for this project was learning more about my ancestral homeland, Afghanistan, whose history and culture are known to have been heavily influenced by its location and topography.
Since I also wanted to gain experience working with the 3D layers and animation tools available in ArcGIS Pro, I decided to create a 3D animation of how geography has influenced Afghanistan’s history and culture.
My end product was an educational video that I narrated and posted on Youtube.
The GIS software I used in this project was ArcGIS Pro 3.3.1. I also used the Voice Memos app to record my narration, and iMovie to compile the audio recordings and the exported ArcGIS Pro movie into one video.
For my data sources, I derived the historical information presented in the animation from a textbook by Jalali (2021), the political administrative boundary of Afghanistan from geoBoundaries (Runfola et al., 2020), and the World Elevation 3D/Terrain 3D and World Imagery basemap layers from ArcGIS Pro (Esri et al., 2024; Maxar et al., 2024).
For this tutorial, I will only be providing a broad overview of the steps I took to create my end product. For additional details on how to use the animation capabilities in ArcGIS Pro, please refer to Esri’s free online Help documentation.
Now, without further ado, let’s get started!
Key Learning Objective
To design and create a geographic-based animation involving 3D data using ArcGIS Pro.
Note
The following convention was used to represent the process of navigating the ArcGIS Pro ribbon: Tab (Group) > Command
Step 1: Come Up With the Storyline
Since I wanted to create a narrated video as my end product, I first had to research my topic and decide what kind of story I wanted to tell by writing the script that would go along with each keyframe.
Step 2: Record the Narration Using Voice Memos
The next step was to record the narration using the script I wrote so that I could have a reference point for my keyframe transitions.
This process was as simple as hitting record on Voice Memos, then uploading each audio file to a new iMovie project.
The audio files were trimmed and aligned until a seamless transition between each clip was achieved.
Step 3: Create the Animation in ArcGIS Pro
To create the animation, the following steps were taken:
(3.1) Open a new “Local Scene” project in ArcGIS Pro
(3.2) Load and prepare the data
In my case, the Terrain 3D layer was automatically loaded as the elevation surface. To load the World Imagery layer, I had to navigate to Map (Layer) > Basemap and select “Imagery”.
I then added and symbolized the political administrative boundary shapefile I downloaded for Afghanistan.
To mark the locations of the three cities I included in some of the keyframes, I also created my own point geometry using the graphical notation layer available through Insert (Layer Templates) > Point Map Notes. The Create tool under Edit (Features) was used to digitize the points.
Finally, I downloaded two PNG images to insert into the animation at a later time (Anonymous, 2014; Khan, 2010).
(3.3) Create bookmarks for the animation keyframes
Although an animation can be created regardless, bookmarking the view you intend to use for each keyframe is a good way of planning out your animation. The Scene’s view can be adjusted and updated at a later time, but this allows you to have an initial framework to start with.
ArcGIS Pro also allows you to import your bookmarks to automatically create keyframes using preconfigured playback styles.
Creating a Bookmark
To open the Bookmarks pane, click on “Manage Bookmarks” under Map (Navigate) > Bookmarks. Zoom to your desired keyframe location and create a bookmark using the New Bookmark subcommand.
Tip
The Locate command under Map (Inquiry) can be used to quickly search for and zoom to any geocoded location on the Earth’s surface.
Adjusting the View
To change the camera angle of your current view, use the on-screen navigator in the lower left corner of the Scene window. Click on the chevron to access full control.
By clicking and holding down on the bottom of the arrow on the outer ring of the on-screen navigator, you can rotate the view around the current target by 360°.
Clicking and holding down on the outer ring only will allow you to pan the Scene towards the selected heading.
To change the pitch of the camera angle or rotate the view around the current target, click and hold down on the inner ring around the globe, then drag your mouse in the desired direction.
Finally, clicking and holding down on the globe allows you to change your current target.
(3.4) Activate the Animation tab
If your current Scene has never been initialized for an animation, the Animation tab can be activated through View (Animation) > Add.
(3.5) Set up the resolution of the exported animation
To ensure you design the animation to fit the resolution you intend to export to, click on Animation (Export) > Movie.
In the Export Movie pane, under “Advanced Movie Export Settings”, select your desired “Resolution”. You could also use one of the “Movie Export Presets” if desired. I chose “1080p HD Letterbox (1920 x 1080)” to produce a good quality video.
Note
This step is very important, as the view of your keyframes and the placement of any overlays you add are directly affected by the aspect ratio of your export, which is directly tied to your selected resolution.
(3.6) Create the animation
Start off by opening the Animation Timeline pane through Animation (Playback) > Timeline.
In the Bookmarks pane, click on your first bookmark. With your view set, click “Create first keyframe” in the Animation Timeline pane to add a keyframe.
Repeat this process until all of your keyframes are added.
Alternatively, as mentioned before, the Import command in Animation (Create) can be used to automatically load all of the bookmarks in your project as keyframes using a preconfigured playback style.
Tip
If you need to adjust the view of a keyframe, adjust your current view in the Scene window, then select the keyframe in the Animation Timeline pane and hit Update in Animation (Edit).
(3.7) Fine-tune the transition and timing between each keyframe
To configure the transition, time, and layer visibility of each keyframe, open the Animation Properties pane through Animation (Edit) > Properties and click on the Keyframe tab in this pane.
Choose one of the five transition types to animate the camera path: “Fixed”, “Adjustable”, “Linear”, “Hop”, or “Stepped”.
To create a tour animation that pans between geographic locations, a combination of “Hold” and “Hop” can be used. “Fixed” can be used to create a fly-through that navigates along a topographic feature.
Hit the play button in the Animation Timeline pane to view your animation and adjust accordingly.
Note
Although the Terrain 3D and World Imagery layers may not draw well in ArcGIS Pro due to their sheer size, they should appear fine in the exported video.
(3.8) Add text, images, and other graphics to annotate and communicate key information
Text, images, and other graphics can be added using the commands available in Animation (Overlay). Acceptable image file formats are JPG, TIFF, PNG, and BMP.
The position and timing of an overlay can be adjusted in the Overlays tab in the Animation Properties pane.
(3.9) Export the animation
Once you’re satisfied with your animation, you can export by clicking on Animation (Export) > Movie again.
Name the file and select your desired “Media Format” and “Frames Per Second” settings.
Your resolution should already be set, but you can adjust the “Quality” to determine the size of your file.
Hit “Export” once you’re ready. Depending on the size of your animation, it can take several hours for the video to export. Mine took over 10 hours.
Tip
You can also export a subsection of your animation by specifying a “Start Time” and “End Time”. This can be useful to preview the end result of your animation bit by bit without having to export the entire video, which can take a lot of time.
Step 4: Combine the Video and Audio Files Using iMovie
With my animation exported, I added the video to my project in iMovie. Since I timed the animation according to my narration, the two files aligned perfectly at the zero mark and no further editing had to be done.
To export the final video, I used File > Share > Youtube & Facebook and made sure to match the resolution to the one I selected in ArcGIS Pro (1920 x 1080). iMovie will notify you once the .mov file is exported.
Step 5: Upload the Animation on Youtube
The final step was uploading the video on Youtube.
Create and/or log in to your Youtube account. On the Youtube homepage, click on You > Your videos > Content > Create > Upload videos to add the .mov file. A wizard will pop up.
Under the Details tab, fill out the “Title” and provide a “Description” for your video. Timestamps marking different chapters in the video can also be added here.
Select a thumbnail and fill out the remaining fields, including those under “Show more”, such as “Video language”. Selecting a “Video language” is necessary to add subtitles, which can be done through the Video elements tab.
Once your video is set up, hit “Publish”. Youtube will supply you with the link to your published video.
Congratulations!
You just visualized 3D data and created a geographic-based animation using ArcGIS Pro!
Runfola, D., Anderson, A., Baier, H., Crittenden, M., Dowker, E., Fuhrig, S., Goodman, S., Grimsley, G., Layko, R., Melville, G., Mulder, M., Oberman, R., Panganiban, J., Peck, A., Seitz, L., Shea, S., Slevin, H., Youngerman, R., & Hobbs, L. (2020). GeoBoundaries: A Global Database of Political Administrative Boundaries (September 21, 2024) [Shapefile]. GeoBoundaries. https://www.geoboundaries.org
In 2023, I climbed Mount Kilimanjaro. Mount Kilimanjaro is located in the northern region of Tanzania, straddling the border with Kenya, and at 5,895 metres it is the tallest mountain in Africa. My sister was doing a work term in Tanzania, so I thought it was a great opportunity to complete a physical and mental challenge that had been a bucket list item for me. The other major driver for me to climb this mountain is that it is one of the tallest mountains in the world that does not require a large amount of technical climbing and can be done mostly by walking. Despite this, the freezing temperatures, the altitude, and the long distances being covered meant that it was still an immensely difficult challenge for me to complete.

We chose to climb the 7-day Machame Route, which was recommended for people who wanted a route long enough to offer a relatively high chance of reaching the summit, but who did not want to spend an excessive amount for the longest routes. This is just one of many routes that climbing companies use when leading trips to the summit, and the routes vary considerably in length. The shortest route, Marangu, which takes place over 5 days, is the least expensive, since the crew of 10-20 people required to lead a group of climbers (guides, assistant guides, porters, and cooks) is needed for fewer days. The flip side is that 5 days does not provide very much time to acclimatize to the altitude, which means that over 50% of climbers on this route do not reach the summit due to prevalent altitude sickness. The 7-day Machame Route is much more manageable with the extra days, giving climbers more time to make sorties into the higher elevation zones and back down to acclimatize more comfortably. The third route, called the Northern Circuit because it traverses all the way around the north side of the mountain, takes place over 10 days. It is the most scenic, giving climbers time to see all the different vegetation zones that the mountain has to offer, and it also causes the least altitude-related stress, as it ascends into the high elevations much more slowly and gives more time to acclimatize once climbers have reached that zone.

Altitude sickness varies a great deal between people, in terms of both severity and symptoms. For instance, one person in my group, an experienced triathlete, began experiencing symptoms of altitude sickness on the 2nd day of the climb and was ultimately unable to reach the summit, whereas my symptoms were less severe. Even so, by the time we reached the summit in the early hours of the morning on Day 6, I had begun to feel the effects of the altitude, with persistent headaches, exhaustion, and vertigo. These symptoms are all consequences of the reduced amount of oxygen available at such a high elevation, and they were compounded by the extremely low temperatures at night (between -15 and -25 degrees), which made it very difficult to sleep. Despite these setbacks, reaching the summit was a very interesting and rewarding experience that I wanted to share with this project.
Scope of the project
For the purposes of this geovisualization project, I chose to create a 3D scene in ArcGIS Pro that displays the elevation of the different parts of the mountain and how three different route lengths (5 days, 7 days, and 10 days) differ in how they traverse the elevation zones. I also drew dashed lines on my 3D model marking the elevations at which different levels of altitude sickness typically occur. Because of my own personal experience and that of others, I thought it was important to analyze altitude sickness and how prevalent it can be on a climb as common as Mount Kilimanjaro.
Generally, there are two levels of altitude sickness that can occur on a climb such as this one. The first, Acute Mountain Sickness or AMS, is extremely common. The symptoms are not particularly severe, usually showing in most people as fatigue, shortness of breath, headaches, and sometimes nausea. The risk of this illness usually begins in the 2,000 to 2,500 metre range, and it becomes extremely common by the time a person ascends to around 4,000 metres. The second, much more severe form of altitude sickness comes in two varieties: High Altitude Pulmonary Edema and High Altitude Cerebral Edema. As is probably evident from the names, HAPE primarily affects the lungs, while HACE mostly affects the brain, though most people who contract one experience symptoms of both. HAPE/HACE begins to occur at around 4,500 metres (with a 2-6% risk at that elevation), but becomes much more prevalent above 5,000 metres. The risk continues to increase with elevation, which is why it is so difficult to reach the summit of 8,000-metre-plus mountains like Everest or K2. To counteract these illnesses, acclimatization to the elevation is extremely important. This is why mountain guides constantly stress the need to keep a very slow pace of climbing, and why longer routes have a much higher success rate, as they allow more time for the body to acclimatize to the altitude.
Format of the Project
To complete this project, I began by downloading an elevation raster dataset from NASA Earthdata to display the elevation of the mountain. I then added it to an ArcGIS Pro project and drew a boundary around the mountain to use as my study area. From there, I clipped the raster to only show the elevation in that area, which also limited the size of the file. The dataset was symbolized at 1 metre intervals, which meant that the differences in elevation between classes were extremely difficult to see, so I used the Reclassify tool to classify the raster into 500 metre intervals. I then assigned colours to each class, with green representing the lowest elevations, then yellow, orange, and red, and finally blue and white for the very high elevations around the summit. I then started a project in Google Earth to draw out the different climbing routes. While Google Earth has limited functionality in terms of mapping, I find that its 3D terrain is detailed and easy to see, so it provided a more accurate depiction of the routes than if I had used ArcGIS Pro. I used point placemarks to mark the different campsites on each of the routes and connected them with line features. For knowledge of the routes and campsites, I used the itineraries for each route on popular Kilimanjaro guide companies’ websites. Once I had finished drawing out the routes and campsites in Google Earth, I exported the map as a KML file and converted it to ArcGIS layer files using a geoprocessing tool. Finally, I drew polygons around the elevation boundaries that correspond with the altitude sickness risks outlined above, using dashed lines as the symbology for that layer to differentiate it from the solid route lines.
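For readers who want to reproduce the raster and conversion steps with code, here is a rough arcpy sketch using hypothetical file names (the Spatial Analyst extension is required; the post ran the equivalent tools from the ArcGIS Pro interface):

import arcpy
from arcpy.sa import Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")

# Clip the NASA Earthdata DEM to the hand-drawn study area boundary
arcpy.management.Clip("kilimanjaro_dem.tif", "#", "dem_clip.tif",
                      in_template_dataset="study_area", clipping_geometry="ClippingGeometry")

# Group the elevations into 500 m classes (class 1 = 0-500 m, class 2 = 500-1000 m, ...)
remap = RemapRange([[i * 500, (i + 1) * 500, i + 1] for i in range(12)])
dem_500 = Reclassify("dem_clip.tif", "VALUE", remap)
dem_500.save("dem_500m_classes.tif")

# Convert the Google Earth routes and campsites to ArcGIS layers
arcpy.conversion.KMLToLayer("kilimanjaro_routes.kml", r"C:\Projects\Kilimanjaro")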
The next step was converting the map to a 3D scene in order to display the elevation more accurately. I increased the vertical exaggeration of my ground terrain base layer in order to better differentiate the elevation zones. From there, I explored the scene and added labels to make sure that all the different map elements could be seen. I created an animation that flies around the mountain to display all angles at the beginning of my Story Maps project. I then created still maps covering the different areas of the mountain traversed by the different routes. Since the 5-day route basically ascends and descends on the same path, it only needed one map to show its elevation changes and campsites. However, the 7-day route needed two maps to capture all the different parts of the route, and the 10-day route needed four, as it travels all the way around the less commonly climbed north side of the mountain. Finally, I created an ArcGIS Story Maps project to display the different maps that I created. I think Story Maps is an excellent tool for displaying the results of projects such as this one; its interactive and engaging interface allows the user to understand what can be a complicated project in a simple and intriguing manner. I added pictures of my own climb to the project to add context to the topic, along with text explaining the different maps. The project can be viewed here: https://arcg.is/1Sinnf0
Conclusions
This project is beneficial in two ways: it gives people who have climbed the mountain the opportunity to see the different elevation zones they traversed, and perhaps connect that with some of their own experiences, and it gives prospective climbers the chance to see the progression through elevation zones that each route takes and to inform their choice of route based on that.
The Madawaska is a river and provincial park located in the central Ottawa River watershed in Southern Ontario.
The section of river inside the Madawaska Provincial Park is a popular camping and water-sport location for paddlers across the province. The river includes numerous sets of rapids that present a fun and exciting challenge for paddlers. However, as the water level and discharge rates fluctuate throughout the year from rainfall, snowmelt, and other factors, the conditions of the whitewater rapids change, so it’s important for paddlers to understand what state the river is in in order to prepare for a trip. My web app visually symbolizes what these different water levels mean for paddlers at different times of the year, while providing other information about rapids, campsites, and access points.
Follow this tutorial to create a basic ReactJS app, call it ‘map-app’, and navigate to it in a text editor like VSCode. You will notice a few important files and folders in here. ‘README.md’ includes some information and important commands for your app. The ‘public’ folder includes any files that you’ll want to access in your app, like images or metadata. This is where you will put your GIS data once we have the React app assembled.
React is designed to be modular and organized, and essentially lets us manipulate HTML components using JavaScript. A React app is made up of components, which are sections of code that are modular and re-usable. Components also have props and state. Props are passed into a component and can represent things like text, style options, files, and more to change the look and behaviour of components. Hooks are functions that allow us to change the state of a component on the fly, and are what makes React interactive and mutable.
Setting up OpenLayers
Before we start our React app, install OpenLayers, a library that allows us to easily display and work with geographic vector data using JavaScript and HTML, which means it can be used with React. Run the command "npm install ol" to install OpenLayers.
Now that we have a React app set up and OpenLayers installed, we can start the app with npm start. This will open a page in your default browser that links to the local server on your machine that’s running your application.
Making a base map
Now let’s make a component for our map. Right click on the ‘src’ folder in the left pane and click ‘New Folder’; we will call it ‘Components’. Now right click on that folder, click ‘New File’, and call it ‘BaseMap.js’. If you have the extension ‘ES7+ React/Redux/React-Native snippets’ installed (in the Extensions tab on the left), you can go to your new file and type ‘rfce’ then press enter to create the basic shell of a component with the same name as the filename. Otherwise you can copy the code below into your ‘BaseMap.js’ file:
import React from 'react'
function BaseMap() {
return (
<div>BaseMap</div>
)
}
export default BaseMap
Now let’s populate the component with everything we need from OpenLayers. We will create a map that displays OpenStreetMap, an open source basemap. I won’t explain everything about how React works since it would take too long, but see the OpenLayers guide (https://openlayers.org/doc/tutorials/concepts.html) for details on what each of the pieces is doing. This should be your component once you have added everything:
import React, { useEffect, useState } from 'react'
// Import the necessary components from OpenLayers
import { Map, View } from 'ol'
import TileLayer from 'ol/layer/Tile'
import { OSM } from 'ol/source'
function BaseMap() {
const [map, setMap] = useState(null); // Store the map instance
// useEffect with an empty dependency list runs once, when the component mounts
useEffect(() => {
// Create a map instance
const olMap = new Map({
layers: [
new TileLayer({
source: new OSM()
})
],
view: new View({
center: [0, 0],
zoom: 2
}),
controls: [],
target: 'map',
});
// Store the map instance
setMap(olMap);
return () => {
olMap.setTarget(null); // Cleanup on unmount
};
}, []);
return (
// Return a <div> item with style set so that it covers the entire screen
<div>
<div id="map" style={{ width: "100vw", height: "100vh" }} />
</div>
)
}
export default BaseMap
This will fill the entire page with the OpenStreetMap basemap. To render our component on the page, navigate to ‘App.js’ and delete all the default items inside the <div> in the return statement. At the top of the page, import our BaseMap component with import BaseMap from './Components/BaseMap';. Then, add the component inside the <div> in the return statement.
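For reference, a trimmed-down ‘App.js’ might end up looking something like this (your file may keep other default imports such as ‘./App.css’):
import BaseMap from './Components/BaseMap';
function App() {
  return (
    <div className="App">
      {/* Render the map component so it fills the page */}
      <BaseMap />
    </div>
  );
}
export default App;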
Hit ctrl+s to save, and you should see your map on the webpage! You will be able to zoom and navigate the same as if it were Google Maps.
Adding vector data to the map
Now, let’s create a generalized component that we can use to add vector data to the web app. OpenLayers supports a variety of file types for displaying vector data, but for now we’ll use GeoJSON because of its widespread compatibility.
Inside the ‘Components’ folder, create a new file called ‘MapLayers.js’, then use rfce to populate the component, or copy the following code:
import React from 'react'
function MapLayers() {
return (
<div>MapLayers</div>
)
}
export default MapLayers
In React, components communicate with each other using ‘props’. We’ll use these to add our layers.
Add a ‘layers’ prop and a ‘map’ prop to the component definition:
function MapLayers({ layers, map })
Now we can access the data that’s passed into the component. Layers will represent a list of objects containing the filenames for our data as well as symbology information. Map will be the same map we created in the ‘BaseMap’ component.
For React to run code in response to changes, we use a hook called useEffect, which runs automatically whenever the props we specify change. Inside this function is where we will load the vector data into the ‘map’ prop.
// Use effect will make sure that the map is continuously rendered every time it changes
useEffect(() => {
// Error checking
if (!map || !layers || layers.length === 0) return;
const vectorLayers = []
// Create a layer for each geojson and add it to the map
layers.forEach((geojson) => {
const vectorSource = new VectorSource({
url: geojson.filename,
format: new GeoJSON()
})
const vectorLayer = new VectorLayer({
source: vectorSource,
opacity: 1,
zIndex: geojson.zIndex ? geojson.zIndex : 2,
style: geojson.style
})
vectorLayers.push(vectorLayer);
map.addLayer(vectorLayer);
});
return () => {
// Cleanup on unmount
vectorLayers.forEach((layer) => map.removeLayer(layer)); // remove the layers we created
};
}, [map, layers]);
Since the ‘layers’ prop is a list of objects, we can iterate through it with the ‘forEach’ method. For every layer in the list, we make a new VectorSource, which is an OpenLayers object that keeps track of geometry information. We then wrap each VectorSource in a VectorLayer, which controls how the geometry is displayed. Finally, the loop adds each new layer to the map. The list at the very bottom of the ‘useEffect()’ tells React to re-run the contained code every time the ‘map’ or ‘layers’ props change.
For now, our component will return ‘null’, because everything is going to be rendered on the map in the BaseMap component.
Here’s what your final ‘MapLayers’ component should look like:
import { useEffect } from 'react'
// Import the necessary components from OpenLayers
import VectorLayer from 'ol/layer/Vector'
import VectorSource from 'ol/source/Vector'
import GeoJSON from 'ol/format/GeoJSON.js';
function MapLayers({ layers, map }) {
// Use effect will make sure that the map is continuously rendered every time it changes
useEffect(() => {
// Error checking
if (!map || !layers || layers.length === 0) return;
const vectorLayers = []
// Create a layer for each geojson and add it to the map
layers.forEach((geojson) => {
const vectorSource = new VectorSource({
url: geojson.filename,
format: new GeoJSON()
})
const vectorLayer = new VectorLayer({
source: vectorSource,
opacity: 1,
// Ternary operator, if the layer has a zIndex use it, otherwise default 2
zIndex: geojson.zIndex ? geojson.zIndex : 2,
style: geojson.style
})
vectorLayers.push(vectorLayer);
map.addLayer(vectorLayer);
});
return () => {
vectorLayers.forEach((layer) => map.removeLayer(layer)); // Cleanup on unmount
};
}, [map, layers]);
return null
}
export default MapLayers
Adding Data
A map with nothing on it is no use to anyone. For this project, the goal was to build a web tool for looking at how the water level affects the river’s edge in the Madawaska River Provincial Park in Ontario.
To represent the elevation of the river and calculate metrics at different locations along it, I used the Ontario Imagery-Derived DEM, which is offered at a 2 m resolution. The Madawaska River spans two sections of this dataset: DRAPE B and DRAPE C. Since these are very large image files, I needed to convert each file to TIF format and generate pyramids for display in ArcGIS or QGIS.
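The exact commands depend on the source format, but as a rough sketch (with hypothetical filenames), the conversion, pyramid generation, and the ‘.vrt’ mosaic used later can be done with GDAL along these lines:
# Convert each DEM tile to GeoTIFF (filenames are placeholders)
gdal_translate -of GTiff drape_b_dem.img drape_b_dem.tif
gdal_translate -of GTiff drape_c_dem.img drape_c_dem.tif
# Build overview pyramids so the rasters display quickly in ArcGIS or QGIS
gdaladdo -r average drape_b_dem.tif 2 4 8 16 32
gdaladdo -r average drape_c_dem.tif 2 4 8 16 32
# Mosaic the two tiles into a single virtual raster
gdalbuildvrt madawaska_dem.vrt drape_b_dem.tif drape_c_dem.tif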
Then, I downloaded the Ontario Hydrographic Line dataset to get the locations of rapids and other features like dams.
I also downloaded shape data representing the river itself from the Ontario Open Data portal.
Then, I loaded the ‘.vrt’ file I made from the DEM images into QGIS, and clipped it by the extent of the river polygon. I chose to clip the raster to a buffer of 1km to leave room to represent the surrounding area as well.
Preparing the data
Then, I had to format the data properly to be used in the web app.
When the water level of a river rises, the width of the river expands and the bank recedes up the shore. I represented the change in water level by adding a dynamic buffer to the river polygon as an approximation of water level rise. It should be noted that this approximation assumes the water rises uniformly along the course of the river, which may not hold in reality; for the purpose of simplifying the app, I accepted that assumption. The actual distance on land that the river expands to at any given section depends on the slope of the embankment. This is where the DEM comes into play: I calculated the buffer distance to apply to the river from sampled points representing the slope along the river’s edge, then used the average slope to derive a buffer distance per unit of water level rise.
To keep things simple, and since the slope of the river bank does not vary much over its course, we will use the average slope along the edge of the river as our Slope value.
To do this, I used the following QGIS tools:
Polygon to Lines (Madawaska River)
Points Along Geometry (Madawaska River Lines, for every 50m)
Sample Raster Values (Slope)
Field Calculator: mean(“SAMPLE_1”) = 9.6%
Here’s the equation for calculating buffer distance:
Buffer Distance = water level change / tan(Slope)
(Here the slope is treated as an angle: tan(9.6°) ≈ 0.17.)
The tangent of the slope represents the ratio of the water level rise to the horizontal distance the water spreads over land. Therefore, the constant we divide the water level change by is tan(slope) ≈ 0.17; for example, a 1 m rise in water level gives a buffer of roughly 1 / 0.17 ≈ 5.9 m.
Before adding my shape data to the map, I had to do a fair amount of cleaning in QGIS. First, every layer was clipped to within 1 km of the river. All the rapids were named manually based on topographic maps, then aggregated by their name. I also generated a file containing the centroid of each set of rapids for easier interpretation on the map.
Campsite and Access Point data was taken from the Recreation Point dataset by the Ministry of Natural Resources. Campsites and Access points were split into separate layers for easier symbolization.
Each file was then exported from QGIS as a GeoJSON file, then saved in the ‘public’ folder of my react app under ‘layers’. This will make it possible to access the layers from the code.
Adding the data to the web app
Now that all the data is ready, we can put all the pieces together. Inside ‘BaseMap.js’, create a new list at the top of the page called ‘jsonLayers’. Each item in the list will have the following format:
{
filename: "layers/layerfilename.geojson",
style: new Style(),
zIndex: #
}
Where the filename is the path to your GeoJSON layer, the style is an OpenLayers Style instance (which I won’t explain here, but you can learn more from the OpenLayers documentation), and zIndex represents which layers will appear on top of others (For example, zIndex = 1 is below zIndex = 10).
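As an illustration, a hypothetical entry for a point layer of rapids (the filename, colours, and zIndex here are placeholders, not the project’s actual values) could look like this:
import { Style, Circle as CircleStyle, Fill, Stroke } from 'ol/style';
const jsonLayers = [
  {
    // Hypothetical GeoJSON stored in the 'public/layers' folder
    filename: "layers/rapids.geojson",
    // Simple white circle with a dark outline
    style: new Style({
      image: new CircleStyle({
        radius: 6,
        fill: new Fill({ color: 'rgba(255, 255, 255, 0.9)' }),
        stroke: new Stroke({ color: '#1d3557', width: 2 })
      })
    }),
    zIndex: 5 // draw above the river polygon
  }
];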
Next, at the bottom of the component where we ‘return’ what to display, we will add an instance of our ‘MapLayers’ component, and pass in the required props.
return (
// Return a <div> item with style set so that it covers the entire screen
<div>
<div id="map" style={{ width: "100vw", height: "100vh" }} />
{map && (<MapLayers map={map} layers={jsonLayers} />)}
</div>
)
Now in your web app, you should see your layers on screen! You may need to zoom in to find them.
I added a few other features and tools that make it so that the map automatically zooms to the extent of the largest layer, and so that the user can select features to see their name.
Geo-visualization
Once the basic structure of the app was set up, I could start to add extra features to represent the water level change. I created a new component called ‘BufferLayer’, which takes in a single GeoJSON file as well as a map to display the vector on. This component makes use of a library called turf.js that allows you to perform geospatial operations in JavaScript. I used turf.js to apply the buffer described above with a function that takes the geometry from the layer’s VectorSource and directly applies a turf.js buffer operation to it, as sketched below. The buffer is always applied to the ‘original’ river polygon, meaning that a 10m buffer won’t ‘stack’ on top of another 10m buffer. This also prevents issues with broken geometry caused by the buffer operation when applying a negative buffer.
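As a rough sketch of that logic (the function and variable names here are illustrative, not the project’s exact code, and turf.js is assumed to be installed via npm install @turf/turf), the buffer can be applied and written back into the layer’s OpenLayers source like this:
import * as turf from '@turf/turf'
import GeoJSON from 'ol/format/GeoJSON'
// Re-buffer the original river polygon and replace the layer's features
function applyWaterLevelBuffer(originalRiverGeojson, bufferDistanceMeters, vectorSource) {
  // Always buffer the original geometry so successive changes never stack
  const buffered = turf.buffer(originalRiverGeojson, bufferDistanceMeters, { units: 'meters' });
  if (!buffered) return; // a large negative buffer can collapse the polygon entirely
  const format = new GeoJSON();
  vectorSource.clear();
  vectorSource.addFeatures(format.readFeatures(buffered, {
    dataProjection: 'EPSG:4326',
    featureProjection: 'EPSG:3857'
  }));
}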
To control my buffer, I created one more component called ‘LevelSlider’, which adds a simple slider and a button that, when pressed, runs the ‘handleBufferChange’ function. The math for calculating the buffer distance based on the slope is done in the LevelSlider component with the static values I calculated earlier. The minimum and maximum values are also customizable. Here’s a snippet of that component:
// Only send the value on button click to prevent performance issues
const handleButtonClick = () => {
onChange(Number(sliderValue) / tanSlope); // Send value to parent on button click
};
return (
<div style={{ padding: '10px', textAlign: 'center' }}>
<label htmlFor="buffer-slider">Water Level (meters): </label>
<input
id="buffer-slider"
type="range"
min={min}
max={max}
step={step}
value={sliderValue}
onChange={handleSliderChange}
/>
<span>{sliderValue}m</span>
<span>{'\t'}</span>
<button onClick={handleButtonClick}>Apply Water Level Change</button>
</div>
)
The LevelSlider component is added in the ‘return’ section of ‘BufferLayer’, with CSS styling to make sure it appears neatly in the bottom left corner of the map.
With a bit of extra styling, and by making use of other OpenLayers features like ‘Select’, and ‘Overlay’, I was able to build this functional, portable web app that can be added to any react website with ease.
However, lots more can be done to improve it! A chart that tracks hydrometric data over time could help give context to the water levels on the river. With a little more math, you could even make use of discharge information to estimate the speed of the river at different times of year.
Using the campsite data and a centreline of the river course, you could calculate the distances between campsites, rapids, access points, and so on, making the tool genuinely useful for planning trips. Also, given more information about individual whitewater sets, such as their class (C2, C3, etc.), descriptions, or images, you could better represent the river in all its detail.
Mapping data in an interactive and visually compelling way is a powerful approach to uncovering spatial patterns and trends. Pydeck, a Python library for large-scale geospatial visualization, is an exceptional tool that makes this possible. Leveraging the robust capabilities of Uber’s Deck.gl, Pydeck enables users to create layered, interactive maps with ease. In this tutorial, we delve into Pydeck’s potential by visualizing earthquake data, exploring how it allows us to reveal patterns and relationships in raw datasets.
This project focuses on mapping earthquakes, analyzing their spatial distribution, and gaining insights into seismic activity. By layering visual elements like scatterplots and heatmaps, Pydeck provides an intuitive, user-friendly platform for understanding complex datasets. Throughout this tutorial, we explore how Pydeck brings earthquake data to life, offering a clear picture of patterns that emerge when we consider time, location, magnitude, and depth.
Why Pydeck?
Pydeck stands out as a tool designed to simplify geospatial data visualization. Unlike traditional map-plotting libraries, Pydeck goes beyond static visualizations, enabling interactive maps with 3D features. Users can pan, zoom, and rotate the maps while interacting with individual data points. Whether you’re working in Jupyter Notebooks, Python scripts, or web applications, Pydeck makes integration seamless and accessible.
One of Pydeck’s strengths lies in its support for multiple visualization layers. Each layer represents a distinct aspect of the dataset, which can be customized with parameters like color, size, and height to highlight key attributes. For instance, in our earthquake visualization project, scatterplot layers are used to display individual earthquake locations, while heatmaps emphasize regions of frequent seismic activity. The ability to combine such layers allows for a nuanced exploration of spatial phenomena.
What makes Pydeck ideal for projects like this one is its balance of simplicity and power. With just a few lines of code, users can create maps that would otherwise require advanced software or extensive programming expertise. Its ability to handle large datasets ensures that even global-scale visualizations, like mapping thousands of earthquakes, remain efficient and responsive.
Furthermore, Pydeck’s layered architecture allows users to experiment with different ways of presenting data. By combining scatterplots, heatmaps, and other visual layers, users can craft a visualization that is both aesthetically pleasing and scientifically robust. This flexibility makes Pydeck a go-to tool for not only earthquake mapping but any project requiring geospatial analysis.
Creating Interactive Earthquake Maps: A Pydeck Tutorial
Before diving into the visualization process, the notebook begins by setting up the necessary environment. It imports essential libraries such as pandas for data handling, pydeck for geospatial visualization, and other utilities for data manipulation and visualization control. To ensure the libraries are available for use, they must first be installed using pip.
! pip install pydeck pandas ipywidgets h3
import pydeck as pdk
import pandas as pd
import h3
import ipywidgets as widgets
from IPython.display import display, clear_output
Step 1: Data Preparation and Loading
Earthquake datasets typically include information such as the location (latitude and longitude), magnitude, and depth of each event. The notebook begins by loading the earthquake data from a CSV file using the Pandas library.
The data is then cleaned and filtered, ensuring that only relevant columns—such as latitude, longitude, magnitude, and depth—are retained. This preparation step is critical as it allows the user to focus on the most important attributes needed for visualization.
Once the dataset is ready, a preview of the data is displayed to confirm its structure. This typically involves displaying a few rows of the dataset to check the format and ensure that values such as the coordinates, magnitude, and depth are correctly loaded.
# Read in dataset
earthquakes = pd.read_csv("Earthquakes-1990-2023.csv")
# Drop rows with missing data
earthquakes = earthquakes.dropna(subset=["latitude", "longitude", "magnitude", "depth"])
# Convert time column to datetime
earthquakes["time"] = pd.to_datetime(earthquakes["time"], unit="ms")
Step 2: Initializing the Pydeck Visualization
With the dataset cleaned and ready, the next step is to initialize the Pydeck visualization. Pydeck provides a high-level interface to create interactive maps by defining various layers that represent different aspects of the data.
The notebook sets up the base map using Pydeck’s Deck class. This involves defining an initial view state that centers the map on the geographical region of interest. The center of the map is determined by calculating the average latitude and longitude of the earthquakes in the dataset, and the zoom level is adjusted to provide an appropriate level of detail.
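A minimal sketch of what that initial view state might look like (the zoom and pitch values here are illustrative):
# Centre the map on the average event location; zoom and pitch are illustrative choices
view_state = pdk.ViewState(
    latitude=earthquakes["latitude"].mean(),
    longitude=earthquakes["longitude"].mean(),
    zoom=1,    # zoomed out for a near-global view
    pitch=40,  # tilt the camera so extruded columns are visible later
)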
Step 3: Visualizing Earthquake Density with a Heatmap
The primary visualization in the notebook is a heatmap layer to display the density of earthquake events. This layer aggregates the data into a continuous color gradient, with warmer colors indicating areas with higher concentrations of seismic activity.
The heatmap layer helps to identify regions where earthquakes are clustered, providing a broader view of global or regional seismic activity. For instance, high-density areas—such as the Pacific Ring of Fire—become more prominent, making it easier to identify active seismic zones.
# Define HeatmapLayer
heatmap_layer = pdk.Layer(
"HeatmapLayer",
data=filtered_earthquakes,
get_position=["longitude", "latitude"],
get_weight="magnitude", # Higher magnitude contributes more to heatmap
radius_pixels=50, # Radius of influence for each point
opacity=0.7,
)
Step 4: Adding the 3D Layer
To enhance the visualization, the notebook adds a column layer, which maps individual earthquake events and their depths as extruded columns on the map. Each earthquake is represented by a column, where:
Height: The height of each column corresponds to the depth of the earthquake. Tall columns represent deeper earthquakes, making it easy to identify significant seismic events at a glance.
Color: The color of the column also emphasizes the depth of the earthquake, with a yellow-to-red gradient used to represent varying depths. Deeper earthquakes are shown in redder colors, while shallower earthquakes are displayed in yellow.
This 3D column layer provides an effective way to visualize the distribution of earthquakes across geographic space while also conveying important information about their depth.
# Define a ColumnLayer to visualize earthquake depth
column_layer = pdk.Layer(
"ColumnLayer",
data=sampled_earthquakes,
get_position=["longitude", "latitude"],
get_elevation="depth", # Column height represents depth
elevation_scale=100,
get_fill_color="[255, 255 - depth * 2, 0]", # yellow to red
radius=15000,
pickable=True,
auto_highlight=True,
)
Step 5: Refining the Visualization
Once the base map and layers are in place, the notebook provides additional customization options to refine the visualization. Pydeck’s interactive capabilities allow the user to:
Zoom in and out: Users can zoom in to explore smaller regions in greater detail or zoom out to get a global view of seismic activity.
Hover for details: When hovering over an earthquake event on the map, a tooltip appears, providing additional information such as the exact magnitude, depth, and location. This interaction enhances the user experience, making it easier to explore the data in a hands-on way.
The notebook also ensures that the map’s appearance and behavior are tailored to the dataset, adjusting parameters like zoom level and pitch to create a visually compelling and informative display.
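Putting the pieces together, the deck and its hover tooltip might be assembled roughly like this (the tooltip fields assume the magnitude and depth columns used earlier):
# Combine the layers, view state, and a hover tooltip into a single deck
deck = pdk.Deck(
    layers=[heatmap_layer, column_layer],
    initial_view_state=view_state,
    tooltip={"text": "Magnitude: {magnitude}\nDepth: {depth} km"},
)
deck.to_html("earthquake_map.html")  # or simply display `deck` in a notebook cell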
Step 6: Analyzing the Results
After rendering the map with all layers and interactive features, the notebook transitions into an analysis phase. With the interactive map in front of them, users can explore the patterns revealed by the visualization:
Clusters of seismic activity: By zooming into regions with high earthquake density, users can visually identify clusters of activity along tectonic plate boundaries, such as the Pacific Ring of Fire. These clusters highlight regions prone to more frequent and intense earthquakes.
Magnitude distribution: The varying sizes of the circles (representing different earthquake magnitudes) reveal patterns of high-magnitude events. Users can quickly spot large earthquakes in specific regions, offering insight into areas that may need heightened attention for preparedness or mitigation efforts.
Depth-related trends: The color gradient used to represent depth provides insights into the relationship between earthquake depth and location. Deeper earthquakes often correspond to subduction zones, where one tectonic plate is forced beneath another. This spatial relationship is critical for understanding the dynamics of earthquake behavior and associated risks.
By interacting with the map, users gain a deeper understanding of the data and can draw meaningful conclusions about seismic trends.
Limitations of Pydeck
While Pydeck is a powerful tool for geospatial visualization, it does have some limitations that users should be aware of. One notable constraint is its dependency on web-based technologies, as it relies heavily on Deck.gl and the underlying JavaScript frameworks for rendering visualizations. This means that while Pydeck excels in creating interactive, browser-based visualizations, it may not be the best choice for large-scale offline applications or those requiring complex, non-map-based visualizations. Additionally, Pydeck’s documentation and community support, although growing, may not be as extensive as some more established libraries like Matplotlib or Folium, which can make troubleshooting more challenging for beginners. Another limitation is the performance handling of extremely large datasets; while Pydeck is designed to handle large-scale data, rendering thousands of points or complex layers may lead to slower performance depending on the user’s hardware or the complexity of the visualization. Finally, while Pydeck offers significant customization options, certain advanced features or highly specialized geospatial visualizations (such as full-featured GIS analysis) may require supplementary tools or libraries beyond what Pydeck offers. Despite these limitations, Pydeck remains a valuable tool for interactive and engaging geospatial visualization, especially for tasks like real-time data visualization and web-based interactive maps.
Conclusion
Pydeck transforms geospatial data into an interactive experience, empowering users to explore and analyze spatial phenomena with ease. Through this earthquake mapping project, we’ve seen how Pydeck highlights patterns in seismic activity, offering valuable insights into the magnitude, depth, and distribution of earthquakes. Its intuitive interface and powerful visualization capabilities make it a vital tool for geospatial analysis in academia, research, and beyond. Whether you’re studying earthquakes, urban development, or environmental changes, Pydeck provides a platform to bring your data to life. By leveraging its features, you can turn complex datasets into accessible stories, enabling better decision-making and deeper understanding of the world around us. While it is a powerful tool for creating visually compelling maps, it is important to consider its limitations, such as performance issues with very large datasets and the need for web-based technology for rendering. For users seeking similar features in a less code-based environment, Kepler.gl—an open-source geospatial analysis tool—offers even greater flexibility and performance. To explore the notebook and try out the visualization yourself, you can access it here. Pydeck opens up new possibilities for anyone looking to dive into geospatial analysis and create interactive maps that bring data to life.
Geovis Project Assignment, TMU Geography, SA8905, Fall 2024
Introduction
Mapping indices like NDVI and NDBI is an essential approach for visualizing and understanding environmental changes, as these indices help us monitor vegetation health and urban expansion over time. NDVI (Normalized Difference Vegetation Index) is a crucial metric for assessing changes in vegetation health, while NDBI (Normalized Difference Built-Up Index) is used to measure the extent of built-up areas. In this blog post, we will explore data from 2019 to 2024, focusing on the single-tier and lower-tier municipalities of Ontario. By analyzing this five-year time series, we can gain insights into how urban development has influenced greenery in these regions. The web page leverages Google Earth Engine (GEE) to process and visualize NDVI data derived from Sentinel-2 imagery. With 414 municipalities to choose from, users can select specific areas and track NDVI and NDBI trends. The goal was to create an intuitive and informative platform that allows users to easily explore NDVI changes across Ontario’s municipalities, highlighting significant shifts and pinpointing where they are most evident.
In this section, we will walk through the process of creating a dynamic map visualization and exporting time-series data using Google Earth Engine (GEE). The provided code utilizes Sentinel-2 imagery to calculate vegetation and built-up area indices, such as NDVI and NDBI for a defined range of years. The application was developed using the GEE Code Editor and published as a GEE app, ensuring accessibility through an intuitive interface. Keep in mind that the blog post includes only key snippets of the code to walk you through the steps involved in creating the app. To try it out for yourself, simply click the ‘Explore App’ button at the top of the page.
Setting Up the Environment
First, we define global variables that control the years of interest, the area of interest (municipal boundaries), and the months we will focus on for analysis. In this case, we analyze data from 2019 to 2024, but the range can be modified. The code uses the municipality boundary table to filter and display the boundaries of specific municipalities.
var beginningYear = 2019;
var endingYear = 2024;
var NDVILabel = 'NDVI';
var NDBILabel = 'NDBI';
Visualizing Sentinel-2 Imagery
Sentinel-2 imagery is first filtered by the date range (2019-2024 in our case) and by the bounds of a specific municipality. We then mask clouds in all images using a cloud quality assessment dataset called Cloud Score+. This step helps generate clean composite images and reduces errors during index calculations. We use a set of specific Sentinel-2 bands to calculate key indices, like NDVI and NDBI, which are visualized in true colour or with specific palettes for enhanced contrast. To make this easier, the bands of the Sentinel-2 images (S2_BANDS) are renamed to human-readable names (STD_S2_NAMES).
var S2_BANDS = ['B2', 'B3', 'B4', 'B8', 'B11', 'B12', 'TCI_R', 'TCI_G', 'TCI_B', 'SCL'];
var STD_S2_NAMES = ['blue', 'green', 'red', 'nir', 'swir1', 'swir2', 'redT', 'greenT', 'blueT', 'class'];
var QA_BAND = 'cs_cdf';
var QA_BAND2 = 'cs';
// The threshold for masking; values between 0.50 and 0.65 generally work well.
// Higher values will remove thin clouds, haze & cirrus shadows.
var CLEAR_THRESHOLD = 0.65;
function getCloudMaskedSentinelImages(municipalityGeometry)
{
// Make a clear median composite.
var sentinelDataset = sentinel2
.filterBounds(municipalityGeometry)
.filter(ee.Filter.date(beginningYear+'-01-01', endingYear+'-12-31'))
.linkCollection(csPlus, [QA_BAND, QA_BAND2])
.map(function(img) {
var x = img.updateMask(img.select(QA_BAND).gte(CLEAR_THRESHOLD));
x = x.set('year', img.date().get('year'));
return x;
})
.select(S2_BANDS, STD_S2_NAMES);
return sentinelDataset;
}
function getProcessedSentinelImages(municipalityFeature)
{
var municipalityGeometry = municipalityFeature.geometry();
var sentinelDataset = getCloudMaskedSentinelImages(municipalityGeometry);
var sentinelImages = ee.List([])
for (var year=beginningYear; year <= endingYear;year++)
{
var imageForYear = sentinelDataset.filterDate(year+'-01-01',year+'-12-31').median()
.set('year',year)
.set('product','Sentinel 2');
imageForYear = imageForYear.clip(municipalityGeometry);
sentinelImages = sentinelImages.add(imageForYear);
}
return ee.ImageCollection(sentinelImages);
}
Index Calculations
The key indices are calculated for each year within the selected municipality boundaries. Each index is computed as the normalized difference between the relevant bands: NIR and red for NDVI, and SWIR and NIR for NDBI. After calculating the indices, the results are added to the map for visualization. Typically, for NDVI, green represents healthy vegetation, while purple indicates unhealthy vegetation, often corresponding to developed areas such as cities. In the case of NDBI, red pixels signify higher levels of built-up area, whereas lighter colors, such as white, indicate minimal to no built-up area, suggesting more vegetation. Together, NDVI and NDBI results provide complementary insights, enabling a better understanding of the relationship between vegetation and built-up areas.
For each year, the calculated index is visualized, and users can see how vegetation and built-up areas have changed over time.
function calculateIndexes(sentinelImages)
{
var imagesNDVI = ee.List([]);
var imagesNDBI = ee.List([]);
for (var year = beginningYear; year <= endingYear; year++)
{
var result = calculateIndexesForYear(sentinelImages, year, true);
imagesNDVI = imagesNDVI.add(result.NDVI);
imagesNDBI = imagesNDBI.add(result.NDBI);
}
return {NDVI: imagesNDVI, NDBI: imagesNDBI};
}
function calculateIndexesForYear(processedImages, year, addToMap)
{
var sentinelImageForYear = processedImages.filterMetadata('year', 'equals', year).first();
sentinelImageForYear = addTimePropertiesToImage(sentinelImageForYear, year);
var NDVISentinel = calculateNormalizedDifference(sentinelImageForYear, year, 'nir', 'red');
var NDBISentinel = calculateNormalizedDifference(sentinelImageForYear, year, 'swir2', 'nir');
var result = {NDVI: NDVISentinel, NDBI: NDBISentinel};
if (addToMap) {
addImagesToMap(sentinelImageForYear, result, year);
}
return result;
}
function calculateNormalizedDifference(image, year, band1, band2)
{
var diff = image.normalizedDifference([band1, band2]);
return addTimePropertiesToImage(diff, year);
}
function addTimePropertiesToImage(image, year)
{
var startDate = ee.Date.fromYMD(year, filterMonthStart, 1);
var endDate = startDate.advance(filterMonthEnd - filterMonthStart, 'month');
return image.set('system:time_start',startDate.millis(),'system:time_end', endDate.millis(),'year', year);
}
Generating Time-Series Animations
To provide a clearer view of changes over time, the code generates a time-series animation for the selected indices (e.g., NDVI). The animation visualizes the change in land cover over multiple years and is generated as a GIF, which is displayed within the map interface. The animation creation function combines each year’s imagery and overlays relevant text and other symbology, such as the year, municipality name, and legend.
function createAnimationPerYear(collectionImages, collectionName, collectionVisualization,
municipalityName, aoiDimensions, addGradientBar)
{
var mapTitle = collectionName+' Timeseries Animation';
var legendLabels = ee.List.sequence(collectionVisualization.min,collectionVisualization.max);
var gradientBarLabelResize = collectionVisualization.min%1!=0 || collectionVisualization.max%1!=0;
var northArrow = NorthArrow.draw(aoiDimensions.NorthArrowPoint, aoiDimensions.Scale, aoiDimensions.Width.multiply(.08), .05, 'EPSG:3857');
var textProperties = {
fontType: 'Arial',
fontSize: gradientBarLabelResize? 10 : 14,
textColor: 'ffffff',
outlineColor: '000000',
outlineWidth: 0.5,
outlineOpacity: 0.6
};
var gradientBar;
if (addGradientBar) {
gradientBar = GradientBar.draw(aoiDimensions.GradientBarBox,
{
min:collectionVisualization.min,
max:collectionVisualization.max,
palette: collectionVisualization.palette,
labels: [collectionVisualization.min,collectionVisualization.max],
format: gradientBarLabelResize ? '%.1f' : '%.0f',
round:false,
text: textProperties,
scale: aoiDimensions.Scale
});
}
var rgbVis = ee.ImageCollection(collectionImages).map(function(img) {
var annotations = [
{property: 'titleLabel', position: 'left', offset: '2%', margin: '1%', scale: aoiDimensions.Scale, fontSize: 16 },
{property: 'locationLabel', position: 'left', offset: '92%', margin: '1%', scale: aoiDimensions.Scale }
];
var year = img.get('year');
var labelLocation = ee.String(municipalityName).cat(', ').cat(year);
img = img.visualize(collectionVisualization).set({titleLabel:mapTitle, locationLabel: labelLocation});
var annotated = ee.Image(text.annotateImage(img, {}, aoiDimensions.AoiBox, annotations)).blend(northArrow);
if (addGradientBar) {
annotated = annotated.blend(gradientBar);
}
return annotated;
});
var uiGif = ui.Thumbnail(rgbVis, aoiDimensions.GifParams);
return uiGif;
}
function createTimeSeriesAnimations(sentinelImages, calculatedIndexes, municipalityFeature, municipalityName)
{
var aoiDimensions = getDimensionsForGifs(municipalityFeature);
var ndviGif = createAnimationPerYear(calculatedIndexes.NDVI, NDVILabel, visualizationNDVI, municipalityName, aoiDimensions, true);
var ndbiGif = createAnimationPerYear(calculatedIndexes.NDBI, NDBILabel, visualizationNDBI, municipalityName, aoiDimensions, true);
var trueColourGif = createAnimationPerYear(sentinelImages, "True Colour", visualizationTrueColor, municipalityName, aoiDimensions, false);
return {
TrueColourGif : trueColourGif,
NdviGif: ndviGif,
NdbiGif: ndbiGif
};
}
Map Interaction
A key feature of this code is the interactive map interface, which allows users to select a municipality from a dropdown menu. Once a municipality is selected, the map zooms into that area and overlays the municipality boundaries. You can then submit that municipality to calculate the indices and render the time series GIF on the panel. You can also explore the various years on the map by selecting the specific layers you want to visualize.
To start with, we will set up the UI components and replace the default UI with our new UI:
// Initialize the map
var map = ui.Map();
// Create a list of items for the dropdown menu
var items = municipalityNames.getInfo();
// Create a search box
var searchBox = ui.Textbox({
placeholder: 'Search...',
onChange: function(text) {
filterMunicipalities(text);
}
});
// Create a dropdown menu with options
var dropdown = ui.Select({
items: ['1'],
placeholder: 'Choose a Municipality',
onChange: function(municipalityName) {
displayMunicipalityBoundary(municipalityName);
}
});
// Create a submit button
var submitButton = ui.Button({
label: 'Submit',
onClick: function() {
var selectedOption = dropdown.getValue();
print('Submitted option:', selectedOption);
doThings(selectedOption);
}
});
// Create a panel to hold the UI elements
var panel = ui.Panel({
widgets: [searchBox, dropdown, submitButton],
layout: ui.Panel.Layout.flow('vertical'),
style: {width: '250px', position: 'top-left'}
});
// Create a panel to hold the GIF
var gifPanel = ui.Panel({
layout: ui.Panel.Layout.flow('vertical', true),
style: {width: '897px', backgroundColor: '#d3d3d3'}
});
// Replace placeholder items with municipalityNames
dropdown.items().reset(items);
// Add the panel to the map
map.add(panel);
// Map/chart and image card panel
var splitPanel = ui.SplitPanel(map, gifPanel);
// Replace current UI with the new splitPanel
ui.root.clear();
ui.root.add(splitPanel);
Notice there are functions for the interactive components of the UI; those are shown below:
// Filter the dropdown with the municipalities which match the search box
function filterMunicipalities(searchString)
{
var filteredItems = items.filter(function(item) { return item.toLowerCase().indexOf(searchString.toLowerCase()) !== -1});
// Reset the dropdown items with the filtered items
dropdown.items().reset(filteredItems);
}
// Display boundary of a geometry when selected from the dropdown
function displayMunicipalityBoundary(municipalityName)
{
map.layers().reset();
var municipalityFeature = municipalityBoundaries.filter(ee.Filter.eq('MUNICIPA_2', municipalityName)).first();
map.centerObject(municipalityFeature.geometry());
addGeometryBoundaryToMap(municipalityFeature.geometry(), 'red', municipalityName);
}
// Add boundary of a geometry to the map
function addGeometryBoundaryToMap(boundaryGeometry, colour, name)
{
var styledImage = ee.Image()
.paint({ featureCollection: boundaryGeometry, color: 1, width: 3 })
.visualize({ palette: [colour], forceRgbOutput: true });
map.addLayer(styledImage, {}, name, true);
}
// Add generated gifs to the gifPanel
function displayGifs(gifs)
{
gifPanel.clear();
gifPanel.add(gifs.TrueColourGif);
gifPanel.add(gifs.NdviGif);
gifPanel.add(gifs.NdbiGif);
}
// Add true colour and calculated indexes to the map for a single year
function addImagesToMap(imageForYear, indexes, year)
{
map.addLayer(imageForYear, visualizationTrueColor, year + " True Colour", false);
map.addLayer(indexes.NDVI, visualizationNDVI, year + " "+ NDVILabel, false);
map.addLayer(indexes.NDBI, visualizationNDBI, year + " "+ NDBILabel, false);
}
// Submit button actions, get processed imagery, calculate indices, generate gifs, display gifs
function doThings(municipalityName) {
var municipalityFeature = municipalityBoundaries.filter(ee.Filter.eq('MUNICIPA_2', municipalityName)).first();
var sentinelImages = getProcessedSentinelImages(municipalityFeature);
map.layers().reset();
addGeometryBoundaryToMap(municipalityFeature.geometry(), 'green', municipalityName);
map.centerObject(municipalityFeature.geometry());
//calculate the indexes for years
var calculatedIndexes = calculateIndexes(sentinelImages);
var gifs = createTimeSeriesAnimations(sentinelImages, calculatedIndexes, municipalityFeature, municipalityName);
displayGifs(gifs);
}
Future Additions
Looking ahead, the workflow can be enhanced by calculating the mean NDVI or NDBI for each municipality over longer periods of time and displaying it on a graph. The workflow can also incorporate Sen’s Slope, a statistical method used to assess the rate of change in vegetation or built-up areas. This method is valuable at both pixel and neighbourhood levels, enabling a more detailed assessment of land cover changes. Future additions could also include the application of machine learning models to predict future changes and expanding the workflow to other regions for broader use.
The city of Hamilton, Ontario is home to many trails and waterfalls and offers many scenic, nature-focused areas. The city is situated along the Niagara Escarpment, which creates unique topography and is the main reason for the high number of waterfalls found across the city. Hamilton is dubbed the waterfall capital of the world, being home to over 100 waterfalls within the city’s boundaries. Despite this, Hamilton is still under the radar for tourists, as it sits between two other major cities that see higher tourist traffic: Niagara Falls (home to one of the world’s best-known waterfalls) and Toronto (popular for the CN Tower and its hustle-and-bustle city atmosphere).
The main purpose of this project was to increase awareness of the beauty of this Southern Ontario wonder and to provide prospective visitors, or even citizens of Hamilton, with an interactive story map offering general information on the trails connected to the waterfalls and details about the waterfalls themselves. The 3D modelling aspect of the project aims to provide a unique visualization of how the waterfalls look, giving a quick yet creative visual for those considering visiting the city to see the waterfalls in person.
Data, Processing and Workflow (Blender + OpenTopography DEMs)
The first step of this project was to obtain DEMs for the region of interest (Hamilton, Ontario) to be used as the foundation of the 3D model. The primary software used for this project was Blender (a 3D modelling application), extended with a GIS-oriented plugin called “BlenderGIS”. The plugin, created by GitHub user domlysz, lets users import GIS-related files and elements such as shapefiles and basemaps directly into the Blender editing and modelling pane. It also allows users to load DEMs sourced through OpenTopography straight into Blender to be extracted and edited.
The first step is to open Blender and navigate to the GIS tab in Object Mode in the application:
Under the GIS tab, there are many options and hovering over “web geodata” prompts the following options:
In this case, we want to start off with a base map, and the plugin has many sources available, including the default Google Maps, ESRI basemaps, and OpenStreetMap (Google Satellite was used for this project).
Once the base map is loaded into the Blender plane, I zoomed into the first area of interest: the Dundas Peak region, which is home to both Tew’s Falls and Webster’s Falls. The screenshot below shows the 2D image of Tew’s Falls in the object plane:
Once an area of interest is defined and all information is loaded, the elevation model is requested to generate the 3D plane of the land region:
The screenshot above shows the general 3D plane being created from a 30m DEM extracted from OpenTopography through the BlenderGIS plugin. The screenshot below showcases the modification of the 3D plane through the extrusion tool which adds depth and edges to create the waterfall look. Below is the foundation used specifically for Tew’s Falls.
Following this, imagery from the basemap was merged with the 3D extruded plane to produce the 3D render of the waterfall plane. To add the waterfall animation, the physics module was activated, allowing various types of motion to be added to the 3D plane. Fluid was selected with the outflow behavior to simulate the movement of water coming down from a waterfall, and this was then overlaid onto the 3D plane to simulate water flowing down the falls.
These steps were then essentially repeated for Webster’s Falls and Devil’s Punchbowl waterfalls to produce 3D models with waterflow animations!
Overall, I found this to be a cool and fun way to visualize the waterfalls of Hamilton, Ontario, and adding the rendered products directly onto ArcGIS Story Maps makes for an immersive experience. The biggest learning curve for this project was Blender itself, as I had never used the software before and had only briefly explored 3D modelling in the past. Originally, I planned to create renders and animations for 10 waterfalls in Hamilton; however, this became a daunting task after realizing the rendering and export times involved once the 3 models shown in the Story Map were complete. Additionally, the render quality was rather low, since 2D imagery was interpolated onto a 3D plane, which caused some distortions and warped shapes that would require further processing.
Author: Shantelle Miller Geovisualization Project Assignment @TMUGeography, SA8905, Fall 2024
Introduction: Why Flood Resilience Matters
Urban flooding is a growing concern, especially in cities like Toronto, where increasing urbanization has disrupted the natural water cycle. Greenspaces, impervious surfaces, and stormwater infrastructure all play vital roles in reducing flood risks, but understanding how these factors interact can be challenging.
To address this, I created an interactive mapping tool using ArcGIS Experience Builder that visualizes flood resilience in Toronto. By combining multiple datasets, including Topographic Wetness Index (TWI), greenspaces, and stormwater infrastructure, this map highlights areas prone to flooding and identifies zones where natural mitigation occurs.
One of the tool’s standout features is the TWI-Greenspace Overlay, which pinpoints “Natural Absorption Zones.” These are areas where greenspaces overlap with high TWI values, demonstrating how natural environments help absorb runoff and reduce flooding.
Why Experience Builder?
I chose ArcGIS Experience Builder for this project because it offers a user-friendly, highly customizable platform for creating dynamic, interactive web maps. Unlike static maps, Experience Builder allows users to explore data in real-time with widgets like toggleable layers, dynamic legends, and interactive pop-ups.
Multi-Dataset Integration: It supports the combination of multiple datasets like TWI, greenspaces, and stormwater infrastructure.
Widgets and Tools: Users can filter data, view attributes, and toggle layers seamlessly.
No Code Required: Although customizable, the platform doesn’t require coding, making it accessible for users of all technical backgrounds.
The Importance of Data Normalization and Standardization
Before diving into the data, it’s essential to understand the critical role that data normalization and standardization played in this project:
Ensuring Comparability: Different datasets often come in various formats and scales. Standardizing these allows for meaningful comparisons across layers, such as correlating TWI values with greenspace coverage.
Improving Accuracy: Normalization adjusts values measured on different scales to a common scale, reducing potential biases and errors in data interpretation (a brief example follows this list).
Facilitating Integration: Harmonized data enables seamless integration within the mapping tool, enhancing user experience and interaction.
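As a point of reference, the most common form of rescaling is min-max normalization, which maps each value into the 0–1 range: normalized value = (value - min) / (max - min). Standardization (z-scores) instead subtracts the mean and divides by the standard deviation. The specific transformations applied to each layer in ArcGIS Pro are not detailed here.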
Data: The Foundation of the Project
The project uses data from the Toronto Open Data Portal and Ontario Data Catalogue, processed in ArcGIS Pro, and published to ArcGIS Online.
Layers
Topographic Wetness Index (TWI):
Derived from DEM
TWI identifies areas prone to water accumulation (a note on the standard formula follows this list).
It was categorized into four levels (low, medium, high, and very high flood risk), with only the highest-risk areas displayed for focus.
Greenspaces:
Includes parks, forests, and other natural areas that act as natural buffers against flooding.
Impervious Surfaces and Pervious Surfaces:
Pervious Surfaces: Represent natural areas like soil, grass, and forests that allow water to infiltrate.
Impervious Surfaces: Represent roads, buildings, and other hard surfaces that contribute to runoff.
Stormwater Infrastructure:
Displays critical infrastructure like catch basins and sewer drainage points, which manage water flow.
TWI-Greenspace Overlay:
Combines high-risk TWI zones with greenspaces to identify “Natural Absorption Zones”, where natural mitigation occurs.
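For context, TWI is conventionally computed from a DEM as TWI = ln(a / tan β), where a is the upslope contributing area per unit contour length and β is the local slope angle; higher values flag locations where water is more likely to accumulate. The exact parameters used to derive this project’s TWI layer are not detailed here.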
Creating the Map: From Data to Visualization
Step 1: Data Preparation in ArcGIS Pro
Imported raw data and clipped layers to Toronto’s boundaries.
Processed TWI using terrain analysis and classified it into intuitive flood risk levels.
Combined pervious and impervious surface data into a single dataset for easy comparison.
Created the TWI-Greenspace Overlay, merging greenspaces and TWI data to show natural flood mitigation zones.
Normalized and standardized all layers.
Step 2: Publishing to ArcGIS Online
Uploaded processed layers as hosted feature layers with customized symbology.
Configured pop-ups to include detailed attributes, such as TWI levels, land cover types, and drainage capacities, as well as a direct Google Maps link for each point feature.
Step 3: Building the Experience in ArcGIS Experience Builder
Imported the web map into Experience Builder to design the user interface.
Added widgets such as Map, Interactive Layer List, Filters, Legend, and Search for user interaction.
Customized layouts and legends to emphasize the relationship between TWI, greenspaces, and surface types.
Interactive Features
The map offers several interactive features to make flood resilience data accessible:
Layer List:
Users can toggle between TWI, pervious surfaces, impervious surfaces, greenspaces, and infrastructure layers.
Dynamic Legend:
Updates automatically to reflect visible layers, helping users interpret the map.
Pop-Ups:
Provide detailed information for each feature, such as:
TWI levels and their implications for flood risk.
Land cover types, distinguishing between pervious and impervious surfaces.
Greenspace types and their flood mitigation potential.
TWI-Greenspace Overlay Layer:
Highlights areas where greenspaces naturally mitigate flooding, called “Natural Absorption Zones.”
Filters:
Enable users to focus on specific attributes, such as high-risk TWI areas or zones dominated by impervious surfaces.
Applications and Insights
The interactive map provides actionable insights for multiple audiences:
Urban Planners:
Identify areas lacking greenspace or dominated by impervious surfaces where flooding risks are highest.
Plan infrastructure improvements to mitigate runoff, such as adding bioswales or permeable pavement.
Developers:
Assess development sites to ensure they align with flood mitigation goals and avoid high-risk areas.
Homeowners:
Evaluate flood risks and identify natural mitigation features in their neighborhoods.
For example, the map can reveal neighborhoods with high TWI and limited greenspace, showing where additional stormwater infrastructure might be necessary.
Limitations and Future Work
Limitations
Incomplete Data: Some areas lack detailed data on stormwater infrastructure or land cover, leading to gaps in analysis.
Dynamic Changes: The static nature of the datasets means the map doesn’t reflect recent urban development or climate events.
Future Work
Add real-time data on precipitation and runoff to make the tool more dynamic.
Expand the analysis to include socioeconomic factors, highlighting vulnerable populations.
Enhance accessibility features to ensure compliance with AODA standards for users with disabilities.
Conclusion: A Tool for Flood Resilience
Flood resilience is a complex issue requiring a nuanced understanding of natural and built environments. This interactive mapping tool simplifies these relationships by visualizing critical datasets like TWI, greenspaces, and pervious versus impervious surfaces.
By highlighting areas of natural flood mitigation and zones at risk, the map provides actionable insights for planners, developers, and homeowners. The TWI-Greenspace Overlay layer, in particular, underscores the importance of greenspaces in managing stormwater and reducing flood risks in Toronto.
I hope this project inspires further exploration of flood resilience strategies and serves as a resource for building a more sustainable and resilient city.
Thank you for reading, and feel free to explore the map experience using the link below!
Project Link:Explore Flood Resilience in Toronto Data Source: Toronto Open Data Portal, Ontario Open Data Catalogue Built Using: ArcGIS Pro, ArcGIS Online, and ArcGIS Experience Builder