Visualizing Population on a 3D-Printed Terrain of Ontario

Xingyu Zeng

Geovisual Project Assignment @RyersonGeo, SA8905, Fall 2022

Introduction

3D visualization is an essential and popular category of geovisualization. As 3D printing technology has matured and become readily available in daily life, a 3D-printable geovisualization project is now relatively easy to implement at the individual level. Compared with on-screen 3D models, a physical 3D-printed model is also much easier to explain to non-professional audiences.

Data and Software

3D model in Materialise Magics
  • Data Source: Open Topography – Global Multi-Resolution Topography (GMRT) Data Synthesis
  • DEM Data to a 3D Surface: AccuTrans 3D – translates 3D geometry between the formats used by many 3D modeling programs.
  • Converting a 3D Surface to a Solid: Materialise Magics – converts the surface into a solid with thickness; the model is then cut along the boundaries of the 5 Transitional Regions of Ontario. Different thicknesses represent the differences in total population between Transitional Regions (e.g. the Central region has a population of 5 million, so the thickness is 10 mm; the West region has a population of 4 million, so the thickness is 8 mm).
  • Slicing & Printing: Slicing is an indispensable step in 3D printing, but because there is a wide variety of printer brands on the market, most with their own manufacturer-developed slicing software, the specific workflow varies. What they all have in common is that after this step the file is transferred to the 3D printer, and a long wait follows.

Visualization

The 5 Transitional Regions are formed by regrouping the 14 Local Health Integration Networks (LHINs); the corresponding population and model height (thickness) for each of the five regions of Ontario are listed below, followed by a short sketch of the thickness scaling:

  • West, a clustering of Erie-St. Clair, South West, Hamilton Niagara Haldimand Brant, and Waterloo Wellington, has a total population of about 4 million, so the thickness is 8 mm.
  • Central, a clustering of Mississauga Halton, Central West, Central, and North Simcoe Muskoka, has a total population of about 5 million, so the thickness is 10 mm.
  • Toronto, consisting of Toronto Central, has a total population of about 1.4 million, so the thickness is 2.8 mm.
  • East, a clustering of Central East, South East, and Champlain, has a total population of about 3.7 million, so the thickness is 7.4 mm.
  • North, a clustering of North West and North East, has a total population of about 1.6 million, so the thickness is 3.2 mm.
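The thickness rule above is simple enough to express in a few lines. A minimal sketch, assuming a linear 2 mm of material per million residents; the region names and populations come from the list above, while the function name is purely illustrative:

```python
# Thickness scaling used for the model: 2 mm of material per 1 million people.
MM_PER_MILLION = 2.0

populations_millions = {
    "West": 4.0,
    "Central": 5.0,
    "Toronto": 1.4,
    "East": 3.7,
    "North": 1.6,
}

def thickness_mm(pop_millions: float) -> float:
    """Model thickness in millimetres for a regional population (in millions)."""
    return pop_millions * MM_PER_MILLION

for region, pop in populations_millions.items():
    print(f"{region}: {thickness_mm(pop):.1f} mm")
# West: 8.0 mm, Central: 10.0 mm, Toronto: 2.8 mm, East: 7.4 mm, North: 3.2 mm
```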
Model photos: different thicknesses, dimension comparison, and the West, Central, Toronto, East, and North region pieces.

Limitations

The most unavoidable limitation of 3D printing is the accuracy of the printer itself. It depends not only on the mechanical performance of the printer, but also on the materials used, the operating environment (temperature, UV intensity) and other external factors. The result is that the printed models do not exactly match the digital design, even though the design is accurate on the computer. On the other hand, a 3D-printed terrain can only represent variables that reduce to a single value per region, such as the total population chosen here.

Visualizing Flow Regulation at the Shand Dam

Hannah Gordon

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2022

Concept

When presented with this geovisualization opportunity I knew I wanted my final deliverable to be interactive and novel. The idea I decided on was a 3D-printed topographic map with interactive elements: by placing wooden dowels in holes above and below the dam in the 3D model, the viewer can see how the dam regulates flow. This concept visualizes flow (cubic meters of water a second) in a way similar to a hydrograph, but brings in 3D elements and is novel and fun as opposed to a traditional chart. Shand Dam on the Grand River was chosen as the site to visualize flow regulation because the Grand River is the largest river system in Southern Ontario, Shand Dam is a Dam of Significance, and there are hydrometric stations that record river discharge above and below the dam for the same time periods (~1970-2022).

About Shand Dam

Dams and reservoirs like the Shand Dam are designed to provide maximum flood storage following peak flows. During high flows (often associated with spring snowmelt) water is held in the reservoir to reduce the amount of flow downstream, lowering flood peaks (Grand River Conservation Authority, 2014). Shand Dam (constructed in 1942 as Grand Valley Dam) is located just south of Belwood Lake (an artificial reservoir) in Southern Ontario, and provides significant flow regulation and low-flow augmentation that prevents flooding south of the dam (Baine, 2009). Shand Dam proved a valuable investment in 1954, when Hurricane Hazel struck and no lives were lost in the Grand River Watershed.

Shand Dam (at the time Grand Valley Dam) in 1942. Photographer: Walker, A., 1942

Today, the dam continues to prevent and lessen the devastation from flooding (especially spring high flows) through the use of four large gates and three ‘low-flow discharge tubes’ (Baine, 2009). Discharge from dams on the Grand River may continue for some time after a storm is over to regain reservoir storage space and prepare for the next storm (Grand River Conservation Authority, 2014). This is illustrated in the hydrographs below, where flow above and below the dam is plotted from one week before to one week after the peak flow, showing how the dam delays and ‘flattens’ the peak discharge.

Data & Process

This project required two data sources – hydrometric data for river discharge and a DEM (digital elevation model) from which the 3D-printed model would be created. Hydrometric data for the two stations (02GA014 and 02GA016) was downloaded from the Government of Canada, Environment and Natural Resources, in .csv (comma-separated value) format. Two hydrometric datasets were downloaded – the annual extreme peak data for both stations and the daily discharge data for both stations in date-data format. The hydrometric data provided river discharge as daily averages in cubic meters a second. The DEM was downloaded from the Government of Canada’s Geospatial Data Extraction Tool. This website makes it simple to download a DEM for a specific region of Canada at a variety of spatial resolutions. I chose to extract my data for the area around Shand Dam that included the hydrometric stations, at a 20 meter resolution (the finest resolution available).

3D Printing the DEM

The first step in creating the interactive 3D model was becoming 3D printer certified at Toronto Metropolitan University’s Digital Media Experience Lab (DME). While I already knew how to 3D print, this step was crucial as it allowed me to access the 3D printers in the DME for free. Becoming certified with the DME was a simple process of watching some videos, taking an online test, then booking an in-person test. Once I had passed I was able to book my prints. The DME has two PRUSA brand printers. These 3D printers require a .gcode file to print models. Initially my data was in a .tiff file, so creating a .gcode file would first involve creating an STL (standard triangle language) file, then creating the .gcode file from the STL. The .gcode file acts as a set of ‘instructions’ for the 3D printer.

Exporting the STL with QGIS

First, the ‘DEMto3D’ plugin had to be installed in QGIS. This plugin creates an STL file from the DEM (.tiff). When exporting the digital elevation model to an STL (standard triangle language) file, a few constraints had to be enforced:

  • The final size of the STL had to be under 25 MB so it could be uploaded and edited in Tinkercad to add holes for the dowels.
  • The final size of the STL file had to be less than ~20 cm by ~20 cm to fit on the 3D printer’s bed.
  • The final .gcode file created from the STL would have to print in under 6 hours to be printed at the DME. This created a size constraint on the model I would be able to 3D print.

It took multiple experiments with the QGIS DEMto3D plugin to create two STL files that would each print in under 6 hours and stay under 25 MB. The DEM was exported as an STL using the plugin and the following settings:

  • The spacing was 0.6 mm. Spacing reflects the amount of detail in the STL, and while a spacing of 0.2 mm would have been more suitable for the project, it would have created too large a file to import into Tinkercad.
  • The final model size is 6 cm by 25 cm, divided into two parts of 6 cm by 12.5 cm.
  • The model height of the STL was set to 400 m, as the lowest elevation to be printed was 401 m. This ensured an unnecessarily thick model would not be created; a thick model was to be avoided as it would waste precious 3D printing time.
  • The base height of the model was 2 mm, meaning an additional 2 mm of model is created below the lowest elevation.
  • The final scale of the model is approximately 1:90,000 (1:89,575), with a vertical exaggeration of 15 times. 

Printing with the DME

The STL files exported from QGIS were opened in PrusaSlicer to create .gcode files. The 3D printer configuration of the DME printers was imported and the infill density was set to 10%. This is the lowest infill density the DME will permit, and it helps lower the print time by printing a lattice on the interior of the print as opposed to solid fill. Both .gcode files would print in just under 6 hours.

Part one of the 3D elevation model printing in the DME, the ‘holes’ seen in the top are the infill grid.

3D printing the files at the DME proved more challenging than initially expected. When the slots were booked on the website I made it clear that the two files were components of a larger project; however, when I arrived to print my two files the 3D printers had two different colors of filament (one of which was a blue-yellow blend). As the two 3D prints would be assembled together, I was not willing to create a model that was half white and half blue/yellow, so the second print unfortunately had to be pushed to the following week. At this point I was glad I had been proactive and booked the slots early, otherwise I would have been forced to assemble an unattractive model. The DME staff were very understanding and found humor in the situation, immediately moving my second print to the following week so the two files could use the same filament color.

Modeling Hydrometric Data with Dowels

To choose the days used to display discharge in the interactive model, the .csv file of annual extreme peak data was opened in Excel and maximum annual discharge was sorted in descending order (a small scripted equivalent is sketched after the list below). The top three discharge events at station 02GA014 (above the dam) that also had data on the same days below the dam were:

  • 1975-04-19 (average daily discharge of 306 cubic meters a second)
  • 1976-03-21 (average daily discharge of 289 cubic meters a second)
  • 2008-12-28 (average daily discharge of 283 cubic meters a second)
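The same ranking can be reproduced outside Excel. A minimal pandas sketch, assuming the annual-extremes export has been saved as a CSV; the file and column names below are placeholders and would need to be adjusted to match the actual Environment Canada export:

```python
import pandas as pd

# Hypothetical file and column names -- adjust to the downloaded export.
peaks = pd.read_csv("02GA014_annual_extreme_peaks.csv")

top_events = (
    peaks.sort_values("max_daily_discharge_cms", ascending=False)  # descending discharge
         .loc[:, ["date", "max_daily_discharge_cms"]]
         .head(3)                                                   # top three peak events
)
print(top_events)
```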

I also chose 2018’s peak discharge event (average daily discharge of 244 cubic meters a second on February 21st) to be included, as it was a significant, more recent flow event (top 6).

Once the four peak flow events had been decided on, their corresponding data in the daily discharge dataset were found, and a scaling factor of 0.05 was applied in Excel so I would know the proportional length to cut the dowels. This meant that every 0.5 cm of dowel would indicate 10 cubic meters a second of discharge.

As the dowels sit within the 3D print, prior to cutting the dowels I had to find out the depth of the holes in the model. The hole for station 02GA014 (above the dam) was 15 mm deep and the holes for station 02GA016 (below the dam) were 75 mm deep. This meant that I would have to add 15 mm or 75 mm to the dowel length to ensure the dowels would accurately reflect discharge when viewed above the model. The dowels were then cut to size, painted to reflect the peak discharge event they correspond to, and labeled with the date the data was from. Three legend dowels were also cut, reflecting discharge of 100, 200, and 300 cubic meters a second. Three pilot holes and then three 3/16″ holes were drilled into the base of the project (two finished 1×4s) for these dowels to sit in.
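For reference, the dowel-length arithmetic described above amounts to a one-line formula. A small sketch using the values from the text; the function and variable names are illustrative:

```python
# 0.05 cm of dowel per m3/s of discharge, plus the depth of the hole the
# dowel sits in (15 mm above the dam at 02GA014, 75 mm below at 02GA016).
SCALE_CM_PER_CMS = 0.05                       # 10 m3/s -> 0.5 cm of visible dowel
HOLE_DEPTH_CM = {"02GA014": 1.5, "02GA016": 7.5}

def dowel_length_cm(discharge_cms: float, station: str) -> float:
    """Total cut length: visible length proportional to discharge plus hidden length."""
    return discharge_cms * SCALE_CM_PER_CMS + HOLE_DEPTH_CM[station]

# 1975-04-19 peak above the dam: 306 m3/s -> 15.3 cm visible, 16.8 cm cut length.
print(round(dowel_length_cm(306, "02GA014"), 1))
```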

Assembling the Model

Once all the parts were ready the model could be assembled. The necessary information about the project and the legend was printed and carefully transferred to the wood with acetone. Then the base of the 3D print was sanded aggressively to provide better adhesion, glued onto the wood, and clamped in place. I had to be careful with this, as clamps that were too tight would crack the print, while clamps that were too loose would let the print shift as the glue dried.

Final model showing 2018 peak flow
Final model showing 1976 peak flow
Final model showing 1975 peak flow
Final model showing 2008 peak flow

Applications

The finished interactive model allows the visualization of flow regulation from the Shand Dam for different peak flow events, and highlights the value of this particular dam. Broadly, this project was a way to visualize hydrographs, showing the differences in discharge over a spatial and temporal scale that resulted from the dam. The top dowel shows the flow above the dam for the peak flow event, and the three dowels below the dam show the flow below the dam for the day of the peak discharge, one day after, and two days after, to show the flow regulation over a period of days and illustrate the delayed and moderated hydrograph peak. The legend dowels are easily removable so they can be lined up with the dowels in the 3D print to get a better idea of how much flow there was on a given day at a given place. The project idea I used in creating this model can easily be modified for other dams (provided there is suitable hydrometric data). Beyond visualizing flow regulation, the same idea and process could be used to create models that show discharge at different stations over a watershed, or over a continuous period of time – such as monthly averages over a year. These models could have a variety of uses, such as showing how river discharge changed in response to urbanization, or how climate change is causing more significant spring peak flows from snowmelt.

References

Baine, J. (2009). Shand Dam a First For Canada. Grand Actions: The Grand Strategy Newsletter. Vol. 14, Issue 2. https://www.grandriver.ca/en/learn-get-involved/resources/Documents/Grand_Actions/Publications_GA_2009_2_MarApr.pdf

Grand River Conservation Authority (2014). Grand River Watershed Water Management Plan. Prepared by the Project Team, Water Management Plan., Cambridge, ON. 137p. + appendices. Retrieved from https://www.grandriver.ca/en/our-watershed/resources/Documents/WMP/Water_WMP_Plan_Complete.pdf

Walker, A. (April 18th, 1942). The dam is 72 feet high, 300 feet wide at the base, and more than a third of a mile long [photograph]. Toronto Star Photograph Archive, Toronto Public Library Digital Archives. Retrieved from https://digitalarchive.tpl.ca/objects/228722/the-dam-is-72-feet-high-300-feet-wide-at-the-base-and-more

Drone Package Deployment Tutorial / Animation

Anugraha Udas

SA8905 – Cartography & Visualization
@RyersonGeo

Introduction

Automation is becoming normalized in society as corporations notice its benefits and use artificial intelligence to streamline everyday processes. Previously this may have meant something as basic as organizing customer and product information; however, in the last decade the automation of delivery and transportation has grown exponentially, and a utopian future of drone deliveries may soon become a reality. The purpose of this visualization project is to convey what automated drone deliveries may look like in a small city and what types of obstacles they may face when deployed. A step-by-step process is also provided so that users can learn how to create a 3D visualization of a city, import 3D objects into ArcGIS Pro, convert point data into 3D visualizations, and finally animate a drone flying through a city. This is extremely useful, as 3D visualization provides a different perspective that allows GIS users to perceive study areas from ground level instead of the conventional bird’s-eye view.

Area of Study

The focus area for this pilot study is Niagara Falls in Ontario, Canada. The city of Niagara Falls was chosen because it is a smaller city that nonetheless contains buildings over 120 meters in height. These building heights provide perfect obstructions for simulating drone flights, as Transport Canada has set a maximum altitude limit of 120 meters for safety reasons. Niagara Falls also contains a good distribution of Canada Post locations that will be used as potential drone deployment centres for package deliveries. Additionally, another hypothetical scenario, where all drones deploy from one large building, will be visualized. In this instance, London’s Gherkin will be used as a potential drone hive (hypothetically owned by Amazon) that drones can deploy from (see https://youtu.be/mzhvR4wm__M). Because this is a pilot study, the method could be further expanded in the future to larger, denser areas; however, a computer with over 16 GB of RAM and a minimum of 8 GB of video memory is highly recommended for video rendering purposes. In the video below, we can see the city of Niagara Falls rendered in ArcPro with the Gherkin represented as a blue cone shape; similarly, the Canada Post buildings are represented with a dark blue colour.

City of Niagara Falls (Rendered in ArcPro)

Data   

The data for this project was derived from numerous sources, as a variety of file types were required. Regarding data directly relating to the city of Niagara Falls – cellular towers, street lights, roads, property parcel lines, building footprints and the Niagara Falls municipal boundary shapefiles were all obtained from Niagara Open Data and imported into ArcPro. Similarly, the Canada Post locations shapefile was obtained from Scholars GeoPortal. In terms of the 3D objects, London’s Gherkin was obtained from TurboSquid and the helipad from CGTrader, both in the form of DAE files. The Gherkin was chosen because it serves as a hypothetical hive building that could be employed in cities by corporations such as Amazon. The helipad 3D model is distributed in numerous neighbourhoods around Niagara Falls as drop-off zones for the drones to deliver packages. In this hypothetical scenario, people would be alerted on their phones when their package was arriving and would visit the loading zone to pick it up. It should be noted that all files were copyright-free and allowed for personal use.

Process (Step by step)

Importing Files

Figure 1. TurboSquid 3D DAE Download

First, access the Niagara Open Data website and download all the aforementioned files using the search datasets box. Ensure that the files are downloaded in SHP format for recognition in ArcPro (names are listed at the end of this blog). Next, go to TurboSquid, search for the Gherkin, and make sure that the price drop-down has a minimum and maximum value of $0 (Figure 1). Additionally, search for ‘Simple helipad free 3D model’ on CGTrader. Ensure that these files are downloaded in DAE format for recognition in ArcPro. Once all files are downloaded, open ArcPro and import the shapefiles (via Add Data) to first conduct some basic analysis.

Basic GIS Analysis

First, double-click on the symbology box for each imported layer, and a symbology dialog should open on the right-hand side of the screen. Click on the symbol box and assign each layer a distinct yet subtle colour. Once this is finished, select the Canada Post Locations layer, go to the Analysis tab, and select the Buffer icon to create a buffer around the Canada Post locations. The input features are the Canada Post Locations; provide a file location and name for the output feature class, enter a distance of 5 kilometres, and dissolve the buffers (Figure 2). A distance of 5 km was chosen because regular consumer drones have a battery that lasts for roughly ten kilometres of flight (or 30 minutes of flight time), so travelling to the parcel destination and back would use up this allotted flight time.
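For readers who prefer scripting, the same buffer can be run from the ArcGIS Pro Python window. This is only a sketch; the workspace path and layer names are placeholders, not the exact names used in this project:

```python
import arcpy

# Placeholder workspace and layer names.
arcpy.env.workspace = r"C:\Projects\NiagaraDrones\NiagaraDrones.gdb"

# 5 km dissolved buffer around the Canada Post locations, mirroring the
# dialog settings described above (Figure 2).
arcpy.analysis.Buffer(
    in_features="CanadaPostLocations",
    out_feature_class="CanadaPost_5km_Buffer",
    buffer_distance_or_field="5 Kilometers",
    dissolve_option="ALL",
)
```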

Figure 2. Buffer option on ArcPro
Figure 3. Extent of Drone Deployment

Once this buffer is created, the symbology is adjusted to a gradient fill within the layer tab of the symbol. This shows the groupings of clusters and visualizes increasing distance from the Canada Post locations. In this project we are assuming that the Canada Post locations are where the drones deploy from, so this buffer shows the extent the drones can reach from each location (Figure 3). As we can see, most residential areas are covered by the drone package service. Next, we are going to give the Canada Post buildings a distinct colour from the other buildings. Go to the Map tab and click ‘Select By Location’. In this dialog box, an intersection relationship is created where the input features are the buildings and the selecting features are the Canada Post location point data. Hit OK, create a new layer from the selection, and name it Canada Post Buildings. Assign a distinct colour to separate the Canada Post buildings from the rest of the buildings.
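The Select By Location step can likewise be scripted; a minimal sketch with placeholder layer names:

```python
import arcpy

# Select building footprints that intersect the Canada Post points, then save
# the selection as its own feature class (the new "Canada Post Buildings" layer).
arcpy.management.SelectLayerByLocation(
    in_layer="BuildingFootprints",
    overlap_type="INTERSECT",
    select_features="CanadaPostLocations",
)
arcpy.management.CopyFeatures("BuildingFootprints", "CanadaPostBuildings")
```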

3D Visualization – Buildings

Now we are going to extrude our buildings by their height. Click on the View tab in ArcPro and click on the Convert to Local Scene button. This process essentially creates a 3D visual of your current map. You will notice that all of the layers are under the 2D view; once we adjust the settings of the layers, we will drag them to the 3D layers section. To extrude the buildings, click on the layer and the Appearance tab should come up under Feature Layer. Click on the extrusion Type drop-down and select ‘Max Height’. Thereafter, select the field and choose ‘SHAPE_leng’, as this field holds the vertical height of the buildings, and select feet as the unit. Give ArcPro some time and it should automatically move your buildings layer from the 2D to the 3D layers section. Perform this same process with the Canada Post Buildings layer. (A scripted alternative is sketched below.)
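The extrusion above is an interactive layer-appearance setting rather than a geoprocessing tool, so it is not scripted the same way. One hedged alternative (not the workflow used in this post, and assuming a 3D Analyst licence and that the SHAPE_leng field really does hold building height) is Feature To 3D By Attribute, which bakes the height field into true 3D features:

```python
import arcpy

arcpy.CheckOutExtension("3D")  # requires the 3D Analyst extension

# Alternative route: create z-enabled building features from the height field
# instead of extruding the 2D footprints interactively.
arcpy.ddd.FeatureTo3DByAttribute(
    in_features="BuildingFootprints",
    out_feature_class="BuildingFootprints3D",
    height_field="SHAPE_leng",   # assumption: this field stores height in feet
)
```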

Figure 4. Extruded Buildings

Now you should have a 3D view of the city of Niagara Falls. Feel free to move around with the small circle on the bottom left of the display page (Figure 4). You can even click the up arrow to show full control and move around the city. Furthermore, you can also add shadows to the buildings by right-clicking the map’s 3D layers tab and selecting ‘Display shadows in 3D’ under Illumination.

Converting Point Data into 3D Objects

In this step, we are going to convert our point data into 3D objects to visualize obstructions such as lamp posts and cell phone towers. First, click the Street Lights symbol under 2D layers and the symbology pane should open on the right side of ArcPro. Click the current symbol box beside Symbol and, under the layer’s icon, change the type from ‘Shape Marker’ to ‘3D Model Marker’ (Figure 5).

Figure 5. 3D Shape Marker

Next, click style, search for ‘street-light’, and choose the overhanging streetlight. Drag the Street Light layer from the 2D layer to the 3D layer. Finally, right-click on the layer and navigate to display under properties. Enable ‘Display 3D symbols in real-world units’ and now the streetlamp point data should be replaced by 3D overhanging streetlights. Repeat this same process for the cellphone tower locations but use a different model.

Importing 3D objects & Texturing

Figure 6. Create Features Dialog

Finally, we are going to import the 3D DAE helipad and tower files, place them in our local scene, and apply textures from JPG files. First, go to the View tab and click on Catalog Pane; a catalog should show up on the right side of the viewer. Expand the Databases folder and your saved project should show up as a GDB. Right-click on the GDB and create a new feature class. Name it ‘Amazon Tower’, change the type from polygon to 3D object, and click Finish. You should notice that under Drawing Order there is a new 3D layer with the ‘Amazon Tower’ file name. Select the layer, go to the Edit tab, and click Create to open the ‘Create Features’ dialog on the right side of the display panel (Figure 6). Click on the Model File tab, click the blue arrow, and finally click the + button. Navigate to your DAE file location and select it; your model should now show up in the view pane, allowing you to place it on a chosen spot. For our purposes, we’ll reduce the height to 30 feet and adjust the Z position to -40 to get rid of the square base under the tower. Click on the location where you want to place the tower, close the Create Features box, apply the multipatch tool, and clear the selection. Finally, to texture the tower, select the tower 3D object, click on the Edit tab, and this time hit Modify. Under the new Modify Features pane, select multipatch features under Reshape. Now find a glass building texture JPG file online that you like. Click Load Texture, choose the file, check the ‘Apply to all’ box and click Apply. The Amazon tower should now have the texture applied to it (Figure 7).

Figure 7. Textured Amazon Building

Animation

Finally, now that all of the obstructions are created, we are going to animate a drone flying through the city. Navigate to the Animation tab on the top pane and click on Timeline. This is where individual keyframes will be combined to create a drone package delivery. Navigate your view so that it is resting on a Canada Post building and you have your desired view. Click on ‘Create first key frame’ to create your first view, then click up on the full control view so that the drone rises in elevation, and click + to designate this as a new keyframe. Ensure that the height does not exceed 120 meters, as this is the maximum altitude for drones set by Transport Canada (bottom left box). Next, click and drag the hand on the viewer to move forward and back, and click + for a new keyframe. Repeat this process and navigate the proposed drone to a helipad (Figure 8). Finally, press the ‘Move down’ button to land the drone on the helipad and create a new keyframe. Congratulations, you have created your first animation in ArcPro!

Figure 8. Animation in ArcPro

Discussion

Through the process of extruding buildings, maintaining a height of less than 120 meters, adding in proposed landing spaces, and turning point data into real-world 3D objects, we can visualize many obstructions that drones may face if drone delivery were implemented in the city of Niagara Falls. Although this is a basic example, creating an animation of a drone flying through certain neighbourhoods will allow analysts to determine which areas are problematic for autonomous flying and which paths would provide a safer option. Regarding the animation portion, two possible scenarios have been created. The first is drone deployment from the aforementioned Canada Post locations. This scenario envisions Niagara Falls as having drone package deployment set out directly from those locations. This option would cover a larger area of Niagara Falls, as seen through the buffer; however, funding multiple deployment locations may be difficult. Also, people may not want to live close to a Canada Post location due to the noise pollution that comes from drones.

Scenario 1. Canada Post Delivery

The second scenario is to utilize a central building that drones can pick up packages from. This is exemplified by the hive delivery building seen below. In sharp contrast to option 1, a central location may not be able to reach rural areas of Niagara Falls due to the distance limitations of current drones. However, two major benefits are that all drone deliveries would come from a central location and that less noise pollution would occur as a result.

Scenario 2. Single HIVE Building

Conclusions & Future Research

Overall, it is evident that drone package deliveries are entirely possible within the city of Niagara Falls. Through 3D visualizations in ArcPro, we are able to place simple obstructions such as conventional street lights and cell phone towers along the roads. Through this analysis and animation it is evident that they may not pose an issue to package delivery drones when communal landing zones are incorporated. For future studies, this research could be furthered by incorporating more obstructions into the map, such as electricity towers, wiring, and trees. Likewise, future studies could also incorporate the fundamentals of drone weight capacity in relation to how far drones can travel and the overall speed of deliveries. In doing so, the feasibility of drone package deployment can be better assessed and hopefully implemented in future smart cities.

References

https://www.dji.com/ca/phantom-4/info

https://youtu.be/mzhvR4wm__M

3D Files

Gherkin Model DAE File – https://www.turbosquid.com/3d-models/free-30-st-mary-axe-3d-model/991165

Simple Helipad DAE File – https://cgtrader.com/items/212615/download-page

Shape Files

Postal Outlet Points (2020) – Scholar’s GeoPortal

Niagara Falls Building Footprints (2010) – Niagara Open Data

Road Segments (2021) – Niagara Open Data

Niagara Falls Cellular Tower Locations (2021) – Niagara Open Data

Street Lighting Pilot Project (2021) – Niagara Open Data

Niagara Falls Municipal Boundary (2021) – Niagara Open Data

Niagara Falls Property Parcels (2021) – Niagara Open Data

3D Approach to Visualizing Crime on Campus: Laser-Cut Acrylic Hexbins

By: Lindi Jahiu

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2021

INTRODUCTION

Crime on campus has long been at the forefront of discussion regarding the safety of community members occupying the space. Despite efforts to mitigate the issue (increased surveillance cameras, increased hiring of security personnel, etc.), it continues to persist on X University’s campus. In an effort to quantify this phenomenon, the university’s website collates each security incident that takes place on campus, details its location, time (reported and occurrence), and crime type, and makes it readily available for the public to view through a web browser or email notifications. This effort can be seen as a way for the university to quickly notify students of potential harm, but also as a means of understanding where incidents may be clustering. The latter is explored in the following geovisualization project, which visualizes three years’ worth of security incident data through the creation of a 3D laser-cut acrylic hexbin model. Hexbinning refers to the process of aggregating point data into predefined hexagons that each represent a given area; in this case, the vertex-to-vertex measurement is 200 metres. By creating a 3D model, it is hoped that the tangibility, interchangeability, and gamified aspects of the project will effectively re-conceptualize the phenomenon for the user and, in turn, stress the importance of the issue at hand.

DATA AND METHODS

The data collection and methodology can be divided into two main parts: 2D mapping and 3D modelling. For the 2D version, security incidents from July 2nd, 2018 to October 15th, 2021 were manually scraped from the university’s website (https://www.ryerson.ca/community-safety-security/security-incidents/list-of-security-incidents/) and parsed into the columns necessary for geocoding (see Figure 1). Once all the data was placed into the Excel file, it was converted to a .csv file and imported into the ArcGIS Pro environment. Once there, one simply right-clicks on the .csv, clicks “Geocode Table”, and follows the prompts for inputting the data necessary for the process (see inputs in Figure 2). Once run, the geocoding process showed a 100% match, meaning there was no need for any alterations, and produced a layer displaying the spatial distribution of every security incident (n = 455) (see Figure 3). To contextualize these points, a base map of the streets in and around the campus was extracted from the “Road Network File 2016 Census” from Scholars GeoPortal using the “Split Line Features” tool (see output in Figure 3).

Figure 1. Snippet of spreadsheet containing location, postal code, city, incident date, time of incident, and crime type, for each of the security incidents.

Figure 2. Inputs for the Geocoding table, which corresponds directly to the values seen in Figure 1.

Figure 3. Base map of streets in-and-around X University’s campus. Note that the geo-coded security incidents were not exported to .SVG – only visible here for demonstration purposes.

To aggregate these points into hexbins, a series of steps was followed (a scripted sketch of the two geoprocessing steps is included below). First, a hexagonal tessellation layer was produced using the “Generate Tessellation” tool, with the security incidents .shp serving as the extent (see snippet of inputs in Figure 4 and output in Figure 5). Second, the “Summarize Within” tool was used to count the number of security incidents that fell within each polygon (see snippet of inputs in Figure 6 and output in Figure 7). Lastly, the classification method applied to the symbology (i.e. the hexbins) was “Natural Breaks”, with a total of 5 classes (see Figure 7). Now that the two necessary layers had been created, namely the campus base map (see Figure 3 – base map along with scale bar and north arrow) and the tessellation layer (see Figure 5 – hexagons only), they were both exported as separate images in .SVG format – a format compatible with the laser cutter. The classified hexbin layer simply served as a reference for the 3D model and was not exported to .SVG (see Figure 7).
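For reference, the two geoprocessing steps can also be run from the ArcGIS Pro Python window. This is only a sketch with placeholder names; the hexagon area below is the computed area of a hexagon measuring 200 m vertex-to-vertex (about 25,980 m²), which is an assumption about how the tool’s Size parameter was filled in:

```python
import arcpy

incidents = "SecurityIncidents_Geocoded"          # geocoded points (Figure 3)
extent = arcpy.Describe(incidents).extent         # use the points' extent

# Hexagonal tessellation covering the incident extent.
arcpy.management.GenerateTessellation(
    "CampusHexbins", extent, "HEXAGON", "25981 SquareMeters"
)

# Count the incidents falling inside each hexagon.
arcpy.analysis.SummarizeWithin(
    in_polygons="CampusHexbins",
    in_sum_features=incidents,
    out_feature_class="CampusHexbins_Counts",
    keep_all_polygons="ONLY_INTERSECTING",
)
```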

Figure 4. Snippet of input when using the “Generate Tessellation” geoprocessing tool. Note that these were not the exact inputs, spatial reference left blank merely to allow the viewer to see what options were available.

Figure 5. Snippet of output when using the “Generate Tessellation” geoprocessing tool. Note that the geo-coded security incidents were not exported to .SVG – only visible here for demonstration purposes.

Figure 6. Snippet of input when using the “Summarize Within” geoprocessing tool.

Figure 7. Snippet of output when using the “Summarize Within” geoprocessing tool. Note that this image was not exported to .SVG but merely serves as a guide for the physical model.

When the project idea was first conceived, it was paramount that I familiarize myself with the resources available and necessary for this project. To do so, I applied for membership to the Library’s Collaboratory research space for graduate students and faculty members (https://library.ryerson.ca/collab/ – many thanks to them for making this such a pleasurable experience). Once accepted, I was invited to an orientation, followed by two virtual consultations with the Research Technology Officer, Dr. Jimmy Tran. Once we fleshed out the idea through discussion, I was invited to the Collaboratory to take part in mediated appointments. Once in the space, the aforementioned .SVG files were opened in an image editing program where various aspects of the .SVG were segmented into either red, green, or blue so the laser cutter could distinguish different features. Furthermore, the tessellation layer was altered to include a 5 mm (diameter) circle in the centre of each hexagon to allow for the eventual insertion of magnets. The base map was etched onto an 11×8.5 inch sheet of clear acrylic (3 mm thick), whereas the hexagons were cut out as individual pieces at a size of 1.83 in vertex-to-vertex. On top of this, an 11×8.5 inch sheet of black acrylic was cut to serve as the background for the clear base map (allowing for increased contrast to accentuate finer details). Once in hand, the hexagons were fitted with 5×3 mm magnets (into the aforementioned circles) to allow for seamless stacking between pieces. Stacks of hexagons (1 to 5) represent the five classes in the 2D map, but with height now replacing the graduated colour scheme (see Figure 7 and Figure 9 – although the varying translucency of the stacked clear hexagons is also quite evident and communicates the classes as well). The completed 3D model is captured in Figure 8, along with the legend in Figure 9, which was printed out and is always to be presented in tandem with the model. The legend was not etched into the base map so that the base map could be reused for other projects that do not use the same classification scheme, and in case I changed my mind about a detail at some point.

Figure 8. 3D Laser-Cut Acrylic Hexbin Model depicting three-years worth of security incidents on campus. Multiple angles provided.

Figure 9. Legend which corresponds the physical model displayed in Figure 8. Physical version has been created as well and will be shown in presentation.

FUTURE RESEARCH DIRECTIONS AND LIMITATIONS

The geovisualization project at hand serves as a foundation for a multitude of future research avenues, such as: exploring other 3D modalities to represent human geography phenomena; serving as a learning tool for those not privy to cartography; and acting as a tool to collect further data regarding perceived and experienced areas of crime. All of these expand on the tangibility, interchangeability, and gamification emphasized in the project at hand. On the latter point, imagine a situation where a booth is set up on campus and one were to simply ask, “using these hexagon pieces, tell us where you feel the most security incidents on campus would occur.” The answers provided would be invaluable, as they would yield great insight into which areas of campus community members feel are most unsafe and what factors may be contributing to that feeling (e.g. built environment features such as poor lighting, lack of cameras, narrowness, etc.), resulting in a synthesis between the qualitative and the quantitative. Or, on the point of interchangeability, if someone wanted to explore the distribution of trees on campus, they could laser-cut their own hexbins out of green acrylic at their own desired size (e.g. 100 m) and simply reuse the same base map.

Despite the fairly robust nature of the project, some limitations became apparent, more specifically: issues with the way a few security incidents’ data were collected and displayed on the university’s website (e.g. non-existent street names, non-existent intersections, missing street suffixes, etc.); an issue where exporting a layer to .SVG resulted in repeated, overlapping copies of the same image that had to be deleted before laser cutting; and lastly, future iterations may consider exaggerating finer features (e.g. street names) to make the physical model even more legible.

Creating a 3D Holographic Map Display for Real-World Driving and Flight Navigation

By: Dylan Oldfield

Geovis Class Project @RyersonGeo, SA8905, Fall 2018

Introduction:

The inspiration for this project came from the visual utility and futuristic look of the holographic maps in James Cameron’s 2009 movie Avatar, in which holograms appear in several unique scenarios, such as within aerial vehicles, on conference tables, and on air traffic control desks. From this came the concept to create, visualize, and present a present-day possibility for this technology: a form of hologram that shows geographically where the user is while operating a vehicle. For instance, a hologram in a car could display the driver’s navigation through the city, guiding them to their destination. Imagine a real-time 3D hologram replacing the 2D screen of Google Maps or any dashboard-mounted navigation in a car. The application extends to aerial vehicles as well: imagine planes landing at airports close to urban areas where fog or other weather conditions make safe landing and take-off difficult. With a 3D hologram, visualizing where to go and how to navigate the difficult weather would be significantly easier and safer. For these two use cases, two scenarios (maps) were recorded as videos and made into 3D holograms to give a proof of concept for the use of the technology in cars and planes.

Data:

The data used to make this project possible was taken from the City of Toronto Open Data Portal and consisted of the 3D massing and street .shp files. It is important to note that for the video to work and be seen properly, the background within the video and in the real world had to be as dark as possible, otherwise the video would not appear fully. To create this effect, features were added in ArcGIS Pro to ensure that the background, base, and ceiling of the 3D scene were black. These features were a simple polygon for the ceiling, given a different base height, and ‘walls’ created from a line surrounding the scene and extruded up to the ceiling. The base of the scene was an imported night-time base map.

Methodology:

  1. Map / Scene Creation Within ArcGIS-Pro

Within ArcGIS Pro, the function to visualize 3D features was used to extrude the aforementioned .shp files for the scene. All features were extruded in 3D from the base height, with meters as the unit of measurement. The buildings were extruded to their real-world dimensions and given a fluorescent blue colour scheme to provide contrast in the video. The roads were extruded in such a way as to give the impression that sidewalks existed. The first part of this was buffering the roads by 6 meters, dissolving the buffer to make it seamless, and extruding it from the base to create the roads. The inverse polygon of the newly created roads was then created and extruded slightly higher than the roads (a scripted sketch of this step follows below). The roads were then given differing shades of grey so as to adhere to the darkness of the scene while still providing contrast with each other. This effect is seen in the picture below.
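A hedged sketch of that road/sidewalk step in arcpy, with placeholder layer names; the ‘inverse polygon’ is produced here with the Erase tool (Advanced licence), which is one possible equivalent rather than necessarily the exact tool used:

```python
import arcpy

# 6 m dissolved buffer around the street centrelines -> the road surfaces.
arcpy.analysis.Buffer("Streets", "Roads_6m", "6 Meters", dissolve_option="ALL")

# Everything inside the scene footprint that is NOT road -> the "sidewalk" blocks.
# "SceneExtent" is a placeholder polygon covering the whole scene.
arcpy.analysis.Erase("SceneExtent", "Roads_6m", "Sidewalk_Blocks")

# Roads_6m and Sidewalk_Blocks are then extruded to slightly different heights
# in the local scene so the blocks read as raised sidewalks.
```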

 

  2. Animation Videos Creation and Export

Following the creation of the scene, the animations, or videos, of “driving” through the city and “flying” into Billy Bishop Airport were created. Within ArcGIS Pro, the animation function creates videos through the consecutive placement of key frames, allowing a seamless fly-through of any 3D scene. The key frames are essentially checkpoints in the video, and the program fills the time and space between each pair of frames by travelling between them. The key frames are the boxes at the bottom of the image below.

Additionally, as seen in the image above, ArcGIS Pro makes several exporting options available to the user. The video can be exported at differing qualities to YouTube, Vimeo, Twitter, MP4, and as a GIF, among other options. The two videos created for this project were exported at 1080p and 60 frames per second in MP4 format. Due to the large size of the videos with these options, the exporting process took over 2 hours for each video.

  3. PowerPoint Video Transposition and Formatting

The hologram works by refracting the videos through each of the lenses into the centre, creating the floating effect of an image. For this effect to work, the video exported from ArcGIS Pro was inserted into PowerPoint and transposed 3 times into the format seen in the image below. Once the placements were equal and exact, the background, as mentioned previously, was turned black. The videos were set to play at the same time and the slide was then exported a second time to MP4 as the final product.

  4. Hologram Lenses Template Creation

The hologram lenses were created out of 4 clear CD cases. The templates for the lenses needed to be physically compatible with the screen used to display the video, in this case a 5th-generation iPad. After the template was defined, the lenses were cut out of the 4 CD cases with a box cutter and lightly sanded on all cut edges, both so they would not cut anyone and so that the surfaces in contact with the epoxy would bond without issue. Epoxy resin was then used to glue the 4 lenses into their final shape. While the epoxy had a roughly 10-minute setting time, it was left for 3 hours to ensure it was fully set. After this the lens assembly was complete and ready for use. The final lens and the iPad used for the display are seen in the image below.

Finally, here is a screen shot of the City of Toronto “Driving Navigation” video:

Using LiDAR to create a 3D Basemap

By: Jessie Smith
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

INTRO

My geovisualization project focused on the use of LiDAR to create a 3D basemap. LiDAR, which stands for Light Detection and Ranging, is a form of active remote sensing. Pulses of light are sent from a laser towards the ground, and the time it takes for each pulse to return is measured, which determines the distance between the surface the light touched and the laser that sent it. By measuring all of the light returns, millions of x,y,z points are created, allowing a 3D representation of the ground, whether that is just surface topography or elements such as vegetation and buildings. The LiDAR points can then be used to create DEMs or TINs, with imagery draped over them to create a 3D representation. The DEMs can also be used in ArcGIS Pro to create 3D buildings and vegetation, as seen in this project.
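The range calculation behind each point is just the two-way travel time of the pulse; a tiny sketch of the arithmetic:

```python
# The pulse travels out and back, so range is half the travel time times the
# speed of light.
C = 299_792_458  # speed of light, m/s

def lidar_range_m(two_way_time_s: float) -> float:
    return C * two_way_time_s / 2

print(round(lidar_range_m(6.67e-6)))  # ~1000 m for a 6.67 microsecond return
```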

ArcGIS SOLUTIONS

ArcGIS Solutions are a series of resources made available by Esri, marketed for industry and government use. I used the Local Government Solutions, which offer a series of focused maps and applications to help local governments maximize their GIS efficiency, improve their workflows, and enhance services to the public. I looked specifically at the Local Government 3D Basemaps solution. This solution includes an ArcGIS Pro package with various files and an add-in to deploy the solution. Once the add-in is deployed, a series of tasks is made available that includes built-in tools and information on how to use them. A sample dataset is also included that can be used to run all of the tasks as a way to explore the process with appropriate working data.

IMPLEMENTATION

The tasks that are provided have three different levels: basic, schematic and realistic. Each task requires only two data sources: a LAS (LiDAR) dataset and building footprints. Depending on the task chosen, a different degree of detail is produced in the basemap. For my project I used a mix of the realistic and schematic tasks. Each task begins with the same steps: classifying the LiDAR by returns, creating a DTM and a DSM, and assigning building heights and elevations to the building footprints attribute table (a short scripted sketch of the DTM/DSM step follows below). From there the tasks diverge. The schematic task extracts roof forms to determine the shape of the roofs, such as a gabled type, whereas in the basic task the roofs remain flat and uniform. The DEMs are then used in conjunction with the building footprints and the rooftop types to 3D-enable the buildings. The realistic scheme created vegetation point data with z-values using the DEMs. Finally, a map preset was added to assign a realistic 3D tree shape that corresponds with the tree heights.
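As a rough illustration of what the DTM/DSM step does under the hood, here is a hedged arcpy sketch with placeholder paths; the solution’s own tasks handle this for you, so this is an equivalent, not the tasks’ exact code:

```python
import arcpy

las = r"C:\Data\city.lasd"   # placeholder LAS dataset

# Ground returns only (class code 2) -> DTM (bare earth).
arcpy.management.MakeLasDatasetLayer(las, "ground_lyr", class_code=[2])
arcpy.conversion.LasDatasetToRaster(
    "ground_lyr", r"C:\Data\dtm.tif", value_field="ELEVATION",
    interpolation_type="BINNING AVERAGE LINEAR",
    sampling_type="CELLSIZE", sampling_value=1,
)

# All returns (first returns dominate the upper surface) -> DSM.
arcpy.conversion.LasDatasetToRaster(
    las, r"C:\Data\dsm.tif", value_field="ELEVATION",
    interpolation_type="BINNING MAXIMUM LINEAR",
    sampling_type="CELLSIZE", sampling_value=1,
)
```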

DEMs Created

DSM

DTM

Basic Scene Example

Realistic Scene

 

ArcGIS ONLINE

The newly created 3D basemap, which can be viewed and used in ArcGIS Pro, can also be used on AGOL with the newly available Web Scene. The 3D data cannot be added to ArcGIS Online directly the way 2D data would be. Instead, a package for each scene was created and then published directly to ArcGIS Online. The next step is to open this package on AGOL and create a hosted layer. This was done for both the 3D trees and the buildings, and these hosted layers were then added to a Web Scene. In the scene viewer, colours and basemaps can be edited, or additional contextual layers can be added. As an additional step, the scene was then used to create a web mapping application using the Story Map template. The Story Map can then be viewed on ArcGIS Online and the data can be rotated and explored.

Scene Viewer

Story Map

You can find my story map here:
http://ryerson.maps.arcgis.com/apps/Styler/index.html?appid=a3bb0e27688b4769a6629644ea817d94

APPLICATIONS

This type of project would be very doable for many organizations, especially local governments. All that is needed is LiDAR data and building footprints. This type of 3D map is often outsourced to planners or consulting companies when a 3D model is needed; now government GIS employees could create a 3D model themselves. The tasks can either be followed exactly with your own data, or the general workflow can be recreated. The tasks are mostly clear about the required steps and processes, but more reasoning could be provided when setting values or parameters specific to the data being used inside each tool. This would make it easier to create a better model with less trial and error.

 

 

 

Surfer 15 Whistler-Blackcomb Geovisualization Using Data Retrieved From Google Earth

By :Ryan Wilkinson

Geovisualization Project Assignment, @RyersonGeo, SA8905, Fall 2018

 

In this project a 3D surface map of Whistler-Blackcomb in British Columbia was created using XYZ data retrieved from Google Earth and the geovisualization software program Surfer 15. Surfer is an excellent geovisualization program capable of creating 2D contour maps and 3D surface maps from XYZ and DEM data. The following method works for any terrain location in the world that can be viewed in Google Earth and is certainly not limited to my chosen location.

Collection of Data from Google Earth:

  

The path tool in Google Earth was used to drop points on the Whistler-Blackcomb area; each red square represents a point that has corresponding latitude, longitude, and elevation values.

The image above shows the trace of the path that was drawn to collect the XYZ data from the Whistler area needed to create an accurate 3D surface map in Surfer.

Once the desired path was drawn it was saved under “My Places” in Google Earth as a .kml file.

Data Conversion:

The .kml file was then loaded into TCX Converter. Altitude values are commonly missing at this stage, so TCX Converter’s “update altitude” tool can be used to add them. Once the altitudes were successfully calculated, TCX Converter was used to convert the file from .KML to .CSV in preparation for visualization in Surfer (a scripted alternative is sketched below).
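If TCX Converter is not available, the conversion from the Google Earth path to an XYZ table can also be done with a short script. A minimal sketch, assuming the KML stores the path as <coordinates> blocks of lon,lat,alt triplets (if Google Earth saved zero altitudes, they would still need to be filled in from another source):

```python
import csv
import re

# Extract lon,lat,alt triplets from a Google Earth path KML and write an XYZ CSV.
with open("whistler_path.kml", encoding="utf-8") as f:
    kml = f.read()

points = []
for block in re.findall(r"<coordinates>(.*?)</coordinates>", kml, re.S):
    for token in block.split():
        lon, lat, alt = (float(v) for v in token.split(","))
        points.append((lon, lat, alt))

with open("whistler_xyz.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["longitude", "latitude", "elevation"])
    writer.writerows(points)
```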

 

Grid File and 3D Surface Map Creation:

The .CSV file was then loaded into Surfer’s Grid Data tool, which is capable of creating grid files (.grd) from XYZ and DEM data. Grid files can be used to create 2D contour maps and 3D surface maps in Surfer.

The grid file was then used by the 3D Surface tool to create a 3D surface map of the Whistler area. Colour scales and variations can easily be changed in Surfer to achieve the desired effect and convey information in the way the user chooses. The colour scheme above is called “terrain” and effectively visualizes elevation change. The model can also be rotated and viewed from any desired angle in Surfer using the “trackball” tool; multiple angles of the 3D surface map can be seen in the finished product at the beginning of this blog post.

 

 

 

Creating a Flight Simulator and Virtual Landscape Using LiDAR Data and Unity 3D

by Brian Mackay
Geovis Class Project @RyersonGeo, SA8905, Fall 2017

The concept for this project stems from the popularity of phone apps and computer gaming. It attempts to merge game creation software, graphic art and open source geographic data. A flight simulator was created using Unity 3D to virtually explore a natural landscape model created from Light Detection and Ranging (LiDAR) data collected from the US Geological Survey (USGS). 

The first step is to define the study area, which is a square-mile section of Santa Cruz Island off the coast of California. This area was selected for its dramatic elevation changes, naturalness and rich cultural history. Once the study area is defined, the LiDAR data can be collected from the USGS Earth Explorer in .LAS file format.

After the data is collected, ArcMap is used to create a raster image before use in Unity 3D. The .LAS file was imported into ArcMap in order to create the elevation classes. This was done by using the statistics tab in the property manager of the .LAS file in ArcCatalog and clicking the Calculate Statistics button. Once generated, an elevation map is displayed using several elevation classes. The next step is to convert the image to a raster using the LAS Dataset To Raster conversion tool in ArcToolbox. This creates a raster image that must then be turned into a .RAW file using Photoshop before it is compatible with Unity (a scripted alternative to the Photoshop step is sketched below).
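As a hedged alternative to the Photoshop step, the raster can be converted to the headerless 16-bit RAW heightmap Unity expects with a short script. This assumes the elevation raster was exported as a GeoTIFF and has already been resampled to a square resolution Unity accepts (e.g. 513 × 513); file names are placeholders:

```python
import numpy as np
import rasterio  # assumes the elevation raster is a GeoTIFF

# Read the elevation band, stretch it to the full 16-bit range, and write a
# headerless little-endian RAW heightmap for Unity's terrain importer.
with rasterio.open("santa_cruz_dem.tif") as src:
    dem = src.read(1).astype("float64")

dem -= dem.min()
heights = (dem / dem.max() * 65535).astype("<u2")   # unsigned 16-bit, little endian
heights.tofile("santa_cruz_heightmap.raw")
```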

The data is now ready for use in Unity. Unity 3D Personal (free version) was used to create the remainder of the project. The first step is to import the .RAW file after opening Unity 3D. Click the GameObject tab → 3D Object → Terrain, then click on the settings button in the inspector window, scroll down, click import raw and select your file. 

Next, define the area and height parameters to scale the terrain. The USGS data was in imperial units, so it had to be converted to meters, which is what Unity uses. The length and width after conversion were set to 1609 m (one mile per side, i.e. 1 sq mile) and the height was set to 276 m (906 ft), which was taken from ArcMap and the .LAS file elevation classes (seen right and below). Once these parameters are set you can use your graphic art skills to edit the terrain.

 

 

Editing the terrain starts with the inspector window (seen below). The terrain was first smoothed to give it a more natural appearance rather than harsh, raw edges. Different textures and brushes were used to edit the terrain to create the most accurate representation of the natural landscape. In order to replicate the natural landscape, the satellite map from Google was used as a reference for colours, textures and brush/tree cover.

This is a tedious process and you can spend hours editing the terrain using these tools. The landscape is almost finished, but it needs a sky. This is added by activating the Main Camera in the hierarchy window then going to the inspector window and clicking Add Component → Rendering → Skybox. The landscape is complete and it is time to build the flight simulator.

To build the plane you must create each of its parts individually via GameObject → 3D Object → Cube. Arrange and scale each of the plane’s parts using the inspector window. The next step is to drag and drop the individual airplane parts into a parent object created by first clicking GameObject → Create Empty. This is necessary so the C# coding applies to the whole airplane and not just an individual part. Finally, a chase camera has to be attached to the plane in order for the movement to be followed. You can use the inspector window to set the coordinates of the camera identical to the plane’s and then offset the camera back and above the plane to a viewing angle you prefer.

C# coding was the final step of this project and it was used to code the airplane controls as well as the reset game button. To add a C# script to an object, click on the asset in the hierarchy you want to code and select Add Component → New Script. The C# script code below was used to control the airplane.

Parameters for speed and functionality of the airplane were set as well as the operations for the chase camera. Finally, a reset button was programmed using another C# script as seen below. 

The flight simulator prototype is almost complete. The last thing is inserting music and game testing. To insert a music file go to GameObject → Create Empty, then in the inspector window click Add Component → Audio → Audio Source and select the file. Now it’s time to test the flight simulator by clicking the game tab at the top of the viewer and pressing play.

Once testing is complete, it’s ready for export/publishing. Click File → Build Settings, select the type of game you want to create (in this case WebGL), then click Build and Run and upload the files to a suitable web host. The game is now complete. Try it here!

There are a few limitations to using Unity 3D with geographic data. The first is the scaling of objects, such as the airplane and trees. This was problematic because the airplane was only about 5 m long and 5 m wide, which makes the scale of other objects appear overly large. The second limitation is the terrain editor and visual art component: without previous experience using Unity 3D or any graphic arts background, it was extremely time consuming to replicate a natural landscape virtually, and the selection of terrain materials in the Personal version was limited to a few grasses, rocks, sand and trees. Other limitations relate to the user's programming skills, particularly the unsuccessfully programmed colliders, which were meant to create invisible barriers that keep the airplane from flying off the landscape. Despite these limitations, Unity 3D appears to provide the user with an endless creative canvas for game creation, landscape modelling, alteration and conceptual design, which are often very limited when using other GIS-related software.

3D Printing Canadian Topographies

by Scott Mackey, Geovis Project Assignment @RyersonGeo, SA8905, Fall 2016

Since its first iteration in 1984 with Charles Hull's stereolithography, the process of additive manufacturing has made substantial technological bounds (Ishengoma & Mtaho, 2014). With advances in both capability and cost effectiveness, 3D printing has recently grown immensely in popularity and practicality. Sites like Thingiverse and Tinkercad allow anyone with access to a 3D printer (which are becoming more and more affordable) to create tangible models of anything and everything.

When I discovered the 3D printers at Ryerson’s Digital Media Experience (DME) lab, I decided to 3D print models of interesting Canadian topographies, selecting study areas from the east coast (Nova Scotia), west coast (Alberta), and central Canada (southern Ontario). These locations show the range of topographies and land types strewn across Canada, and the models can provide practical use alongside their aesthetic allure by identifying key features throughout the different elevations of the scene.

The first step in this process was to learn how to 3D print. The DME has three different 3D printers, all of which use an additive layering process: materials are melted and applied thin layer by thin layer to create the final physical product. A variety of materials can be used, including plastic filaments such as polylactic acid (PLA) and acrylonitrile butadiene styrene (ABS), or nylon filaments. After a brief tutorial at the DME on the 3D printing process, I chose to use their Lulzbot TAZ, the 3D printer offering the largest print surface. The TAZ is compatible with ABS or PLA filament of a 1.75 mm diameter. I decided on white PLA filament, as it offers a smooth finish and melts at a lower temperature, with the white colour being easy to paint over.

Lulzbot TAZ

The next step was to acquire the data in the necessary format. The TAZ requires the digital 3D model to be in STL (stereolithography) format. Two websites were paramount in the creation of my STL files. The first was GeoGratis Geospatial Data Extraction. This Natural Resources Canada site provides free geospatial data extraction, allowing the user to select elevation (DSM or DEM) and land use attribute data for an area of Canada. The process of downloading the data was quick and painless, and soon I had detailed geospatial information on the sites I was modelling.

GeoGratis Geospatial Data Extraction

One challenge still remained despite having elevation and land use data: creating an STL file. While researching how to do this, I came across an open source web tool called Terrain2STL on a visualization website, jthatch.com. This tool allows the user to select an area on a Google basemap and then extracts the elevation data for that area from the Consortium for Spatial Information's SRTM 90 m Digital Elevation Database, originally produced by NASA. Terrain2STL lets the user increase the vertical scaling (up to four times) to exaggerate elevation, lower the height of sea level for emphasis, and raise the base height of the produced model; selected areas can range in size from a few city blocks to an entire national park.

The first area I selected was Charleston Lake in southern Ontario. Sitting on a southern part of the Canadian Shield, this lake was created by glaciers scarring the Earth's surface. The vertical scaling was set to four, as the scene does not have much elevation change.

Once I downloaded the STL, I brought the file into Windows 10's 3D Builder application to slim down the base of the model. The slicing program Cura was then used to further exaggerate the vertical scaling to 6 times and to upload the model to the TAZ. Once the filament was loaded and the printer heated, it was ready to print. This first model took around 5 hours and fortunately went flawlessly.
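
The extra vertical exaggeration can also be applied by scripting the STL directly rather than rescaling in Cura. The sketch below is only an illustration under assumptions (a binary STL and hypothetical file names): it multiplies every vertex's z value by a factor, which, like scaling the whole model in a slicer, also stretches the base.

using System;
using System.IO;

// Minimal sketch: scale the z coordinate of every vertex in a binary STL.
// The file names and the 1.5x factor (4x -> 6x exaggeration) are assumptions.
class StlZScale
{
    static void Main()
    {
        byte[] stl = File.ReadAllBytes("charleston_lake.stl");
        float factor = 1.5f;

        // Binary STL layout: 80-byte header, a 4-byte triangle count,
        // then 50 bytes per triangle (normal, three vertices, attribute).
        uint triangles = BitConverter.ToUInt32(stl, 80);
        for (uint i = 0; i < triangles; i++)
        {
            int record = 84 + (int)i * 50;
            for (int v = 0; v < 3; v++)
            {
                // z is the third float of each vertex; vertices start after the 12-byte normal.
                int zOffset = record + 12 + v * 12 + 8;
                float z = BitConverter.ToSingle(stl, zOffset) * factor;
                BitConverter.GetBytes(z).CopyTo(stl, zOffset);
            }
        }
        File.WriteAllBytes("charleston_lake_scaled.stl", stl); // normals are left untouched
    }
}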

Cape Breton, Nova Scotia was selected for the east coast model. While this site has a bit more elevation change than Charleston Lake, it still needed to have 4 times vertical exaggeration to show the site’s elevations. This print took roughly 4 and a half hours.

Finally, I selected Banff, Alberta as my final scene. This area shows the entrance to Banff National Park from Calgary. No vertical scaling was needed for this area. This print took roughly 5 and a half hours.

Once all the models were successfully printed, it was time to add some visual emphasis. This was done by painting each model with acrylic paint, using lighter green shades for higher areas, darker green shades for lower elevations, and blue for water. The data extracted from GeoGratis was used as a reference in this process. Although I explored the idea of including delineations of trails, trailheads, roads, railways, and other features, I decided they would make the models too busy. However, future iterations of such 3D models could be designed to show specific land uses and features for more practical purposes.

Charleston Lake, Ontario

Cape Breton, Nova Scotia

Banff, Alberta

3D models are a fun and appealing way to visualize topographies. There is something inexplicably satisfying about holding a tangible representation of the Earth, and the applicability of 3D geographic models for analysis should not be overlooked.

Sources:

GeoGratis Geospatial Data Extraction. (n.d.). Retrieved November 28, 2016, from http://www.geogratis.gc.ca/site/eng/extraction

Ishengoma, F. R., & Mtaho, A. B. (2014). 3D Printing: Developing Countries Perspectives. International Journal of Computer Applications, 104(11), 30-34. doi:10.5120/18249-9329

Terrain2STL Create STL models of the surface of Earth. (n.d.). Retrieved November 28, 2016, from http://jthatch.com/Terrain2STL/


3D Paper Topography Map of Evergreen Brick Works and Its Surroundings

By Nicole Serrafero

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2016

When learning about geography in the early years of school, we had to trace and label contours based on topographic maps. For this course work I decided to take inspiration from my younger school days and use modern technologies to reproduce a topographic map with cartographic elements included. My main inspiration came from an artist by the name of Sam Cadwell, who creates beautiful works of art using layers of paper to represent contours. An example of his work can be seen below and through the link to his website.

Example of Sam Cadwell's Work

The project involved cutting out each contour layer and the other features using a Cricut machine, which is a computer-guided paper cutter (seen below).

The Cricut paper cutter

The maximum paper size that the cutter program can handle is 11” x 11”, so I ensured that the study area would fit within that limit. The paper used for the project was 12” x 12” cardstock in a variety of colours to represent each feature. For the contour layers, a pink-to-red colour scheme was used, as it provided up to 15 layers of sequential colours.


The water features were blue, the rail features yellow, the buildings a light purple, and the roads black.


Data Used

Four (4) datasets were used to produce the topographic model:

  • Contour Lines (Obtained from TRCA)
  • Building Footprints (Obtained from DMTI spatial)
  • Waterways (Obtained from TRCA)
  • Road and Rail Lines (Obtained from Statistics Canada)

Study Area Extraction

All of the files were loaded into ArcMap and projected to WGS 84 to ensure they were in the same projection. The Evergreen Brick Works was chosen as the study area because its surroundings contain interesting contours, roads, a major highway, railways, and a river. To ensure that the study area fit within the paper limitations, the page size within ArcMap was set to 11” x 11” and the map view was adjusted until I was satisfied with the area. Once the final study area was chosen, the features within the view were clipped out and saved as separate files. Below is a screenshot of what the final study area covers.

The final study area

With the data now clipped, further processing could be done easily, as the amount of data was significantly reduced. The contour lines came at 1 m intervals with a range of 22 individual contour levels, which is too many for the amount of paper I had available. The number of contours was reduced by selecting every 4 m contour and extracting the selected lines to a separate file. With the new file, the number of layers was reduced to 12, which fits within my 15-layer limit. The remaining files did not need further processing within ArcMap.

The next major step was to get the files ready for the paper cutter. To do this, each dataset was saved as a scalable vector graphics (SVG) file: all layers were turned off except for one dataset, and then the Export Map option was used to save the map area as an SVG file. The SVG files were then imported into a program called Inkscape to be edited further. Within Inkscape the contours were divided up into their individual 4 m interval layers (seen below).

The individual contour layers in Inkscape

Some of the smaller contour lines were deleted, as the cutter would not be able to cut those shapes out. Each of the other features was also given a layer of its own. Each individual layer was then exported and saved as an 11” x 11” page in JPEG format. The program that drives the paper cutter did not work as well with files that came directly from ArcMap, which is why Inkscape was used; it is also easier to edit and select the lines and change their thickness within Inkscape.


Printing and Assembling the Model

To cut out each layer, the JPEG layers were imported into the paper cutter program. Each layer was placed on the canvas, then the corresponding colour of paper was placed on the cutting mat and loaded into the machine. Once loaded, the paper cutter proceeded with cutting the paper. An example of a cut layer from the machine can be seen below.

A cut layer from the paper cutter

The contours were cut first, followed by the river, then the roads and railway, and last the Evergreen Brick Works buildings. The contour layers were stuck together using foam spacers with tape on each side; these spacers create the illusion of height in the model. The remaining paper features were attached using double-sided tape. The following images show the assembling process.

Assembling the model

Once all of the paper layers were assembled, the legend, scale, north arrow, and labels were added by hand. The final product can be seen below.

The final product