Movies and Television shows filmed in Toronto but based elsewhere…

by Alexander Pardy
Geovis Class Project @RyersonGeo, SA8905, Fall 2017

Data and Data Cleaning:

To obtain my data, I used https://moviemaps.org/ and selected Toronto. The website displays a map of locations in the Greater Toronto Area where movies and television shows were filmed, with the point locations overlaid on Google Maps imagery.

If you use the inspect element tool in Internet Explorer, you can find a single line of JavaScript code within the map section of the webpage that contains the latitude and longitude of every point.

The data is in a format similar to Python code. The entire line of JavaScript was pasted into a Python script, which writes the data into a CSV file that can easily be opened in Microsoft Excel. Once the file was opened in Excel, Google was used to look up the setting of every movie or television show, drawing on various websites such as fan sites, IMDb, and Wikipedia. Some productions take place in fictional towns and cities; in those cases, best judgement was used to approximate a similar real location for the setting. All the information was then saved into a CSV file. Python was then used to delete any duplicates in the CSV file and to produce a count of each unique location value, giving the total number of movies and television shows set at each geographical location. The result was saved from Python back into a CSV file. Finally, the latitude and longitude coordinates for each location were obtained from Google and entered into the CSV file. An example is shown below.
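The dedupe-and-count step can be sketched in Python; a minimal sketch, assuming hypothetical `title` and `setting` columns (the column names and sample rows here are illustrative, not the actual file):

```python
from collections import Counter

def count_settings(rows):
    """Count how many unique titles are set in each location."""
    # Remove duplicate (title, setting) pairs first, then tally settings.
    unique_pairs = {(r["title"], r["setting"]) for r in rows}
    return Counter(setting for _, setting in unique_pairs)

# Example rows as they might appear after the Excel step.
rows = [
    {"title": "Suits", "setting": "New York City"},
    {"title": "Suits", "setting": "New York City"},  # duplicate row
    {"title": "Daredevil", "setting": "New York City"},
    {"title": "Chicago", "setting": "Chicago"},
]
counts = count_settings(rows)
```

The `Counter` can then be written back out to CSV, one row per unique location.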

Geospatial Work:

The CSV file was imported into QGIS as a delimited text layer with the coordinate system WGS 84. The points were then symbolized using a graduated class method based on the count of movies or television shows filmed in Toronto. A world country administrative shapefile was obtained from the Database of Global Administrative Areas (GADM). There was a slight issue with this shapefile: it contained too much data, with every little island on the planet represented. Since the map works at a global scale, this level of detail was beyond the scope of the project.

With WGS 84, the coordinate system positions the middle of the map at the prime meridian and the equator. Since the majority of the films and television shows are set in North America, a custom world projection was created. This was accomplished in QGIS by going into Settings, Custom CRS, and selecting the World Robinson projection. The parameters of this projection were then changed so that the central meridian, instead of being the prime meridian at 0 degrees, was set to -75 degrees to better centre North America in the middle of the map. One issue that arose after doing this is that a shapefile cannot be wrapped around a projection in QGIS.
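In QGIS, a custom CRS like this is defined by a proj string; a sketch of what it looks like with the central meridian moved to -75 degrees (the exact string QGIS generates may differ slightly):

```
+proj=robin +lon_0=-75 +x_0=0 +y_0=0 +datum=WGS84 +units=m +no_defs
```

The `+lon_0` parameter is the only change from the stock World Robinson definition.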


After researching how to fix this, it was found that it can be accomplished by deleting the area where the wrap-around occurs; that is, by removing a thin sliver at the seam where the two edges of the map meet. This is done by creating a text file that says:

This text file defines the corners of a polygon to be created in QGIS. A layer can now be created from the delimited text file, using custom delimiters set to semicolon and well-known text (WKT). This adds a polygon to the map that is so thin it looks like a line. Then, under Vector, Geoprocessing Tools, Difference, the countries layer was selected as the input layer and the new polygon as the difference layer. The result is a new countries layer with a very thin strip deleted (this is where the wrap-around occurred). Now the map wraps around properly and is not stretched out. There was still a slight problem in Antarctica, so it was selected and removed from the map.
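As a purely hypothetical illustration of the format such a file takes (the real coordinates depend on the chosen central meridian; with a centre of -75 degrees the seam falls near longitude 105 degrees east):

```
id;wkt
1;POLYGON((104.9 90, 105.1 90, 105.1 -90, 104.9 -90, 104.9 90))
```

The second column is the WKT geometry QGIS parses when the delimited text layer is loaded.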

Styling:

The shapefile background was made grey with white hairlines to separate the countries. The count and size of the locations were kept the same. The locations were made 60% transparent. Since there were not many different cities, the symbols were classified into 62 classes, so that each time the count increased, the size of the point increased. The map was now complete. A second map was added in the print composer to show a zoomed-in section of North America. Labels and lines were then added to the map using Illustrator.

Story Map:

I felt that after the map was made, a visualization should also be created to help convey it, by telling a story of the different settings of the films and television shows that were filmed in Toronto. I created an ESRI story map that can be found here.

The story map shows 45 points on a world map, all based on the settings of television shows and movies that were filmed in the City of Toronto. The points on the map are colour coded: red points had 4-63 movies and television shows set around them, blue points had 2-3, and green points had 1. When you click on a point, it brings you to a closer view of the city the point is located in. It also brings up a description that tells you the name of the place you are viewing and the number of movies and television shows whose settings take place in that location. You also have the option to play a selected movie or television show trailer from YouTube within the story map, to give you an idea of what was filmed in Toronto but is presented by the media industry as somewhere else.

Vancouver Minecraft Project

by Christopher Gouett-Hanna
Geovis Class Project @RyersonGeo, SA8905, Fall 2017

The general idea for my geo-visualization project was to utilize GIS files to create a Minecraft world.  What is Minecraft, you may be asking?  Minecraft is a computer game, similar to Lego, where you can create environments out of 1×1 blocks.  This makes it an ideal candidate for working with GIS data, as it provides a reference scale to build upon.  To facilitate the transformation of GIS data to Minecraft, FME by Safe Software was used.  FME is an ETL software package that can read and write numerous file types, and it has countless transformers that can alter the imported data.  For this project, LiDAR data and vector shapefiles were used to populate the new world.

DEM

To create the base for the Vancouver Minecraft, we had to create a digital elevation model (DEM).  A LiDAR dataset was used to create the DEM within FME.  This step was important because it forms the ground that everything else would be placed upon.  In hindsight, a traditional raster DEM may have been better, since the LiDAR had classified things like cars as ground.  All areas below sea level were separated so they could be attributed water blocks.  Below is a screenshot of the LiDAR file used to create the DEM and buildings in the Vancouver Minecraft.
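Conceptually, the ground/water split follows the standard LAS classification codes (2 = ground); a minimal Python sketch of the idea, assuming points are stored as (x, y, z, class) tuples (this illustrates the logic only, not the actual FME workflow):

```python
GROUND = 2          # standard LAS classification code for ground returns
SEA_LEVEL = 0.0     # threshold used to assign water blocks

def split_points(points):
    """Split classified LiDAR returns into ground, water, and other."""
    ground, water, other = [], [], []
    for x, y, z, cls in points:
        if cls != GROUND:
            other.append((x, y, z))    # buildings, vegetation, stray cars...
        elif z < SEA_LEVEL:
            water.append((x, y, z))    # below sea level -> water block
        else:
            ground.append((x, y, z))
    return ground, water, other

# Three sample returns: ground above sea level, ground below, a building (class 6).
pts = [(0, 0, 5.0, 2), (1, 0, -1.2, 2), (2, 0, 8.0, 6)]
ground, water, other = split_points(pts)
```

A misclassified car (tagged class 2 in the source data) would end up in `ground`, which is exactly the artifact noted above.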

3D Buildings

The buildings in the Minecraft world were created using the LiDAR data and building footprint shapefiles.  The shapefiles were needed to ensure the buildings were created at the right elevation.  The z values from the LiDAR data were used to define the extent of the buildings, and the footprints were used to clip the data so there was no overlap.  This data was all put through the 3D Forcer and Extruder tools in FME, which produced 3D models of all buildings in the study area.  Below is an image of some of the buildings in the Minecraft world and the same area in real life.

Vector Data

To add some more features to the Minecraft world, two vector shapefiles were added: a road vector and a street tree shapefile.  FME was able to clip this data to the extents of the DEM.  The vector layers were given the condition of being placed 1 unit above the elevation of the DEM at each X-Y coordinate, ensuring that they sat on top of the ground.  No other online case that I could find used both LiDAR and vector shapefiles, so this was a successful trial of the technique.  Below is a picture of the roads in the Minecraft world and a picture of the street trees on Georgia Street.  The trees were planted as saplings in the game, so it took 30-45 minutes for most of the trees to grow.  The bumps in the road are due either to a rise in elevation or to cars that got coded as ground features in the LiDAR data.

 

Final Step

Once the ground, water, building, road, and tree files were all ready, they were transformed into point cloud data in FME.  The point cloud calculator was used to append each data type with a Minecraft block ID, which allows Minecraft to build the world with the proper coloured blocks.  These individual point clouds were then combined with the PointCloudCombiner tool.  Some were also given elevation parameters to ensure they rested above the ground.  Finally, the world was scaled down 50% to ensure it all fit within the Minecraft box, and the point cloud was exported with the Minecraft writer, with each point being assigned a Minecraft result.  Here are some more pictures of the world and the FME workspace used to construct it.
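The scaling and block-ID tagging steps amount to simple per-point operations; a rough sketch of the idea (the block ID and axis ordering here are illustrative assumptions, not FME's actual parameters):

```python
def to_minecraft_points(points, block_id, scale=0.5):
    """Scale a point cloud and tag each point with a Minecraft block ID.

    Minecraft treats y as the vertical axis, so z (elevation) maps to y.
    Coordinates are rounded to whole blocks.
    """
    return [(round(x * scale), round(z * scale), round(y * scale), block_id)
            for x, y, z in points]

ground = [(10, 20, 4.0)]
blocks = to_minecraft_points(ground, block_id=2)  # 2 = grass block (illustrative)
```

Running all five point clouds through the same scale factor is what keeps the layers aligned after the 50% reduction.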

 

 

 

Raspberry Pi Controlled LED Map

by Arsh Grewal
Geovis Class Project @RyersonGeo, SA8905, Fall 2017

The original idea for this project came from a map that Torontonians see all the time: the subway map, which uses lights to convey information. I wanted to apply the same concept to cities.

LED Map for TTC Line 1.

By displaying the cities as LEDs, I can use light to display information. To do this I used Python with a Raspberry Pi. The Raspberry Pi is unique because it has GPIO pins which can be controlled via code. It works like a low-spec computer but is much more flexible. Python can be used with the Raspberry Pi to control the current flowing out of the GPIO pins, and I decided to use this current to power LEDs.

Raspberry Pi Controlling RGB LEDs

The image below shows the schematic of how the project would have originally worked. The theme of the project is sustainability in cities. Each entry in the legend is associated with the top 3 cities for that category, and each colour would correspond to a ranking: green would equal 1, blue 2, and red 3. I initially wanted to use red, blue and green to differentiate the ranks. For example, the cities with the highest livability are Vancouver, Toronto and Calgary, in that order. When the LED for Livability (represented with an I) turns on, the LED for Vancouver would turn green, the LED for Toronto blue, and the LED for Calgary red. This would happen for each category, with the LEDs staying on for 4 seconds per category. The positive end of each LED connects to a GPIO pin, while the negative end connects to a ground pin.

The Diagram shows the original layout of the LED map.

There was a cartographic issue with this idea. The use of red indicates the lowest value, which is not what it means in my case: red is the third-best value (not the worst) and should be represented with an appropriate colour. The issue was that I was unable to find anything similar to RGB LEDs that displayed multiple colours. I could combine the RGB values to create new colours, but the Pi can only supply a limited amount of current (~50 mA), and using RGB LEDs to create new colours requires a lot of it; each LED draws about 20 mA. I could limit the consumption by using resistors (I did eventually end up using them), but that would reduce the brightness of the LEDs, and I would not have been able to do this with the number of LEDs I planned on using. Instead, I decided to use brightness. I realized I could use Pulse Width Modulation (PWM) to control the brightness of the bulbs. Below is the video of me testing the RGB LEDs.

PWM essentially turns the LEDs on and off at a rapid rate (faster than the eye can detect) to control brightness: the longer a bulb is on, the brighter it seems. The video below is me testing the PWM function on the Raspberry Pi.
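At 100 Hz, each PWM period lasts 10 ms, so the duty cycle translates directly into on-time per period; a quick sanity check of that arithmetic (a sketch, not part of the original script):

```python
def on_time_ms(freq_hz, duty_pct):
    """On-time per PWM period, in milliseconds."""
    period_ms = 1000 / freq_hz        # length of one on/off cycle
    return period_ms * (duty_pct / 100)

# At 100 Hz: 100% duty -> 10 ms on per period, 10% -> 1 ms, 1% -> 0.1 ms.
```

These three duty cycles (100%, 10%, 1%) are exactly the brightness levels used for ranks 1, 2, and 3 in the script below.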

After deciding to include PWM, I had to change the code I created. Below is the code I used to program the Raspberry Pi.

import RPi.GPIO as GPIO       # imports the GPIO functions from the library
import time

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)

GPIO.setup(18, GPIO.OUT)      # sets pin 18 as an output
v = GPIO.PWM(18, 100)         # sets pin 18 to 100 Hz frequency for PWM, aliased to 'v'
v.start(100)                  # pin 18 is on 100% of the time
print("Vancouver on")         # displays 'Vancouver on' on the command screen

GPIO.setup(23, GPIO.OUT)
print("Toronto on")
t = GPIO.PWM(23, 100)
t.start(10)                   # pin 23 is on 10% of the time

GPIO.setup(4, GPIO.OUT)
print("Calgary on")
c = GPIO.PWM(4, 100)
c.start(1)                    # pin 4 is on 1% of the time

GPIO.setup(12, GPIO.OUT)
print("Indicator 1 on")
GPIO.output(12, GPIO.HIGH)    # turns on pin 12 (maximum brightness)

time.sleep(4)                 # pauses the code for 4 seconds

print("Vancouver off")
v.stop()                      # stops PWM on pin 18
print("Toronto off")
t.stop()
print("Calgary off")
c.stop()
print("Indicator 1 off")
GPIO.output(12, GPIO.LOW)     # turns off pin 12

# A similar block runs for each remaining category.

After the coding was complete, I tested the connections using a breadboard, which allows the user to test connections without soldering. After a few trials, the code worked! The next step was to connect the cables to the LEDs by soldering, which makes connections more permanent. Solder is an alloy of highly conductive metals; it is heated until it melts, the liquid metal is applied to the connection, and it then cools and hardens. I soldered the resistors to the positive end of each LED and then soldered the copper wires to each end of the LED.

After all the LEDs were soldered, the next step was to build the actual physical model. A map of Canada with the appropriate cities was glued onto a piece of cardboard. Holes were made for the required cities, after which the LEDs were installed. I used Styrofoam from old boxes to make the base for the project. After about 8 hours of building the model, I was ready to test the final product.

The LED map finally worked! I ran into one problem: I did not have a monitor to use as a display for the Raspberry Pi. I had been using my TV as the display, but I could not bring that with me to campus. Instead, I downloaded PuTTY, which allowed me to connect my laptop to the command line of the Pi using an Ethernet cable. All I had to do was type 'sudo python LED.py' to make it work. I set it up so the project would run with a simple right click of the mouse. The image below shows some of the wiring required to make the map work.

The Underbelly of the final product.

#AddressingTheSurface: Translucent Maps Inspired by GIS and Open Data

by Edgar Baculi #themapmaker
Geovisualization Project @RyersonGeo, SA8905, Fall 2017

#AddressingTheSurface was a collaborative geovisualization project with recent OCAD University graduate, graphic designer and fine artist Jay Ginsherman, with ideas and direction from Ryerson University Master of Spatial Analysis candidate Edgar Baculi. This project was inspired by Ginsherman's previous work, entitled 'Liquid Shadows', which used translucent images or maps along with a lighting device nicknamed the 'Lightbox'. This piece, along with Ginsherman's previous and ongoing work, can be found here: http://jginsherman.format.com/. While attending OCAD University's 102nd GradEx, Baculi encountered the work of Ginsherman and the GIS-like experience of the attendees. From this, the idea of using open data and actual GIS to produce a piece was born.

After consulting with Ginsherman, a piece based on Baculi's lived experience, open data and GIS was established. Having done previous research work in open data, Baculi was familiar with exploring and downloading it. The Toronto Open Data Catalogue provided all the data relevant to the project. The key focus of the data collection was datasets related to Toronto Community Housing and services of interest for these residents and other locations.

The following datasets were downloaded and manipulated from the catalogue:
1. Toronto Community Housing Corporation Residences (with high, mid and low rise buildings selected and divided into three map layers)
2. The boundary of the city of Toronto (dissolved former municipality shape file)
3. City of Toronto Neighbourhoods
4. Street file
5. Fire Stations
6. Police Stations
7. Park Land
8. TTC Subway lines
9. Three heat/kernel density maps on services of interest for TCHC residents (based on Rent Bank Centres, Community Cooling Centres and Shelters)

A key aspect of this project was the use of the subtractive colours (magenta, yellow and cyan) for the heat maps, so that interesting overlaps produce new colours. The overlap of colours was intentionally designed to be open to interpretation by the map readers.

Using ArcGIS, Baculi adjusted the previously mentioned datasets with suitable symbology before sending them to Ginsherman. The discussions between Baculi and Ginsherman involved understanding how GIS works and the cartographic ideals for the look of the maps, with strong design to appeal to the audience. Baculi wanted to create a hands-on GIS experience, with a legend that built itself up and remained legible to the map reader. Ginsherman incorporated these ideals into the final look under Baculi's direction.

Once Baculi completed the GIS portion of the layers, they were sent off to Ginsherman to improve the design and layout and to print. Ginsherman used PDFs of the layers in Adobe Illustrator, and ensured map alignment by keeping the work in the same Illustrator file and giving each map its own layer. Printing was done on a laser printer at the OCAD University Digital Print Centre. Draft layers were also created beforehand to test the colour combinations and the best level of transparency for the maps.

A key component of the piece was the Lightbox from Ginsherman’s previous work which was designed and built by Ginsherman and his father. The Lightbox is made of wood, acrylic glass, and LED lights which were screwed together. The Toronto boundary layer was the only layer not printed on a translucent sheet, but on the glass. The boundary along with the north arrow acted as guides to align the layering of the maps. The LED lights improved the clarity of the layering maps as well as directed attention to the piece.

The end result was presented on Ryerson’s 2017 GIS Day and consisted of a Lightbox with the Toronto boundary printed on top and a total of 12 translucent maps. A variety of combinations were possible for interaction and discussion for the attendees. Please see the YouTube video below!

Using Laser-Cutting to Visualize the 1953 North Sea Flood in the Netherlands

by Carmen Huber
Geovis Class Project @RyersonGeo, SA8905, Fall 2017

Context

The 1953 flood was one of the worst in the history of the Netherlands (Deltawerken, 2017) – a country in which one-third of the land area lies below sea level (Netherlands Tourism, 2017). The flood had devastating impacts on the country, including:

  • 1,835 deaths
  • 200,000 cattle drowned
  • 200,000 hectares of soil flooded
  • 3,000 houses and 300 farms destroyed
  • 40,000 houses and 3,000 farms damaged
  • 72,000 people evacuated

This flood has additional significance because its devastating impacts led to multiple changes in policy and infrastructure related to water management in the Netherlands. On February 21st, 1953, the Deltacommission was founded to devise a plan to guide these changes (Deltawerken, 2017).

Creating the Model

To help in visualizing the extent of the flood itself, laser-cutting was used to create two physical models of the landscape in which the flood occurred:

  1. Displaying the landscape “pre-flood”
  2. Displaying the landscape “post-flood” (or during the flood)

Both models were created to represent only the areas surrounding the flood extent – which was centred in the province of Zeeland (appropriately translates to English as “Sealand”).

Data

Two spatial datasets were used to create the models: elevation and land cover. Elevation is extremely relevant to the 1953 flood because of the increased vulnerability of low-lying areas: it is much easier for these areas to flood than those at higher elevations. The MODIS (2015) elevation raster was used to represent elevation.

A land cover raster (Landscan, 2015) was used to determine the "normal" extent of the North Sea, used in the pre-flood model. While the extent of the North Sea was likely slightly different in 1953 than in 2015, spatial data for that period was difficult to come by, both because the flood took place over 50 years ago and because of the difficulty of navigating open data portals written entirely in Dutch! The 2015 dataset was therefore used to provide a general representation of the North Sea extent.

Additionally, one non-spatial image was georeferenced to define the flood extent. The historical period of the flood also made it difficult to obtain spatially enabled data of the flood extent itself. I contacted an employee at the Dutch Open Data Portal and was pointed to multiple PDFs outlining the flood extent in great detail – again, written in Dutch. Because these images were so hard to decipher, I opted to use a basic JPG image of the flood extent made available on the Deltawerken website (http://www.deltawerken.com/Devastating-Powers/484.html). This image was used to define the flood extent.

Methods

These datasets needed to be processed, and then exported into a format which could be used as an input for the laser-cutting machines.

The first step in this process was to transform the JPG flood extent image into a spatially referenced polygon. To do this, provinces in the Netherlands were selected by eyeballing the approximate extent of the flood. From this selection, the 'minimum bounding area' tool was used to output a rectangular polygon encompassing all of the selected area. In order to "cut" the flood extent from this rectangular polygon, the JPG image showing the flood extent first needed to be georeferenced. The image was imported into ArcGIS Pro, where it initially appeared to be "floating in space" because it had no spatial information associated with it. To resolve this, multiple georeferencing points were added to match identifiable points on the JPG image to the same points on the existing base map. Approximately twenty georeferencing points were added before the JPG image lined up with the base map underneath. Finally, the georeferenced image was made transparent and used as a guide to edit the rectangular polygon into the shape of the flood extent (using the "cut" editing tool). The flood extent data for the "post-flood" model was complete!

The second step in this process was to define the "normal" extent of the North Sea in the study area. To do this, the land cover raster was used. The raster was first clipped to the same rectangular study area polygon previously used to create the flood extent. After the raster was clipped, all pixels identified as water were extracted, and this extraction of water pixels was converted to a polygon. Because the polygon was created from individual pixels, the resulting edges were very jagged, so the "smoothing" tool was run on the sea extent polygon, producing something much more visually appealing in the model. The result represented the North Sea extent "pre-flood".

The third step in the process was to manipulate the elevation data. Like the land cover raster, the elevation raster was clipped to the rectangular study area polygon. Then, the elevation values needed to be reclassed. Reclassing the raster simplified the data so that it would be possible to construct using laser-cutting. The raster was reclassed into five classes: -6 – 0 m, 0 – 10 m, 10 – 20 m, 20 – 30 m, and 30 – 40 m. This reclassed raster was then converted to a polygon, with the 'simplify' option selected. Graduated colours were selected for the symbology of the resulting layer, so that you could intuitively tell which layer had the lowest and highest elevation. A copy of the polygon elevation layer was made, so that one existed for the "pre-flood" model and one for the "post-flood" model. For the "pre-flood" model, the erase tool was used, inputting the normal North Sea extent as the erase feature and one of the elevation polygon layers as the target feature. This was repeated for the "post-flood" model, but with the flood extent polygon as the erase feature. Now both elevation datasets were complete!
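The reclass step simply bins each elevation value into one of the five classes; the binning logic can be sketched in Python (class breaks taken from above; this illustrates the idea, not the GIS tool itself):

```python
BREAKS = [0, 10, 20, 30, 40]   # upper bound of each class, in metres

def reclass(elevation):
    """Return the 1-based elevation class for a value in the -6 to 40 m range."""
    for cls, upper in enumerate(BREAKS, start=1):
        if elevation <= upper:
            return cls
    raise ValueError("elevation above expected range")

# One sample value per class, from below sea level up to the highest band.
classes = [reclass(e) for e in (-3.0, 4.5, 15.0, 22.0, 39.9)]
```

Each class then corresponds to one layer of 3 mm wood in the physical model.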

The result of processing was four layers, for two models:

  • “Pre-flood” model
    • Normal North Sea extent
    • Elevation (with North Sea extent erased)
  • “Post-flood” model
    • 1953 flood extent
    • Elevation (with North Sea extent erased)

These layers were shown in the layout view of ArcGIS individually and exported as PDFs (seen below). PDFs were requested by the company that I contacted about running the laser-cutting process.

“Pre-flood” sea extent and elevation (top left and right), and “post-flood” flood extent and elevation (bottom left and right)

After reviewing my project with the external company, I sent them my PDF files and discussed potential materials. We decided that 3 mm wood would be a good choice of material for the elevation component: it would show the differences in elevation through layering but would not be too expensive. For each increase in elevation class, an additional layer of wood would be cut and stacked onto the one below. This makes the result 3D in nature and makes differences in elevation really stand out (especially since the Netherlands is so flat!). We also decided that blue Plexiglas would make the sea and flood extents visually appealing and really stand out. I received the laser-cut materials in a package, as shown in the image below.

Laser-cut wood and Plexiglas picked up!

When the cutting was complete, I stained the wood using a decreasing number of coats as the elevation layer increased. Then, I assembled the pieces on a wooden base using wood glue and double sided tape. I added a legend indicating the elevation range for each level.

Final Product

The final product is as seen here:

Final product: “Pre-flood” model on left, “post-flood” model on right

Multiple layers used to show differences in elevation

Looking back, there were a few things I would have done differently to improve this project. First, the model would have been more interesting if I had found historically accurate data sources for 1953. Additionally, the models would have been more accurate if I had somehow incorporated the locations of the storm barriers and/or dykes that failed during the flood. Second, the final product might have been more visually appealing if I had used a greater number of class breaks to represent elevation; this would have provided more detail on the elevation of the study area and made the models look better in general. Third, I could have improved minor details (such as the title) to make the final product look more polished. Despite these potential improvements, I was happy with the result! I discovered that laser-cutting is an effective technique for creating visually appealing spatial models.

 

Sources:

Deltawerken. (2017). The Flood of 1953. Retrieved from: http://www.deltawerken.com/the-flood-of-1953/89.html

Netherlands Tourism. (2017). Is the Netherlands Below Sea Level? Retrieved from: http://www.netherlands-tourism.com/netherlands-sea-level/

 

 

Exploring Street Art in Central Toronto – A Story Map

by Daniel LeBlanc
GeoVisualization Project Assignment @RyersonGeo, SA8905, Fall 2017

For my GeoVis project I wanted to do something that focused on the confluence of art and cartography. After some research, I settled on the use of story maps because they are a great way of bringing many different layers of content together and setting them in a geographical context. They allow for the ability to supplement a map with pictures, music, and video in an engaging way that is at the forefront of how people interact with maps and GIS applications. I also knew that I wanted to do something related to the Toronto street art scene, with graffiti being its most prevalent manifestation, because it has always been something that has interested me. I love turning a corner in the city and being confronted with a colourful mural, or finding a back alley with some amazing hidden artwork.

Though there are many story mapping platforms out there now, ESRI offers a great range of templates easily available on their website (you can create a free account and log in). It is an engaging type of project and can be picked up by just about anyone. ESRI's templates range in style and format, with the type of content you want to present determining the best choice (or choices) for you. I chose to work with the Map Journal format as the main framing tool, and inserted many smaller Cascade stories to provide a smooth viewing experience for the photographs I took.

The Map Journal template revolves around a scrolling sidebar or 'side panel' that controls content on the 'main stage'. Side panel content usually involves text or pictures that lay out the narrative, while the main stage highlights content with maps, pictures, videos, or other story maps. I chose this template because I knew I wanted to include as many different forms of media as possible, and the Map Journal provides an easy and logical way to bring them all together and connect them to specific points on a map. Other formats include the Map Tour, Swipe, Spyglass and Crowdsource. Because ESRI is seeking to promote this type of format for map interaction, there is a wide range of support resources available, including tutorials, message boards, blogs, and galleries of examples. The galleries gave me some great ideas of what was possible and what wasn't, and the blogs were very helpful when troubleshooting.

The first stage of the project was to research the street art scene in Toronto and decide which pieces would be included in the project. Blogs focused on the topic, as well as newspaper reports and tour information, were used to get an idea of some of the best-known pieces and areas in Toronto. A total of 12 art pieces or areas were selected; most were chosen through this review process and a few came from my personal knowledge. The addresses of the buildings they were painted on, or the closest reasonable address to their location, were then determined. This was tricky in some cases, as some of the areas were over 100 metres long, or inaccessible by foot in the case of one area located along some train tracks. Google Maps was used for some initial spot checking and for confirming some of the addresses.

Once the addresses were decided, ArcGIS Desktop was used to extract them from the Address Points (Municipal) shapefile retrieved from the Toronto Open Data Catalogue. One of the main ideas was to style the maps with colours corresponding to each art piece. Each address, called an 'Art Point', was buffered three times (250, 500, 750 metres) using the Buffer tool. The Select by Location – Intersect tool was then used to select features from the Toronto CentreLine shapefile. This shapefile, also retrieved from the Toronto Open Data Catalogue, contains all the linear features in Toronto, including roads, pathways, rivers, etc. It was used because it created a complex visual effect and gave the illusion of each Art Point radiating outwards. Each selected CentreLine layer was then saved and exported, providing three 'halos' of differing distances around each artwork. Figure 1 shows ArcGIS Desktop and a few of the many layers of buffers and halos being created.

Figure 1 – ArcGIS desktop, creation of buffers and corresponding halos.
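For readers curious what the buffer-and-select step does under the hood, here is a small pure-Python sketch of the same idea, with invented coordinates and street segments (the actual work was done with ArcGIS Desktop’s Buffer and Select by Location tools):

```python
import math

art_point = (630000.0, 4833000.0)  # hypothetical UTM metres

def dist_to_segment(p, a, b):
    """Shortest distance from point p to the line segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

# two invented street centreline segments, one near the Art Point and one far away
streets = {
    "Queen St": ((629900, 4833100), (630400, 4833100)),
    "Far Rd": ((640000, 4840000), (641000, 4840000)),
}

# for each buffer distance, keep only the streets that fall inside that "halo"
halos = {d: [name for name, (a, b) in streets.items()
             if dist_to_segment(art_point, a, b) <= d]
         for d in (250, 500, 750)}
print(halos)
```

A line intersects a point’s buffer exactly when its shortest distance to the point is within the buffer radius, which is what the dictionary comprehension checks for each of the three halo distances.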

12 Art Points × 3 halos = 36 buffer-selection exports, all of which were then compressed into separate zip files so they could be uploaded to the ArcOnline mapping tool. ArcOnline was used because of its web-mapping capabilities and easy integration with the story map templates. A number of tools are also available through ArcOnline, including the ability to add layers from its Living Atlas. This will be discussed more later. A dark grey canvas basemap was selected in order to better show off the halos once they were added, configured, and coloured. Figure 2 shows the construction of the overview map with all 12 Art Points and their associated 750 metre halos.

Figure 2 – ArcOnline being used to construct an overview map with Art Points and halos.

In the meantime, I spent two long mornings driving around Toronto (or taking the TTC), in the sun and the rain, taking my own photos and videos of each area or artwork. Introduction and background sections in the side panel were created, along with 12 different sections, one for each Art Point. All the photos were then uploaded; an example of each Art Point was inserted into the side panel, while the rest of the photos were arranged in a Cascade story map. The Cascade story map template is not used to its full extent here, but it provided a convenient way of integrating the photos that was in line with the scrolling functionality of the rest of the project. The twelve different halo sets were then coloured based on the example artwork, and each point on the map was linked with the side panel so the map would jump to the appropriate section as the user scrolled down. The videos I took of selected Art Points were also uploaded to YouTube and joined with music from YouTube’s free Audio Library. Figure 3 shows the finished Graffiti Alley Art Point and its associated halos.

Figure 3 – Graffiti Alley, side panel content and main stage map.

Content including background on each Art Point and the artist (if applicable) was then added to each section of the side panel. If an artist was identified, their name was also hyperlinked to their own website, Flickr, or Instagram account where possible. As the user scrolls down through the side panel, each Art Point is shown with its background content, a link to the Cascade to view more photos, and a link to the YouTube video (if applicable), while the main stage jumps to the associated location with the styled map halos. Figure 4 shows the Underpass Park section, with the Cascade story map inserted on the main stage showing a series of more detailed pictures of the place. Figure 5 shows the Reclamation Wall section, with the link to the created YouTube video open.

Figure 4 – Underpass Park, side panel content and Cascade photographs opened.

Figure 5 – Reclamation Wall, side panel content and link to YouTube video opened on main stage.

Each halo was also designed to correspond to a walking distance, as laid out in the introductory side panel sections: by looking at the map, any halo corresponds to a walk of 10 minutes or less to an Art Point. For improved navigation and map usability, public transportation layers were added from ESRI’s Living Atlas (which is connected to ArcOnline), allowing users to click on the TTC bus, streetcar, and subway routes shown faintly on the map to help them navigate to each Art Point.
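As a rough sanity check of the 10-minute claim, assuming a typical walking speed of about 5 km/h (my assumption; the post does not state one), the largest halo works out comfortably under the limit:

```python
# Walking-time check for each halo distance; 5 km/h walking speed is an assumption.
WALK_M_PER_MIN = 5000 / 60  # roughly 83 metres per minute

for dist in (250, 500, 750):
    minutes = dist / WALK_M_PER_MIN
    print(f"{dist} m halo: about {minutes:.0f} minute walk")
```

The outermost 750 m halo comes to about 9 minutes at that pace, so all three halos stay within the stated 10-minute walk.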

In the end, two kinds of maps (one overview and one per Art Point), 36 halos, 4 YouTube videos, and over 150 photos were brought together to tell a story about the street art in Toronto.

Have a look for yourself though; don’t they say a picture (or map) is worth 1,000 words?

https://ryerson.maps.arcgis.com/apps/MapJournal/index.html?appid=ee452e25fc5e4604a22c92bd291b8b93

References:

Open Data Toronto. (2017). Address Points and Toronto Centreline shapefiles. Retrieved from: https://www1.toronto.ca/wps/portal/contentonly?vgnextoid=1a66e03bb8d1e310VgnVCM10000071d60f89RCRD

What Kind of Story Do You Want to Tell? (2017). ESRI Story Maps. Retrieved from:  https://storymaps.arcgis.com/en/app-list/

 

3D Printing Canadian Topographies

by Scott Mackey, Geovis Project Assignment @RyersonGeo, SA8905, Fall 2016

Since its first iteration in 1984 with Charles Hull’s Stereo Lithography, the process of additive manufacturing has made substantial technological bounds (Ishengoma, 2014). With advances in both capability and cost effectiveness, 3D printing has recently grown immensely in popularity and practicality. Sites like Thingiverse and Tinkercad allow anyone with access to a 3D printer (which are becoming more and more affordable) to create tangible models of anything and everything.

When I discovered the 3D printers at Ryerson’s Digital Media Experience (DME) lab, I decided to 3D print models of interesting Canadian topographies, selecting study areas from the east coast (Nova Scotia), west coast (Alberta), and central Canada (southern Ontario). These locations show the range of topographies and land types strewn across Canada, and the models can provide practical use alongside their aesthetic allure by identifying key features throughout the different elevations of the scene.

The first step in this process was to learn how to 3D print. The DME has three different 3D printers, all of which use an additive layering process. An additive process melts material and applies it thin layer by thin layer to create the final physical product. A variety of materials can be used, including plastic filaments such as polylactic acid (PLA) and acrylonitrile butadiene styrene (ABS), or nylon filaments. After a brief tutorial at the DME on the 3D printing process, I chose to use their Lulzbot TAZ, the 3D printer offering the largest print surface. The TAZ is compatible with ABS or PLA filament of 1.75 mm diameter. I decided on white PLA filament, as it offers a smooth finish and melts at a lower temperature, and the white colour is easy to paint over.

img_1740
Lulzbot TAZ

The next step was to acquire the data in the necessary format. The TAZ requires the digital 3D model to be in STL (STereoLithography) format. Two websites were paramount in the creation of my STL files. The first was GeoGratis Geospatial Data Extraction. This Natural Resources Canada site provides free geospatial data extraction, allowing the user to select elevation (DSM or DEM) and land use attribute data for an area of Canada. The process of downloading the data was quick and painless, and soon I had detailed geospatial information on the sites I was modelling.

geogratis
GeoGratis Geospatial Data Extraction

One challenge still remained despite having elevation and land use data: creating an STL file. While researching how to do this, I came across an open-source web tool called Terrain2STL on a visualization website called jthatch.com. This tool allows the user to select an area on a Google basemap, and then extracts the elevation data for that area from the Consortium for Spatial Information’s SRTM 90 m Digital Elevation Database, originally produced by NASA. Terrain2STL allows the user to increase the vertical scaling (up to four times) in order to exaggerate elevation, lower the height of sea level for emphasis, and raise the base height of the produced model, for a selected area ranging in size from a few city blocks to an entire national park.
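To illustrate what a tool like Terrain2STL does internally, here is a minimal sketch that turns a tiny invented elevation grid into an ASCII STL surface, two triangular facets per grid cell. Real tools also add side walls and a base, and compute proper facet normals; everything below (grid values, cell size) is made up for illustration:

```python
# Toy heightmap-to-STL conversion: each grid cell becomes two triangles.
grid = [[0, 1, 2],
        [1, 2, 3],
        [2, 3, 4]]  # invented elevations in metres
cell = 90           # SRTM cell size in metres
z_scale = 4         # vertical exaggeration, as used for the low-relief scenes

facets = []
for i in range(len(grid) - 1):
    for j in range(len(grid[0]) - 1):
        # corner vertices of one cell: (x, y, exaggerated z)
        a = (j * cell, i * cell, grid[i][j] * z_scale)
        b = ((j + 1) * cell, i * cell, grid[i][j + 1] * z_scale)
        c = (j * cell, (i + 1) * cell, grid[i + 1][j] * z_scale)
        d = ((j + 1) * cell, (i + 1) * cell, grid[i + 1][j + 1] * z_scale)
        facets += [(a, b, c), (b, d, c)]  # two triangles per cell

with open("terrain.stl", "w") as f:
    f.write("solid terrain\n")
    for tri in facets:
        f.write("facet normal 0 0 1\n outer loop\n")  # normals left trivial here
        for x, y, z in tri:
            f.write("  vertex {} {} {}\n".format(x, y, z))
        f.write(" endloop\nendfacet\n")
    f.write("endsolid terrain\n")
```

A 3 × 3 grid yields 2 × 2 cells and therefore 8 facets; a real SRTM extract works the same way, just with millions of cells.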

The first area I selected was Charleston Lake in southern Ontario. Being a southern part of the Canadian Shield, this lake was created by glaciers scarring the Earth’s surface. The vertical scaling was set to four, as the scene does not have much elevation change.

Once I downloaded the STL, I brought the file into Windows 10’s 3D Builder application to slim down the base of the model. The 3D modelling program Cura was then used to further exaggerate the vertical scaling to 6 times, and to upload the model to the TAZ. Once the filament was loaded and the printer heated, it was ready to print. This first model took around 5 hours and fortunately printed flawlessly.

Cape Breton, Nova Scotia was selected for the east coast model. While this site has a bit more elevation change than Charleston Lake, it still needed to have 4 times vertical exaggeration to show the site’s elevations. This print took roughly 4 and a half hours.

Lastly, I selected Banff, Alberta as my final scene. This area shows the entrance to Banff National Park from Calgary. No vertical scaling was needed for this area. This print took roughly 5 and a half hours.

Once all the models were successfully printed, it was time to add some visual emphasis. This was done by painting each model with acrylic paint, using lighter green shades for high areas, darker green shades for areas of low elevation, and blue for water. The data extracted from GeoGratis was used as a reference in this process. Although I explored the idea of including delineations of trails, trailheads, roads, railways, and other features, I decided they would make the models too busy. However, future iterations of such 3D models could be designed to show specific land uses and features for more practical purposes.

img_1778
Charleston Lake, Ontario
img_1779
Cape Breton, Nova Scotia
img_1775
Banff, Alberta

3D models are a fun and appealing way to visualize topographies. There is something inexplicably satisfying about holding a tangible representation of the Earth, and the applicability of 3D geographic models for analysis should not be overlooked.

Sources:

GeoGratis Geospatial Data Extraction. (n.d.). Retrieved November 28, 2016, from http://www.geogratis.gc.ca/site/eng/extraction

Ishengoma, F. R., & Mtaho, A. B. (2014). 3D Printing: Developing Countries Perspectives. International Journal of Computer Applications, 104(11), 30-34. doi:10.5120/18249-9329

Terrain2STL Create STL models of the surface of Earth. (n.d.). Retrieved November 28, 2016, from http://jthatch.com/Terrain2STL/

 

 

3D Paper Topography Map of Evergreen Brick Works and Its Surroundings

By Nicole Serrafero

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2016

When learning about geography in the early years of school, we had to trace and label contours based on topographic maps. For my course work I decided to take inspiration from my younger school days and use modern technologies to reproduce a topographic map with cartographic elements included. My main inspiration came from an artist by the name of Sam Cadwell, who creates beautiful works of art using layers of paper to represent contours. An example of his work can be seen below and through the link to his website.

Example of Sam Cadwell's Work

The project involved cutting out each contour layer and the other features using a Cricut machine, which is a computer-guided paper cutter (seen below).

 photo IMG_20161107_123320_zps33mlcyg6.jpg

The maximum paper size that the cutter program can handle is 11” x 11”, so I ensured that the study area would fit within this limitation. The paper used for the project was 12” x 12” cardstock in a variety of colours to represent each feature. For the layers of contours, a pink-to-red colour scheme was used, as it provided up to 15 layers of sequential colours.

 photo 0aa4f3c0-b318-4696-9a89-09c959f8483f_zpsdazdxid7.jpg

The water features were blue, the rail features yellow, the buildings a light purple, and the roads black.


Data Used

Four (4) datasets were used to produce the topographic model:

  • Contour Lines (Obtained from TRCA)
  • Building Footprints (Obtained from DMTI spatial)
  • Waterways (Obtained from TRCA)
  • Road and Rail Lines (Obtained from Statistics Canada)

Study Area Extraction

All of the files were loaded into ArcMap and projected to WGS 84 to ensure they shared the same coordinate system. The Evergreen Brick Works was chosen as the study area, as its surroundings contain interesting contours, roads, a major highway, railways, and a river. To ensure that the study area fit within the paper limitations, the page size within ArcMap was set to 11” x 11” and the map view was adjusted until I was satisfied with the area. Once the final study area was chosen, the features within the view were clipped out and saved as separate files. Below is a screenshot of what the final study area covers.

studyarea_ns

With the data clipped, further processing could be done easily, as the amount of data was significantly reduced. The contour lines came at 1 m intervals with a range of 22 individual contour levels, which is too many for the amount of paper I had available for the contours. The number of contours was reduced by selecting every 4 m contour and extracting the selected lines to a separate file. This reduced the number of layers to 12, which fits within my 15-layer limit. The remaining files did not need further processing within ArcMap.
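The thinning step amounts to keeping only every fourth contour level. A toy Python version of the same selection, with invented elevation values (the real selection was done by attribute query in ArcMap):

```python
# Thin 1 m contour levels down to 4 m intervals; values are invented.
contours = list(range(74, 96))              # 22 one-metre contour levels
kept = [c for c in contours if c % 4 == 0]  # keep only the 4 m multiples
print(kept)  # → [76, 80, 84, 88, 92]
```

In ArcMap the equivalent query would select contours whose elevation is divisible by 4 before exporting them to a new file.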

The next major step was to get the files ready for the paper cutter. To do this, each dataset was saved as a scalable vector graphics (SVG) file: all layers were turned off except for one dataset, then the Export Map option was used to save the map area as an SVG file. The SVG files were then imported into a program called Inkscape for further editing. Within Inkscape, the contours were divided up into their individual 4 m interval layers (seen below).

layers_ns

Some of the smaller contour lines were deleted, as the cutter would not be able to cut the shapes out. The other features were each given a layer of their own as well. Each individual layer was then exported and saved as an 11”x11” page in JPEG format. The program used to drive the paper cutter did not handle files exported directly from ArcMap well, which is why Inkscape was used. It is also easier to edit and select the lines and change their thickness within Inkscape.


Printing and Assembling the Model

To cut out each layer, the JPEG layers were imported into the paper cutter program. Each layer was placed on the canvas, then the corresponding colour of paper was placed on the cutting mat and loaded into the machine. Once loaded, the paper cutter proceeded to cut the paper. An example of a cut layer from the machine can be seen below.

 photo 21275b2f-1f16-4cae-8a18-7f725417c1b5_zpsdoaqnf9s.jpg

The contours were cut first, followed by the river, then the roads and railway, and last the Evergreen Brick Works buildings. The contour layers were stuck together using foam spacers with tape on each side. These spacers create the illusion of height in the model. The remaining paper features were attached using double-sided tape. The following images show the assembly process.

 photo d084f076-6a2f-4cde-a51d-73927be5435c_zpsth4idvyg.jpg

 photo c4d9c449-5152-4ef2-9c45-3d56c5f90dfb_zps4ebxn1v7.jpg

 photo db8a056f-1a67-4486-896f-87fdb407c8fe_zps5xwbnwea.jpg

 photo 5ebc99e9-6bf6-4f0b-8a88-a63f1bf7bee3_zpsrupmc76i.jpg

Once all of the paper layers were assembled, the legend, scale, north arrow, and labels were added by hand. The final product can be seen below.

 photo IMG_20161113_221812_zpszfiofto3.jpg

 

West Don Lands Development: 2011 – 2015




Author: CHRISTINA BOROWIEC
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2016



PROJECT DESCRIPTION:
The model displayed above is of the West Don Lands area of the City of Toronto, bounded by Queen St. E to the north, the rail corridor to the south, Berkeley St. to the west, and Bayview Ave. to the east. Using Ryerson University’s Digital Media Experience Lab’s three-dimensional printing technology, an interactive model was produced that provides a tangible means to explore the physical impact of urbanization and the resultant change in the city’s skyline. The model interactively demonstrates how the West Don Lands, a former brownfield, intensified from 2011 to 2015 as a result of waterfront revitalization projects and the area serving as the Athletes’ Village for the Toronto Pan Am/Parapan American Games.

Buildings constructed during or prior to 2011 are printed in black, while those built in 2012 or later are green. In total, 11 development projects were undertaken within the study area between 2011 and 2015. Each of these development projects has been individually printed and corresponds to a single property on the base layer, identifiable by its unique building footprint. The new developments can be easily attached to and removed from the base of the model (the 2011 building and elevation layer) via magnetic bases and footprints, thereby providing an engaging way to discover how the West Don Lands of Toronto developed over a four-year period. By interacting with the model, the greater implications of the developments for the city’s built form and skyline can be realized and experienced at a tangible scale.

Areas with the lowest elevation (approximately 74 m) are solidly filled in on the landscape grid, while areas with higher elevations (80 m to 84 m) have stacked grids and foam risers added to better exaggerate and communicate the natural landscape. These additions can be viewed in the video below.

Street names and a north arrow are included on the model, as well as both an absolute scale and a traditional scale bar. The absolute scale of the model is 1:5,000.




PROJECT EXECUTION:
To complete the project, a mixture of geographic information system (GIS) and modeling software were used. First, the 3D Massing shapefile was downloaded from the City of Toronto’s OpenData website, and the digital elevation model (DEM) for Toronto was retrieved from Natural Resources Canada. Using ArcMap, the 3D Massing shapefile, which includes information such as the name, location, height, elevation, and age of buildings in the city, was clipped to the study area. Next, buildings constructed prior to or during 2011 were selected and exported as a new layer file. The same was done for new developments, or the buildings constructed from 2012 to 2015, with both layers using a NAD83 UTM Zone 17N projection. Once these new layers were successfully created, they were imported into ArcScene.
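The selection step boils down to splitting records on the construction-year attribute. A toy Python illustration of the same split, with invented records and field names (the real work was done with attribute selections in ArcMap):

```python
# Split building records into the 2011 base layer and the new developments;
# the records and field names below are invented for illustration.
buildings = [
    {"name": "A", "year": 2009, "elez": 24.0},
    {"name": "B", "year": 2013, "elez": 41.5},
    {"name": "C", "year": 2015, "elez": 18.0},
]

base_2011 = [b for b in buildings if b["year"] <= 2011]  # printed in black
new_devs = [b for b in buildings if b["year"] >= 2012]   # printed in green
```

Each subset would then be exported as its own layer file, exactly as the two ArcMap selections were.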

In ArcScene, the digital elevation model for Toronto was opened and projected in NAD83. The raster layer was clipped to the extent of the 2011 building layer and checked to ensure it had the same spatial reference as the building layer. Next, the DEM layer properties were adjusted so base heights were obtained from the surface, and a vertical exaggeration was calculated from the extent of the DEM in the scene properties. Once complete, the “EleZ” variable provided in the building layers’ shapefiles was used to calculate and display building heights. The new developments 3D file was then exported, and the 2011 buildings and DEM files were merged. Since the “EleZ” (building height) variable was used rather than “Z” (ground elevation) or “Elevation” (building height from mean sea level), the two layers merged successfully without buildings extending below the DEM layer. The merged file was then exported as a 3D file. Although many technical issues were encountered at this point in the project (i.e. the files failed to merge, ArcScene repeatedly crashed unexpectedly, exported file quality was low), the challenges were overcome by viewing online tutorials from users who had encountered similar issues.

Once the two 3D files were successfully exported (the new developments building file and the 2011 building file merged with the DEM), they were converted to .STL file types and opened in AutoDesk Inventor. Here, the files were edited, cleaned, smoothed, and processed to ensure the model was complete and would be accepted in Cura (3D printing software).



At Ryerson University’s Digital Media Experience Lab, the models were printed using the TAZ three-dimensional printer (pictured below). Black filament was used for the 2011 buildings and DEM layer, and green was used for the new developments. These colours were selected from what was currently available at the lab because they provided the greatest level of contrast. In total, printing took approximately 7 hours to complete, with the base layer taking about 5.5 hours and the new developments requiring 1.5 hours. The video above reveals the printing process. No issues were encountered in the utilization of the 3D printer, as staff were on hand to answer any questions and provide assistance. Regarding printing settings, the temperature of the bed was set at 60°C, and the print temperature was set to 210°C. A 0.4 mm nozzle was used with a 20% fill density. The filament diameter was 1.75 mm, and a brim was added for support to the platform during printing. Although the brim is typically removed at the completion of a print, the brim was intentionally kept on the model for aesthetic purposes and to serve as a border to the study area.


TAZ 3D Printer


Once printing was completed, the model was attached to a raised base and street names, a north arrow, legend, absolute scale and scale bar, and title were added. Magnets were then cut to fit the new development building pieces, and attached both to the base layer of the model and the new developments. As a final step in the process, the model’s durability and stability were tested by encouraging family and friends to interact with the model prior to its display at the Environics User Conference in Toronto, Ontario in November 2016.


West Don Lands Development: 2011 - 2015 Project



RECOMMENDED ENHANCEMENTS:
To improve the project, three enhancements are recommended. First, stronger magnets could be utilized both on the new development pieces and on the base layer of the model. In doing so, the model would become more durable, sturdy, and easier to lift up and examine at eye level, without the worry of buildings falling over due to the weak magnetic attraction through the thick cardboard base on which the model rests. In relation to this, stronger glue could be used to better bind the street names to the grid as well.

Additionally, the model may be improved if a solid base layer was used instead of a grid. Although the grid was intended to be experimental and remains an interesting feature which draws attention, it would likely be easier for a viewer to interpret the natural features of the area (including the hills and valleys) if the model base was solid.

The last enhancement entails using a greater variety of filaments in the model’s production to create a more visually impactful product with more distinguishable features. For instance, the base elevation layer could be printed in a different colour than the buildings constructed in 2011. Although this would complicate the printing and assembly of the model, the final product would be more eye-catching.



DATA SOURCES:
City of Toronto. (2016, May). 3D Massing. Buildings [Shapefile]. Toronto, Ontario. Accessed from <http://www1.toronto.ca/wps/portal/contentonly?vgnextoid=d431d477f9a3a410VgnVCM10000071d60f89RCRD>.

Natural Resources Canada. (1999). Canadian Digital Elevation Data (CDED). Digital Elevation Model [Shapefile]. Toronto, Ontario. Accessed from <http://maps.library.utoronto.ca/cgi-bin/datainventory.pl?idnum=20&display=full&title=Canadian+Digital+Elevation+Model+(DEM)+&edition=>.

 




CHRISTINA BOROWIEC
Geovisualization Project
Professor: Dr. Claus Rinner
SA 8905: Cartography and Geovisualization
Ryerson University
Department of Geography and Environmental Studies
Date: November 29, 2016

Map Animation of Toronto’s Watermain Breaks (2015)

Audrey Weidenfelder
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2016
mymap

For my geo-viz project, I wanted to create a map animation.  I decided to use CARTO, a web mapping application.

CARTO

CARTO is an open-source web application built on the PostGIS and PostgreSQL open-source spatial databases.  Users can manage data, run spatial analysis, and design custom maps.  Within CARTO, there is an interface where SQL can be used to manipulate data, and a CartoCSS editor (a cartography styling language) to symbolize data.

CARTO has a tool called Torque that allows you to ‘bring your data to life’.  It’s good for mapping large data sets that have a time and/or date reference.  CARTO is well documented, and they offer guides and tutorials to assist users in their web mapping projects.  You can sign up for a free account here.  The free account is limited to 250Mb of storage after which charges apply.

The Process:  Connect to data, create new data set, add new column, symbolize

To create a map animation, simply connect to your data set either by dragging and dropping or browsing to your file.  If you don’t have data, you can search CARTO’s data library.  I had a file that I downloaded from the Toronto Open Data Catalogue.  I wanted to test CARTO’s claim that it can ‘bring large data sets to life’.  The file contained over 35,000 records of the city’s watermain breaks from 1990 to 2015.  I brought it into CARTO through the file browser, and in about 40 seconds all 35,000 point locations appeared in the map viewer.  From here, I explored the data, experimented with all the different visualization tools, and practised with CartoCSS to symbolize the data.

I decided to animate the 1,353 watermain breaks for 2015.  I had to filter the data set using a SQL statement to create a new data set containing only the 2015 breaks.  It’s easy to do using SQL.  You select from your table and column:

Select * from Breaks where Break_Year = 2015

CARTO asks if you wish to create a new data set from your selection – select ‘Yes’.  A new data set is created.  It will transfer your selected data into a new table along with the attributes associated with the selection.  You can keep the default table name or change the name of your table.  I renamed the table to ‘Watermain_Breaks_2015’.

From here, I wanted to organize the data by the seasons: spring, summer, fall and winter.  This required creating a new column, selecting the records whose dates fell within each season, and entering the season name into the column.

In data view, select ‘Add Column’ from the table designer, and give it a name and a data type.  In this case I called it ‘Season’ and selected ‘String’ as the data type for text.  The next step was to update the ‘Season’ column based on values from the ‘Break_Date’ column, which contained the dates of all breaks.  This was accomplished through the SQL query editor, as follows:

Update Watermain_Breaks_2015 set Season = 'Spring'
where Break_Date >= '2015-03-21' and Break_Date <= '2015-06-20'

The value ‘Spring’ was assigned to every record whose break date fell within that range.  This was repeated for summer, fall and winter, substituting the appropriate date range for each season.
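The same season assignment can be expressed in Python for clarity. Only the spring date range is given in the post, so the other seasonal boundaries below are my assumptions:

```python
# Season lookup mirroring the SQL updates; only the Spring range is from the
# post, the Summer/Fall boundaries are assumed.
from datetime import date

def season(d):
    if date(2015, 3, 21) <= d <= date(2015, 6, 20):   # range from the post
        return "Spring"
    if date(2015, 6, 21) <= d <= date(2015, 9, 22):   # assumed
        return "Summer"
    if date(2015, 9, 23) <= d <= date(2015, 12, 20):  # assumed
        return "Fall"
    return "Winter"
```

Each of the four SQL `Update … where Break_Date >= … and Break_Date <= …` statements corresponds to one branch of this function.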

I then switched to the Category Wizard to symbolize this map layer.  Here you select the column you wish to symbolize.  I wasn’t pleased with the CARTO default symbolization, and there were few options to choose from, so I used the CartoCSS editor to modify it:

/** category visualization */
#breaks {
  marker-fill-opacity: 0.9;
  marker-placement: point;
  marker-type: ellipse;
  marker-width: 8;
  marker-allow-overlap: true;
}

#breaks[season="Fall"] {
  marker-fill: #FF9900;
  marker-line-color: #FF9900;
}

#breaks[season="Spring"] {
  marker-fill: #229A00;
  marker-line-color: #229A00;
}

And so on …

To make the map layer interactive, I used the Infowindow designer in map view.  Here you can create pop-up windows based on a column in the table.  Options are available for a hover window or a clickable window.

Adding Layers

To add more interest to the map, I added the City of Toronto Neighbourhood boundaries so that the number of breaks per neighbourhood could be viewed.  I downloaded the shapefile from Toronto Open Data, connected the data set to my map and added it as a second layer.  I added info pop-ups, and changed the default symbolization with CartoCSS editor:

/** simple visualization */
#neighbourhoods_wgs84 {
  polygon-fill: #FF6600;
  polygon-opacity: 0;
  line-color: #000000;
  line-width: 0.5;
  line-opacity: 1;
}

Animation

CARTO only allows animation on one map layer, and it does not permit info windows.  You also cannot copy a layer.  As such, I added a new layer by connecting to the watermain breaks data table, and then used the Torque Cat Wizard to animate the layer.

Animation is based on the column that contains either a date or time.  I selected the Break_Date column, and used CartoCSS editor to set the number of frames, duration of the animation, data aggregation to cumulative so that the points remained on the map, and then symbolized the data points to match the original watermain breaks layer.  A legend was then added to this layer.

CARTO has the option to add elements such as title, text boxes and images.  I added a title and a text box containing some facts about the city’s watermain breaks and pipe distribution.

The map animation can be viewed here.  Zoom in, pan around, find your neighbourhood, move the date slider, and select from the visible layers.

Note:  CARTO does not function well in Microsoft Edge