West Don Lands Development: 2011 – 2015




Author: CHRISTINA BOROWIEC
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2016



PROJECT DESCRIPTION:
The model displayed above represents the West Don Lands of the City of Toronto, bounded by Queen St. E to the north, the rail corridor to the south, Berkeley St. to the west, and Bayview Ave. to the east. Using the three-dimensional printing technology at Ryerson University’s Digital Media Experience Lab, an interactive model was produced that offers a tangible way to explore the physical impact of urbanization and the resulting change in the city’s skyline. The model demonstrates how the West Don Lands, a former brownfield, intensified from 2011 to 2015 as a result of waterfront revitalization projects and the area’s role as the Athletes’ Village for the Toronto Pan Am/Parapan Am Games.

Buildings constructed during or prior to 2011 are printed in black, while those built in 2012 or later are green. In total, 11 development projects were undertaken within the study area between 2011 and 2015. Each of these development projects has been individually printed and corresponds to a single property on the base layer, identifiable by its unique building footprint. The new developments can be easily attached to and removed from the base of the model (the 2011 building and elevation layer) via magnetic bases and footprints, providing an engaging way to discover how the West Don Lands have developed over a four-year period. By interacting with the model, the greater implications of the developments for the city’s built form and skyline can be experienced at a tangible scale.

Areas with the lowest elevation (approximately 74 m) are solidly filled in on the landscape grid, while areas with higher elevations (80 m to 84 m) have stacked grids and foam risers added to better exaggerate and communicate the natural landscape. These additions can be viewed in the video below.

Street names and a north arrow are included on the model, along with both an absolute scale and a traditional scale bar. The absolute scale of the model is 1:5,000.




PROJECT EXECUTION:
To complete the project, a combination of geographic information system (GIS) and 3D modelling software was used. First, the 3D Massing shapefile was downloaded from the City of Toronto’s Open Data website, and the digital elevation model (DEM) for Toronto was retrieved from Natural Resources Canada. Using ArcMap, the 3D Massing shapefile, which includes information such as the name, location, height, elevation, and age of buildings in the city, was clipped to the study area. Next, buildings constructed prior to or during 2011 were selected and exported as a new layer file. The same was done for the new developments, i.e. the buildings constructed from 2012 to 2015, with both layers projected to NAD83 UTM Zone 17N. Once these new layers were successfully created, they were imported into ArcScene.
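For readers who prefer to script these steps rather than work through the ArcMap interface, a rough arcpy sketch of the same clip-and-select workflow is shown below. The file names, study-area boundary layer, and the year field (YEAR_BUILT) are placeholders for illustration, not the actual data used.

# Minimal arcpy sketch of the ArcMap steps described above.
# Paths and the YEAR_BUILT field name are assumptions for illustration.
import arcpy

arcpy.env.overwriteOutput = True

massing = "TO_3DMassing.shp"                  # City of Toronto 3D Massing shapefile
study_area = "WestDonLands_boundary.shp"      # Queen St. E / rail corridor / Berkeley St. / Bayview Ave.

# Clip the massing layer to the study area
arcpy.Clip_analysis(massing, study_area, "massing_wdl.shp")

# Reproject to NAD83 UTM Zone 17N (WKID 26917)
utm17 = arcpy.SpatialReference(26917)
arcpy.Project_management("massing_wdl.shp", "massing_wdl_utm.shp", utm17)

# Split into the 2011 base layer and the 2012-2015 new developments
lyr = arcpy.MakeFeatureLayer_management("massing_wdl_utm.shp", "massing_lyr")
arcpy.SelectLayerByAttribute_management(lyr, "NEW_SELECTION", '"YEAR_BUILT" <= 2011')
arcpy.CopyFeatures_management(lyr, "buildings_2011.shp")
arcpy.SelectLayerByAttribute_management(lyr, "NEW_SELECTION", '"YEAR_BUILT" >= 2012')
arcpy.CopyFeatures_management(lyr, "new_developments.shp")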

In ArcScene, the digital elevation model for Toronto was opened and projected in NAD83. The raster layer was clipped to the extent of the 2011 building layer and given the same spatial reference as the building layer. Next, the DEM layer properties were adjusted so that base heights were obtained from the surface, and a vertical exaggeration was calculated from the extent of the DEM in the scene properties. Once complete, the “EleZ” attribute provided in the building shapefiles was used to calculate and display building heights. The new developments layer was then exported as its own 3D file, while the 2011 buildings and the DEM were merged. Since the “EleZ” (building height) attribute was used rather than “Z” (ground elevation) or “Elevation” (building height from mean sea level), the two layers merged successfully without buildings extending below the DEM. The merged file was then exported as a second 3D file. Although many technical issues were encountered at this point in the project (i.e. the files failed to merge, ArcScene repeatedly crashed unexpectedly, exported file quality was low), the challenges were overcome by consulting online tutorials from users who had encountered similar issues.
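The DEM preparation can be scripted in a similar way. The sketch below only covers clipping the raster to the 2011 building layer’s extent with illustrative file names; the base heights, extrusion on “EleZ”, and the merge were still done interactively in ArcScene as described above.

# Clip the Toronto DEM to the extent of the 2011 buildings layer (illustrative names).
import arcpy

buildings_2011 = "buildings_2011.shp"   # output from the previous step
dem = "toronto_dem.tif"                 # CDED digital elevation model

# Build the clip rectangle from the building layer's extent
ext = arcpy.Describe(buildings_2011).extent
rectangle = "{0} {1} {2} {3}".format(ext.XMin, ext.YMin, ext.XMax, ext.YMax)

# Match the clipped DEM's spatial reference to the building layer
arcpy.env.outputCoordinateSystem = arcpy.Describe(buildings_2011).spatialReference
arcpy.Clip_management(dem, rectangle, "dem_wdl.tif")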

Once the two 3D files were successfully exported (the new developments building file and the merged 2011 buildings and DEM file), they were converted to .STL format and opened in Autodesk Inventor. Here, the files were edited, cleaned, smoothed, and processed to ensure the model was complete and would be accepted by Cura (the 3D printing software).



At Ryerson University’s Digital Media Experience Lab, the models were printed using the TAZ three-dimensional printer (pictured below). Black filament was used for the 2011 buildings and DEM layer, and green was used for the new developments. These colours were selected from what was available at the lab because they provided the greatest contrast. In total, printing took approximately 7 hours to complete, with the base layer taking about 5.5 hours and the new developments requiring 1.5 hours. The video above shows the printing process. No issues were encountered in using the 3D printer, as staff were on hand to answer questions and provide assistance. Regarding print settings, the bed temperature was set to 60°C and the print temperature to 210°C. A 0.4 mm nozzle was used with a 20% fill density. The filament diameter was 1.75 mm, and a brim was added to support adhesion to the platform during printing. Although the brim is typically removed once a print is complete, it was intentionally kept on the model for aesthetic purposes and to serve as a border to the study area.


TAZ 3D Printer
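For quick reference, the print settings described above can be summarized in one place. The snippet below is just a plain summary for readers, not an exported Cura profile.

# Summary of the slicer settings reported above (not an actual Cura profile file).
print_settings = {
    "printer": "TAZ",                # printer used at the Digital Media Experience Lab
    "bed_temperature_c": 60,         # heated bed temperature
    "print_temperature_c": 210,      # nozzle/extrusion temperature
    "nozzle_diameter_mm": 0.4,
    "infill_density_pct": 20,
    "filament_diameter_mm": 1.75,
    "build_plate_adhesion": "brim",  # brim kept on the finished model as a border
}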


Once printing was completed, the model was attached to a raised base, and street names, a north arrow, a legend, the absolute scale and scale bar, and a title were added. Magnets were then cut to fit the new development pieces and attached both to the base layer of the model and to the new developments. As a final step, the model’s durability and stability were tested by encouraging family and friends to interact with it prior to its display at the Environics User Conference in Toronto, Ontario, in November 2016.


West Don Lands Development: 2011 - 2015 Project



RECOMMENDED ENHANCEMENTS:
To improve the project, three enhancements are recommended. First, stronger magnets could be used both on the new development pieces and on the base layer of the model. This would make the model more durable, sturdier, and easier to lift up and examine at eye level, without the worry of buildings falling off due to the weak magnetic attraction through the thick cardboard base on which the model rests. Relatedly, stronger glue could be used to better bind the street names to the grid.

Additionally, the model could be improved if a solid base layer were used instead of a grid. Although the grid was intended to be experimental and remains an interesting feature that draws attention, a solid base would likely make it easier for a viewer to interpret the natural features of the area (including the hills and valleys).

The last enhancement entails using a greater variety of filaments in the model’s production to create a more visually impactful product with more distinguishable features. For instance, the base elevation layer could be printed in a different colour than the buildings constructed in 2011. Although this would complicate the printing and assembly of the model, the final product would be more eye-catching.



DATA SOURCES:
City of Toronto. (2016, May). 3D Massing. Buildings [Shapefile]. Toronto, Ontario. Accessed from <http://www1.toronto.ca/wps/portal/contentonly?vgnextoid=d431d477f9a3a410VgnVCM10000071d60f89RCRD>.

Natural Resources Canada. (1999). Canadian Digital Elevation Data (CDED). Digital Elevation Model [Shapefile]. Toronto, Ontario. Accessed from <http://maps.library.utoronto.ca/cgi-bin/datainventory.pl?idnum=20&display=full&title=Canadian+Digital+Elevation+Model+(DEM)+&edition=>.

 




CHRISTINA BOROWIEC
Geovisualization Project
Professor: Dr. Claus Rinner
SA 8905: Cartography and Geovisualization
Ryerson University
Department of Geography and Environmental Studies
Date: November 29, 2016

CloudCities 3D Model of the Ryerson Campus

Justin Miron

Submission for GeoVis Project Assignment @RyersonGeo, SA8905, Fall 2016

Interactive City Models

One of the most useful visualization and planning tools in urban planning and design is the 3D model: a to-scale representation of the built form of a city, covering its existing (and as-built) conditions and its proposed (or possible) conditions. A 3D model effectively communicates information about the proportion, size, and distribution of structures and other urban elements, and when well made and presented it is intuitively grasped by the people viewing it.

A principal drawback of most 3D models is that they are physical models: they take a lot of time to create and to modify, and they can only be shared with an audience that is physically present. One way to solve this problem is to replace the physical model with a 3D digital model (using 3D modelling software such as Rhino, ArchiCAD, Blender, SolidWorks, etc.) and to share the models with other users. Yet there are drawbacks to this approach, too. For one, these models can only be shared with users who have the same (or similar) software as the kind used to create the model. For users who do not have the correct software, static or animated representations of the model are made which, while they can still convey information, do not allow the user to choose which aspects of the model they want to view or explore.

Beyond this technical problem, the models are not geographic and they are not data-driven. Though they are spatial, they are not referenced to a location on the earth and they don’t contain attributes. There is no way to know what building or open space you are looking at without asking someone who is familiar with the model. Informal exploration is just too limited. One way to solve these problems is to store and view 3D model information in CloudCities.

CloudCities and the Ryerson Campus

CloudCities is a geographically enriched 3D model viewing and storage platform. The graphical rendering is done through Three.js, a JavaScript library used to build and render 3D objects in a browser. It is one of several platforms that blend geographic information within a 3D environment (see here and here for further examples).

CloudCities allows users to upload 3D model information, such as a building, tree, vehicle, or terrain, along with its attributes. Not all 3D information can be uploaded (for instance, stylized 3D lines or other non-geographic 3D visualizations are generally not possible). In addition to uploading, CloudCities has several customization features that allow the model scene to be modified: sun/shadow settings; pre-set camera views and 3D slides; a search function; location comparison to OpenStreetMap; and dynamic attribute and 3D editing, which allows the user to modify or add to object attributes and to use basic 3D editing functions.

CloudCities is built to store and view 3D models (as opposed to general 3D visualizations), and specifically 3D models of cities (multiple buildings, blocks, terrain, etc.), so for this project I built a model of the bulk of Ryerson University’s campus in downtown Toronto.

Area used for the CloudCities model
A view of the entire model

Data

The input data for the model’s 3D buildings come from two sources: my own modelling of several buildings on the Ryerson campus, including Kerr Hall, in Rhinoceros (Rhino), a 3D modelling program; and the City of Toronto’s Open Data portal, which maintains a frequently updated 3D massing and building model dataset available in several formats.

The 3D information from the City of Toronto is of high quality, but it is released in several formats, and not all of these formats contain equivalent data. Of the data available, the 3D CAD information is the most detailed and accurate, but it is also the hardest to work with.

Ultimately, all of the 3D information that fits within the sample area was converted, building by building, into multipatch features using the ArcGIS 3D Analyst extension. These multipatches were loaded into ArcScene, exported to an ESRI 3D web scene format, and then uploaded into a CloudCities scene. While there are other ways to create a functional CloudCities scene, uploading from ArcScene is the most straightforward, though it is certainly not an option for everyone (see the Asset import tutorial), especially those who do not have ArcScene or 3D Analyst available to use!
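As an illustration, one way to script the conversion to multipatch is with the 3D Analyst “Import 3D Files” tool. The paths and file names below are hypothetical, and the export to the ESRI 3D web scene format was still done from ArcScene itself.

# Convert per-building 3D files (e.g. .3ds models) into a multipatch feature class
# with the 3D Analyst extension. Paths and names are illustrative only.
import arcpy

arcpy.CheckOutExtension("3D")

in_models = ["kerr_hall.3ds", "student_centre.3ds"]   # hypothetical building model files
out_fc = r"C:\data\ryerson.gdb\campus_multipatch"

# Import 3D Files converts each input model into multipatch geometry
arcpy.Import3DFiles_3d(in_models, out_fc)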

Rhinoceros model of Kerr Hall (above) and a multipatch of the Ryerson Student Center (below)

I manually modelled Kerr Hall in Rhino because I wanted it to be more detailed than the version stored in the City of Toronto dataset. The model was exported from Rhino to .3DS format and then converted to multipatch to be included in the web scene uploaded to CloudCities. The original building massing data from the City of Toronto dataset had to be deleted wherever another model instance – in this case, a custom model like that of Kerr Hall – took its place.
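One way to script that clean-up is a select-by-location followed by a delete. The sketch below assumes 2D footprint layers with hypothetical names rather than the actual datasets.

# Remove City of Toronto massing features where a custom model takes their place.
# Layer names are hypothetical.
import arcpy

city_massing = "city_massing_footprints.shp"   # footprints of the Open Data massing
custom_fp = "kerr_hall_footprint.shp"          # footprint of the custom Rhino model

lyr = arcpy.MakeFeatureLayer_management(city_massing, "massing_lyr")
arcpy.SelectLayerByLocation_management(lyr, "INTERSECT", custom_fp)
arcpy.DeleteFeatures_management(lyr)           # deletes only the selected features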

Zoning information is also provided by the City’s Open Data portal and this was used to code each building instance with its associated zone category (e.g. R or ‘Residential’).
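For example, this coding could be done before upload with a spatial join; the layer names below are placeholders.

# Attach the zoning category to each building before upload (illustrative names).
import arcpy

arcpy.SpatialJoin_analysis(
    target_features="campus_buildings.shp",
    join_features="toronto_zoning.shp",
    out_feature_class="campus_buildings_zoned.shp",
    join_operation="JOIN_ONE_TO_ONE",
    match_option="INTERSECT",
)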

I have customized and manually refined City blocks (which define the road surfaces) and green open space areas because these are not accurately captured within the City’s data.

Complex Data

Terrain surfaces and trees (which can be very complex objects) were not added to this model because of the eventual data size requirements; for these elements to look good rather than awkward, they must be sufficiently detailed. The terrain published by the City of Toronto, even when simplified, is a complex geometry that would weigh on the model’s performance. In addition, terrain requires that buildings sit on top of the surface, but the buildings modelled by the City do not account for an uneven grade around the base (what is known as Finished Floor Elevation). While this detail can be added to the models, the time required would have been onerous. The more detail in a building and the more the model approximates reality, the longer the model takes to create.

User Experience (UX) highlights

In the CloudCities model, buildings carry a name, a flag for whether they are Ryerson University buildings, the planning zone they fall within (e.g. commercial or residential), and the size of the building footprint in square metres. Some of this information is added within the pre-upload ArcGIS environment, but much of it is added from within CloudCities’ editing environment.

These attributes serve as the basis for dashboards and a search bar. The dashboard displays these vital statistics whenever a building object is clicked.

 

Dashboard reveals attributes when a building is clicked.

Additionally, a search bar with search constraints can be set up, allowing the user to search through the scene’s attributes and highlight the objects that are returned. For instance, every building zoned ‘Commercial Residential’ is highlighted whenever that term is entered into the search. The search functions are limited, however – CloudCities does not support advanced queries. Instead, constraints must be set on the back end so that a search does not return every object that matches even a minor part of the attribute data.

Search results when “Commercial Residential” is entered

Specific locations can be saved as bookmarks, which aid in presentation. These locations can be combined into a slideshow “tour” of the model. This is a particularly relevant feature when sending the model to others, as the locations are stored with the scene and literally move the user’s point of view around the model in order to tell a story.

Camera bookmarks can help guide a user through the model

A sun/shade rendering tool can be enabled, which allows the user to set the time of year and time of day to create a realistic view of how shadows would be cast by model elements, based on the model’s location on the earth. This is not a sun-shadow calculator, however; it is meant simply to enhance the experience of the model.

Sun and shadow controls

Limitations of CloudCities

One of the main limitations of CloudCities is that it is not customizable from a development point of view. A user is limited to pre-set dashboard, search, and styling options. In addition, the platform costs money: it is billed at a hefty US$60+ per month to create a city model at the level of detail made for this post.

The range of possible 3D visualizations is also limited. It would be nice to have a platform that offers more options for presenting thematic data beyond dashboards and search bars; there is a lot of 3D data that does not manifest itself as a 3D structure. Three.js’s gallery of 3D visualizations provides interesting examples of how 3D city modelling could be developed in the future.

Despite these limitations, CloudCities provides an easy-to-use platform for making and viewing 3D city models. CloudCities will not always be the only platform offering this functionality, but it is currently a very good example of how urban planners and designers can take advantage of geo-technology to create a more interactive and data-rich experience of their 3D information.

The final model can be viewed on CloudCities here. After mid-December 2016, the model’s geographic extents will be greatly reduced so that the model can be stored on a free account.

 

 

Animating Toronto Parking Enforcement with heatmap.js

by Justin Pierre – Geovis course project for SA8905, Fall 2015 (Dr. Rinner)

Heatmap.js is a project developed by Patrick Wied for creating heatmaps online using JSON data and JavaScript. It’s lightweight, free to use, and comes with tons of great customization options.

For my geovisualization project for SA8905 I created an animated heat map of parking tickets issued in Toronto during the 24 hour period of May 1st 2014. Parking ticket data is supplied on the Toronto Open Data Portal.

Thursday May 1st, 2014 was one of the busiest days of the year for parking tickets. There were 9,559 issued in 24 hours. 6am was the safest time with only 25 tickets issued and 9am was the busiest with 1,451.

To create the heatmap, I geocoded the Toronto parking ticket data using the City of Toronto street data with address ranges. About 10% of the records had to be manually geocoded to intersections, which was a time-consuming process! Once I had the locations, it was simple to create a JSON object for each hour in Excel, like this:

var h = [{
  max: 100000,
  data: [
    {lat: 43.667229, lng: -79.382666, count: 1},
    {lat: 43.728744, lng: -79.30461, count: 1},
    {lat: 43.778933, lng: -79.418283, count: 1},
    {lat: 43.647378, lng: -79.418484, count: 1},

etc…

h is an array where each element is a JSON object containing the lat/lng of each parking ticket issued during that hour. The count is required by the heatmapping function and is always 1, unless you’re this driver:
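If you’d rather skip the Excel step, a short Python script can do the same hourly grouping from the geocoded ticket file. The column names here are assumptions about the file layout, not the actual ones in the Open Data release.

# Alternative to the Excel step: group geocoded tickets into 24 hourly chunks
# and write them out as the JavaScript array `h`. Column names are assumptions.
import csv
import json

hours = [{"max": 100000, "data": []} for _ in range(24)]

with open("tickets_geocoded.csv", newline="") as f:
    for row in csv.DictReader(f):
        hour = int(row["infraction_time"].zfill(4)[:2])   # e.g. "0930" -> 9
        hours[hour]["data"].append({
            "lat": float(row["lat"]),
            "lng": float(row["lng"]),
            "count": 1,
        })

with open("ticket_hours.js", "w") as out:
    out.write("var h = " + json.dumps(hours) + ";")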

Using heatmap.js is super straightforward. Initialize your web map in Leaflet or OpenLayers (I used Leaflet) and configure some simple parameters:

var cfg = {
  radius: .008,            // interpolation radius
  maxOpacity: .8,          // set to .8 so the base map shows through
  scaleRadius: true,       // recalibrate radius for zoom level
  useLocalExtrema: true,   // reset the data maximum based on the current view
  latField: 'lat',         // where latitude is referenced
  lngField: 'lng',         // where longitude is referenced
  valueField: 'count'      // where the numerical value is referenced
};

Attach that to your heatmap object and point it at your datasource like so:

heatmapLayer = new HeatmapOverlay(cfg);  // create the heatmap layer with the config above
map.addLayer(heatmapLayer);              // add it to the Leaflet map
i = 0;                                   // hour index (0 = midnight to 1am)
heatmapLayer.setData(h[i]);              // load the first hour of ticket data

Remember that h[] is the array where the ticket data is stored and so h[0] is the first hour of data, midnight to 1am. This will create a static heatmap like this:

Screenshot

Now comes the part where we cycle through the hours of data with a setInterval() function:

setInterval(function(){
  i += 1;
  if (i > 23) i = 0;                        // wrap around at the end of the day
  $( ".heatmap-canvas" ).fadeOut( "slow", function() {
    heatmapLayer.setData(h[i]);             // swap in the next hour of data
    heatmapLayer._draw();                   // force the layer to redraw
    $( "#hour" ).html(i);                   // update the hour label on the page
  });
  $( ".heatmap-canvas" ).fadeIn( "slow" );
}, 2000);

Every 2,000 milliseconds (2 seconds) the page fades out the heatmap layer, switches the data to the next hour, and fades it back in. If the cycle reaches the end of the day, it resets. The $( "#hour" ).html(i) bit updates the hour printed on the webpage itself.

You can check out the finished project at http://justinpierre.ca/tools/heatmap/ and be sure to let me know what you think at https://twitter.com/jpierre001.

T.Orientation: Colouring the Grids of Toronto

By Boris Gusev, Geovis Course Assignment, SA8905, Fall 2015 (Rinner)

 

The way in which we settle the land around us can paint a rich picture of how our cities have developed over the years. By the turn of the 19th century, urban planners generally agreed that grid-like patterns were the optimal solution and held the most promise for the future of transit. Physical planning led to the development of automotive cities like Los Angeles, Chicago, and Detroit. Toronto’s history of growth can also be traced through its sprawling grid of roads.

In this visualization, a Mapzen extract of the OpenStreetMap road network was used to represent the compass-heading-based orientation of Toronto’s roads. Streets that are orthogonal, meaning that they intersect at a right angle, are assigned the same colours. Streets at a 90-degree angle are coloured with the darkest shades of orange or blue, with the intensity decreasing as the intersection angle becomes more obtuse.
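The core idea behind the colouring can be sketched in a few lines of Python: compute each street segment’s compass bearing, fold it into a 0–90° range so that perpendicular streets share a value, and map that value to a colour intensity. This is only a conceptual sketch, not the script credited below.

# Concept sketch: fold a street segment's bearing into 0-90 degrees so that
# perpendicular streets get the same value, then scale it to a colour intensity.
import math

def bearing(x1, y1, x2, y2):
    """Compass bearing of a segment in degrees (0 = north, clockwise)."""
    return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 360

def orientation_class(x1, y1, x2, y2):
    """Bearing folded into 0-90 degrees so orthogonal streets share a class."""
    return bearing(x1, y1, x2, y2) % 90

def colour_intensity(x1, y1, x2, y2):
    """Close to 1.0 for grid-aligned segments, fading towards 0 at 45 degrees."""
    angle = orientation_class(x1, y1, x2, y2)
    return abs(angle - 45.0) / 45.0

# Example: an east-west and a north-south segment map to the same intensity (1.0)
print(colour_intensity(0, 0, 10, 0), colour_intensity(0, 0, 0, 10))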

Follow the link to take a look at: Toronto Streets by Orientation

Vis_overview

More exciting details and a DIY guide under the cut. Kudos to Stephen Von Worley at Data Pointed for the inspiration and Mathieu Rajerison at Data & GIS Tips for the script and a great how-to.

Continue reading T.Orientation: Colouring the Grids of Toronto