Monitoring Water Level Changes Using High Spatial and High Temporal Resolution Satellite Imagery

Author: Menglu Wang

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

Introduction

The disappearance of the Aral Sea, once the world’s fourth largest lake, was a shocking tragedy: not just the shrinkage of lake volume from 1,093.0 km³ in 1960 to 98.1 km³ in 2010 (Gaybullaev et al., 2012), but also the rate at which it shrank. The impacts on the environment, local climate, residents’ health, and agriculture are irreversible. This human-made disaster could have been prevented to some degree if the lake had been closely monitored and people had been better educated about the importance of the ecosystem. One efficient approach to monitoring lake water level changes is the use of satellite imagery. The spread of free high spatial and high temporal resolution satellite imagery provides an excellent opportunity to study water level changes through time. In this study, PlanetScope Scene satellite imagery with spatial resolutions of 3 and 5 meters and a temporal resolution as high as a 3-day revisit was obtained from the Planet website. Iso Cluster Unsupervised Classification in ArcGIS Desktop and the animation timeline in ArcGIS Pro were used. The study area is Claireville Reservoir, and 10 dates of imagery from April to late June were used to study water level changes.

Data Acquisition

To download the satellite imagery, a statement of research interest needs to be submitted to Planet sales personnel through their website (https://www.planet.com/). After getting access, type in the study area and use a drawing tool to delineate an area of interest. All available imagery will load after setting a time range, cloud cover percentage, area coverage, and imagery source. To download an image, select it and click “ORDER ITEM”; items will be ready to download on the “Orders” tab under your account profile. When downloading an item, note that there is an option to choose between “Analytic”, “Visual”, and “Basic”. Always select “Analytic” if analysis will be performed on the data, as “Analytic” indicates that geometric and radiometric calibration have already been applied to the imagery.

Methodology

ArcGIS Desktop is used for the classification and data conversion. ArcGIS Pro is then used to create an animated time slider. The steps are listed below:

  1. After creating a file geodatabase and opening a map, drag the imagery whose file name ends with “SR” (surface reflectance) into the map.
  2. Find or search for “Mosaic To New Raster” and use it to merge multiple rasters into one covering the full study area (if needed).
  3. Create a new polygon feature class and use it with “Clip” to cut the imagery down to a much smaller dataset. This will speed up processing.
  4. Open the “Image Classification” toolbar from the Customize tab (Customize > Toolbars).
  5. On the “Image Classification” toolbar, select the desired raster layer and click “Classification”. Choose Iso Cluster Unsupervised classification. See Figure 1 for the classified result.
  6. Identify the classes that belong to the water body. Search for and use the “Reclassify” tool to set a new value (for example, 1) for the water classes, and leave the new value fields empty for all remaining classes. Check “Change missing values to NoData” and run the tool. The result is a new raster layer containing only one class: water (Figure 2 and Figure 3).
  7. Use the “Raster to Polygon” tool to convert the resulting raster layer to polygons, and clean up misclassified data with the Editor toolbar. After selecting “Start Editing” from the Editor drop-down menu, select and delete unwanted polygons (noise).
  8. Use the resulting polygons to clip the imagery so that it contains water bodies only. (A scripted sketch of steps 5 to 8 appears after the figures below.)
  9. Repeat the above process for all dates.
  10. Open ArcGIS Pro and connect to the geodatabase that was used in ArcGIS Desktop.
  11. Search for and use the “Create Mosaic Dataset” tool to combine all of the water body rasters into one dataset. Note: select “Build Raster Pyramids” and “Calculate Statistics” in Advanced Options.
  12. After creating the mosaic dataset, find “Footprint” under the created layer and right-click to open its attribute table.
  13. Add a new field, set the data type to “Text”, and type in the dates for these water body entries. Save the edited table.
  14. Right-click on the layer and go to Properties. Under the Time tab, select “Each feature has a single time field” for “Layer Time”, select the newly created field for “Time Field”, and specify the time format to match the field’s format.
  15. A new tab named “Time” will appear in the ribbon.
  16. Click on the “Time” tab and specify the “Span”. In my case, the highest temporal resolution of the dataset is 3 days, so I used 3 days as the “Span”.
  17. Click the play button in the “Playback” section of the tab and the animated map should run.
  18. If editing each frame is needed, go to the “Animation” tab, select “Import”, and choose “Time Slider Step”. A series of keyframes will be added at the bottom, ready to be edited.
  19. To export the animated map as a video, go to “Movie” in the “Export” section of the Animation tab and choose the desired output format and resolution.
Figure 1. Classified Satellite Imagery
Figure 2. Reclassify tool example.
Figure 3. Reclassified satellite imagery
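
For readers who prefer to script steps 5 to 8 rather than click through the toolbars, the same workflow can be run with arcpy. The sketch below is a minimal illustration only, not the exact processing used in this project: it assumes the Spatial Analyst extension is licensed, and the geodatabase path, layer names, and the spectral classes treated as water (3 and 4 here) are hypothetical placeholders.

```python
# Minimal arcpy sketch of steps 5-8 (classification, reclassification to a
# water-only raster, conversion to polygons, and clipping the imagery).
# Paths, names, and the water class values are illustrative assumptions.
import arcpy
from arcpy.sa import IsoClusterUnsupervisedClassification, Reclassify, RemapValue

arcpy.env.workspace = r"C:\data\claireville.gdb"  # hypothetical geodatabase
arcpy.CheckOutExtension("Spatial")

# Step 5: Iso Cluster unsupervised classification (e.g. 10 output classes)
classified = IsoClusterUnsupervisedClassification("clipped_20190412_SR", 10)

# Step 6: keep only the classes identified as water (3 and 4 here, purely for
# illustration); all other classes become NoData
water_only = Reclassify(classified, "Value",
                        RemapValue([[3, 1], [4, 1]]),
                        missing_values="NODATA")
water_only.save("water_20190412")

# Step 7: convert the water raster to polygons for manual cleanup in the editor
arcpy.RasterToPolygon_conversion("water_20190412", "water_poly_20190412",
                                 "NO_SIMPLIFY", "Value")

# Step 8: clip the original image to the cleaned water polygons
arcpy.Clip_management("clipped_20190412_SR", "#", "water_clip_20190412",
                      in_template_dataset="water_poly_20190412",
                      clipping_geometry="ClippingGeometry")
```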

Conclusion

A set of high temporal and high spatial resolution imagery can effectively capture water level changes at Claireville Reservoir. The time range covers 10 dates from April to June and, as expected, the water level changes over time. This is likely due to heavy rain and flood events, which normally occur during the summer. Please see the animated map below.

Reference

Gaybullaev, B., Chen, S., & Gaybullaev, D. (2012). Changes in water volume of the Aral Sea after 1960. Applied Water Science, 2(4), 285–291. doi: 10.1007/s13201-012-0048-z

Automobile Collisions Involving TTC Vehicles

Eric Lum
SA8905 Geovis Project, Fall 2019

Toronto is the largest metropolis in Canada, attracting people from far and wide. As such, many forms of transportation pass through the city, including cars, bicycles, public transit, regional trains and more. The Toronto Transit Commission (TTC) is one of the main services that people rely on, as millions ride it every day. All of these forms of transportation must share the roads, and from time to time collisions occur. This project aims to animate collisions between TTC surface vehicles, such as buses or streetcars, and other forms of transportation (not including pedestrians). The visualization was built on the web-mapping service Carto, where a time series map of the various TTC-related collisions was produced.

The collision data for this project was obtained from the Toronto Police open data portal. The “TTC Municipal Vehicle” dataset used here is a subset of the “Killed and Seriously Injured” dataset, as these are the specific types of collisions that were collected. The data is available for the years 2008-2018, but only the past five years, 2014-2018, were used as the sample for this project. Information provided for each collision includes the latitude, longitude, intersection, vehicle collision type, time, date, year and the neighbourhood in which it occurred.

The first step in getting the time series web map to work is to create a map and import the data into Carto. The collision data was downloaded from the Toronto Police as a .csv file, which can easily be uploaded to Carto. Other supporting data used for this map includes the City of Toronto boundary file, retrieved from the City of Toronto open data portal, and the TTC routes, retrieved from Scholars GeoPortal. In order for these shapefiles to be imported into Carto, they must either be uploaded as .ZIP files or converted to another supported format such as GeoJSON. Once all the data was ready, it was uploaded through the “Connect Dataset” option shown below.
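
If you prefer to prepare the layers with a script, the shapefile-to-GeoJSON conversion can also be done in a few lines of Python with geopandas. This is just a hedged sketch with hypothetical file names, not part of the original Carto workflow.

```python
# Convert supporting shapefiles to GeoJSON so they can be uploaded to Carto.
# File names are hypothetical placeholders.
import geopandas as gpd

for shp in ["toronto_boundary.shp", "ttc_routes.shp"]:
    layer = gpd.read_file(shp)          # read the shapefile
    layer = layer.to_crs(epsg=4326)     # Carto expects WGS84 latitude/longitude
    layer.to_file(shp.replace(".shp", ".geojson"), driver="GeoJSON")
```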

The next step was to geocode the collision locations with the latitude and longitude provided in the collisions .csv file. This was done through Carto’s geocode feature shown below. To do this, the layer with the data was selected and the geocode option was chosen under the “Analysis” tab. The fields for latitude and longitude were then input.

Once geocoded, the “Aggregation” method for the data needed to be chosen. As this is a visualization project over a span of years, the time series option was chosen. The “Style” also needed to be set, referring to how the points would be displayed. The dataset contained information on the different vehicle types that were involved in the collisions, so the “point colour” was made different for each vehicle. These functions are both shown below.

The same “Style” method for visualization was also applied to the TTC Routes layer, as each type of transportation should be shown with a unique colour. The last part of animating the data is choosing the field on which the time series is to be based. The date field in the collisions .csv file was used in the “Widgets” function on Carto. This allows all the data to be shown on a histogram for the entire time span.

To finalize the map, a legend and basemap were selected. Once happy with my map, I made it public by enabling the “Publish” option at the bottom of the interface. This generated a shareable link for anyone to view.

A snapshot of the final time series map is shown below.

Thank you for viewing my blog post!

To access the full web map on Carto, the link is provided here:

https://ericlum24.carto.com/builder/00c16070-d0b8-4efd-97db-42ad584b9e14/embed

Creating a Noise Model and a Facade Noise Map of the King Street Area Pilot using SoundPlan

Geovisualization Project Assignment, SA8905, Fall 2018 @RyersonGEO

By: Cody Connor

The City of Toronto is large and still growing; the influx of new people brings more vehicles and, as a result, more vehicular traffic. Noise across the city is highly correlated with vehicular traffic and will inevitably increase as more vehicles drive on our roads. To monitor this change in noise, vehicular traffic counts are collected by Toronto Public Health along with the University of Toronto and Ryerson University. The traffic counts can be assigned to specific road segments, which can then be used to create a model of the noise in the city.

SoundPlan is a noise modelling software package used to take the traffic data and estimate noise levels across the city. The program can create different types of maps, for example a grid-level map or a façade map. A façade map was used for this project to help distinguish noise levels on the faces of buildings.

The first step in creating a façade noise model is to insert a shapefile that includes all building assets for the selected study area (in this case, the King Street area). This includes information about the shape and size of the buildings, and even data as specific as the number of floors in each building and the number of residents who live there. This allows the program to assess the noise levels that will affect the residents of each building over time. In SoundPlan the user can visualize the building assets in a three-dimensional environment, which helps to identify errors in building size and height.

The user then has to connect the imported shapefile attributes to the assignment table within SoundPlan. This is critical because if some building properties are not imported properly, the model will have errors and likely won't run. The image above shows the shapefile attributes connected to the SoundPlan properties.

After importing the building attributes, we can move on to the road network shapefile. This file has the physical characteristics of the road network in Toronto as well as the traffic information that is used to calculate the noise in the city. The physical characteristics of the roads range from the shape and size of the road to more specific attributes such as the type of pavement and the incline or decline of the road. These become important in the noise model because noise varies with them: an engine needs to work harder on a significant incline, and a different pavement type such as brick can increase the noise as well.

The traffic data also includes what types of vehicles are traveling on the road at the time of measurement, such as cars, larger trucks, large transport vehicles, buses and even bicycles. Larger vehicles cause more noise, so it is important to make the distinction. Finally, the speeds of the vehicles are taken into account, as speed is a primary driver of noise levels. It is well known that engine noise increases as speed increases. Interestingly, as vehicles approach 60 km/h, the noise of the tires on the pavement becomes louder than the engine noise, which makes this type of information vital when working with highways. Because this project only involved city streets, where speeds are for the most part under 60 km/h, it was not necessary to import these properties in detail.

Once both of these datasets are imported into the program, there are still a few steps before a noise model can be run. First, a digital ground model (DGM) must be calculated and associated with both datasets. A digital ground model is essentially the plane to which the variables are attached so as to ground them to a common reference. The SoundPlan software allows the user to either import a DGM or calculate one based on the files imported.

Once the buildings and roads are set to the ground model, there is only one step remaining before a noise model can be run.

The last step before running a model in SoundPlan is to create a calculation area. This area defines the boundaries within which the calculation will take place. The image below shows the buildings, roads and the area defined for the calculation.

The roads can be seen highlighted in red, while the buildings are shaded green with a blue outline. The calculation area that defines where the program will focus the model is the green box located in the middle of the King Street area. This area is smaller than the total study area because of the time it takes to calculate a façade map: for just the area contained within the green box, the model ran for over 16 hours, and running the entire study area could take a week.

After importing all the necessary files to run the model, which can be as simple as the ones used here or include much more detailed data, the program needs to know what type of calculation should be run. This is where the user indicates a façade or grid-level map.

The way the calculation is run for the façade map can be complicated. First, each building is assigned noise calculation points, which are spread across the façades at a spacing set by the user; in this case the points were every 2 meters. The number of points has a direct influence on the scale of the calculation being run. Because this is a three-dimensional environment, the points are placed not only across each façade but also at every floor.

The final map, shown above, displays the distribution of noise as modelled on the façades of the buildings in the study area. The building faces closest to the road are the noisiest, and the noise decreases as the faces get further from the road. Overall, the downtown area is very noisy, and this map demonstrates that clearly.

Visualizing Freshwater Resources: A Laser Cut Model of Lake Erie with Water Volume Representations

Author: Anna Brooker

Geovisualization  Project Assignment @RyersonGeo SA8905, Fall 2018

Freshwater is a limited resource that is essential to the sustenance of all life forms. Only 3% of the water on Earth is freshwater, and only 0.03% is accessible on the surface in the form of lakes, streams, and rivers. The Great Lakes, located in Southern Ontario and along the US border, contain one fifth of the world's surface freshwater. I wanted to visualize this scarcity of freshwater by modelling Lake Erie, the smallest of the Great Lakes. Lake Erie is the 6th largest freshwater lake in the world, but it has the smallest water volume of the Great Lakes. I decided to create a laser cut model of the lake and use water spheres to represent its proportion of the world's surface freshwater resources. I used an infographic from Canadian Geographic for reference.

Process:

  • Retrieve bathymetric imagery and import it into ArcScene
  • Generate contour lines for every 20 m of depth and export each into an individual CAD file
  • Prepare the CAD files in an Adobe Illustrator layout file to optimize them for laser printing
  • Paint and assemble the laser cut layers
  • Create spheres out of clay to scale with the model

The following images show the import of the bathymetric imagery and the contour generation:

The bathymetry data used was collected in 1999 by the National Oceanic and Atmospheric Administration and comes in a raster format; it was retrieved from Scholars GeoPortal. I used a shapefile of the Lake Erie shoreline from Michigan's GIS Open Data as a mask to clip the raster to the extent of the lake surface. I then created 20 m contours from the raster surface and exported each of the three contour vectors into an individual shapefile. These were added to the scene and exported again as CAD files so they could be manipulated in Adobe Illustrator and prepared on a template for laser cutting.
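
The clip-and-contour step can also be scripted instead of run interactively. Below is a minimal arcpy sketch under the assumption that the Spatial Analyst extension is available; the file names are hypothetical placeholders rather than the files used here.

```python
# Clip the bathymetry raster to the lake shoreline and generate 20 m contours.
# File names are illustrative assumptions.
import arcpy
from arcpy.sa import ExtractByMask, Contour

arcpy.CheckOutExtension("Spatial")

# Clip the NOAA bathymetry raster to the Lake Erie shoreline mask
lake_only = ExtractByMask("erie_bathymetry.tif", "erie_shoreline.shp")

# Generate contour lines every 20 m of depth
Contour(lake_only, "erie_contours_20m.shp", 20)
```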

The screenshots above show the template used for laser cutting. The template was downloaded from the Hot Pop Factory homepage; Hot Pop Factory is the service I used for laser cutting the plywood layers. I used their templates and arranged my vector files to reflect the size I wanted the model to be, 18″ x 7″. I added rectangles around each contour to ensure a final product of a rectangular stacked model. I then sent this to the Factory for cutting. The photos below show what I received from Hot Pop.

Lake Erie is incredibly shallow, with a maximum depth of 64 m. In order to show the contours of the lake I needed to exaggerate the depth. Limited by the thickness of the materials available to me, the final model had an exaggerated depth of approximately 130% at its deepest point. This exaggeration allowed me to create three layers of depth for Lake Erie and make the model more visually engaging. I also included, as part of my model, a flat cut-out of Lake Erie, which is what the model would have looked like had I not exaggerated it.

The water volume spheres were created using a material called porcelain clay. This air dry medium has a slightly translucent finish. I stained the clay with blue oil paint so that it would intuitively represent water. The size of the spheres is based on the information in the Canadian Geographic infographic linked in the introduction to this tutorial. The diameter of the spheres was made to scale with the scale bar on the models. A limitation with this model is that the scale bar only refers to the lateral size of the lake and spheres, and does not refer at all to the depth of the model.

The photos above show the final product. The photo on the right shows the scale bar that is included on both parts of the model. I painted the interior layers blue, with the top two layers in the same shade; the third layer is slightly darker, and the deepest layer is the darkest shade of blue. I chose to paint the layers this way to draw attention to the deepest part of the lake, which is a very small area. I attached the layers together using wood glue and laid the two models beside each other for display. I painted the 3D and 2D models in slightly different hues of blue: the 2D model was made to better match the hue of the water spheres, to visually coordinate them, while I wanted the spheres to be distinct from the 3D model so that they would not be interpreted as representing the water volume of an exaggerated model.

Visualizing Station Delays on the TTC

By: Alexander Shatrov

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018.

Intro:

The topic of this geovisualization project is the TTC; more specifically, the Toronto subway system and its many, many, MANY delays. As someone who frequently has to suffer through them, I decided to turn this misfortune into something productive and informative, as well as something that would give a person not from Toronto an accurate picture of what using the TTC on a daily basis is like: a time-series map showing every single delay the TTC experienced over a specified time period. The software chosen for this task was Carto, due to its reputation for being good at creating time-series maps.

Obtaining the data:

First, an Excel file of TTC subway delays was obtained from Toronto Open Data, where it is organised by month; this project specifically used the August 2018 data. Unfortunately, this data did not include XY coordinates or specific addresses, which made geocoding it difficult. Next, a shapefile of subway lines and stations was obtained from a website called “Unofficial TTC Geospatial Data”. Unfortunately, this data was incomplete: it had last been updated in 2012 and therefore did not include the 2017 expansion of the Yonge-University-Spadina line. A partial shapefile of the extension was obtained from DMTI, but it too was incomplete. To get around this, the csv file behind the stations shapefile was opened up, the new stations were added, the latitude-longitude coordinates for all of the stations were manually entered, and the csv file was then geocoded in ArcGIS using its “Display XY Data” function to make sure the points were correctly placed. Once the XY data was confirmed to be working, the delay Excel file was saved as a csv file and had the station data joined to it, giving a list of delays together with the XY coordinates for those delays. Unfortunately, not all of the delays were usable: about a quarter of them had been logged not with a specific station name but with the overall line on which the delay happened. These delays were discarded, as there was no way to know where exactly on the line they occurred. Once this was done, a timestamp column was created using the day and timeinday columns in the csv file.
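
The same join-and-timestamp preparation could also be scripted before uploading to Carto. Below is a rough pandas sketch; the file names and exact column names are assumptions for illustration, not the author's actual files.

```python
# Join station coordinates onto the delay records and build a single timestamp
# column. Column and file names are hypothetical.
import pandas as pd

delays = pd.read_csv("ttc_delays_august_2018.csv")   # delay records (day, timeinday, station, code, ...)
stations = pd.read_csv("subway_stations.csv")        # station names with latitude and longitude

# Attach XY coordinates to each delay by station name
delays = delays.merge(stations, on="station", how="inner")

# Combine the separate day and time-of-day columns into one timestamp
delays["timestamp"] = pd.to_datetime(
    delays["day"].astype(str) + " " + delays["timeinday"].astype(str)
)

delays.to_csv("ttc_delays_for_carto.csv", index=False)
```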

Finally, the CSV file was uploaded to Carto, where its locations were geocoded using Carto’s geocode tool, seen below.

It should be noted that the csv file was uploaded instead of the already geocoded shapefile because exporting the shapefile caused an issue with the timestamp: it deleted the hours and minutes, leaving only the month and day. No solution to this was found, so the csv file was used instead. The subway lines were then added as well, although the part of the recent extension that was still missing had to be drawn manually. Technically speaking, the delays were already arranged in chronological order, but creating a time series map based only on that order made it difficult to determine the day of the month or the time of day at which a delay occurred. This is where the timestamp column came in. Carto did not at first recognize the created timestamp, because it had been saved as a string, so another column was created and the string timestamp data was used to populate an actual timestamp column.

Creating the map:

Now the data was fully ready to be turned into a time-series map. Carto has greatly simplified the process of map creation since its early days: simply clicking on the layer to be mapped provides a collection of tabs such as data and analysis. To create the map, the style tab was clicked and the animation aggregation method was selected.

The color of the points was based on value, with the value set to the code column, which indicates the reason for each delay. The animation itself was driven by the timestamp column, with options such as duration (how long the animation runs for, in this case the maximum limit of 60 seconds) and trails (how long each event remains on the map, in this case set to just 2 to keep the animation fast-paced). In order to properly separate the animation into specific days, the time-series widget was added in the widget tab, located next to the layer tab.

In the widget, the timestamp column was selected as the data source, the correct time zone was set, and the day bucket was chosen. Everything else was left as default.

The buckets option selects what time unit will be used for the time series. In theory, it is supposed to range from minutes to decades, but at the time this project was completed, the smallest time unit available was, for some reason, the day. This was part of the reason the timestamp column was useful: without it, the limitations of the bucket in the time-series widget would have reduced the map to a single giant pulse of every delay that happened on a given day, once per day. With the timestamp column, the animation feature in the style tab was able to create a chronological animation of all of the delays which, when paired with the widget, shows what day a delay occurred, although the lack of an hour bucket means that figuring out what part of the day a delay occurred at requires a degree of guesswork based on where the indicator is, as seen below.

Finally, a legend was needed so that a viewer can see what each color means. Since the different colors of the points are based on the incident code, a custom legend was created in the legend tab, found in the same toolbar as style. Unfortunately, listing every code proved impossible, as the TTC has close to 200 different codes for various situations, so the legend only includes the top 10 most common types and an “other” category encompassing the rest.

And that is all it took to create an interesting and informative time-series map. As you can see, there was no coding involved. A few years ago, creating this map would likely have required some coding, but Carto has been making an effort to make its software easy to learn and easy to use. The result of the steps described here can be seen below.

https://alexandershatrov.carto.com/builder/8574ffc2-9751-49ad-bd98-e2ab5c8396bb/embed

Transportation Flow Mapping Using R

The geographic visualization of data using programming languages, and specifically R, has seen a substantial upsurge in adoption and popularity among the GIS and data analytics community in recent years. While the learning curve for scripting techniques may be steeper than for more traditional, out-of-the-box GIS applications, scripting provides other benefits, such as building customizable processes and handling complex spatial analysis operations. The latter point is imperative for projects containing extensive amounts of data, as is often the case with transportation and commuting flows, which ordinarily comprise a considerable number of records describing trip origins and destinations, mode of transport and travel times. An added perk is that R offers very creative and visually appealing finished graphics, which was one of the motivators behind the choice of technique for this project. The primary motivator, however, was the program's capacity for transportation data modelling and mapping, as the aim of the project was to map commuting flows.

Story of R

R is an open source software environment and language for statistical computing and graphics. It is highly extensible, which makes it particularly useful to researchers from varied academic and professional fields (they increasingly range from social science, biology and engineering to the finance and energy sectors and manifold other fields in between). It is also one of the most rapidly growing software programs in the world, most likely due to the expansion of data science. In the context of Geographic Information Systems (GIS), it can be described as a powerful command-line system comprised of a range of tailored packages, each offering different and additional components for handling and analyzing spatial data. The ones utilized in this project were ggplot2 and maptools, and to a lesser extent plyr. The former two are among the most common in the R geospatial community, while others encountered in research and worth exploring further are: leaflet and mapview for interactive maps; shiny for web applications; and ggmap, sp and sf for general GIS capabilities. Being open source, R has a community that is very helpful in organizing and locating necessary information. One neat option is the readily available cheat sheets for many of the packages (e.g. the ggplot2 cheat sheet), which make finding information genuinely fast.

There are some stunning examples of data visualization in R. One that made a significant media splash a few years ago was by Paul Butler, at the time a mathematics student at the University of Toronto, who plotted social media friendship connections (it drew admiration, as well as disbelief from many, that this was done with fewer than 150 lines of code in an “old dusty” statistical package such as R, according to the author). It also inspired further data visualization explorations using R. One of my favourite recent works of this kind is the compelling book London – The Information Capital by geographer James Cheshire and co-author, designer Oliver Uberti. The majority of the examples in the book were written not just in R but specifically with its ggplot2 package, in combination with graphic design applications, and they serve as innovative illustrations of data visualization approaches as well as of what the software can potentially provide. Both of the aforementioned projects inspired mine.

Transportation Mapping and Modelling

I would like to give some background on the type of analysis that was conducted. One of the common types of analysis in transportation geography, transportation planning and transportation engineering is the geographic analysis of transport systems using origin-destination data, which shows how many people travel (or could potentially travel) between places. This is built on the basic unit of analysis in most transport models, the trip (a single-purpose journey from an origin “A” to a destination “B”, not to be confused with Timothy Leary's definition). Trips are often grouped by transport mode or number of people travelling, and are represented as desire lines connecting zone centroids (desire lines are the straight, shortest possible lines between origin and destination points, and can be converted to routes). They do not necessarily need to represent movement of people and can show commodity flows and retail trade as well. TransCAD is often used as the industry standard software for this type of modelling. It is, however, quite costly and implemented mainly by transportation planning firms and agencies. R, on the other hand, is starting to see dedicated transportation planning packages and continues to make use of relevant GIS packages in the transportation field. And most importantly: it's free.

Data

The dataset used for the project was the 2009-2013 5-Year American Community Survey Commuting Flows, located via the Inter-University Consortium for Political and Social Research. It is a survey covering the entire United States and focusing on the journeys to work of people over the working age of 16. Data in the original survey was tabulated in a few categories: means of transportation to work, private vehicle occupancy, time leaving home to go to work, travel and aggregated travel time to work, etc. For the purposes of the project, all workers in commuting flows were selected (grouped together across all transportation modes). The trips were based on intra-county and inter-county commutes.

There are two main components needed when mapping transportation flows: the coordinates of the place of origin and the coordinates of the place of destination. Common practice in the transportation planning field is to use population-weighted centroids for origins and destinations, regardless of the geographic unit of analysis, which in this case was U.S. counties. Therefore a population-weighted centroid shapefile for U.S. counties was needed so that it could be merged with the original survey data. It was located on the U.S. Census Bureau website and is based on 2010 U.S. Census population numbers and their distribution within county areas. The study area for the project was the United States, excluding Canada and Mexico (even though both countries were included in the workplace-based geographies), because specific regions of those countries were not identified, which would have made calculation of population-weighted centroids unrealistic. Additionally, these records were not numerous enough to significantly change the model.

Process

In the first step, the data was loaded and reformatted in R (R can be downloaded from https://www.r-project.org/, and although analysis can be conducted in R directly, it is much easier to use RStudio, which provides a user-friendly graphical interface; RStudio can be downloaded from https://www.rstudio.com/). The RStudio interface and a snippet of the code are displayed in Figure 1 below.

Figure 1: RStudio interface and a snippet of code from the project

Following this, the two datasets, the original commuting survey and the population-weighted centroids, were joined based on county name and code. The unified file was then subset to exclude Canada and Mexico, and some column fields were renamed for easier reading of origin and destination coordinates. In the next step, ggplot2 was used to set the scales for the continuous x and y axes, followed by plotting line segments with the alpha (transparency) option. The number of trips to be plotted was experimented with, showing either all trips or only those with more than 5, 10, 15, 20, 25 and 50 trips. Showing all trips resulted in too dense a plot, since the whole of the United States was used as the study area; for a smaller, larger-scale study area, showing all trips would be acceptable. The optimal results seemed to come from filtering the trips to show more than 10 intra- and inter-county journey-to-work trips, which produced the plot displayed in Figure 2.
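
The author's plot was produced in R with ggplot2, and that code is not reproduced here. For readers without R, the sketch below shows the same idea (filtering origin-destination pairs and drawing faint line segments so that dense corridors build up brightness) in Python with pandas and matplotlib; column and file names are hypothetical.

```python
# Equivalent-in-spirit sketch of the origin-destination desire-line plot.
# Not the project's R/ggplot2 code; names are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

flows = pd.read_csv("county_od_flows.csv")   # one row per origin-destination pair

# Keep only flows with more than 10 journey-to-work trips, as in the project
flows = flows[flows["workers"] > 10]

fig, ax = plt.subplots(figsize=(17, 11), facecolor="black")
ax.set_facecolor("black")

# One faint line per pair; overlapping lines accumulate brightness
for row in flows.itertuples():
    ax.plot([row.origin_lon, row.dest_lon],
            [row.origin_lat, row.dest_lat],
            color="white", linewidth=0.1, alpha=0.05)

ax.set_axis_off()
fig.savefig("us_commuting_flows.png", dpi=300, facecolor="black")
```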

 

Figure 2: U.S. origin-destination plot in RStudio

The final map was then graphically improved in Adobe Creative Suite, resulting in the image shown in Figure 3.

Figure 3: Final mapping project after graphical improvements

Map

The final design, showing thousands of commuting trips, resembles a NASA image of the United States at night from space. It indicates some predictable commuting patterns, such as a higher concentration of journey-to-work lines in large urban centres and in areas with high population density, such as the Northeast. However, some patterns were not so obvious and required further digging into data accuracy (which passed the test) and then into the way the original survey was designed. For instance, there are lines from Honolulu, Anchorage and Puerto Rico to the mainland even though the survey was designed to represent daily commuting flows by car, truck, or van; public transport; and other means of commuting. The survey asked all workers about commuting to primary and secondary jobs during the reference week in which it was conducted and answered. These uncommon results are attributable to people who worked during the reference week at a location different from their home (or usual place of work), such as people away from home on business. The place-of-work data therefore shows some interesting geographic patterns of workers who made work trips to distant parts of the country (e.g., workers who lived in New York and worked in California).

The final mapping product was printed and framed on a 24” x 36” canvas, as shown in Figure 4. The size was chosen based on a 2-to-3 aspect ratio, which seemed best suited to the horizontal width and vertical length of the United States. Other options would be to print on acrylic or aluminum, which is less cost effective and more time consuming (most shops require around 10 days to complete it). However, the printed canvas map was my preferred choice for this project based on the aesthetic I was aiming for, with accentuated high-commuting areas and dimmed low-commuting areas. Another pleasant surprise was that the finished print reads more as a painting than as a transportation data visualization project.

Figure 4: Printed map on canvas

Visual Story of GHG Emissions in Canada

By Sharon Seilman, Ryerson University
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

Background

Topic: 

An evaluation of annual Greenhouse Gas (GHG) emissions changes in Canada and an in-depth analysis of which provinces and territories contribute most of the GHG emissions, at national and regional geographies as well as by economic sector.

  • The timeline for this analysis was from 1990-2015
  • Main data sources: Government of Canada Greenhouse Gas Emissions Inventory and Statistics Canada

Why?

Greenhouse gases are compounds in the atmosphere that absorb infrared radiation, trapping and holding heat in the atmosphere. By increasing the heat in the atmosphere, greenhouse gases are responsible for the greenhouse effect, which ultimately leads to global climate change. GHG emissions are monitored on three elements: their abundance in the atmosphere, how long they stay in the atmosphere, and their global warming potential.

Audience: 

Government organizations, Environmental NGOs, Members of the public

Technology

An informative website was created with Webflow to visually tell the story of the annual emissions changes in Canada, to help readers understand their spread and the expected trajectory. Webflow is a software-as-a-service (SaaS) application that allows designers and users to build responsive websites without significant coding requirements. While the designer is creating the page on the front end, Webflow automatically generates HTML, CSS and JavaScript on the back end. Figure 1 below shows the Webflow editing interface. All of the content used in the website was created externally, prior to integrating it into the website.

Figure 1: Webflow Editing Interface

The website: 

The website itself was designed in a user-friendly manner that enables users to follow the story quite easily. As seen in Figure 2, the information starts at a high level and gradually narrows down (national level, national trajectory, regional level and economic sector breakdown), guiding the audience towards the final findings and discussion. The maps and graphs used in the website were created from raw data with various software, as elaborated in the next section.

Figure 2: Website created with the use of Webflow

Check out Canada’s GHG emissions story HERE!

Method

Below are the steps that were undertaken to create this website. Figure 3 shows a breakdown of these steps, which are further elaborated below.

Figure 3:  Project Process

  1. Understanding the Topic:
    • Prior to beginning the process of creating a website, it is essential to evaluate and understand the topic overall to undertake the best approach to visualizing the data and content.
    • Evaluate the audience that the website would be geared towards and visualize the most suitable process to represent the chosen topic.
    • For this particular topic of understanding GHG emissions in Canada, Webflow was chosen because it allows the audience to interact with the website much like a story, providing the content in a visually appealing and user-friendly way.
  2. Data Collection:
    • For the undertaking of this analysis, the main data source used was the Greenhouse Gas Inventory from the Government of Canada (Environment and Climate Change). The inventory provided raw values that could be mapped and analyzed in various geographies and sectors. Figure 4 shows an example of what the data looks like at a national scale, prior to being extracted. Similarly, data is also provided at a regional scale and by economic sector.

      Figure 4: Raw GHG Values Table from the Inventory
    • The second source for this visualization was the geographic boundaries. The boundary shapefiles for Canada at both the national and regional scales were obtained from Statistics Canada. Additionally, the rivers (lines) shapefile from Statistics Canada was also used to include water bodies in the maps that were created.
      • When downloading the files from Statistics Canada, the ArcGIS (.shp) format was chosen.
  3. Analysis:
    • Prior to undertaking any of the analysis, the data from the inventory report needed to be extracted to Excel. For the purpose of this analysis, national, regional and economic sector data were extracted from the report to Excel sheets
      • National -from 1990 to 2015, annually,
      • Regional -by province/territory from 1990 to 2015, annually
      • Economic Sector -by sector from 1990 to 2015, annually
    • Graphs:
      • Trend - after extracting the national-level data from the inventory, a line graph was created in Excel with an added trendline. This graph shows the total emissions in Canada from 1990 to 2015 and the expected trajectory of emissions for the upcoming five years; it is evident that emissions are on an increasing trajectory. Check out the trend graph here!
      • Economic Sector - similar to the trend graph, the annual economic sector data was extracted from the inventory to Excel. From this data, a stacked bar graph was created for 1990 to 2015. This graph shows the breakdown of emissions by sector in Canada as well as the variation and fluctuation of emissions within the sectors. It helps identify which sectors contribute the most and in which years these sectors saw a significant increase or decrease. With this graph, further analysis could be undertaken to understand what changes may have occurred in certain years to create such variation. Check out the economic sector graph here!
    •  Maps:
      • National map - the national map animation was created with the use of ArcMap and an online GIF maker. After the data was extracted to Excel, it was saved as a .csv file and added to ArcMap. With ArcMap, sixteen individual maps were made to visualize the varied emissions from 1990 to 2015. The provincial and territorial shapefile was dissolved using the Dissolve tool (from ArcToolbox) to obtain a boundary file at the national scale (aligned with the regional boundaries used for the next map). The uploaded table was then joined to the boundary file (using a table join). Both the dissolved national boundary shapefile and the rivers shapefile were used in this process, together with the national emissions data initially exported from the inventory. Each map was then exported as a .jpeg image and uploaded to the GIF maker to create the animation shown in the website (a scripted sketch of the dissolve, join and GIF steps appears after this list). With this visualization, the viewer can see the variation of emissions in Canada throughout the years. Check out the national animation map here!
      • Regional map - the regional map animation was created in the same way as the national one. However, for the regional emissions, data was only available for three years (1990, 2005 and 2015). The extracted .csv file was added and table-joined to the (undissolved) provinces and territories shapefile to create three choropleth maps. The three maps were then exported as .jpeg images and uploaded to the GIF maker to create the regional animation. From this animation, the viewer can clearly see which regions in Canada have increased, decreased or remained the same in their emissions. Check out the regional animation map here!
  4. Final output/maps:
    • The graphs and maps that were discussed above were exported as images and GIFs to integrate in the website. By evaluating the varied visualizations, various conclusions and outputs were drawn in order to understand the current status of Canada as a nation, with regards to its GHG emissions. Additional research was done in order to assess the targets and policies that are currently in place about GHG emissions reductions.
  5. Design and Context:
    • Once the final output and maps were created, and the content was drafted, Webflow enables the user to easily upload external content via the upload media tool. The content was then organized with the graphs and maps that show a sequential evaluation of the content.
    • For the purpose of this website, an introductory statement introduces the content discussed and Canada’s place in the realm of Global emissions. Then the emissions are first evaluated at a national scale with the visual animation, then the national trend, regional animation and finally, the economic sector breakdown. Each of the sections have its associated content and description that provides an explanation of what is shown by the visual.
    • The Learn More and Data Source buttons in the website include direct links to Government of Canada website about Canada’s emissions and the GHG inventory itself.
    • The concluding statement provides the viewer with an overall understanding of Canada’s status in GHG emissions from 1990 to 2015.
    • All of the font formatting and organizing of the content was done within the Webflow interface with the end user in mind.
  6. Webflow:
    • This particular format was chosen for the website because of its storytelling element. Giving the viewer the option to scroll through the page and read its contents works much like a story, which suits the informative purpose of the website.
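
As noted in the national map step above, the dissolve, table join and GIF assembly could also be scripted rather than done by hand. The sketch below is an illustration only: the geodatabase path, layer names and field names are assumptions, and Pillow is shown as an offline alternative to the online GIF maker that was actually used.

```python
# Scripted sketch of the national-map steps: dissolve provinces into a national
# boundary, join the emissions table, and assemble exported .jpeg maps into a
# GIF. Paths, layer names and field names are hypothetical.
import glob
import arcpy
from PIL import Image

arcpy.env.workspace = r"C:\data\ghg.gdb"

# Dissolve the provinces/territories into a single national boundary
arcpy.Dissolve_management("provinces", "canada_boundary")

# Join the extracted emissions table to the regional boundaries by province name
arcpy.JoinField_management("provinces", "PRNAME", "ghg_regional", "PRNAME")

# Assemble the exported map images (one per year) into an animated GIF
frames = [Image.open(p) for p in sorted(glob.glob("national_*.jpg"))]
frames[0].save("national_emissions.gif", save_all=True,
               append_images=frames[1:], duration=800, loop=0)
```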

Lessons Learned: 

  • While this website provides informative content, it could be further advanced through the integration of an interactive map, with the use of additional coding. This, however, would require creating the website outside of the Webflow interface.
  • The analysis could also be advanced further with the addition of municipal emissions values and policies (which were not available in the inventory itself).

Overall, the use of Webflow for the creation of this website provides users with the flexibility to integrate various components and visualizations. The user-friendly interface enables users with minimal coding knowledge to create a website that can serve a variety of purposes.

Thank you for reading. Hope you enjoyed this post!

Visualizing Urban Land Use Growth in Greater São Paulo

By: Kevin Miudo

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

https://www.youtube.com/watch?v=Il6nINBqNYw&feature=youtu.be

Introduction

In this development blog for my map animation, I discuss the steps involved in producing the final geovisualization product, which can be viewed in the embedded YouTube link above. It is my hope that you, the reader, learn something new about GIS technologies and can apply the knowledge in this blog to your own projects. Before discussing the technical aspects of the map animation's development, I would like to provide some context behind its creation.

Cities within developing nations are experiencing urban growth at a rapid rate. Both population and sprawl are increasing at unpredictable rates, with consequences for environmental health and sustainability. In order to explore this topic, I chose to create a time series map animation visualizing the growth of urban land use in a developing city of the Global South. The city I chose is São Paulo, Brazil, which has been undergoing rapid urban growth over the last 20 years. This increase in population and urban sprawl has significant consequences for climate change, and as such it is important to understand the spatial trend of growth in developing cities that do not yet have the same level of controls and policies regarding environmental sustainability and urban planning. A map animation visualizing not only the extent of urban growth but also when and where sprawl occurs can help the general public get an idea of how developing cities grow.

Data Collection

In-depth searches of online open data catalogues for vector-based land use data yielded few results. In the absence of detailed, well-collected and precise land use data for São Paulo, I chose to analyze urban growth through remote sensing. Imagery from Landsat satellites was collected and further processed in PCI Geomatica and ArcGIS Pro for land use classification.

Data collection involved the use of open data repositories. In particular, free remotely sensed imagery from Landsat 4, 5, 7 and 8 can be publicly accessed through the United States Geological Survey Earth Explorer web page. This open data portal allows the public to collect imagery from a variety of satellite platforms at varying data levels. As this project aims to view land use change over time, Level-1 imagery was selected for Landsat 4-5 Thematic Mapper and Landsat 8 OLI/TIRS. Selected imagery had to have less than 10% cloud cover and had to be captured during the daytime so that spectral values would remain consistent across each unsupervised image classification.

Landsat 4-5 imagery at 30 m spatial resolution was used for the years between 2004 and 2010. Landsat 7 imagery at 15 m panchromatic resolution was excluded from the search criteria because the Landsat 7 scan-line corrector failed in 2003, making many of its images unsuitable for precise land use analysis. Landsat 8 imagery was collected for the years 2014 and 2017. All images were downloaded as Level-1 GeoTIFF Data Products. In total, seven images were collected, for the years 2004, 2006, 2007, 2008, 2010, 2014 and 2017.

Data Processing

Imagery at the Level-1 GeoTIFF Data Product level contains a separate .tif file for each image band produced by Landsat 4-5 and Landsat 8. In order to analyze land use, the band data must be combined into a single .tif. PCI Geomatica remote sensing software was employed for this process. Using the File -> Utility -> Translate command within the software, the user can create a new image based on one of the bands from the Landsat imagery.

For this project, I selected the first spectral band from each Landsat 4-5 Thematic Mapper image, and then sequentially added bands 2, 3, 4, 5 and 7 to complete the final .tif image for that year. Band 6 is skipped because it is the thermal band at 120 m spatial resolution and is not necessary for land use classification. This process was repeated for each Landsat 4-5 image. Similarly, for the 2014 and 2017 Landsat 8 images, bands 2-7 were combined in the same manner to produce an image for each of those years.
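
The band stacking above was done with PCI Geomatica's Translate utility. For readers without access to PCI, the same multi-band GeoTIFF can be assembled with the open-source rasterio library; the sketch below is only an alternative illustration, and the band file names are hypothetical.

```python
# Stack individual Landsat band .TIF files into one multi-band GeoTIFF.
# Band file names are illustrative placeholders.
import rasterio

band_files = [f"LT05_scene_B{b}.TIF" for b in (1, 2, 3, 4, 5, 7)]  # skip thermal band 6

# Copy the georeferencing profile from the first band and update the band count
with rasterio.open(band_files[0]) as src:
    profile = src.profile
profile.update(count=len(band_files))

# Write each band into the combined image
with rasterio.open("landsat45_stack.tif", "w", **profile) as dst:
    for i, path in enumerate(band_files, start=1):
        with rasterio.open(path) as band:
            dst.write(band.read(1), i)
```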

Each combined raster image contained far more data than required to analyze the urban extent of São Paulo, so each image was clipped. When doing your own map animation project, you may also wish to clip data to your study area, as it is very common for raw imagery to contain sections of no data or cloud that you do not wish to analyze. Using the clipping/subsetting option found under Tools in the main panel of PCI Geomatica Focus, you can clip any image to a subset of your choosing. For this project, I selected the 'lat/long' coordinate type and entered the extents for my 3000 x 3000 pixel subset. The corner coordinates for my project were 46d59'38.30″ W, 23d02'44.98″ S (upper left) and 46d07'21.44″ W, 23d52'02.18″ S (lower right).

Land Use Classification

The 7 processed images were then imported into a new project in ArcGIS Pro. During import, raster pyramids were created for each image to increase processing speed. Within ArcGIS Pro, the Spatial Analyst extension was activated. This extension allows the user to perform analytical techniques such as unsupervised land use classification using iso-clusters. The unsupervised iso-clusters tool was run on each image layer as a raster input.

The tool generates a new raster that assigns a class to all pixels with the same or similar spectral reflectance values. The number of classes is selected by the user; 20 classes were selected as the unsupervised output for each raster. It is important to note that the more classes selected, the more precise the classification results will be. After this output was generated for each image, the 20 spectral classes were narrowed down into three simple land use classes: vegetated land, urban land cover, and water. As the project primarily seeks to visualize urban growth, and not all types of land use, only three classes were necessary. Furthermore, it is often difficult to discern agricultural land use from regular vegetated land cover, or industrial land use from residential land use, and so forth; such precision is out of scope for this exercise.

The 20 classes were manually assigned to land use classes, using the true-colour .tif image created in the image processing step as a reference. In cases where the image resolution was too low to precisely determine which land use class a spectral class belonged to, Google Maps imagery was referenced. This process was repeated for each of the 7 images.

After the 20 classes were assigned, the Reclassify tool (under Raster Processing in ArcGIS Pro) was used to aggregate the similar classes together. This outputs a final, reclassified raster with a gridcode attribute that assigns each pixel value to a land use class. This step was repeated for each of the 7 images. With the Reclassify tool, you can assign each of the output spectral classes to new classes that you define; for this project, the three classes were urban land use, vegetated land, and water.
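
The classification and reclassification steps can also be run with arcpy instead of through the Spatial Analyst tool dialogs. The sketch below assumes the Spatial Analyst extension; the file names and, in particular, the mapping of the 20 spectral classes to the three land use classes are hypothetical, since in the project that mapping was assigned manually for each image.

```python
# Unsupervised iso-cluster classification followed by reclassification into
# three land use classes. Names and the class mapping are assumptions.
import arcpy
from arcpy.sa import IsoClusterUnsupervisedClassification, Reclassify, RemapValue

arcpy.CheckOutExtension("Spatial")

# 20 spectral classes per image, as in the project
clusters = IsoClusterUnsupervisedClassification("saopaulo_2004_clip.tif", 20)
clusters.save("clusters_2004")

# Collapse the 20 spectral classes into 1 = urban, 2 = vegetated, 3 = water
remap = RemapValue([[c, 1] for c in range(1, 9)] +     # hypothetical urban classes
                   [[c, 2] for c in range(9, 18)] +    # hypothetical vegetation classes
                   [[c, 3] for c in range(18, 21)])    # hypothetical water classes
landuse = Reclassify(clusters, "Value", remap)
landuse.save("landuse_2004")
```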

Cartographic Element Choices:

It was at this point within ArcGIS Pro that I decided to implement my cartographic design choices, prior to creating the final map animation.

For each layer, urban land use was given a different shade of red: the later the year, the darker and more opaque the red. Using saturation and lightness in this manner helps the viewer see where urban growth is occurring; the darker the shade of red, the more recent the growth of urban land use in the greater São Paulo region. In the final map animation, this is visualized through the progression of colour as time moves on in the video.

ArcPro Map Animation:

Creating an animation in ArcGIS Pro is very simple. First, locate the Animation tab through the 'View' tab, then select 'Add Animation'. Doing so opens a new window below your workspace that allows you to insert keyframes. The Animation tab contains plenty of options for creating your animation, such as the time between keyframes, and effects such as transitions, text, and image overlays.

For my map animation, I started with a zoomed-out view of South America to give the viewer some context for the study area, as the audience may not be very familiar with the geography of São Paulo. Then, using the pan tool, I zoomed into selected areas within the study area, creating new keyframes every so often so that the animation tool produces a fly-by effect. The end result explores the same mapping extents I viewed while navigating through my data.

While making your own map animation, be sure to play through it frequently to confirm that the fly-by camera is navigating in the direction you want. The time between each keyframe can be adjusted in the animation panel, and effects such as text overlays can be added. Each time I turned on another layer to show the growth of urban land use from year to year, I created a new keyframe and added a text overlay indicating the date of the processed image.

Once you are satisfied with your results, you can export the final animation in a variety of formats, such as .avi, .mov, .gif, and more. You can also choose the resolution, or use a preset that automatically configures the video format for a particular purpose. I chose the YouTube export preset, which produced a final MPEG-4 (.mp4) file at 720p resolution.

I hope this blog was useful for creating your very own map animation from remotely sensed and classified raster data. Good luck!

Creating a 3D Holographic Map Display for Real-World Driving and Flight Navigation

By: Dylan Oldfield

Geovis Class Project @RyersonGeo, SA8905, Fall 2018

Introduction:

The inspiration for this project came from the visual utility and futuristic look of the holographic maps in James Cameron’s 2009 movie Avatar, which features holograms in several unique scenarios: inside aerial vehicles, on conference tables, and on air traffic control desks. From this came the concept to create, visualize, and present a present-day possibility of this technology: a form of hologram that shows, geographically, where the user is while operating a vehicle. For instance, a hologram in a car could display navigation through the city, guiding the driver to their destination; imagine a real-time 3D holographic version replacing the 2D screen of Google Maps or any dashboard-mounted navigation system. The same idea applies to aerial vehicles: imagine planes landing at airports close to urban areas while fog or other weather conditions make safe landing and take-off difficult. With a 3D hologram, visualizing where to go and how to navigate the difficult weather would be significantly easier and safer. For these 2 use cases, 2 scenarios (maps) were recorded as videos and made into 3D holograms to give a proof of concept for the use of this technology in cars and planes.

Data:

The data used to make this project possible were taken from the City of Toronto Open Data Portal and consisted of the 3D massing and street .shp files. It is important to note that, in order for the video to work and be seen properly, the background within the video and in the real world had to be as dark as possible, otherwise the hologram does not appear fully. To create this effect, features were added in ArcGIS Pro to ensure that the background, base, and ceiling of the 3D scene were black: a simple polygon, given a raised base height, served as the ceiling, and the ‘walls’ of the scene were a line surrounding the scene, extruded up to the ceiling. The base of the scene was an imported night-time basemap.

Methodology:

  1. Map / Scene Creation Within ArcGIS Pro

Within ArcGIS Pro, the ability to visualize 3D features was used to extrude the aforementioned .shp files for the scene. All features were extruded in 3D from the base height, with meters as the unit of measurement. The buildings were extruded to their real-world dimensions and given a fluorescent blue colour scheme to provide contrast in the video. The roads were extruded so as to give the impression that sidewalks existed: the roads were buffered by 6 meters, the buffer was dissolved to make it seamless, and the result was extruded from the base to create the road surface. The inverse polygon of the newly created roads was then created and extruded slightly higher than the roads. The roads were given differing shades of grey to preserve the darkness of the scene while still providing contrast with each other. This effect is seen in the picture below.
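The buffer-and-dissolve preprocessing described above could also be run as geoprocessing; a minimal arcpy sketch with placeholder names follows. The extrusion itself was applied interactively through the layer’s 3D properties rather than by script.

```python
import arcpy

arcpy.env.workspace = r"C:\project\holomap.gdb"   # placeholder geodatabase

# Buffer the street centrelines by 6 meters, then dissolve the buffers into
# one seamless road surface ready to be extruded in the scene.
arcpy.analysis.Buffer("streets", "roads_buffer", "6 Meters")
arcpy.management.Dissolve("roads_buffer", "roads_dissolved")
```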

 

  2. Animation Video Creation and Export

Following the creation of the scene, the animations, or videos, of “driving” through the city and “flying” into Billy Bishop Airport were created. ArcGIS Pro builds animations through the consecutive placement of keyframes, which allows a seamless video to run through any 3D scene. The keyframes are essentially checkpoints in the video, and the program fills the time and space between each pair of frames by travelling between them. The keyframes are the boxes at the bottom of the image below.

Additionally, as seen in the image above, ArcGIS Pro offers a number of exporting options. The video can be exported at differing qualities for YouTube, Vimeo, or Twitter, as an MP4, or as a GIF, among other options. The 2 videos created for this project were exported at 1080p and 60 frames per second in MP4 format. Due to the large size of the videos with these settings, the exporting process took over 2 hours per video.

  3. PowerPoint Video Transposition and Formatting

The hologram works by reflecting the videos off each of the lenses toward the centre, creating the floating effect of an image. For this effect to work, the video exported from ArcGIS Pro was inserted into PowerPoint and copied three more times into the four-way layout seen in the image below. Once the placements were equal and exact, the background, as mentioned previously, was turned black. The videos were set to play at the same time, and the slide was then exported a second time as an MP4 to create the final product.
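The four-way layout was built in PowerPoint; as a scripted alternative, the same tiling could be done with the moviepy library. The sketch below (moviepy 1.x API assumed, placeholder file names) arranges four rotated copies of the exported video around a black square canvas, one copy facing each lens.

```python
from moviepy.editor import VideoFileClip, CompositeVideoClip

clip = VideoFileClip("driving_navigation.mp4")   # placeholder export from ArcGIS Pro
w, h = clip.size
side = w + 2 * h                                 # square canvas with room for all four copies

copies = [
    clip.set_position(("center", side - h)),             # bottom copy, unrotated
    clip.rotate(180).set_position(("center", 0)),         # top copy, upside down
    clip.rotate(90).set_position((0, "center")),          # left copy
    clip.rotate(270).set_position((side - h, "center")),  # right copy
]

final = CompositeVideoClip(copies, size=(side, side), bg_color=(0, 0, 0))
final.write_videofile("hologram.mp4", fps=clip.fps)
```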

  4. Hologram Lenses Template Creation

The hologram lenses were created out of 4 clear CD cases. The templates for the lenses needed to be physically compatible with the screen displaying the video; the screen used was from a 5th-generation iPad. After the template was defined, the lenses were cut out of the 4 CD cases with a box cutter and lightly sanded along all cut edges, both so they would not cut anyone and so that the surfaces in contact with the epoxy would bond without issue. An epoxy resin was then used to glue the 4 lenses into their final shape. While the epoxy had a 10-minute setting time, it was left for 3 hours to ensure it was fully set. After this, the lens was complete and ready for use. The final lens and the iPad used for the display are seen in the image below.

Finally, here is a screen shot of the City of Toronto “Driving Navigation” video:

Using LiDAR to create a 3D Basemap

By: Jessie Smith
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

INTRO

My Geovisualization Project focused on the use of LiDAR to create a 3D basemap. LiDAR, which stands for Light Detection and Ranging, is a form of active remote sensing: pulses of light are sent from a laser towards the ground, and the time it takes for each pulse to be returned is measured, which determines the distance between the laser and the surface the light touched. By measuring all of the light returns, millions of x, y, z points are created, allowing a 3D representation of the ground, whether just the surface topography or elements such as vegetation and buildings. The LiDAR points can then be used to create DEMs or TINs, and imagery can be draped over them to create a 3D representation. The DEMs can also be used in ArcGIS Pro to create 3D buildings and vegetation, as seen in this project.

ArcGIS SOLUTIONS

ArcGIS Solutions are a series of resources made available by Esri, marketed for industry and government use. I used the Local Government Solutions, which offer a series of focused maps and applications to help local governments maximize their GIS efficiency, improve their workflows, and enhance services to the public. I looked specifically at the Local Government 3D Basemaps solution. This solution includes an ArcGIS Pro package with various files and an add-in to deploy the solution. Once the add-in is deployed, a series of tasks become available that include built-in tools and information on how to use them. A sample dataset is also included that can be used to run all of the tasks, as a way to explore the process with appropriate working data.

IMPLEMENTATION

The tasks that are provided have three different levels: basic, schematic, and realistic. Each task requires only 2 data sources: a LAS (LiDAR) dataset and building footprints. Depending on the task chosen, a different degree of detail is produced in the basemap. For my project I used a mix of the realistic and schematic tasks. Each task begins with the same steps: classifying the LiDAR by returns, creating a DTM and DSM, and assigning building heights and elevations to the building footprints attribute table. From there the tasks diverge. The schematic task extracts roof forms to determine the shape of each roof, such as a gabled type, whereas in the basic task the roofs remain flat and uniform. The DEMs are then used in conjunction with the building footprints and rooftop types to 3D-enable the buildings. The realistic task creates vegetation point data with z-values from the DEMs; a map preset is then added to assign a realistic 3D tree shape corresponding to each tree height.
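The solution’s tasks wrap standard LAS geoprocessing behind a guided interface. As a rough illustration of the shared first steps, here is a minimal arcpy sketch that builds a DTM from ground returns and a DSM from all returns; the file names, cell size, and binning choices are assumptions, and the actual tasks set these parameters for you.

```python
import arcpy

las = r"C:\project\lidar.lasd"   # placeholder LAS dataset

# DTM: filter the LAS dataset to ground points (class code 2), then rasterize.
arcpy.management.MakeLasDatasetLayer(las, "ground_lyr", class_code=[2])
arcpy.conversion.LasDatasetToRaster(
    "ground_lyr", r"C:\project\dtm.tif", "ELEVATION",
    "BINNING AVERAGE LINEAR", "FLOAT", "CELLSIZE", 1)

# DSM: use all returns and keep the highest elevation in each cell.
arcpy.conversion.LasDatasetToRaster(
    las, r"C:\project\dsm.tif", "ELEVATION",
    "BINNING MAXIMUM LINEAR", "FLOAT", "CELLSIZE", 1)
```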

Figures: the DSM and DTM created from the LiDAR, a basic scene example, and a realistic scene example.

ArcGIS ONLINE

The newly created 3D basemap, which can be viewed and used in ArcGIS Pro, can also be used in ArcGIS Online (AGOL) with the newly available Web Scene. 3D data cannot be added to ArcGIS Online directly the way 2D data can. Instead, a package for each scene was created and then published to ArcGIS Online. The next step is to open each package on AGOL and create a hosted layer. This was done for both the 3D trees and the buildings, and these hosted layers were then added to a Web Scene. In the Scene Viewer, colours and basemaps can be edited, and additional contextual layers can be added. As a final step, the scene was used to create a web mapping application with the Story Map template. The Story Map can then be viewed on ArcGIS Online, and the data can be rotated and explored.
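Publishing the scene packages can also be scripted with the ArcGIS API for Python. The sketch below is a minimal example with placeholder credentials, titles, and file names; the packages for this project were published directly from ArcGIS Pro rather than by script.

```python
from arcgis.gis import GIS

gis = GIS("https://www.arcgis.com", "username", "password")   # placeholder credentials

# Add the scene layer package (.slpk) as an item, then publish it as a hosted
# scene layer that can be added to a Web Scene.
slpk_item = gis.content.add(
    {"title": "3D Buildings", "type": "Scene Package"},
    data=r"C:\project\buildings.slpk",
)
hosted_layer = slpk_item.publish()
print(hosted_layer.url)
```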

Figures: the scene in the Scene Viewer and the finished Story Map.

You can find my story map here:
http://ryerson.maps.arcgis.com/apps/Styler/index.html?appid=a3bb0e27688b4769a6629644ea817d94

APPLICATIONS

This type of project would be very doable for many organizations, especially local governments: all that is needed is LiDAR data and building footprints. This type of 3D model is often outsourced to planners or consulting companies when one is needed; now, government GIS employees could create a 3D model themselves. The tasks can either be followed exactly with your own data, or the general workflow can be recreated. The tasks are mostly clear about the required steps and processes, but more reasoning could be provided when setting values or parameters specific to the data being used in a tool; this would make it easier to create a better model with less trial and error.