Building digitization using Artificial Intelligence – an Open Source approach

By Nikita Markevich

Geovisualization Project Assignment, SA8905, Fall 2021

INTRO

With the development of automation and machine learning, a new approach to raw data acquisition has opened up for people to try. QGIS is a popular open-source GIS software package that allows the creation of custom plugins for all sorts of geoprocessing. One such plugin is Mapflow, developed by the Russian-based company GEOAlert. Mapflow is an easy-to-use plugin for retrieving ground data such as buildings, roads, construction zones, and forest canopies from satellite imagery. This blog will introduce how to use Mapflow through a browser environment. To learn how to use the plugin itself, please refer to the Esri Story Maps tutorial at this link: https://storymaps.arcgis.com/stories/dfd88d7170c74f33a4dd5f7583cdc414

The difference between using Mapflow in the browser and through the plugin is that the browser only allows detection from web-based satellite services such as Mapbox, or from custom imagery supplied through a URL, while in the plugin, custom satellite imagery can be processed straight from the user’s device. The major advantage of the browser approach is that the processing runs on remote servers, which is faster than processing through the plugin.

Mapflow website project page

USER INTERFACE

The Mapflow online service uses a free-to-try system, giving 500 free credits when an account is opened. Each process requires credits based on the size of the area that the user wishes to process. If the user runs out of credits, it is possible to top up the balance in the top right corner at a price of 100 CAD per 1,000 credits.

Let’s explore the project page. The project is organized in steps where the user chooses the data source, the AI model to run, and any post-processing operations for additional data gathering. The AI models available in the browser mirror those available in the QGIS plugin. They can digitize buildings, high-density housing, forests, roads, construction sites, and agricultural fields.

User Interface of the Data source tab in Mapflow. Mapbox API is used to display geographic data.

In the Data source tab, a user can either use the embedded draw tool to choose the area for processing or upload polygon data in GeoJSON format. The draw-rectangle tool is very intuitive, and as soon as an area is drawn, the website reports its size in square kilometers. This number determines how many credits are required to process the area: the larger the area, the more credits it costs.

DATA

The area of interest for this example is the same area used in the plugin tutorial in Esri Story Maps: the city of Ciego de Avila in Cuba. The rectangle drawn over the city and its closest suburbs covers an estimated 45.31 square kilometers. The area originally came to my attention while I was doing a research project for the company I work for, exploring the possibility of constructing fiber service in the Caribbean region. While searching for building and road data in open sources such as OpenStreetMap, I realized that some Caribbean countries, and Cuba in particular, are missing the geographic data required to create a fiber map model. After exploring several options, the Mapflow plugin proved the most useful for generating geodata from freely available commercial satellite imagery.

Selected Area of the City of Ciego De Avila chosen through draw rectangle tool in the data source page of Mapflow

PROCESS

The chosen area is now loaded into our project. The next steps are to choose the model and the post-processing options. We will choose the buildings model to test the speed of the browser process and compare it with the plugin process. A big perk of the browser tool is its post-processing options. One such option is automatic polygon simplification, which simplifies the results of the model. In the plugin version, the model output included some building polygons with broken or fuzzy shapes, which created additional manual post-processing work. The browser tool offers this option for free.

Project window of Mapflow right before the beginning of the process.

The area of interest costs 227 credits to process, which works out to roughly 500 credits per 100 square kilometers.
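As a quick sanity check, the cost scales linearly with the area; a tiny Python sketch of the arithmetic, assuming the ~500 credits per 100 square kilometers rate noted above:

# Rough cost check for the drawn area
area_km2 = 45.31
credits = area_km2 * 500 / 100
print(credits)   # ~226.6, which Mapflow bills as 227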

As soon as the Run processing button is pressed, the final step is to wait for the process to finish and download the processed data. The process finished in 32 minutes, which is 15 minutes faster than the plugin process, which took 47 minutes.

After the process is finished, the user can view the results in the browser and download the file in GeoJSON format.

Data results in the browser window

The process assigns an ID number to each shape as well as a shape type, such as rectangle, grid snap, or L-shape. This information can help with further post-processing and with fixing any automation mistakes.

LIMITATIONS AND FUTURE WORK

The most important limitation of this tool is its cost; however, if the user needs to process an area larger than 100 square kilometers, one can create multiple accounts and use the free credits each time. Secondly, the processed results sometimes output questionable shapes: some polygons merge multiple buildings into one, others detect buildings only partially, and in other cases the orientation of the polygons is off. This can be fixed with manual post-processing by GIS professionals.

In the future, this tool could potentially be used to populate the OpenStreetMap dataset with building polygons and road data. Open-source data is very important for many GIS users, and AI automation is the perfect companion that makes the work of GIS enthusiasts much easier by streamlining the most tedious processes in geographic analysis.

Ontario Demographics Data Visualization

Introduction

The purpose of this project is to visualize any kind of data on a webmap. Using open-source software such as QGIS solves one aspect of this problem. The other part of the problem is to answer this question:

How, and what, data can be visualized? Data can be stored in a variety of formats and organized differently. The most important aspect of spatial data is the spatial information itself, so we need to figure out a way to display the data, using textual descriptions, symbols, colours, etc., at the right location.

Methodology

In this visualization, I am using the census subdivisions (downloaded from the Statistics Canada website) as the basic geographical unit, plus the 2016 census profile for the census subdivisions (also downloaded from the Statistics Canada website). Once these data were downloaded, the next step was to inspect them and organize them so that they could easily be joined to the shapefiles. To facilitate this task, any relational database management system can be used; my preference was SQL Server 2017 Express Edition. Once the 2016 census profile has been imported into SQL Server, the “SQL Queries” [1] file can be run to organize the data into a relational table. The result can be exported, or copied directly from the result set in Management Studio and pasted into an Excel/CSV file. The sheet/file can then be opened in QGIS and joined to the shapefile of Ontario census subdivisions [2] using CSDUID as the common field between the two files.
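As an alternative to the QGIS join dialog, the same attribute join can be scripted with GeoPandas; a minimal sketch, assuming the profile has been exported to a CSV with a CSDUID column (file names are placeholders):

import geopandas as gpd
import pandas as pd

# Census subdivision boundaries and the prepared 2016 profile table
csd = gpd.read_file("ontario_census_subdivisions.shp")
profile = pd.read_csv("census_profile_2016.csv", dtype={"CSDUID": str})

# Join on the common CSDUID field and save the result for mapping
csd["CSDUID"] = csd["CSDUID"].astype(str)
joined = csd.merge(profile, on="CSDUID", how="left")
joined.to_file("ontario_csd_demographics.shp")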

Using the qgis2web plugin, all data and instructions are chosen manually on a number of tabs. You can choose the layers and groups you want to upload, and then customize the appearance and interactivity of the webmap based on the available options. On QGIS version 3.8 there is the option to use either Leaflet or OpenLayers styles. You can Update preview to see what the outcome will look like, and then Export the map; the plugin will convert all the data and instructions into JSON format. The most important file – index.html – is created in the directory you have specified.

index.html [1] is the file that can be used to visualize the map in the web browser; however, you first need to download all the files and folders from the source page [1]. This will put all the files on your (client) machine, which makes it possible to open index.html on localhost. If the map files are uploaded to a web server, then the map can be viewed on the world wide web.
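To open index.html on localhost without a full web server, Python’s built-in HTTP server is enough; a minimal sketch (run it from inside the folder that contains index.html, then browse to http://localhost:8000/index.html):

import http.server
import socketserver

PORT = 8000

# Serve the current directory (the qgis2web export folder) over HTTP
handler = http.server.SimpleHTTPRequestHandler
with socketserver.TCPServer(("", PORT), handler) as httpd:
    print("Serving the webmap at http://localhost:%d/index.html" % PORT)
    httpd.serve_forever()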

Webmap

The data being visualized are population demographics (different age groups). The map of Ontario’s census subdivisions is visualized as a transparent choropleth map of 2016 population density. Other pieces of demographic information are embedded within the pop-up for each census subdivision. If you hover your cursor over a census subdivision, it is highlighted with a transparent yellow colour so you can see the underlying basemap information more clearly. If you click on it, the pop-up appears on the screen, and you can scroll through it.

There are other interactive utilities on the map such as controllers for zooming in and out, a (ruler) widget to make measurements, a (magnifying glass) widget to search the entire globe, a (binocular) widget to search only the layers uploaded on the map, and a (layers) widget to turn layers and basemaps on and off.

Limitations

There are some limitations that I encountered after creating this webmap. The first, and most important, limitation is the projection of the data on the map. The original shapefile used EPSG code 3347, the Canada Lambert Conic projection with the NAD 1983 datum. The plugin converted the data into the most common web format, WGS 1984, which is defined globally by longitude and latitude. Although WGS 1984 avoids the hassle of projected coordinate systems by using one unified geographic coordinate system for the entire globe, it distorts shapes as we move farther away from the equator.

The second limitation is that my transparent colours were not coded into the index.html file; the opacities are defined as 1. In order to control the opacity levels, the index.html file must be opened in a text editor, the opacities changed to the proper values (ranging between 0 and 1), and the edits saved to the same index.html file.

The next limitation is the size of files that can be uploaded to GitHub [3]. There is a limit of 100 MB per file in GitHub repositories, and because the shapefile for all Canadian census subdivisions is over 100 MB when converted to JSON, it could not be uploaded to the repository [1] with all the other files. However, it is possible to add the GeoJSON-formatted file (of census subdivisions) to the data directory of the repository on the localhost machine, and manually reference its location with a pair of opening and closing script tags inside the body tag of the index.html file. In my case, the script was:

<script src="data/CensusSubdivisions_4.js"></script>

The name of the file should be introduced as a variable on the very first line of the GeoJSON file:

var json_CensusSubdivisions_4 = {

And don’t forget that the last line should be a closing curly brace:

}

Now index.html knows where to find the data for all of the Canadian census subdivisions.
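If you would rather not edit the large file by hand, the same wrapping can be scripted; a small Python sketch using the file and variable names from the example above (the .geojson file name is an assumption):

import json

# Read the exported GeoJSON and rewrite it as the JavaScript variable
# that index.html expects
with open("data/CensusSubdivisions_4.geojson", encoding="utf-8") as src:
    geojson = json.load(src)

with open("data/CensusSubdivisions_4.js", "w", encoding="utf-8") as dst:
    dst.write("var json_CensusSubdivisions_4 = ")
    json.dump(geojson, dst)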

What’s Next?

To conclude with the main goal of this project, which was stated in the introduction, we now have a framework to visualize any data we want. The particular data we want to visualize may change the details of the methodology, because the scripts can be adapted accordingly. What is more important is the way we want the data to be visualized on the webmap. This tutorial presented the basics of the qgis2web plugin. Once the index.html file is generated, other JavaScript libraries can be added to it, and depending on your level of comfort with JavaScript you can expand and go beyond the simple widgets and utilities on this webmap.

[1] https://github.com/Mahdy1989/GeoVisualization-Leaflet-Webmap/tree/master

[2] There is a simple way to limit the extent of the census subdivisions from all of Canada to the Ontario subset only: filter the shapefile by PRUID = '35', which is the code for Ontario.

[3] https://help.github.com/en/github/managing-large-files/what-is-my-disk-quota

Visualizing Alaska’s Forest Damage in Twenty Years

Author: Anitha Muraleedharan
Geovis Project Assignment@RyersonGeo,
SA 8905, Fall 2018 (Rinner)

Forest Damage in Alaska

Alaska is a dynamic region with a long history of changing climate. It has lost a lot of its forests to insect infestation, fire, flood, landslides, and windthrow. Aerial surveys are conducted to monitor forest health for the State of Alaska and to identify insect and some disease pest trends. This time-series map animation visualizes the forest damage in the Kenai Peninsula, Tanana Region and Fort Yukon Region of Alaska during the years 1989 to 2010. This blog will cover the entire process involved in creating this visualization.

Data

The spatial data from the forest damage surveys conducted between 1989 and 2010 by the Alaska Department of Natural Resources are readily available for download from the AK State Geo-Spatial Data Clearinghouse (http://www.asgdc.state.ak.us/?#2952). The shapefiles are available individually for each year from 1989 to 2010, except for the years 2000 to 2007. These data were used to prepare this time-series map animation.

Preparing Data for Animation in QGIS

QGIS 3.2.3 (64-bit) was used to prepare the data for the animated map visualization of Alaska’s forest damage. QGIS is a free and open-source cross-platform desktop geographic information system (GIS) application that supports viewing, editing, visualization and analysis of geospatial data. Since the data were available only as individual files, the first step was to merge them into one shapefile. For this task, I used the Merge Vector Layers tool in QGIS, which merged all the individual shapefiles into a single shapefile.

Steps to merge multiple vector layers into one (a scripted equivalent is sketched after Figure 1):

  • Step 1: Add all the vector layers to be merged into QGIS
  • Step 2: Go to Vector → Data Management Tools → Merge Vector Layers in the menu
  • Step 3: Click the Input layers button and select all the layers that need to be merged
  • Step 4: Click the Merged layer button to give a name to the merged output layer
  • Step 5: Click the Run in Background button to create the merged layer and add it to QGIS

Fig. 1 Merge Vector Layer Tool in QGIS
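For those who prefer scripting, the same merge can be run from the QGIS Python console; a minimal sketch in which the folder path and output name are placeholders:

import glob
from qgis import processing

# All yearly forest-damage shapefiles, assumed to sit in one folder
layers = glob.glob("C:/data/alaska_damage/*.shp")

processing.run(
    "native:mergevectorlayers",
    {
        "LAYERS": layers,
        "CRS": None,  # keep the CRS of the input layers
        "OUTPUT": "C:/data/alaska_damage/forest_damage_merged.shp",
    },
)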

The next task was to format the timestamp column to fit the QGIS Time Manager plugin that will be used to create the animated map visualization. The timestamp column in these data was “SURVEY_YR”, which was in four-digit format, while the Time Manager plugin requires timestamps in YYYY-MM-DD format. To fix this, a new string field named “Damage_Yr” was created and populated using the Field Calculator tool in QGIS.

Fig. 2 Field Calculator Tool in QGIS

In the Field Calculator tool, the expression "tostring(SURVEY_YR) + '-01-01'" was used to concatenate the value of the “SURVEY_YR” field with “-01-01”, producing a timestamp in YYYY-MM-DD format, and the result was copied into the new field “Damage_Yr”.
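The same step can also be scripted from the QGIS Python console; a rough sketch, assuming the merged layer is loaded under the placeholder name used earlier:

from qgis.core import QgsProject, QgsField
from qgis.PyQt.QtCore import QVariant

layer = QgsProject.instance().mapLayersByName("forest_damage_merged")[0]

# Add the new string field and fill it with "<SURVEY_YR>-01-01"
layer.startEditing()
layer.addAttribute(QgsField("Damage_Yr", QVariant.String))
layer.updateFields()

idx = layer.fields().indexOf("Damage_Yr")
for feature in layer.getFeatures():
    layer.changeAttributeValue(feature.id(), idx, "%s-01-01" % feature["SURVEY_YR"])
layer.commitChanges()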

Fig. 3 Attribute table showing the Damage_Yr field in YYYY-MM-DD format after the update.

Visualizing the Time Series

The Time Manager plugin was downloaded and installed in QGIS. The forest damage data was then added as a layer in QGIS, and the Google Terrain map was added as the basemap for this time-series animation. The following steps were performed to add the Google Terrain map to QGIS (a Python-console equivalent is sketched after these steps).

  • Step 1: Add a new connection to XYZ Tiles in QGIS and give it a name, say “Google Terrain”
  • Step 2: Use https://mt1.google.com/vt/lyrs=t&x={x}&y={y}&z={z} as the URL
  • Step 3: Click OK and then double-click the created connection to add “Google Terrain” as a layer
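Alternatively, the basemap can be added from the QGIS Python console; a sketch (XYZ connections go through the “wms” provider key, and the tile URL has to be percent-encoded):

from urllib.parse import quote
from qgis.core import QgsRasterLayer, QgsProject

# The tile URL from Step 2, percent-encoded as the XYZ provider expects
tile_url = "https://mt1.google.com/vt/lyrs=t&x={x}&y={y}&z={z}"
uri = "type=xyz&url=" + quote(tile_url, safe=":/")

layer = QgsRasterLayer(uri, "Google Terrain", "wms")
if layer.isValid():
    QgsProject.instance().addMapLayer(layer)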

After the data was added, it was time to apply symbology to the polygon data showing the forest damage in QGIS. The layer was styled using the attribute “Damage_Yr” and categorized with sequential symbology. Once the data was styled, the Time Manager plugin needed to be configured to visualize the time series animation.

In the Time Manager Settings window, the forest damage layer to be animated was added using the “Add layer” button. The Damage_Yr column was chosen for the start and end time, and the “Accumulate features” option was selected so that features accumulate on the map as the year changes during the animation. A duration of 500 milliseconds was set in the animation options to show each year for that long before showing the next year. To display each year as a label on the map during the animation, the time format was set to “%Y” and the font, font size, and text color were also set.

Fig. 4 Time manager settings window

Fig. 5 Time display options.

The time frame in the Time Manager dock was set to years, since the forest damage will be animated and displayed year by year. The time frame size for the animation was set to 1 since there is data for each year from 1989 to 2010. The animation can be played by clicking the play button, and QGIS will show the forest damage of Alaska for each year from 1989 to 2010 in the map window for 500 milliseconds each.

Fig. 6 Time Manager dock showing settings for the animation in QGIS

Converting the Time Series into Video

The Time Manager allows exporting the animation as a video; however, there is no option to add a legend to the rendered maps in QGIS. Therefore, the maps were exported as .PNG image files. The map frames were exported first at the full extent of the map, and then two more times with the map zoomed to the Tanana and Fort Yukon areas, respectively, to show different areas in one animation. A legend, title, and data source labels were then added to each exported map frame using Photoshop.

Finally, the VirtualDub software was used to convert the .PNG files into a video for each series of maps. VirtualDub is a free and open-source video capture and video processing utility for Microsoft Windows written by Avery Lee. The generated .PNG files were renamed into an ascending sequence in the format “frameXXX.png”, where XXX is the frame number (frame000, frame001 and so on). This is required for VirtualDub to detect the files as an image sequence and combine them into a video. A small renaming sketch and the steps followed in VirtualDub are given below.
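The renaming can be done by hand or scripted; a small Python sketch, assuming the exported frames sort correctly by their original file names (the folder path and pattern are placeholders):

import glob
import os

# Rename the exported maps to frame000.png, frame001.png, ... in sorted order
frames = sorted(glob.glob("C:/export/map_*.png"))
for i, path in enumerate(frames):
    os.rename(path, os.path.join(os.path.dirname(path), "frame%03d.png" % i))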

  • Step 1: Open the VirtualDub software
  • Step 2: Go to File → Open video file in the menu and navigate to the images folder
  • Step 3: Click the first image in the map image series and VirtualDub will automatically add all the other images in the sequence
  • Step 4: Go to Video → Frame rate and set the frame rate to 2 fps so that each frame is shown for 500 milliseconds in the video
  • Step 5: Preview the video and save it using File → Save as AVI in the menu

Fig. 7 Combining the png files in VirtualDub software

Results


Watch the visualization on YouTube

Movies and Television shows filmed in Toronto but based elsewhere…

by Alexander Pardy
Geovis Class Project @RyersonGeo, SA8905, Fall 2017

Data and Data Cleaning:

To obtain my data I used https://moviemaps.org/ and selected Toronto. The website displays a map showing locations in the Greater Toronto Area where movies and television shows were filmed. The point locations are overlaid on top of Google Maps imagery.

If you use the inspect element tool in Internet Explorer, you can find a single line of JavaScript code within the map section of the webpage that contains the latitude and longitude of every single point.

The data is in a format similar to Python code. The entire line of JavaScript code was pasted into a Python script, which writes the data into a CSV file that can easily be opened in Microsoft Excel. Once the file was opened in Excel, Google was used to search for the setting of each movie or television show, using the results of various websites such as fan sites, IMDB, and Wikipedia. Some titles take place in fictional towns and cities; in these cases, locations were approximated using best judgement to find a place similar to the setting. All the information was then saved into a CSV file. Python was then used to delete any duplicates in the CSV file and to count each unique location value, giving the total number of movies and television shows for each geographical location. The file was then saved from Python back into a CSV file. The latitude and longitude coordinates for each location were then obtained from Google and entered into the CSV file. An example is shown below.
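The de-duplication and counting step can be sketched with Python’s standard library; the file and column names here are assumptions, not the ones from the original script:

import csv
from collections import Counter

# Count how many titles share each setting location
with open("settings.csv", newline="", encoding="utf-8") as f:
    counts = Counter(row["location"] for row in csv.DictReader(f))

# Write one row per unique location with its count
with open("location_counts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["location", "count"])
    writer.writerows(sorted(counts.items()))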

Geospatial Work:

The CSV file was imported into QGIS as a delimited text layer with the coordinate system WGS 84. The points were then symbolized using a graduated class method based on the count of movies or television shows filmed in Toronto. A world country administrative shapefile was obtained from the Database of Global Administrative Areas (GADM). There was a slight issue with this shapefile: it had too much data, with every little island on the planet represented. Since we are working at a global scale, the shapefile contained too much detail for the scope of this project.

With WGS 84, the coordinate system positions the middle of the map at the prime meridian and the equator. Since a majority of the films and television shows are based in North America, a custom world projection was created. This was accomplished in QGIS by going into Settings, Custom CRS, and selecting the World Robinson projection. The parameters of this projection were then changed so that the central meridian, instead of being the prime meridian at 0 degrees, is at -75 degrees, which better centres North America in the middle of the map. An issue that came up after completing this is that a shapefile cannot be wrapped around a projection in QGIS.
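For reference, such a custom CRS can be defined with a proj string along these lines (a sketch with the central meridian shifted to -75 degrees, not necessarily the exact definition used here):

+proj=robin +lon_0=-75 +x_0=0 +y_0=0 +datum=WGS84 +units=m +no_defs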


After researching how to fix this, it was found that the fix is to delete the area where the wrap-around occurs, by removing a thin sliver of the countries layer at the seam. This is done by creating a text file that says:
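The original file contents are not shown in this post; an illustrative example, assuming a semicolon-delimited id and WKT layout with the seam at longitude 105 degrees (the antimeridian of the -75 degree central meridian), might look like this:

id;wkt
1;POLYGON((104.9 -90, 104.9 90, 105.1 90, 105.1 -90, 104.9 -90))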

This text file defines the corners of a polygon we wish to create in QGIS. A layer can now be created from the delimited text file, using custom delimiters set to semicolon and the well-known text (WKT) geometry option. It creates a polygon on our map, a very small polygon that looks like a line. Then go into Vector, Geoprocessing Tools, Difference, and select the countries layer as the input layer and the newly created polygon as the difference layer. Once done, this gives a new countries layer with a very thin part of the map deleted (this is where the wrap-around occurred). Now the map wraps around fine and is not stretched out. There was still a slight problem with Antarctica, so it was selected and removed from the map.

Styling:

The shapefile background was made grey with white hairlines to separate the countries. The count and size classification of the locations was kept the same. The locations were made 60% transparent. Since there were not a lot of different cities, the symbols were classified into 62 classes, so that each time the count increased, the size of the point symbol would increase. The map is now complete. A second map was added in the print composer to show a zoomed-in section of North America. Labels and lines were then added to the map using Illustrator.

Story Map:

I felt that after the map was made, a visualization should also be created to help convey it by telling a story of the different settings of films and television shows that were filmed in Toronto. I created an Esri Story Map that can be found here.

The Story Map shows 45 points on a world map; these are all based on the settings of television shows and movies that were filmed in the City of Toronto. The points on the map are colour-coded: red points had 4-63 movies and television shows set around them, blue points had 2-3, and green points had 1. When you click on a point, it brings you to a closer view of the city the point is located in. It also brings up a description that tells you the name of the place you are viewing and the number of movies and television shows whose settings take place in that location. You also have the option to play a selected movie or television show trailer from YouTube in the story map, to give you an idea of what was filmed in Toronto but is conveyed by the media industry to be somewhere else.

Creating an animated map with CartoDB Torque and QGIS

Author: Melissa Dennison, November 18, 2015
Course Assignment, SA8905, Fall 2015 (Rinner)

I chose the history of Native Title claims in Australia as the topic for this map.   “Native Title” is a general term for the legal framework and process for indigenous land claims in Australia. Following the landmark 1992 Mabo v. Queensland (No. 2) case, which was the first legal recognition of the interest of indigenous Australians in their traditional lands and waters, the Australian Parliament passed the Native Title Act in 1993. Claims under the Act began in 1994. This video provides an excellent quick introduction to the topic:

https://www.youtube.com/watch?v=1RBJhQ4hE_8

My goal was to create an animated map in CartoDB showing Native Title activity across Australia, both active claims and determined outcomes, from Mabo to the present.

My data source was the National Native Title Tribunal (NNTT) open data portal. I faced the following constraints in achieving my goal:

  • CartoDB’s animation (Torque) function is available on points only, on one layer only
  • The NNTT spatial data is for polygons, and comes in 4 datasets
  • My CartoDB account only provided 100MB storage
  • Native Title is an extremely complex topic while an animated map is simplistic

Step 1: Downloading and reviewing the data

On September 22, 2015, I downloaded the current Native Title claims and outcomes data, and the geospatial data model, from:

http://www.nntt.gov.au/assistance/Geospatial/Pages/Spatial-aata.aspx

The data came in four shapefiles, two for current claims and two for determined outcomes.

Current claims files:

  • Schedule of Native Title Determination Applications
  • Registered Native Title Determination Applications

Determined outcomes files:

  • Determinations of Native Title
  • Determinations of Native Title ‐ Native Title Outcomes

The Schedule of Native Title Determination Applications lists claims that have been filed but have not yet passed the registration test (the test involves meeting certain legal requirements in order for the claim to proceed). The Registered Native Title Determination Applications file lists those claims that have passed the test. Working with the files I found some overlap between them, in that they referred to many of the same tribunal cases and land areas. (I expect this has to do with a time lag between when various files are updated on the Tribunal’s main database, and then extracted and uploaded to its open data portal. By downloading them all on the same day I inevitably got some outdated information.) I chose to work only with the Schedule for this project, as it contained the most Tribunal cases.

The determined outcomes files also overlapped, but contained different information. The Determinations of Native Title file indicates areas of land where Native Title was found to exist, to have been extinguished and/or a combination of each. The Determinations of Native Title Native Title Outcomes file shows areas where Native Title exists, exclusively or non-exclusively, or where Native Title does not exist or has been extinguished. Because there is no unique identifier in the dataset for each land area, only for claims and determinations (which can refer to multiple parcels of land), it was not possible to combine the files in such a way as to capture all the determination information for individual parcels of land. I chose to use both files in order to retain this information and just live with the overlap.

Note that Native Title claims that are withdrawn, discontinued, rejected, struck out or pre‐combined are transferred to a historical dataset that is not publicly available, and are therefore not shown on this map.

Step 2: Merging files in QGIS

The three shapefiles were too large, over 130MB, for my CartoDB account, so I loaded them into QGIS and used the MMQGIS plugin to merge the files. I named the new shapefile NTmerge1.

What to click in QGIS to merge files: MMQGIS>Combine>Merge Layers

Step 3: Preparing the shapefile for uploading to CartoDB

NTmerge1.shp was 116.8MB, still too big for my 100MB limit on CartoDB, so I had to reduce its size. Fortunately converting the polygons to points, which I had to do anyway because Torque only works on point data, was the solution. I named the new file NTpoints1. Its size was 428KB, well below my CartoDB limit.

What to click in QGIS to convert polygons to points: Vector>Geometry Tools>Polygon Centroids
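In current QGIS releases, the same polygon-to-centroid conversion can also be run from the Python console; a sketch using the file names from this post:

from qgis import processing

processing.run(
    "native:centroids",
    {
        "INPUT": "NTmerge1.shp",
        "ALL_PARTS": False,   # one centroid per feature
        "OUTPUT": "NTpoints1.shp",
    },
)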

Step 4: Uploading the data to CartoDB and preparing it for Torque animation

Uploading the data to CartoDB went smoothly, literally just clicking “New Dataset” and then selecting the file. Preparing the data for animation was a bit more involved. I should note that I actually used the Torque Cat (Category) function, which is a refinement of Torque in that it enables depiction of categories within a variable. However, it is only possible to run Torque Cat on one time column and one category column. As a merged file, NTpoints1.shp contained multiple date columns (such as “date lodged” for active claims and “determination date” for determined outcomes) and null values for active claims in the “determination outcome” column. Using simple SQL statements (shown below) in CartoDB’s SQL workspace, I created a single date column and a single outcome category column.

To create and fill the new date column:

ALTER TABLE ntpoints1 ADD COLUMN dateall date;

UPDATE ntpoints1 SET dateall = datelodged WHERE datelodged IS NOT NULL;

UPDATE ntpoints1 SET dateall = detdate WHERE detdate IS NOT NULL;

To create and fill the new category column:

ALTER TABLE ntpoints1 ADD COLUMN current CHAR(255);

UPDATE ntpoints1 SET current = 'active claim' WHERE datelodged IS NOT NULL;

UPDATE ntpoints1 SET current = detoutcome WHERE detoutcome IS NOT NULL;

Step 5: Activating Torque Cat and finishing the map

In the CartoDB “Map layer wizard”, I selected Torque Cat, my newly-created columns as the time and category columns, and colours for each category. I set the duration of the animation to 60 seconds, and left the defaults for everything else.

In the CartoCSS editor (seen in the image below), I changed the default specification “linear” to “cumulative” so that the dots would remain on the map for a cumulative effect.
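For reference, the relevant line in Torque’s CartoCSS is the data-aggregation property; after the change it looks roughly like this:

-torque-data-aggregation: cumulative;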

Screenshot of the CartoCSS editor

Finally, I added a title and some text for further information.

The end result can be seen at:

https://melissadennison.cartodb.com/viz/0673f7fe-866d-11e5-92d7-0e3ff518bd15/public_map

 

3D Hexbin Map Displaying Places of Worship in Toronto

Produced by: Anne Christian
Geovis Course Assignment, SA8905, Fall 2015 (Rinner)

Toronto is often seen as a city of many cultures, and with different cultures often come different beliefs. I wanted to explore the places of worship in Toronto and determine which areas have the highest concentrations and which have the lowest. As I explored different ways to display this information effectively and uniquely, I discovered hexbin maps and 3D maps. While doing some exploratory research, I found that hexbin maps have been created before and 3D maps have been printed before, but I was unable to find anyone who had printed a 3D hexbin prism map, so I decided to take on this endeavor.

Hexbin maps are a great alternative technique for working with large data sets, especially point data. Hexagonal binning uses a hexagon-shaped grid that divides the space on a map into equal units and displays the information (in this case the places of worship) that falls within each unit (in this case each hexagon grid cell). The tools used in this project include QGIS, ArcGIS, and ArcScene, although it could probably be completed entirely within QGIS and other open-source software.

Below are the specific steps I followed to create the 3D hexbin map:

  1. Obtained the places of worship point data (2006) from the City of Toronto’s Open Data Catalogue.
  2. Opened QGIS, and added the MMQGIS plugin.
  3. Inputted the places of worship point data into QGIS.
  4. Used the “Create Grid Lines Layer” tool (Figure 1) and selected the hexagon shape, which created a new shapefile layer of a hexagon grid.

    Figure 1: Create Grid Lines Layer Tool
  5. Used the “Points in Polygon” tool (Figure 2), which counts the points (in this case the places of worship) that fall within each hexagon grid cell. I chose the hexagon grid as the input polygon layer and the places of worship as the input point layer. The number of places of worship within each hexagon was counted and added as a field in the new shapefile. (A scripted sketch of steps 4 and 5 is given after this list.)

    Figure 2: Points in Polygon Tool
  6. Inputted the created shapefile with the count field into ArcGIS.
  7. Obtained the census tract shapefile from the Statistics Canada website (https://www12.statcan.gc.ca/census-recensement/2011/geo/bound-limit/bound-limit-2011-eng.cfm) and clipped out the city of Toronto.
  8. Used the clip tool to include only the hexagons that are within the Toronto boundary.
  9. Classified the data into 5 classes using the quantile classification method, and attributed one value for each class so that there are only 5 heights in the final model. For example, the first class had values 0-3 in it, and the value I attributed to this class was 1.5. I did this for all of the classes.
  10. The hexagons for the legend were created using the editor toolbar, whereby each of the 5 hexagons were digitized and given a height value that matched with the map prism height.
  11. Inputted the shapefile with the new classified field values into ArcScene, and extruded the classified values and divided the value by 280 because this height works well and can be printed in a timely manner.
  12. Both the legend and the hexagonal map shapefile were converted into WRL format in ArcScene. The WRL file was opened in Windows 10 3D Builder and converted into STL format.
  13. This file was then brought to the Digital Media Experience (DME) lab at Ryerson, and the Printrbot Simple was used to print the model using the Cura program. The model was rescaled where appropriate. My map took approximately 3 hours to print, but the time can vary depending on the spatial detail of what is being printed. The legend took approximately 45 minutes. Below is a short video of how the Printrbot created my legend. A similar process was used to create the map.
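For reference, steps 4 and 5 can also be scripted in the QGIS Python console of current releases; a rough sketch in which the extent, cell size, and file paths are placeholders:

from qgis import processing

# Step 4 equivalent: build a hexagon grid covering roughly the Toronto area
grid = processing.run(
    "native:creategrid",
    {
        "TYPE": 4,                                            # 4 = hexagon
        "EXTENT": "-79.65,-79.11,43.58,43.86 [EPSG:4326]",    # placeholder extent
        "HSPACING": 0.01,
        "VSPACING": 0.01,
        "CRS": "EPSG:4326",
        "OUTPUT": "memory:hex_grid",
    },
)["OUTPUT"]

# Step 5 equivalent: count the places of worship falling in each hexagon
processing.run(
    "native:countpointsinpolygon",
    {
        "POLYGONS": grid,
        "POINTS": "places_of_worship.shp",
        "FIELD": "NUMPOINTS",
        "OUTPUT": "hex_counts.shp",
    },
)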

The final map and legend (displayed in the image below) provide a helpful and creative way to display data. The taller prisms indicate areas with the most places of worship, and the shorter prisms indicate the areas in Toronto with the least places of worship. This hexagonal prism map allows for effective numerical comparisons between different parts of Toronto.

Photo of the final 3D-printed hexbin map and legend

Creating a Vintage Bathymetric Map with QGIS

Creator: Jenny Mason, Ryerson University Graduate Spatial Analysis Student, for Geovis course project, SA8905 (Dr. Rinner)

My geovisualization, in a sense, goes back in time. While many recent data and geo-visualizations show how technology has changed the way we view spatial data, I wanted to create a geovisualization that replicates the roots of mapping technology. I chose to create a vintage bathymetric survey map using QGIS.

Based on the number of tutorials and blog posts available online, replicating the characteristics and design of a vintage map with modern GIS mapping technology is popular among GIS enthusiasts. Using a combination of these geographers’ ideas, as well as my own, I chose to replicate the timeless style of these maps.

The final map product: 

Vintage Bathymetric Map Using QGIS

Technology: QGIS offers rendering options that blend your layers (map data, elements, text and the desired stain/finish) in the Print Composer. For this look, the Multiply blending mode was used.

Data source: The Great Lakes bathymetry contour data were provided by Esri Canada for educational purposes only. They come in Shapefile format, and it looks as if they were vectorized from the National Oceanic and Atmospheric Administration’s (NOAA) bathymetry grid, which is available at https://www.ngdc.noaa.gov/mgg/greatlakes/superior.html.

Approach: To achieve a vintage map look, open a new print composer in QGIS and import a photo along with your map layers. For my design, I used bathymetric data to replicate a historic hydrographic map/survey using the contours of Lake Superior.

Anita Graser’s Blog Post on  Vintage Map Design Using QGIS suggests using Lost and Taken’s Gallery to find aged paper designs at high resolutions. The options for paper on this site are perfect for recreating a vintage style map.

Once your map and the image of the desired paper/texture are added as layers in your print composer, go to the rendering options for the map in the panel on the right of your screen and change the blending mode to Multiply. You will now see in the window that the map elements have been blended with the paper texture.

With this function, the creator is able to recreate a vintage map of any desired spatial data.

The appropriate choice of text, labels, and other map elements is essential to replicate a vintage map. I found it helpful to reference existing historic maps online and try to replicate their design within QGIS. I imported a North Arrow, but all other elements are available in QGIS.