Desperate Journeys

By Ibrahim T. Ghanem

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

Background:

Over the past 20 years, asylum seekers have invented many travel routes between Africa, Europe and the Middle East in order to be able to reach a country of asylum. Many governmental and non-governmental organizations have provided information about these irregular travel routes used by asylum seekers. In this context, this geovisualization project aims at compiling and presenting two dimensions of the topic: (1) a comprehensive animated spider map presenting some of the travel routes between the three geographic areas mentioned above; and (2) a dashboard that connects those routes to other statistics about refugees in a user-friendly interface. In that sense, the software that best fits the project is Tableau.

Data and Technology

Spider maps in Tableau are well suited to connecting hubs to surrounding points, as they allow paths between many origins and destinations and can combine multiple layers. Below is a description of the major steps in the creation of the animated map and dashboard.

Also, dashboards are very useful for combining different themes of data (i.e. pie charts, graphs, and maps), and accordingly they are used extensively in the non-profit world to present data about a certain cause. This geovisualization project applied a geocoding approach to produce the animated map and the dashboard.

The Data used to create the project included the following:

-Origins and Destinations of Refugees

-Number of Refugees hosted by each country

-Count of Refugees arriving by Sea (2010-2015)

-Demographics of Refugees arriving by Sea – 2015

Below is a brief description of the steps followed to create the project.

Step 1: Data Sources:

The data was collected from the below sources.

United Nations High Commissioner for Refugees, Human Rights Watch, Vox, InfoMigrants, The Geographical Association of UK, RefWorld, Border Free Association for Human Rights, and Frontex Europa.

However, most of the data were not geocoded. Accordingly, Google Sheets was used to geocode 21 routes, and each route was then given a distinguishing ID and a short description.
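For readers who prefer to script this step, a minimal Python sketch of the same geocoding idea is shown below. This is an illustration only, not the workflow used in the project: the place names and the route ID are hypothetical, and it assumes the geopy package and the free Nominatim service.

# Minimal geocoding sketch using geopy's Nominatim geocoder (pip install geopy).
from geopy.geocoders import Nominatim
import time

stations = ["Agadez, Niger", "Sabha, Libya", "Tripoli, Libya"]  # hypothetical stations on one route
geolocator = Nominatim(user_agent="desperate-journeys-demo")

for order, place in enumerate(stations, start=1):
    location = geolocator.geocode(place)
    # Route ID 1 is a placeholder; each of the 21 routes would get its own ID.
    print(1, order, place, location.latitude, location.longitude)
    time.sleep(1)  # be polite to the free geocoding service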

Step 2: Utilizing the Main Dataset:

Data were imported from an Excel sheet. In order to draw a route, Tableau requires origins and destinations with latitude and longitude. The data contain the following fields (a small illustrative sample follows the field list):

A. Route ID: a unique path ID for each of the 21 routes;

B. Order of Points: the order of stations travelled by refugees from their country of origin to the country of asylum;

C. Year: the year in which the route was first used;

D. Latitude/Longitude: the coordinates of each station;

E. Country: the country hosting refugees;

F. Population: the number of refugees hosted in each country.
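To make the structure concrete, a few illustrative rows of such a table might look like this (all values are hypothetical and shown only to illustrate the long format Tableau expects):

Route ID, Order of Points, Year, Latitude, Longitude, Country, Population
1, 1, 2014, 13.51, 2.11, Niger, 180000
1, 2, 2014, 26.99, 14.43, Libya, 9000
1, 3, 2014, 35.51, 12.59, Italy, 354000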

Step 3: Building the Map View:

The map view was built by placing Longitude on Columns, Latitude on Rows, and Route ID on Detail, and selecting Line as the mark type. To enhance the layout, Order of Points was added to the Marks' Path and changed to a dimension instead of SUM. Finally, to show the stations of travel, another copy of Longitude was added to Columns and set to Dual Axis. To enable filtering by route and a timeline by year, Route was added to Filters while Year was added to Pages.

Step 4: Identifying Routes:

To differentiate routes from each other with distinct colours, the Route ID field was added to Colour and the default palette was changed to Tableau 20. The map layer format was changed to Dark to create contrast between the colours of the routes and the background.

Step 5: Editing the Map:

After the map was finalized, a video of the animation was captured with QuickStart and then cropped and merged in iMovie.

Step 6: Creating the Choropleth map and Symbology:

In another sheet, a set of Excel data (obtained from UNHCR) was uploaded to create a choropleth map displaying the number of refugees hosted by each country in 2018. Count of refugees was added to Columns while Country was added to Rows. An orange-gold colour ramp with 4 classes was applied to indicate whether or not a country hosts a significant number of refugees. Hovering over each country displays the name of the country and the number of refugees it hosts.

Step 7: Statistical Graphs:

A pie chart and a graph were added to display some other statistics related to the count of refugees arriving by sea from Africa to Europe and the demographics of those refugees. Demographics were added to Label to display them on the charts.

Step 8: Creation of the Dashboard:

All four sheets were added to the dashboard by dragging them into the layout view. To accommodate that amount of content, the size was set to Legal Landscape, and the dashboard was titled Desperate Journeys.

Limitations

A. Tableau does not allow the map creator to change the projection of the maps; thus, presentation of the maps is limited. Below is a picture showing the final format of the dashboard:

B. Tableau has an online server that can host dashboards; however, it cannot publish animated maps. Thus, the animated map is uploaded here as a video. The link below leads to the dashboard:

https://prod-useast-a.online.tableau.com/t/desperatejourneysgeovis/views/DesperateJourneys_IbrahimGhanem_Geoviz/DesperateJourneys/ibrahim.ghanem@ryerson.ca/23c4337a-dd99-4a1b-af2e-c9f683eab62a?:display_count=n&:showVizHome=n&:origin=viz_share_link

C. Due to the unavailability of geocoded data, geocoding the refugees' migration routes was time-consuming, as the exact routes taken had to be determined from the reports and maps released by the sources mentioned at the very beginning of the post.

The Toronto Financial Institution Market: Bridging the gap between Cartography and Analytics using Tableau

Nav Salooja

“Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019”


Introduction & Background

Banking in the 21st century has evolved significantly, especially in the hyper-competitive Canadian market. Nationally, the big banks have a limited population and wealth share to capture given Canada's small population and have been active in innovating their retail footprint. In this case study, TD Bank is the point of interest given its large branch network in the Toronto CMA. Within the City of Toronto, the bank has 144 branches, and the city is used as the study area for the dashboard. The dashboard analyzes market potential, branch network distribution, banking product recommendations and client insights to help derive analytics through a centralized and interactive data visualization tool.

Technology

The technology selected for the geovisualization component is Tableau, given its friendly user interface, mapping capabilities, data manipulation and overall excellent visualization experience. However, Alteryx was used extensively to build out the datasets that run in Tableau. As the data was extracted from various sources, the spatial processing and the combining of datasets were all done in Alteryx: the data extracted for expenditure, income and dwelling composition was merged and indexed there, the TD branches were web-scraped live from the branch locator, and the trading areas (1.5 km buffers) were also created there. The software was also used for the statistical functions; the indexed data points in the workbook were all created in Alteryx. The geovisualization component is created entirely within the Tableau workbook, where multiple sheets are leveraged to create an interactive dashboard for the end user.
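As an illustration of the trade-area step only (not the Alteryx workflow itself), 1.5 km buffers around branch points could be reproduced in Python roughly as follows, assuming a branches.csv file with latitude/longitude columns (the file name and column names are hypothetical):

# Sketch: build 1.5 km trade-area buffers around branch points with geopandas.
import geopandas as gpd
import pandas as pd

branches = pd.read_csv("branches.csv")  # hypothetical file with 'lat' and 'lon' columns
gdf = gpd.GeoDataFrame(
    branches,
    geometry=gpd.points_from_xy(branches["lon"], branches["lat"]),
    crs="EPSG:4326",
)

# Project to a metric CRS for Toronto (UTM zone 17N) so the buffer distance is in metres.
buffers = gdf.to_crs("EPSG:32617")
buffers["geometry"] = buffers.geometry.buffer(1500)

# Save the trade-area polygons for overlay analysis or use in Tableau.
buffers.to_file("trade_areas.shp")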

Figure 1 represents the Alteryx workflow used to build the Market, Branch and Trade Area datasets.
Figure 2 provides the build-out of the final datasets, fully manipulated and prepared for Tableau.

Data Overview

There are several datasets used to build the multiple sheets in the Tableau workbook, ranging from Environics expenditure data and Census data to web-scraped TD branch locations. In addition to these datasets, a client file and a trade-area geography file were also created. The client dataset was generated by leveraging a random name and Toronto address generator, and those clients were then profiled to their corresponding market. The data collected comes from a wide variety of sources and geographic extents to provide a fully functional view of the banking industry. This begins by extracting and analyzing the TD branches and their respective trade areas. The trading areas are created as a limited buffer representing the immediate market opportunity for each branch. Average income and dwelling composition variables are then used at the Dissemination Area (DA) geography from the 2016 Census. Although income is represented as an actual dollar value, all market demographics are analyzed and indexed against Toronto CMA averages. As such, these datasets combined with market, client and TD-level data provide the full conceptual framework for the dashboard.
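As a simple illustration of the indexing idea (not the actual Alteryx calculation), each DA value can be expressed relative to the CMA average, where 100 means "at the CMA average". All numbers below are made up for the sketch:

# Sketch: index a demographic variable against the Toronto CMA average (hypothetical numbers).
import pandas as pd

das = pd.DataFrame({
    "da_id": ["35200001", "35200002", "35200003"],   # hypothetical DA identifiers
    "avg_income": [92000, 61000, 118000],            # hypothetical DA values
})
cma_avg_income = 85000                               # hypothetical CMA average

# Index of 100 = at the CMA average; 120 = 20% above it, and so on.
das["income_index"] = das["avg_income"] / cma_avg_income * 100
print(das)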

Tables & Visualization Overview

Given the structure of the datasets, six tables in total are utilized to combine and work with the data and provide the appropriate visualizations. The first two tables are the branch-level datasets, which begin with the geographic location of the branches in the City of Toronto. This is a point file taken from the TD store locator with fundamental information about branch names and location attributes. A second table analyzes the performance of these branches with respect to their client acquisition over a pre-determined timeframe.

Figure three is a visualization of the first table used and the distribution of the Branch Network within the market

The third table consists of client-level information selected from "frequent" clients (clients transacting at branches 20+ times in a year). Their information builds on the respective geography and identifies who the clients are and where they reside, along with critical information the bank can use to run some level of statistical analytics. The client table shows the exact location of those frequent clients, their names, unique identifiers, their preferred branch, current location, average incomes, property/dwelling value and the mortgage payments the bank collects. This table is then combined to understand the client demographic and wealth opportunity from these frequent clients at the respective branches.

Figure four is the visualization of the client level data and its respective dashboard component

Tables four and five are extremely comprehensive, as they visualize the geography of the market (the City of Toronto at the DA level). This provides a full trade-area and market-level breakdown of the demographics, as DAs are attributed to their closest branch, and allows users to see where the bank has market coverage and where the gaps reside. Beyond the allocation of branches, the geography carries a robust set of demographics such as growth (population, income), dwelling composition and structure, average expenditure, and the product recommendations the bank can target, driven by the average expenditure datasets. Although the file has a significant amount of data and can seem overwhelming, only selected data is visualized. It also has the full breakdown of how many frequent clients reside in the respective markets and what kinds of products are recommended on the basis of the market demographics analyzed through dwelling composition, growth metrics and expenditure.

Figure five is the visualization of the market level data and its respective dashboard component

The final table provides a visualization and breakdown of the five primary product lines of business the bank offers, which are combined with the market-level data and cross-validated against the average expenditure dataset. This is done to identify which products can be recommended throughout the market based on current and anticipated expenditure and growth metrics. For example, markets with high population, income and dwelling growth but limited spend would be targeted with mortgage products, given that the anticipated growth and limited spend indicate a demographic saving to buy a home in a growth market. These assumptions are made across the market based on the actual indexed values, and as such every market (DA) is given a product recommendation.
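The mortgage example in the paragraph above could be expressed as a simple rule over the indexed values. The sketch below is purely illustrative; the thresholds, column names and sample values are hypothetical, not the dashboard's actual logic:

# Sketch: a rule-of-thumb product recommendation from indexed market metrics (hypothetical thresholds).
import pandas as pd

markets = pd.DataFrame({
    "da_id": ["A", "B", "C"],
    "growth_index": [135, 95, 110],   # population/income/dwelling growth vs. CMA = 100
    "spend_index": [80, 115, 100],    # average expenditure vs. CMA = 100
})

def recommend(row):
    # High growth plus limited spend suggests savers preparing to buy: target mortgages.
    if row["growth_index"] > 120 and row["spend_index"] < 90:
        return "Mortgage"
    # Fall back to a generic product for this sketch.
    return "Personal Banking"

markets["recommendation"] = markets.apply(recommend, axis=1)
print(markets)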

Figure six is the visualization of the product recommendation and analysis data and its respective dashboard component

Dashboard

Based on the full breakdown of the data extracted, the build-out and the tables leveraged as seen above, the dashboard is fully interactive and driven by one prime parameter, which controls all elements of the dashboard. Additional visualizations such as the products visualization, the client distribution treemap and the branch trends bar graph are combined here. The products visualization provides a full breakdown of the products that can be recommended based on their value and categorization to the bank. The value is driven by the revenue the product can bring, as investment products drive higher returns than liabilities. This is then broken down into graphs showing the number of times each product is recommended and the market coverage the recommendations provide across Stocks, Mortgages, Broker Fees, Insurance and Personal Banking products. The client distribution treemap provides an overview, by branch, of how many frequent clients reside in the branch's respective trading area. This provides a holistic approach to anticipating branch traffic trends and capacity constraints, as branches with a high number of frequent clients would require larger square footage and staffing models to adequately service the dependent markets. The final component is the representation of client trends in a five-year run rate to identify the growth the bank experienced in the market and at a branch level through new client acquisition. This provides a full rundown of the number of new clients acquired and how performance varies year over year to identify areas of high and low growth.

This, combined with the three primary mapping visualizations, creates a fully robust and interactive dashboard for the user. Parameters are heavily used and are built on a select-by-branch basis to dynamically change all 6 live elements to represent what the user requires. This is one of Tableau's most significant capabilities: the flexibility of using a parameter to analyze the entire market, one branch at a time, or markets without a branch is extremely powerful in deriving insights and analytics. The overall dashboard then zooms in and out as required when a specific branch is selected, highlighting its location, its respective frequent clients, the trade-area breakdown, the kinds of products to recommend, the branch client acquisition trends and the actual number of frequent clients in the market. This can also be expanded to analyze multiple branches or larger markets overall if the functionality is required. Overall, the capacity of the dashboard consists of the following elements:

1. Market DA Level Map
2. Branch Level Map
3. Client Level Map
4. Client Distribution (Tree-Map)
5. Branch Trending Graph
6. Product Recommendation Coverage, Value and Effectiveness

This, combined with the capacity to manipulate and store a live feed of data and the parameters used for this level of analysis, brings a new capacity for visualizing large datasets and provides a robust interactive playground for deriving insights and analytics.

The link for this full Tableau Workbook is hosted here (please note an online account is required): https://prod-useast-a.online.tableau.com/t/torontofimarketgeovisprojectsa8905fall2019/views/TheTorontoFIMarketDashboard/TorontoFIMarket?:showAppBanner=false&:display_count=n&:showVizHome=n&:origin=viz_share_link

Geovisualization of Crime in the City of Toronto Using Time-Series Animation Heat Map in ARCGIS PRO

Hetty Fu

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

Background/Introduction

The Toronto Police Service has been tracking and storing historical crime information by location and time across the City of Toronto since 2014. This data can now be downloaded by the public as Excel files and spatial shapefiles and can be used to help forecast future crime locations and times. I decided to use a set of data from the Police Service's data portal to create a time-series map showing crime density through the years 2014 to 2018. The data I decided to work with are auto theft, break and enter, robbery, theft and assault. The main idea of the video map is to show multiple heat (density) maps across month-long intervals between 2014 and 2018 in the City of Toronto, with a focus on downtown Toronto, as most crimes happen within the heart of the city.

The end result is an animated time-series map that shows density heat map snapshots over the 4-year period, three months at a time. Examples are shown at the end of this blog post under Heat Map Videos.

Dataset

All datasets were downloaded through the Toronto Police Services Data Portal which is accessible to the public.

The data that was used to create my maps are:

  1. Assault
  2. Auto Theft
  3. Robbery
  4. Break and Enter
  5. Theft

Process Required to Generate Time-Series Animation Heat Maps

Step 1: Create an additional field to store the date interval in ArcGIS Pro.

Add the shapefile downloaded from the Toronto Police Services Portal into ArcGIS Pro.

First create a new field under View Table and then click on Add.             

To get only the date, we use the Calculate Field tool in the Geoprocessing toolbox with the formula

date2=!occurrence![:10]  

where occurrence is the existing text field that contains the 10-character date (YYYY-MM-DD). This removes the time of day, which is unnecessary for our analysis.
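If you prefer to script this step, the same field calculation can be run with arcpy. This is only a sketch; the layer name below is a placeholder and should be adjusted to your data.

# Sketch: add the trimmed date field with arcpy's Calculate Field tool.
import arcpy

layer = "crime_points"  # the layer added from the Toronto Police shapefile (adjust the name)

arcpy.management.AddField(layer, "date2", "TEXT", field_length=10)
arcpy.management.CalculateField(
    layer,
    "date2",
    "!occurrence![:10]",  # keep only YYYY-MM-DD from the occurrence text field
    "PYTHON3",
)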

Step 2: Create a layer using the new date field created.

Go into the properties of the edited layer. Under the Time tab, set the new date field created in Step 1 and enter the time extent of the dataset. In this case, it is 2014-01-01 to 2018-12-31, as the data spans 2014 to 2018.

Step 3: Create Symbology as Heat Map

Go into the Symbology properties for the edited layer and select Heat Map from the drop-down menu. Select 80 as the radius, which controls the size of the density concentration in the heat map. Choose a colour scheme and set the method to Dynamic. The method determines how each colour in the scheme relates to a density value. With the Dynamic method, unlike Constant, the density is recalculated each time the map scale or map extent changes to reflect only the features currently in view. The Dynamic method is useful for viewing the distribution of data in a particular area, but is not valid for comparing different areas across a map (ArcGIS Pro Help Online).

Step 4: Convert Map to 3D global scene.

Go to the View tab at the top and select Convert to Global Scene. This will allow the user to create a 3D view for the animated heat map.

Step 5: Creating the 3D look.

Once a 3D scene is set, press and hold the middle mouse button and drag it down or up to create a 3D effect.

Step 6: Setting the time-series map.

Under the Time tab, set the start time and end time to create the 3-month interval snapshots. Ensure that "Use Time Span" is checked and that the start and end dates are set between 2014 and 2018. See the image below for the settings.

Step 7: Create a time Slider Steps for Animation Purposes

Under the Animation tab, select the appropriate "Append Time" (the transition time between each frame). Usually 1 second is enough; anything longer will be too slow. Make sure to check "Maintain Speed" and "Append Front" before importing the Time Slider Steps. See the image below.

Step 8: Editing additional cosmetics onto the animation.

Once the animation is created, you may add any additional layers to the frames such as Titles, Time Bar and Paragraphs.

There is a drop down section in the Animation tab that will allow you to add these cosmetic layers onto the frame.

The animation timeline, shown by frames, will look like the image below.

Step 9: Exporting to Video

There are many export types to choose from, such as YouTube, Vimeo, Twitter, Instagram, HD1080 and GIF. See the image below for the settings used to export the animation video. You can also choose the number of frames per second; as this is a time-series snapshot, no more than 30 frames per second is needed. Choose where you would like to export the video and, lastly, click Export.

Conclusion/Recommendation/Limitation

As this was one of my first times using the ArcGIS Pro software, I found it very intuitive to learn, as all the functions were easy to find and ready to use. I was lucky to find a dataset that I didn't have to reformat much: the main fields I required were already there, and the only thing needed was editing the date format. The amount of data in the dataset was sufficient to create a time-series map showing enough data across the City of Toronto three months at a time. If there were less data, I would have had to increase the time span. The 3D scene in ArcGIS Pro is very slow and created a lot of problems when loading the video onto set time frames. As a result of the high-quality 3D setting I decided to use, it took a couple of hours to render the video through the export tool. As ArcGIS Pro wasn't made to create videos, I felt there was a lack of video editing tools.

Heat Map Videos Export

  1. Theft in Downtown Toronto between 2014-2018. A Time-Series Heat Map Animation using a 3 month Interval.
  2. Robbery in Downtown Toronto between 2014-2018. A Time-Series Heat Map Animation using a 3 month Interval.
  3. Break and Enter in Downtown Toronto between 2014-2018. A Time-Series Heat Map Animation using a 3 month Interval.
  4. Auto Theft across the City of Toronto between 2014-2018. A Time-Series Heat Map Animation using a 3 month Interval.
  5. Assault across the City of Toronto between 2014-2018. A Time-Series Heat Map Animation using a 3 month Interval.

Visualizing Toronto Fire Service Response

By: Remmy Zelaya

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

CARTO is an online tool for creating web maps and dashboards and performing spatial analysis. Basic membership is free and no coding experience is required to get your maps online. I created my project visualizing Toronto Fire Service data entirely in CARTO. The embedded map is below, or you can click here to see it in a new tab.

I’ll briefly explain how I created my map and how you can too. 

Before we get to CARTO, we’ll need our data. The City of Toronto’s Open Data portal contains lots of free data on city services and life. From the portal I downloaded shapefiles of TFS stations and run areas (catchment areas for fire stations), and a CSV file of fire incidents.

Next create a CARTO account if you don’t already have one. Once logged in, the CARTO home page will have links to “Getting Started”, “New Map”, and “New dataset.” The Getting Started page is an excellent tutorial on CARTO for first time users. 

Before we start making a map, we will need to upload our data. Click “new dataset” and follow the prompts. Note, CARTO requires shapefiles to be archived in a ZIP file. 

Once that is done, click on “new map” and add your uploaded datasets. CARTO will add your datasets as layers to the map, zoom to layer extent, and automatically create a point layer out of the CSV file. 

The map is on the right side of the screen and a control panel with a list of the uploaded layers is on the left. From here we can do a few things:

  • Re-title our map by double clicking on the default title
  • Rearrange our layers by dragging and dropping. Layer order determines drawing order. Rearrange the layers so that the stations and incidents points are on top of the run area polygon.
  • Change the base map. I've used Positron Lite for a simple and clean look. Note, CARTO has options to import base maps and styles from other sites, or to create your own.
  • Click on the layer card to bring up that layer’s options menu.

Let's click on the fire stations layer. As with the map, we can rename the layer by double clicking on its name. The layer menu has five panes: Data, Analysis, Style, Pop-Up, and Legend. The Style pane is selected by default. The first section of the Style pane is aggregation, which is useful for visualizing dense point layers. We'll keep the default aggregation of By Point. Section 2, Style, controls the appearance of the layer. I've changed my point colour to black and increased the size to 12 so the stations stand out from the incident points.

Now for the incidents layer, I decided to use the Animation aggregation option. If the point layer has a column representing time, we can use it to create an animation of the points appearing on the map over time. This option creates a time slider widget at the bottom of the map with a histogram representing the number of fires over time.

With the run areas, I decided to create a choropleth map where run areas with a higher number of incidents appear darker on the map. To do this, I first needed to determine how many incident points fall within each run area. Go to the run area menu, click on Analysis, then "+Add New Analysis." CARTO will navigate to a new page with a grid of its spatial analysis options. Click on "Intersect and Aggregate," which finds "overlapping geometries from a second layer and aggregate its values in the current layer."

CARTO will navigate back to the Analysis pane of the run area menu and display options for the analysis. Run area should already be selected under Base Layer. Choose incidents as the target layer, and under Measure By select count. CARTO will display a message stating that new columns, count_vals and count_vals_density, have been added to the data.

There will be an option to style the analysis; click on it. Choose "by value" for Polygon Colour, choose the new count_vals_density for Column, then select an appropriate colour scheme.

CARTO's widget feature creates small boxes to the right of the map with useful charts and stats about our data. Click on the Widgets pane to start adding new widgets from a grid (as with Analysis), or add new widgets based on a specific layer from that layer's Data pane. CARTO has four types of widgets:

  • Category creates a horizontal bar chart measuring how many features fit into a category. This widget also allows users to filter data on the map by category. 
  • Histogram creates a histogram measuring a selected variable
  • Formula displays a statistic on the data based on a selected formula
  • Time Series animates a layer according to its time information.

As with layers, clicking on a widget brings up its options menu. From here you can change the source data layer, the widget type, and configure data values. For my Fires by Run Area widget, I used the incidents layer as the source, aggregated by id_station (fire station ID numbers) using the count operation. This widget counts how many incidents each station responded to and displays a bar chart of the top 5 stations. Clicking on a station in the bar chart filters the incidents by the associated station. After this, I added four formula-based widgets.

We're nearly done. Click on the "Publish" button on the bottom left to publish the map to the web. CARTO will provide a link for other users to see the map and an HTML embed code to add it to a web page. I used the embed code to add the embedded map to the beginning of the post.

Thanks for reading. I hope you'll use CARTO to create some nice maps of your own. You may be interested in checking out the CARTO blog to see other projects built on the platform, or the Help section for more information on building your own maps and applications.

Visualizing the Story of Forest Fires in BC with Operational Dashboards

By: Anderson Webber

Background

2017 and 2018 were the two worst fire years in BC's known history, diminishing provincial air quality and destroying healthy ecosystems beyond natural levels. These consecutive record-breaking years have led to many discussions about the causes of such fires, with the hope of better understanding why these events occurred and preventing them from recurring. The purpose of this project is to aid the understanding of BC wildfires through an interactive summary, in an Operations Dashboard, of the 179,000 wildfires that have occurred within the province over the last 68 years.


Data and Technology

Dashboards have become a very trendy tool for geovisualization, designed to display location-aware visualizations and analytics packaged in an easy-to-use web or mobile app. ESRI largely markets Operations Dashboards for real-time analytics that aid in tasks such as emergency response, but in this example I will be looking at the dashboard's utility for understanding a large historical dataset: past BC wildfires. The BC wildfire data was sourced from the BC open data catalogue and contains point locations and attribute data for every fire incident since 1950, last updated on April 1st, 2019. The following tutorial will allow anybody with an ArcGIS licence to replicate this project.

Step 1. Getting the data

Data was downloaded from here:

https://catalogue.data.gov.bc.ca/dataset/fire-incident-locations-historical

Step 2. Data Cleaning

Beyond the date of report and location, the wildfire CSV contained information for each fire including its size, cause and the zone it occurred in. In order to create the widgets I wanted, the data had to be cleaned a bit. To determine which months the fires were worst in, for example, the date field in the format YYYYMMDD had to be split into three separate fields: Year, Month and Day. This could be done in any data manipulation software such as SQL Server, Alteryx or Excel. The simple Excel formulas needed were:

=LEFT(D2,4) to select year

=MID(D2,5,2) to select month 

=RIGHT(D2,2) to select days
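The same split can be done in Python; a minimal pandas sketch is below. The file name and date column name are placeholders and should be replaced with those in the downloaded wildfire CSV.

# Sketch: split a YYYYMMDD date field into Year, Month and Day with pandas.
import pandas as pd

fires = pd.read_csv("bc_fire_incidents.csv", dtype={"IGNITION_DATE": str})  # hypothetical names
fires["Year"] = fires["IGNITION_DATE"].str[:4]
fires["Month"] = fires["IGNITION_DATE"].str[4:6]
fires["Day"] = fires["IGNITION_DATE"].str[6:8]
fires.to_csv("bc_fire_incidents_split.csv", index=False)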

Step 3. Hosting layer file online

Once the data was split, it had to be uploaded to ArcGIS Online as a hosted feature layer in order to be brought into the dashboard web app. Hosted layers allow for the uploading of large files, which can then be used in web, desktop or mobile apps. In order to publish hosted features, you must be part of an ArcGIS organization and have privileges to publish hosted layers. When you add a layer through ArcGIS Online, this is what your options should look like to host a layer:

Step 4. Making a web map

Before creating the dashboard itself, you must first create a web map. This is done by clicking the "Create" tab and then the "Map" button. Once the map is open, simply click the "Add +" button and bring in the hosted feature layers we just created. Now you have a web map with the fire data. Edit the symbology and add any other layers you would like at this step. I chose a dark basemap and red "firefly" symbology.

Step 5. Adding fields

Depending on what you want your dashboard to display, more data cleaning and manipulation could be done at this step. As I wanted to see the different sizes of fires within BC, I created an ESRI Arcade expression to calculate a field with size ranges. To do this, I created a new field of data type "Text" in the Fires table called "Sizes" and calculated the field with:

iif($feature["SIZE_HA"] <= 0.25, "<0.25",
  iif($feature["SIZE_HA"] <= 10, "0.25-10",
    iif($feature["SIZE_HA"] <= 100, "10-100",
      iif($feature["SIZE_HA"] <= 1000, "100-1,000",
        iif($feature["SIZE_HA"] <= 10000, "1,000-10,000",
          iif($feature["SIZE_HA"] <= 50000, "10,000-50,000",
            iif($feature["SIZE_HA"] > 50000, ">50,000", 0)))))))

Step 6. Creating the dashboard

Now we are ready to make the dashboard! On the same web map that was just made, click the Share button -> Create new Web App -> Operations Dashboard. You can call it whatever you like. Now you have a dashboard shell.

Step 7. Add widgets

Now comes the fun part. Click the “+” dropdown on the top left of the dashboard and add whatever widgets you want. Widgets can be dragged, resized and stacked allowing for a high level of customization.

Step 8. Making charts interactive

To make charts interactive, open the widget's configuration, go to the "Actions" toolbar, and add whatever action you like. This means that selecting a bar on the chart below will change the points, indicators and all accompanying data visualizations to the months chosen. The same method can be applied to any other element, including the map itself.

Keep playing around with the widgets; you can also add images. The final product is at http://arcg.is/1WDSyy. A screenshot can be seen below:

Limitations

The limitations of this project concern both the data and the software itself. For starters, in order to create an Operations Dashboard, you need an authorized ArcGIS account, which is not free and therefore not accessible to everyone. Another major limitation has to do with the size of the dataset. With almost 180,000 fire points, rendering these points online can cause problems such as lag and may even crash the app if you have limited RAM. The third limitation concerns the projection: ArcGIS Online defaults to a global Web Mercator (WGS 1984) projection, which is not optimal for presenting data at the provincial level. Finally, the user's screen size also has a major impact on the experience. Although the dashboard can be opened on mobile devices and tablets, it becomes more limited, as graphics and titles are cut off or crushed together, taking away from its visual appeal and usability.

A Shot in the Dark: Analyzing Mass Shootings in the United States, 2014-2019

By: Miranda Ramnarayan

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

The data gathered for this project was downloaded from the Gun Violence Archive (https://www.gunviolencearchive.org/), which is a not-for-profit corporation. The other dataset is the political affiliation per state, gathered by scraping this information from https://www.usa.gov/election-results. Since both of these datasets contain a "State Name" column, an inner join is used to allow the two datasets to "talk" to each other.

The first step is importing your Excel files and setting up that inner join.

There are four main components to this dashboard: States with Mass Shootings, States with Highest Death Count, Total Individuals Injured from Mass Shootings, and a scattergram displaying the number of individuals injured and killed. All of these components were created in Tableau worksheets and then combined on a dashboard upon completion. The following steps show how to re-create each worksheet.

1. States with Mass Shootings

In order to create a map in Tableau, very basic geographic information is needed. In this case, drag and drop the “State” attribute under the “Dimensions” column into the empty frame. This will be the result:

In order to change the symbology from dots to polygons, select “Map” under the Marks section.

To assign the states with their correct political affiliation, simply drag and drop the associated year you want into the “Colour” box under Marks.

This map displays the states that have had mass shootings within them, from 2014 to 2019. In order to animate this, simply drag and drop the "Incident Date" attribute onto Pages. The custom date page has been set to "Month / Year" since the dataset is so large.

This map is now complete, and when you press the play button on the right side of the window, the map will change, displaying only the states that had mass shootings within them for that month and year.

2. States with Highest Death Count

This is an automated chart that shows the Democratic and Republican state with the highest number of individuals killed in mass shootings, as the map of mass shootings above it runs through its time series. Dragging and dropping "State" into the Text box under Marks will display all the states within the dataset. Dragging and dropping the desired year onto Colour under Marks will assign each state its political party.

 In order for this worksheet to display the state with the highest kill count, the following calculations have to be made once you drag and drop the “# Killed” from Measures into Marks.

To link this count to each state, filter “State” to only display the one that has the maximum count for those killed.

This will automatically place “State” under Filters.

Drag and drop “Incident Date” into Pages and set the filter to Month / Year, matching the format from section 1.

Format your title and font size. The result will look like:

3. Total Individuals Injured from Mass Shootings

In terms of behind the scenes editing, this graph is the easiest to replicate.

Making sure that “State Name” is above “2016” in this frame is very important, since this is telling Tableau to display each state individually in the bar graph, per year.

4. Scattergram

This graph displays the number of individuals killed and injured per month/year. This graph is linked to sections 1 and 2, since the "Incident Date" under Pages is set to the same format. Dragging and dropping "SUM(# Killed)" into Rows and "SUM(# Injured)" into Columns sets the structure for the graph.

In order for the dot to display the sum of individuals killed and injured, drag and drop “# Killed” into Filter and the following prompt will appear. Select “Sum” and repeat this process for “# Injured”.

Drag and drop “Incident Date” and format the date to match Section 1 and 2. This will be your output.

Dashboard Assembly

This is where Tableau lets you customize as much as you want. Launching a new dashboard frame allows you to drag and drop your worksheets into the frame. Borders, images and text boxes can be added at this point. From here, you can re-arrange, resize and adjust your inserted worksheets to make sure the formatting is to your liking.

Right clicking on the map on the dashboard and selecting “Highlight” will enable an interactive feature on the dashboard. In this case, users will be able to select a state of interest, and it will highlight that state across all workbooks on your dashboard. This will also highlight the selected state on the map, “muting” other states and only displaying that state when it fits the requirements based on the calculations set up prior.

Since all the Pages were all set to “Month/Year”, once you press “play” on the States with Mass Shootings map, the rest of the dashboard will adjust to display the filtered information.

It should be noted that Tableau does not allow the user to change the projection of any maps produced, resulting in a lack of projection customization. The final dashboard looks like this:

Visualizing New York City yellow cabs and their origin-destination over time

Fana Gidey
SA8905 – Cartography and Geovisualization
Fall 2019
@Ryersongeo

Background

Taxi networks can uncover how people move within neighbourhoods and reveal distinct communities, the cost of housing and other socio-economic features. New York City is famous for its yellow cabs and diverse neighbourhoods, providing a good study area. This project looks at trip records for Yellow Cab taxis in order to visualize New York residents' travel patterns over time.

On New Year's Eve, New York taxi riders are expected to make their way to see the ball drop and to watch the fireworks from east of the Hudson River, Battery Park, and Coney Island. Lastly, movement from the outer boroughs into Manhattan and Brooklyn for entertainment is expected.

Marketers, policy makers, urban planners and the real estate industry can leverage this spatial data to predict activity and features of human society.

Technology

The technology used for visualization is Kepler.gl, an open-source geospatial data analysis tool. I picked the tool because it can visualize paths over time with time-series animations that communicate a very powerful data narrative; previous examples include flight and refugee movement data. Kepler offers everything from drag-and-drop options to highly skilled scripting.

Step 1: Gathering the Data

The data was obtained from the NYC Open Data Portal – Transportation – City of New York, where you can obtain Yellow, Green Cab and For-Hire Vehicle trip records. Initially I wanted to compare trip records between two years (i.e. 2009 and 2016); however, this dataset is so large (131,165,043 records) that I decided to narrow down and focus on yellow cabs and a single date likely to have lots of taxi activity (January 1st, 2016). The columns in the dataset include: VendorID, pickup and drop-off date timestamps, pickup and drop-off latitudes and longitudes, trip distance, payment type, payment amount, tax, toll amount, and total amount.

Step 2: Cleaning the Data

It is imperative to know how the data needs to be structured when drawing paths over time using origin-destination data. In order to create a path-over-time map, the data source should include the following types of information:

  • The Latitude and Longitude coordinates for each trip data point in a path
  • A column that defines the order to connect the points (in my case I used the date timestamp information; a manually applied surrogate key is also acceptable, e.g. 1, 2, 3, 4, 5)
  • The source data has a sufficient amount of data points to create lines from points

Before

The data was then cleaned and prepped in Excel. The fields were formatted to currency (two decimal places, $) and date (m/d/yyyy h:mm:ss), and null values were removed. A trip-duration field was calculated and obsolete data was removed. The CSV now has 345,038 records.

After
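For readers who would rather script the cleaning than do it in Excel, a rough pandas equivalent is sketched below. The file name and column names are assumed from the public TLC yellow cab schema and should be checked against your download; this is not the exact cleaning done for the project.

# Sketch: filter the yellow cab records to Jan 1, 2016 and compute trip duration with pandas.
import pandas as pd

trips = pd.read_csv(
    "yellow_tripdata_2016-01.csv",
    parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"],  # names assumed from the TLC schema
)

# Keep only New Year's Day trips and drop rows missing coordinates.
jan1 = trips[trips["tpep_pickup_datetime"].dt.date == pd.Timestamp("2016-01-01").date()].copy()
jan1 = jan1.dropna(subset=["pickup_latitude", "pickup_longitude",
                           "dropoff_latitude", "dropoff_longitude"])

# Trip duration in minutes, then round fares to two decimals.
jan1["trip_minutes"] = (jan1["tpep_dropoff_datetime"] - jan1["tpep_pickup_datetime"]).dt.total_seconds() / 60
jan1["total_amount"] = jan1["total_amount"].round(2)

jan1.to_csv("yellow_jan1_2016_clean.csv", index=False)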

Step 3: Create Visualization

Now that the data is cleaned and prepped, it can be loaded into the interactive visualization software. Navigate to kepler.gl and select "Get Started". You will be prompted to add your data (i.e. CSV, JSON, or GeoJSON).

Once your data is loaded, you can start with the "Layer" options. The software was able to pick up the pick-up and drop-off latitude and longitude. The pick-up and drop-off locations are represented as point features by default; you can use the drop-down menu to select lines, arcs, etc.

The origin-destination points are now represented by arcs. In order to animate the feature, a field must be selected to sort by.

In the filters tab, you can choose a field to sort by (i.e. “pick_up datetime”).

You can edit the map style by selecting the "Base Map" tab.

Other customization features are highlighted below.

Final Results

https://fanagidey.github.io/

Batman's Trending: A Spatial Approach to Normalizing Google Trends and its Inferences on Geographic Data

With the growing fascination with map forums such as Map Porn on Reddit or Terrible Maps on Instagram, it's evident that mapping has gone beyond the needs of the typical geographer. Open data is used every day, sometimes to produce a "cool" map that nevertheless relays information. One of my obsessions is superheroes from the DC and Marvel universes. Comic books, movies and conventions have steadily increased, and with Disney overseeing all Marvel movies this trend will not be dwindling anytime soon. Google Trends is one of the most useful, real-time and accessible data resources freely available to anyone with internet access. Google Trends provides keyword-related data based on a search volume index and the number of searches within a geographic area. I wanted to explore this type of data to see its limitations and its usefulness.

The limitation with Google Trends is that the output data provided has already been normalized by Google. Google's normalization algorithm is not provided; instead, they state:

“While only a sample of Google searches are used in Google Trends, this is sufficient because we handle billions of searches per day. Providing access to the entire data set would be too large to process quickly.”

 Instead, results are normalized through the following process:

  1. Data points are divided by the total searches of the geographic location and time range to compare relative popularity.* This prevents locations with the most search volume from automatically being ranked highest.
  2. Results are scaled to a 0 to 100 range based on a topic's proportion of all searches on all topics.
  3. Regions that show the same search interest for a term will not necessarily have the same total search volume.

*It should be noted that Google's normalization over time causes the data to change slightly depending on the day you gather it, even if you use the same time frame consistently. To keep the data comparable, it is best to gather all of it within the same week to avoid larger differences.

The design of the geovis project is a physical map with the aesthetic of an old-fashioned comic-book pop-art piece: simple, artistic, and comprehensive of the data.

Laser Cut

After speaking with the laser cutting company Hot Pot, three laser-cut pieces were produced: one outline of the continental United States, one with all the states, and one with Alaska and Hawaii. The maps are at a 1:20,000,000 scale, which was then blown up to 2.5x its size to fit the boards. Each state is covered with its trending superhero or villain from within the past year.

In order to create the laser cut template, I imported a shapefile of the US states into ArcGIS Pro and used the Albers Equal Area Conic projection. The Dissolve tool was then used on the US shapefile to create an outline of the continental United States. The maps were then exported as SVGs in order to create vector files the company could use. Since the US states have many intricate details, there were some errors in the SVG file ArcGIS Pro produced. To correct this, the US states file was imported into Adobe Illustrator and traced to create the vectors.

Next, the superhero/villain data needed to be gathered. Below is a step-by-step process of how I normalized the Google Trends data and created a visual physical map that any fan would love to hang in their living room.

Google Trends Superhero Data

Seventeen characters were chosen from each universe to populate the 50 states: 20 male characters and 14 female characters that have had a major motion picture or television appearance between 2017 and 2019, with exceptions for staple characters and upcoming 2020 movies.

DC superheroes and antiheroes with an appearance in 2014-2019 | Marvel superheroes and antiheroes with an appearance in 2014-2019
Batman (Movie 2017) | Iron Man (Movie 2019)
Superman (Movie 2017) | Captain America (Movie 2019)
Joker (Movie 2019) | Hulk (Movie 2019)
Aquaman (Movie 2018) | Spiderman (Movie 2019)
Flash Barry Allen (TV show 2019 / Justice League 2018) | Black Panther (Movie 2019)
Shazam (Movie 2018) | Thor (Movie 2019)
Cyborg (Movie 2017) | Venom (Movie 2019)
Green Lantern *staple in the DC universe | Deadpool (Movie 2019)
Green Arrow (TV show 2019) | Groot (Movie 2019)
Dick Grayson (TV show 2019) | Hawkeye (Movie 2019)

Women (DC) | Women (Marvel)
Wonder Woman (Justice League) | Black Widow (Movie 2019)
Harley Quinn (upcoming movie 2020) | Captain Marvel (Movie 2019)
Catwoman *staple in the DC universe | Scarlet Witch (Movie 2019)
Raven (TV show 2019) | Gamora (Movie 2019)
Starfire (TV show 2019) | Storm (Movie 2019)
Supergirl (TV show 2019) | Jean Grey (Movie 2019)
Batwoman (TV show 2019) | Nebula (Movie 2019)

Once all the characters were established, each character was entered into Google Trends to find their rating. The year ending September 30, 2019, was used as the time frame, and a score of 75 and above was recorded in an Excel sheet to keep track of the rankings. The Related Topics and Related Queries tables were used to make sure the term related to the character and not something else. If a character showed unrelated topics and queries because of the ambiguity of their name, they were kicked out. To keep the universes equal in number, if one character was kicked out of the list, another from the opposite universe needed to be kicked out as well. Venom and Storm were kicked out of Marvel, and Shazam and Raven were kicked out of DC.

Once all the characters were recorded, the next step was to determine how a superhero/villain won a state. Since I wanted all characters to be represented on the map, I first ranked the character scores and then the state scores.

If a character scored 100% for a state and had no other scores, the character instantly won that state. These included Aquaman, Hawkeye, Green Lantern and Green Arrow.

Utah could not be won by one superhero alone because of the large number of 100% scores. Instead, Utah was split between the four characters who scored the most 100% scores.

Next, I looked at the superheroes with the lowest number of total scores. If a superhero had a low total number of scores but only one 100% score, the superhero won that state. If there were multiple 100% scores for a state (e.g. Alaska), those states were ranked later on.

Once every character had won a state, the next step was to rank the state scores. Whichever character held the highest score within a state won that state.
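The ranking idea can be pictured with a tiny pandas sketch (the states and scores below are made up): for each state, the character column with the highest Trends score "wins" that state.

# Sketch: pick the winning character per state from a state x character score table (made-up scores).
import pandas as pd

scores = pd.DataFrame(
    {"Batman": [100, 82, 0], "Iron Man": [91, 100, 88], "Wonder Woman": [75, 79, 100]},
    index=["New Jersey", "New York", "California"],  # hypothetical states
)

winners = scores.idxmax(axis=1)  # highest score in each row wins the state
print(winners)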

Once all the scores were established, the next step was to make the physical map.

Making the Map

The first thing to do was gather all the necessary comic books. After attaining over 30, I had all my characters.

After receiving the laser cut of the map each character was cut out to fit the shape of the state and glued and sealed onto the continental US map. Since Alaska was too big to fit on to the same map as the continental US, it was given its own separate board along with Hawaii.

After all the characters were glued down and sealed, fabric reflecting the original pop art colours was glued to foam board before mounting the maps. This step was done for both the continental map and the Alaska/Hawaii map.

Since some east coast states were too small to have characters directly represented on them, yellow fabric was used as filler. The east coast states were then outlined, and an array of coloured strings and pins was used to connect pictures of the states' characters to the shapes of the states and then to their geographic locations.

Once all the states were represented by characters from both universes, a scale bar was added to both maps as a final touch.

The maps turned out amazing and truly artistic. I could not have made this map alone and would like to thank my friends in the program, Jeremy, Miranda and Fana, for all their help and support, my friend Kevin who helped with all my Adobe Illustrator needs, and my cat for being himself.

Toronto Theft: A Neighbourhood Investigation

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

By: Julia DiMartella-Orsi

Introduction:

ESRI's creation of the Story Map changed the way we can visualize data. Not only does it allow a broader audience to interact with and create their own maps thanks to its easy-to-use design, it also contains many amazing functions, templates, and themes. Users can personalize their story by adding their own images, text, videos, and map layers after creating a free ArcGIS Online account. Popular templates include Map Series, Tour, Journal, and Cascade.

Get started making your own Story Map here: http://storymaps-classicqa.arcgis.com/en/app-list/

Creating Your Story Map:

Once you have selected the template you want to use, the choices are up to you. By clicking the "+" symbol you can include text, media such as videos, a new title page, or immersive content such as a web map.

ESRI also designed Story Maps to link to outside content and various social media sites such as Flickr and Unsplash. ‘Link to Content’ is also extremely useful as it allows users to add photos and videos found on the internet directly to their story map by copying and pasting their link.

To add interactive web maps to your story map, users can link map layers from their ArcGIS Online account. Layers can be created in ArcGIS Online, or in ArcMap, where layers are exported as a zip file and imported onto your ArcGIS Online base map. Map layers can also be found online using the "add layer from the web" or "search for layers" options. The layers that appear are based on the type of ArcGIS Online account you have created. Enterprise accounts contain additional layers provided by your organization; however, ESRI also has free downloadable layers available for users without an organization.

Users also have the option to make their story maps public by clicking the globe icon, or private for their own personal use by clicking the lock icon. To save your story map select the floppy disk icon. Your saved map will appear under ‘My Content’ in your ArcGIS Online account.

My Story and Creating Web Maps:

Over the last few years, theft in Toronto has been increasing at a rapid rate. According to the Toronto Police Service, Toronto experienced a total of 5,430 thefts between 2014 and 2018. However, these are only the thefts that were reported and documented by police. In order to analyze the distribution of theft across the city, the Toronto Police created a point dataset that summarizes when and where each theft took place. Additional datasets were also created for prominent types of theft such as bicycle and auto theft.

To compare the number and types of thefts in each Toronto neighbourhood, I decided to create a story map using the Cascade template. This creates a scrolling narrative that allows viewers to observe the data in a clear, unique way. I chose a story map because of the number of layers I wanted to compare, and because the "swipe" transition makes it easy to compare each neighbourhood. I therefore created a series of choropleth maps based on the 2014-2018 theft/crime data from the Toronto Police Open Data Portal.

The following steps were used to create each web map used in my Story Map:

Step 1: Download the point data and add the layer into ArcMap.

Step 2: Use the ‘spatial join’ analysis tool and select your neighbourhood boundary file as the target layer and the theft point data as the join feature. Make sure to select ‘join one to one’. This will produce a new layer with a ‘count’ field that counts the number of thefts in each neighbourhood – each neighbourhood is given a count.

Step 3: In order to produce accurate results, you must normalize your data. To do so, add a new field to your attribute table (the same layer with the count field) titled "Area", and right click it to select "calculate geometry". Change the property to "area" and choose the units you wish to use. Click "OK" and the results will populate the new field.
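A Python sketch of the same idea as Steps 2 and 3 is shown below. The file and column names are placeholders, and this is the equivalent approach in geopandas rather than the exact ArcMap workflow used here.

# Sketch: count theft points per neighbourhood and normalize by area with geopandas.
import geopandas as gpd

hoods = gpd.read_file("toronto_neighbourhoods.shp")   # hypothetical boundary file
thefts = gpd.read_file("theft_points.shp").to_crs(hoods.crs)

# Spatial join: attach each theft point to the neighbourhood that contains it, then count.
joined = gpd.sjoin(thefts, hoods, predicate="within")
counts = joined.groupby("index_right").size()
hoods["count"] = hoods.index.map(counts).fillna(0)

# Normalize: thefts per square kilometre (requires a projected CRS with metre units).
hoods = hoods.to_crs("EPSG:32617")
hoods["area_km2"] = hoods.geometry.area / 1e6
hoods["density"] = hoods["count"] / hoods["area_km2"]

hoods.to_file("theft_by_neighbourhood.shp")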

Step 5: Export the layer and save it as a compressed zip folder. Import the data into ArcGIS Online by clicking the “Add” tab.

Step 6: Once you import your layer you are given a variety of styles to choose from. Select the one you like best (ex: choropleth) as well as the field you wish to map – in this case select ‘count’. To normalize ‘count’ select the ‘divided by’ dropdown and choose your ‘Area’ field. Change the colour of your map to your preference by clicking ‘symbols’.

Step 7: Save your layer and select the tags that relate to your topic. The layer will now appear in "My Content", where it can be added to your Story Map.

Step 8: To compare each layer add both layers you wish to compare to your story map by using the “+” symbol. Once you have done so, choose the transition type (ex: horizontal swipe) you want to use by clicking on the arrow below. The transition will take place as the user scrolls through your story map.

My Story Map titled “Toronto Theft: A Neighbourhood Investigation” can be viewed here:

https://arcg.is/uiemr

Putting BlogTO on the map (literally) – Tutorial

Kyle Larsen
SA8905 – Cartography and Geovisualization
Fall 2019

Instagram is a wealth of information, for better or worse. If you've posted to Instagram before and your profile is public, maybe even if it's not, then your information is out there just waiting for someone, someone maybe like me, to scrape it and put it onto a map. You have officially been warned.

But I'm not here to preach privacy or procure your preciously posted personal pics. I'm here to scrape pictures from Instagram, take their coordinates, and put them onto a map in a grid layout over Toronto. My target for this example is a quite public entity that thrives off exposure: the notorious BlogTO. Maybe only notorious if you live in Toronto, BlogTO is a Toronto-based blog about the goings-on in the 6ix as well as Toronto life and culture. They also have an Instagram account that is almost too perfect for this project – but more on that later. Before anything is underway, a huge thank-you-very-much to John Naujoks and his Instagram scraping project, which created some of the framework for this one (go read his project here, and you can find all of my code here).

When scraping social media, you can sometimes use an API to directly access the back end of a website; Twitter, for example, has an easily accessible API. Instagram's API sits securely behind the brick wall that is Facebook, aka it's hard to get access to. While it would be easier to scrape Twitter, we aren't here because this is easy. Maybe it seems a little rebellious, but Instagram doesn't want us scraping their data… so we're going to scrape their data.

This will have to be done entirely through the front end, aka the same way that a normal person would access Instagram, but we're going to do it with python and some fancy HTML stuff. To start, you should have python installed (3.8 was used for this, but any version of python 3 should give you access to the appropriate libraries) as well as some form of GIS software for the mapping and geoprocessing. Alteryx would be a bonus but is not necessary.

We’re going to use a few python libraries for this:

  • urllib – for accessing and working with URLs and HTML
  • selenium – for scraping the web (make sure you have a browser driver installed, such as chromedriver)
  • pandas – for writing to some files

If you've never done scraping before, it is essentially writing code that opens a browser, does some stuff, takes some notes, and returns whatever notes you've asked it to take. But unlike a person, you can't tell python to go recognize specific text or features, which is where the python libraries and HTML stuff come in. The below code (thanks John) takes a specific Instagram user, returns as many post URLs as you want, and adds them to a list for your scraping pleasure. If you enable the browser head you can actually watch as python scrolls through the Instagram page, silently kicking ass and taking URLs. It's important to use the time.sleep(x) function because otherwise Instagram might know what's up and block your IP.
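The original script isn't reproduced here, but a minimal sketch of the idea looks something like the code below. The profile name, scroll count, and pause length are assumptions, and Instagram's front end changes often, so the details may need adjusting; it also assumes a browser driver such as chromedriver is installed.

```python
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

def get_post_urls(username, scrolls=10, pause=3):
    """Scroll an Instagram profile and collect links to individual posts."""
    driver = webdriver.Chrome()  # requires chromedriver on your PATH
    driver.get(f"https://www.instagram.com/{username}/")
    time.sleep(pause)

    post_urls = set()
    for _ in range(scrolls):
        # Grab every link currently on the page and keep the ones that are posts
        for link in driver.find_elements(By.TAG_NAME, "a"):
            href = link.get_attribute("href")
            if href and "/p/" in href:
                post_urls.add(href)

        # Scroll to the bottom so Instagram loads the next batch of posts,
        # and pause so we don't hammer the site (or get our IP blocked)
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)

    driver.quit()
    return list(post_urls)

urls = get_post_urls("blogto", scrolls=20)
print(len(urls), "post URLs collected")
```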

But what do I do with a list of URLs? Well, this is where you get into the scrappy parts of this project, the closest to criminal you can get without actually downloading a car. The essentials for this project are the image and the location, but this is where we need to get really crafty. Instagram is actually trying to hide the location information from you, at least if you're scraping it. Nowhere in a post are coordinates saved. Look at the image below: you may know where the Distillery District is, but python can't just give you X and Y because it's 'south of Front and at that street where I once lost my wallet.'

If you click on the location name you might get a little more information but… alas, Instagram keeps the coordinates locked in as a .png, yielding us no information.

BUT! If you can scrape one website, why not another? If you can use Google Maps to get directions to “that sushi restaurant that isn’t the sketchy one near Bill’s place” then you might as well use it to get coordinates, and Google actually makes it pretty easy – those suckers.
(https://www.google.com/maps/place/Distillery+District,+Toronto,+ON/@43.6503055,-79.35958,16.75z/data=!4m5!3m4!1s0x89d4cb3dc701c609:0xc3e729dcdb566a16!8m2!3d43.6503055!4d-79.35958 )
I spy with my little eye some X and Y coordinates. The first set, after the '@', is the centre of the current map view, which often starts out near your own location (privacy is important; that's the takeaway from this project, right?). The second lat/long, which you can glean from the end of the URL, is the location of the place that you just googled. Now all that's left is to put this information together and create the script below. Earlier I said that it's difficult to tell python what to look for; what you need is the xpath, which you can copy from the HTML (right-click an element, choose 'Inspect', then right-click the highlighted HTML and copy the xpath for that specific element). For this project we're going to need the xpath for both the image and the location. The steps are essentially as follows:

  • go to Instagram post
  • download the image
  • copy the location name
  • google the location
  • scrape the URL for the coordinates
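The full script isn't embedded here, so the sketch below shows one way those steps could fit together, continuing from the list of post URLs collected earlier. The XPath strings, file names, and the " Toronto" suffix added to the search are assumptions (copy the real XPaths from your own browser's developer tools), and the regex that pulls coordinates out of the Google Maps URL depends on Google's current URL format, which can change.

```python
import re
import time
import urllib.request
from urllib.parse import quote

import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By

# Placeholder XPaths - copy the real ones from your browser's developer tools
IMAGE_XPATH = '//article//img'
LOCATION_XPATH = '//header//a[contains(@href, "/explore/locations/")]'

driver = webdriver.Chrome()
records = []

for url in urls:  # 'urls' is the list returned by get_post_urls() earlier
    driver.get(url)
    time.sleep(3)

    # Skip posts without a geotag, and videos (which have no image to grab)
    images = driver.find_elements(By.XPATH, IMAGE_XPATH)
    locations = driver.find_elements(By.XPATH, LOCATION_XPATH)
    if not images or not locations:
        continue

    img_src = images[0].get_attribute("src")
    place = locations[0].text

    # Download the image, named after the post's shortcode
    shortcode = url.rstrip("/").split("/")[-1]
    urllib.request.urlretrieve(img_src, f"{shortcode}.jpg")

    # Google the location name and pull the coordinates out of the result URL
    driver.get("https://www.google.com/maps/search/" + quote(place + " Toronto"))
    time.sleep(3)
    match = re.search(r"!3d(-?\d+\.\d+)!4d(-?\d+\.\d+)", driver.current_url)
    if not match:
        continue
    lat, lon = float(match.group(1)), float(match.group(2))

    records.append({"post": url, "place": place, "lat": lat, "lon": lon})

driver.quit()
pd.DataFrame(records).to_csv("blogto_posts.csv", index=False)
```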

There are some setbacks to this: not all posts are going to have a location, and not all pictures are pictures – some are videos. In order for a picture to qualify for full scraping it has to have a location and not be a video, plus the bonus criterion – it must be in Toronto. Way back I said that BlogTO is great for this project; that's because they love to geotag their posts (even if it is mostly 'Toronto, Ontario') and they love to post about Toronto, go figure. With these scripts you've built up a library of commands for scraping whatever Instagram account your heart desires (as long as it isn't private – but if you want to write a script to log in to your own account, then I guess you could scrape a private account that has accepted your follow request, you monster, how dare you).

With the pics downloaded and the latitudes longed, it is now time to construct the map. Unfortunately this is the most manual part of the process, but there's always the arcpy library if you want to try to automate it. I'll outline my steps for creating the map, but feel free to go about it your own way.

  1. Create a grid of 2 km squares over Toronto (I used the grid tool in Alteryx)
  2. Intersect all your pic-points with the grid and take the most recently posted pic as the dominant one for that grid square (a rough Python sketch of steps 1 and 2 follows this list)
  3. Mark each square with the image that is dominant in that square (I named my downloaded images after their URLs)
  4. Crop all dominant images to a 1×1 square (I used Google Photos)
  5. Take a deep breath, maybe a sip of water
  6. Manually drag each dominant image into its square and pray that your processor can handle it; save your work frequently.
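I did steps 1 and 2 in Alteryx, but the same grid-and-intersect logic can be sketched in Python with geopandas and shapely. Everything here is an assumption, not the workflow I actually ran: the CSV name comes from the earlier scraping sketch, the post-date column doesn't exist in that CSV (so this version simply keeps the first point per cell), and older geopandas versions use op= instead of predicate=.

```python
import geopandas as gpd
import pandas as pd
from shapely.geometry import box

# Load the scraped points (lat/lon CSV from the scraping step) as a GeoDataFrame
pts = pd.read_csv("blogto_posts.csv")
points = gpd.GeoDataFrame(
    pts, geometry=gpd.points_from_xy(pts.lon, pts.lat), crs="EPSG:4326"
).to_crs(epsg=32617)  # UTM zone 17N, so the grid can be built in metres

# Step 1: a grid of 2 km squares covering the extent of the points
xmin, ymin, xmax, ymax = points.total_bounds
cell = 2000  # metres
cells = []
y = ymin
while y < ymax:
    x = xmin
    while x < xmax:
        cells.append(box(x, y, x + cell, y + cell))
        x += cell
    y += cell
grid = gpd.GeoDataFrame(
    {"cell_id": range(len(cells)), "geometry": cells}, crs=points.crs
)

# Step 2: intersect points with the grid; to match the "most recently posted"
# rule you would also need to scrape each post's date and sort on it first
joined = gpd.sjoin(points, grid, how="inner", predicate="within")
dominant = joined.drop_duplicates(subset="cell_id")
dominant.to_file("dominant_posts.shp")
```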

This last part was definitely the step most in need of automation, but after your hard work you may end up with a result that looks like the map below. Enjoy!