Visualizing an Individual Tree (Quantitative Structural Model)

LiDAR point clouds are a challenging form of data to navigate. Their noise, sheer volume and irregular distribution demand careful tuning of both parameters and computational power. This blog post introduces a series of tools for the future biomass estimator, with a focus on detecting errors and finding solutions through the process of visualization.

Programs you will need: RStudio, MATLAB, CloudCompare

Packages you will need: lidR, rgl, TreeQSM

Tree Delineation – R (lidR Package)

We begin with an unclassified point cloud. Point clouds are generated by LiDAR sensors mounted on aerial or terrestrial platforms, and they provide precise three-dimensional measurements of target objects. This blog post focuses on an individual tree; however, entire forest stands, construction zones, and even cities can be mapped with LiDAR technology. This project tackles a smaller object for simplicity, replicability, and potential for future scaling.

Visualized point cloud in DJI Terra. Target tree outlined in red.

The canopies of these trees overlap. Without colour values (RGB), the tree is almost indistinguishable, hidden among all the other trees. No problem. Before we quantify this tree, we will delineate it (separate it from the others).

library(lidR) # Load packages

library(rgl)

las <- readLAS("Path to las file")

Before we can separate the trees, we must classify and normalize the points to determine which part of the point cloud is tree and which is not. Normalizing adjusts the Z values to represent height above ground rather than absolute elevation.

las <- classify_ground(las, csf())

las <- normalize_height(las, tin()) 

plot(las, color = "Z")
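To see what the normalization step is doing under the hood, here is a tiny Python sketch (not the lidR API, just the arithmetic, with made-up elevations and a stand-in ground function): each point's Z becomes its elevation minus the interpolated ground elevation beneath it.

```python
# Toy illustration of height normalization (hypothetical values):
# z_norm = absolute elevation - interpolated ground elevation.
points = [
    {"x": 0.0, "z": 102.5},   # z = absolute elevation (m)
    {"x": 5.0, "z": 118.0},
    {"x": 10.0, "z": 101.0},
]

def ground_elevation(x):
    """Stand-in for the TIN lidR interpolates from ground returns.
    Here the 'terrain' is a simple slope: 100 m at x=0, rising 0.1 m/m."""
    return 100.0 + 0.1 * x

for p in points:
    p["z_norm"] = p["z"] - ground_elevation(p["x"])

print([round(p["z_norm"], 1) for p in points])  # [2.5, 17.5, 0.0]
```

A ground point ends up at 0 m, and tree points become heights above ground, which is what makes tree-top detection by height meaningful.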

Now that the point cloud has been appropriately classified and normalized, we can begin to separate the trees. Firstly, we will detect the tree tops. The tree tops will define the perimeter of each individual tree.

ttops <- locate_trees(las, lmf(ws = 5)) # First pass – these parameters are only a starting guess

hist(ttops$Z, breaks = 20, main = "Detected Tree Heights", xlab = "Height (m)")

plot(las, color = "Z") 

plot(sf::st_geometry(ttops), add = TRUE, pch = 3, col = "red", cex = 2)

The first iteration will not be the last. 158 trees were detected in this first round; an urban park does not have more than 15 in a small, open region such as this. Adjusting the window size and minimum height will help isolate trees more accurately. These parameters depend on factors such as the density of your trees, the density of your point cloud, and tree shape, size, height and canopy cover. Even factors such as terrestrial versus aerial lasers will affect which parts of the tree are detected best. No two models will be perfect with the same parameters, so make sure to play around to accommodate for this.

ttops <- locate_trees(las, lmf(ws = 8, hmin = 10))
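The effect of the window size can be seen in a toy example. Below is a minimal Python sketch of a local maximum filter (the idea behind lmf(), not lidR itself) on a tiny hypothetical canopy height grid: a cell counts as a tree top if it is the tallest within its window and above hmin, so a bigger window or a higher hmin yields fewer detections.

```python
# Toy local-maximum filter over a tiny, made-up CHM grid (heights in m).
chm = [
    [2, 2, 3, 2, 1],
    [2, 9, 3, 2, 1],
    [2, 3, 3, 8, 1],
    [1, 2, 2, 2, 1],
]

def detect_tops(chm, ws, hmin):
    """Return (row, col) cells that are the maximum of a ws x ws window
    and at least hmin tall - the essence of a local maximum filter."""
    rows, cols = len(chm), len(chm[0])
    half = ws // 2
    tops = []
    for r in range(rows):
        for c in range(cols):
            v = chm[r][c]
            if v < hmin:
                continue
            window = [
                chm[rr][cc]
                for rr in range(max(0, r - half), min(rows, r + half + 1))
                for cc in range(max(0, c - half), min(cols, c + half + 1))
            ]
            if v == max(window):
                tops.append((r, c))
    return tops

print(detect_tops(chm, ws=3, hmin=5))  # [(1, 1), (2, 3)] - two tree tops
print(detect_tops(chm, ws=5, hmin=5))  # [(1, 1)] - larger window merges them
```

The same trade-off drives the 158-trees problem: a window that is too small splits one crown into many "trees", while one that is too large swallows neighbours.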

A new issue arises. Instead of too many trees, there are now no fully delineated individual trees visible in the output. Every tree consists of chopped-up pieces, impossible to quantify accurately. We must continue to adjust our parameters.

This occurred because of the overlapping canopies between our trees: the algorithm was unable to grow each detected tree model into its full form. Luckily, the lidR package offers multiple methods for tree segmentation. We can move on to the dalponte2016 algorithm for segmenting trees.

chm <- rasterize_canopy(las, res = 0.5, p2r())

las <- segment_trees(las, dalponte2016(chm, ttops, th_tree = 1, th_seed = 0.3, th_cr = 0.5, max_cr = 20))

table(las$treeID, useNA = "always") # Check assignment

plot(las, color = "treeID")

A Canopy Height Model (CHM) is produced and, together with the detected tree tops, is turned into the next set of delineated trees.

While this is much improved, the tree of choice (mustard yellow) has a significant portion of its limbs cut off, due to its irregular shape. This calls for a much more aggressive application of the dalponte2016 parameters. We adjust them so that the minimum tree height (th_tree) is closer to the ground, the seed threshold (th_seed) is inclusive of the smallest irregularities, the crown threshold (th_cr) is extremely relaxed so that the tree crowns can spread further, and the maximum crown radius (max_cr) is allowed to expand further. Here is the code:

las <- segment_trees(las, dalponte2016(chm, ttops, th_tree = 0.2,  th_seed = 0.1, th_cr = 0.2, max_cr = 30))    
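To build intuition for why relaxing these thresholds grows fuller crowns, here is a small Python sketch of seed-based region growing (the spirit of dalponte2016, not its exact rules, on a made-up CHM): a neighbouring cell joins a crown when its height exceeds a fraction of the seed's height, so lowering that fraction lets the crown spread further.

```python
# Toy region growing from a tree-top seed on a hypothetical CHM.
from collections import deque

chm = [
    [1, 4, 5, 4, 1],
    [1, 5, 9, 5, 1],
    [1, 4, 5, 4, 1],
]
seeds = {(1, 2): 1}  # (row, col) of a detected tree top -> tree ID

def grow(chm, seeds, th):
    """Label cells by BFS from the seeds: a neighbour joins the crown
    if its height is at least th * (seed height)."""
    rows, cols = len(chm), len(chm[0])
    label = dict(seeds)
    queue = deque(seeds)
    seed_h = {tid: chm[r][c] for (r, c), tid in seeds.items()}
    while queue:
        r, c = queue.popleft()
        tid = label[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and (rr, cc) not in label:
                if chm[rr][cc] >= th * seed_h[tid]:
                    label[(rr, cc)] = tid
                    queue.append((rr, cc))
    return label

print(len(grow(chm, seeds, th=0.5)))  # 5 cells - strict threshold, small crown
print(len(grow(chm, seeds, th=0.3)))  # 9 cells - relaxed threshold, fuller crown
```

This is exactly the behaviour we exploit above: relaxing the thresholds lets the irregular limbs of our tree be absorbed into its crown instead of being cut off.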

Finally, the trees are whole. Our tree (purple) is now distinct from all the others. It is time to extract our favourite tree. You can decide which tree is your favourite using the rgl package, which allows you to visualize in 3D, even from within RStudio.

GIF created using Snipping Tool (Video Option) and Microsoft Clip Champ.

tree <- filter_poi(las, treeID == 13) # Keep only the points of tree 13
plot(tree, color = "Z")
coords <- data.frame(X = tree$X, Y = tree$Y, Z = tree$Z)
coords$X <- coords$X - min(coords$X) # Shift coordinates so the tree sits at the origin
coords$Y <- coords$Y - min(coords$Y) # (large projected coordinates can cause precision issues)
coords$Z <- coords$Z - min(coords$Z)
write.table(coords, file = "tree_13_for_treeqsm.txt", row.names = FALSE, col.names = FALSE, sep = " ") # Plain XYZ text for TreeQSM

While the interactive file itself is too large to insert into this blog post, you can click on the exported visualization here. This will give you a good idea of the output. Unfortunately, Sketchfab cannot accommodate the point data's attribute table, but it does offer annotation if you wish to share a 3D model with simplified information, and it can even present the model in virtual reality.

Now if you have taken the chance to click on the link, try and think of the new challenge we now encounter after having delineated the single tree. There is one more task to be done before we will be able to derive accurate tree metrics.

Segmentation – CloudCompare

If you didn’t notice from the final output, the fence was included in our most aggressive segmentation. If this occurs, fear not: we can trim the point cloud further with CloudCompare.

Download CloudCompare, drag and drop your freshly exported file, and use the segmentation tool.

Click on your file name under DB Tree so a cube appears, framing your tree.

Then click Tools –> Segmentation –> Cross Section.

Utilize the box to frame out the excess, and use the “Export Section as New Entity” function under the Slices section to obtain a fresh, fence (and ground) free layer.

Our tree can now be processed in MATLAB!

Quantitative Structural Modeling – MATLAB (TreeQSM)

First, load the exported points into MATLAB as the matrix P, then run TreeQSM to visualize your individual tree. A new window named “Figures” will pop up to show your progress.

P = load('tree_13_for_treeqsm.txt'); % XYZ matrix exported from R

The following parameters can be played around with. These are the initial parameters that worked for me. Further information about these parameters can be found here, in the official documentation of TreeQSM. Please ensure you have the Statistics and Machine Learning Toolbox installed if any errors occur.

inputs.PatchDiam1 = 0.10;
inputs.PatchDiam2Min = 0.02;
inputs.PatchDiam2Max = 0.05;
inputs.BallRad1 = 0.12;
inputs.BallRad2 = 0.10;
inputs.nmin1 = 3;
inputs.nmin2 = 1;
inputs.lcyl = 3;
inputs.FilRad = 3.5;
inputs.OnlyTree = 1;
inputs.Taper = 1;
inputs.ParentCor = 1;
inputs.TaperCor = 1;
inputs.GrowthVolCor = 1;
inputs.TreeHeight = 1;
inputs.Robust = 1;
inputs.name = 'tree_13';
inputs.disp = 1;
inputs.save = 0;
inputs.plot = 1;
inputs.tree = 1;
inputs.model = 1;
inputs.MinCylRad = 0;
inputs.Dist = 1;

QSM_clean = treeqsm(P, inputs);

fprintf('\n=== TREE 13 - NO FENCE ===\n');
fprintf('Total Volume: %.4f m³ (%.1f liters)\n', ...
    QSM_clean.treedata.TotalVolume/1000, QSM_clean.treedata.TotalVolume);

fprintf('Tree Height: %.2f m\n', QSM_clean.treedata.TreeHeight);
fprintf('DBH: %.1f cm\n', QSM_clean.treedata.DBHqsm * 100);
fprintf('Number of Branches: %d\n', QSM_clean.treedata.NumberBranches);

If this is your output, you have successfully created a Quantitative Structural Model of your tree of choice. It consists of a series of cylinders enveloping every branch, twig and trunk section of your tree. This cylindrical tree model quantifies every part of the tree and produces a spatial understanding of its biomass distribution.
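The volume figure reported above boils down to simple geometry. Here is a hedged Python sketch (hypothetical cylinder sizes, not actual TreeQSM output) of what a QSM reduces to: each fitted cylinder contributes pi * r^2 * length, and the total woody volume is the sum over trunk, branches and twigs.

```python
# Toy QSM volume summation with made-up cylinders (radius/length in m).
import math

cylinders = [
    {"radius": 0.15, "length": 2.0},   # trunk section
    {"radius": 0.05, "length": 1.5},   # branch
    {"radius": 0.01, "length": 0.5},   # twig
]

# Total volume = sum of pi * r^2 * L over all fitted cylinders.
total_m3 = sum(math.pi * c["radius"] ** 2 * c["length"] for c in cylinders)
print(f"Total volume: {total_m3:.4f} m3 ({total_m3 * 1000:.1f} liters)")
```

Because every cylinder also carries a position and a branch order, the same list supports per-branch volumes and the spatial biomass distribution mentioned above.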

For now, we have structural indicators, such as total volume, number of branches, and height, which may be used as proxies for physical measurements. A summary of this process is posted below in the form of a visualized workflow.

Summary of QSM Workflow

This project provides a foundational learning curve that encourages the reader to explore the tools available for visualizing and manipulating point clouds through classification, segmentation and volumetric modeling. This has potential for studies in carbon sequestration, where canopy height and volume can be linked to broaden the understanding of above-ground biomass and the spatio-temporal factors that affect our natural resources. There is potential for vegetation growth monitoring, where smaller plantations can be consistently and accurately monitored for structural changes (growth, rot, death etc.). While a single tree is not the most exhilarating project, it lays the groundwork for defining ecological accounting systems for expansive 3D spatial analyses.

Limitations

This project was unable to obtain ground truth for accurate validation of the point cloud. With ground truth, the parameters could have been defined more accurately. This workflow relied heavily on visual indicators to segment and quantify the tree, and it requires further expansion, with a focus on verifying its accuracy. The next step in this process is comparison against allometric models, as well as application to more expansive forest stands.

For now, we end with one tree, in hopes of one day tackling a forest.

Resources

Moral Support

Professor Cheryl Rogers – For the brilliant ideas and positivity.

Data

Roussel, J., & Auty, D. (2022). lidRbook: R for airborne LiDAR data analysis. Retrieved from https://r-lidar.github.io/lidRbook/

Åkerblom, M., Raumonen, P., Kaasalainen, M., & Casella, E. (2017). TreeQSM documentation (Version 2.3) [PDF manual]. Inverse Tampere. https://github.com/InverseTampere/TreeQSM/blob/master/Manual/TreeQSM_documentation.pdf

Toronto Metropolitan University Library Collaboratory. (2023). Oblique LiDAR flight conducted in High Park, Toronto, Ontario [Dataset]. Flown by A. Millward, D. Djakubek, & J. Tran.

Using LiDAR to create a 3D Basemap

By: Jessie Smith
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

INTRO

My Geovisualization Project focused on the use of LiDAR to create a 3D Basemap. LiDAR, which stands for Light Detection and Ranging, is a form of active remote sensing. Pulses of light are sent from a laser towards the ground. The time it takes for each pulse to return is measured, which determines the distance between the laser and the surface the light touched. By measuring all light returns, millions of x,y,z points are created, allowing a 3D representation of the ground, whether it be bare surface topography or elements such as vegetation and buildings. The LiDAR points can then be used to create DEMs or TINs, and imagery is draped over them to create a 3D representation. The DEMs can also be used in ArcGIS Pro to create 3D buildings and vegetation, as seen in this project.
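The ranging principle described above is a one-line calculation. Here is a small Python sketch with an assumed (hypothetical) echo time, just to make the arithmetic concrete:

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
c = 299_792_458        # speed of light, m/s
round_trip_s = 2e-6    # hypothetical 2-microsecond echo

distance_m = c * round_trip_s / 2
print(f"{distance_m:.1f} m")  # ~299.8 m to the reflecting surface
```

Dividing by two accounts for the pulse travelling to the surface and back; combined with the sensor's position and the pulse's direction, each return becomes one x,y,z point.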

ArcGIS SOLUTIONS

ArcGIS Solutions are a series of resources made available by Esri, marketed for industry and government use. I used the Local Government Solutions, which offer a series of focused maps and applications to help local governments maximize their GIS efficiency, improve their workflows and enhance services to the public. I looked specifically at the Local Government 3D Basemaps solution. This solution included an ArcGIS Pro package with various files, and an add-in to deploy the solution. Once the add-in is deployed, a series of tasks are made available that include built-in tools and information on how to use them. There is also a sample dataset included that can be used to run all the tasks, as a way to explore the process with appropriate working data.

IMPLEMENTATION

The tasks provided come at three different levels: basic, schematic and realistic. Each task requires only two data sources, a LAS (LiDAR) dataset and building footprints. Based on the task chosen, a different degree of detail in the basemap is produced. For my project I used a mix of the realistic and schematic tasks. Each task begins with the same steps: classifying the LiDAR by returns, creating a DTM and DSM, and assigning building heights and elevation to the building footprints attribute table. From there the tasks diverge. The schematic task extracted roof forms to determine the shape of the roofs, such as a gabled type, whereas in the basic task the roofs remain flat and uniform. Then the DEMs were used in conjunction with the building footprints and the rooftop types to 3D-enable the buildings. The realistic scheme created vegetation point data with z values using the DEMs. Next, a map preset was added to assign a 3D realistic tree shape corresponding to the tree heights.
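The building-height step above comes down to differencing the two elevation models. A minimal Python sketch (hypothetical elevations, not the Esri tool itself): subtracting the bare-earth DTM from the first-return DSM yields height above ground, so a footprint's building height is the rooftop surface elevation minus the terrain elevation beneath it.

```python
# Toy building-height assignment from DSM and DTM (made-up values, m).
dsm = 131.2  # first-return surface elevation at the rooftop
dtm = 112.7  # bare-earth elevation under the footprint

building_height = dsm - dtm  # normalized height above ground
print(f"Building height: {building_height:.1f} m")  # 18.5 m
```

The same subtraction, applied cell by cell, is what turns the DSM/DTM pair into heights for every footprint and tree point in the scene.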

DEMs Created

DSM

DTM

Basic Scene Example

Realistic Scene


ArcGIS ONLINE

The newly created 3D basemap, which can be viewed and used in ArcGIS Pro, can also be used on AGOL with the newly available Web Scene. 3D data cannot be added to ArcGIS Online directly the way 2D data would be. Instead, a package for each scene was created and published directly to ArcGIS Online. The next step is to open this package on AGOL and create a hosted layer. This was done for both the 3D trees and the buildings, and these hosted layers were then added to a Web Scene. In the Scene Viewer, colours and basemaps can be edited, or additional contextual layers can be added. As a final step, the scene was used to create a web mapping application using the Story Map template. The Story Map can then be viewed on ArcGIS Online, and the data can be rotated and explored.

Scene Viewer

Story Map

You can find my story map here:
http://ryerson.maps.arcgis.com/apps/Styler/index.html?appid=a3bb0e27688b4769a6629644ea817d94

APPLICATIONS

This type of project would be very doable for many organizations, especially local governments. All that is needed is LiDAR data and building footprints. This type of 3D map is often outsourced to planners or consulting companies when a 3D model is needed; now government GIS employees can create a 3D model themselves. The tasks can either be followed exactly with your own data, or the general workflow can be recreated. The tasks are mostly clear about the required steps and processes, but more reasoning could be provided when setting values or parameters specific to the data being used in each tool. This would make it easier to create a better model with less trial and error.


Creating a Flight Simulator and Virtual Landscape Using LiDAR Data and Unity 3D

by Brian Mackay
Geovis Class Project @RyersonGeo, SA8905, Fall 2017

The concept for this project stems from the popularity of phone apps and computer gaming. It attempts to merge game creation software, graphic art and open source geographic data. A flight simulator was created using Unity 3D to virtually explore a natural landscape model created from Light Detection and Ranging (LiDAR) data collected from the US Geological Survey (USGS). 

The first step is to define the study area: a square-mile section of Santa Cruz Island off the coast of California. This area was selected for its dramatic elevation changes, naturalness and rich cultural history. Once the study area is defined, the LiDAR data can be collected from the USGS Earth Explorer in the .LAS file format.

After the data is collected, ArcMap is used to create a raster image for use in Unity 3D. The .LAS file was imported into ArcMap in order to create the elevation classes. This was done by opening the statistics tab in the property manager of the .LAS file in ArcCatalog and clicking the Calculate Statistics button. Once generated, an elevation map is displayed using several elevation classes. The next step is to convert the image to a raster using the LAS Dataset to Raster conversion tool in ArcToolbox. This creates a raster image that must then be converted to the .RAW file format using Photoshop before it is compatible with Unity.

The data is now ready for use in Unity. Unity 3D Personal (free version) was used to create the remainder of the project. The first step is to import the .RAW file after opening Unity 3D. Click the GameObject tab → 3D Object → Terrain, then click on the settings button in the inspector window, scroll down, click import raw and select your file. 

Next, define the area and height parameters to scale the terrain. The USGS data was in imperial units, so it had to be converted to meters, which Unity uses. The length and width after conversion were set to 1609 m (1 sq mile), and the height was set to 276 m (906 ft), which was taken from ArcMap and the .LAS file elevation classes (seen right and below). Once these parameters are set, you can use your graphic art skills to edit the terrain.
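The conversion above is easy to sanity-check. A quick Python sketch (using the standard conversion factors, rounded the same way the post rounds):

```python
# Imperial-to-metric conversion for the Unity terrain parameters.
MILE_M = 1609.344   # meters per mile
FOOT_M = 0.3048     # meters per foot

terrain_width_m = round(1 * MILE_M)     # side of the 1-square-mile tile
terrain_height_m = round(906 * FOOT_M)  # max elevation from the .LAS classes
print(terrain_width_m, terrain_height_m)  # 1609 276
```

Keeping all three terrain dimensions in meters avoids the scale mismatches Unity would otherwise introduce between the heightmap and the scene's objects.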


Editing the terrain starts with the inspector window (seen below). The terrain was first smoothed to give it a more natural appearance rather than harsh, raw edges. Different textures and brushes were used to edit the terrain to create the most accurate representation of the natural landscape. In order to replicate the natural landscape, the satellite map from Google was used as a reference for colours, textures and brush/tree cover.

This is a tedious process and you can spend hours editing the terrain using these tools. The landscape is almost finished, but it needs a sky. This is added by activating the Main Camera in the hierarchy window then going to the inspector window and clicking Add Component → Rendering → Skybox. The landscape is complete and it is time to build the flight simulator.

To build the plane, you must create each of its parts individually via GameObject → 3D Object → Cube. Arrange and scale each of the plane’s parts using the inspector window. The final step is to drag and drop the individual airplane parts into a parent object, created first via GameObject → Create Empty. This is necessary so the C# code applies to the whole airplane and not just an individual part. Finally, a chase camera has to be attached to the plane so that its movement is followed. You can use the inspector window to set the camera’s coordinates identical to the plane’s, then offset the camera back and above the plane to a viewing angle you prefer.

C# coding was the final step of this project and it was used to code the airplane controls as well as the reset game button. To add a C# script to an object, click on the asset in the hierarchy you want to code and select Add Component → New Script. The C# script code below was used to control the airplane.

Parameters for speed and functionality of the airplane were set as well as the operations for the chase camera. Finally, a reset button was programmed using another C# script as seen below. 

The flight simulator prototype is almost complete. The last thing is inserting music and game testing. To insert a music file go to GameObject → Create Empty, then in the inspector window click Add Component → Audio → Audio Source and select the file. Now it’s time to test the flight simulator by clicking the game tab at the top of the viewer and pressing play.

Once testing is complete, it’s ready for export/publishing. Click File → Build Settings, select the type of game you want to create (in this case WebGL), then click Build and Run and upload the files to a suitable web host. The game is now complete. Try it here!

There are a few limitations to using Unity 3D with geographic data. The first is the scaling of objects, such as the airplane and trees. This was problematic because the airplane was only about 5m long and 5m wide, which makes the scale of other objects appear overly large. The second limitation is the terrain editor and visual art component. Without previous experience using Unity 3D or any graphic arts background, it was extremely time consuming to replicate a natural landscape virtually. The selection of materials available for the terrain was limited to a few grasses, rocks, sand and trees in the Personal version. Other limitations include the user’s programming skills, particularly the unsuccessfully programmed colliders, which would have created invisible barriers to keep the airplane from flying off the landscape. Despite these limitations, Unity 3D appears to provide the user with an endless creative canvas for game creation, landscape modeling, alteration and conceptual design, which are often very limited in other GIS-related software.