
Monday, August 5, 2024

Corridor Suitability Analysis - Coronado National Forest

The final scenario in the GIS Applications Module 6 lab is to determine a potential protected corridor linking two areas of black bear habitat in Arizona's Coronado National Forest. Data provided included the extents of the two existing areas of known black bear habitat, a DEM, a land cover raster and a feature class of roads in the study area. Parameters for a protected corridor facilitating the safe transit of black bears included land cover away from populated areas (preferably vegetated), mid-level elevations and locations far from roadways.

Geoprocessing flow chart for Scenario 4

The initial geoprocessing in our corridor suitability analysis reclassifies the DEM and land cover rasters into suitability rasters using predetermined scales. Developing a suitability raster from the roads feature class began with creating a multi-ring buffer feature class, converting the derived polygons to a raster, and then applying the Reclassify tool.

Recalling the previous scenarios that generated buffers from a polyline, I also ran the Euclidean Distance tool on the roads feature class. The resulting raster was then reclassified using distance suitability values that assign lower scores to areas closer to roads. The results mirrored those from the Multi-Ring Buffer approach:
Suitability Raster for proximity to roads using the Euclidean Distance tool
The suitability raster for the distance to roads derived from the raster output from the Euclidean Distance tool.
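
A minimal arcpy sketch of this distance-then-reclassify chain, assuming Spatial Analyst is licensed and using hypothetical layer names and class breaks:

    import arcpy
    from arcpy.sa import EucDistance, Reclassify, RemapRange

    arcpy.CheckOutExtension("Spatial")

    # Straight-line distance from every cell to the nearest road
    dist = EucDistance("roads")

    # Nearer to roads = lower suitability; break values are illustrative only
    roads_suit = Reclassify(dist, "VALUE",
                            RemapRange([[0, 500, 1], [500, 1000, 5],
                                        [1000, 10000, 10]]))
    roads_suit.save("roads_suit")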

With suitability rasters finalized for elevation, land cover and proximity to roads, we can proceed with the analysis using the Weighted Overlay tool. The objective is to generate a suitability raster on an integer scale of 1 through 10, based upon influence percentages of 60% for land cover, 20% for elevation and 20% for distance to roads.
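
In arcpy, the Weighted Overlay tool takes a WOTable pairing each input raster with its influence percentage. A sketch with hypothetical raster names, assuming the three inputs are already scored on the common 1-10 scale:

    import arcpy
    from arcpy.sa import WeightedOverlay, WOTable, RemapValue

    arcpy.CheckOutExtension("Spatial")
    identity = RemapValue([[v, v] for v in range(1, 11)])  # keep 1-10 scores as-is

    wo_table = WOTable([["landcover_suit", 60, "VALUE", identity],
                        ["elev_suit",      20, "VALUE", identity],
                        ["roads_suit",     20, "VALUE", identity]],
                       [1, 10, 1])  # evaluation scale: from 1 to 10 by 1
    suitability = WeightedOverlay(wo_table)
    suitability.save("suitability_1to10")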

The result shows the highest suitability scores at mid-level elevations representative of undeveloped forest land that mostly avoids roads. Low elevations dominated by urban areas, agriculture and barren land receive low suitability scores:
Weighted Overlay raster of Suitability areas
Weighted Overlay raster with the values of 1-10 where lighter colors represent lower suitability scores

Using the Weighted Overlay raster, a cost surface raster is generated with the Raster Calculator geoprocessing tool. The cost surface values were obtained by inverting the suitability model so that higher habitat suitability values translate into lower travel costs:
Cost Surface Raster
Cost Surface raster where the darker colors represent higher costs
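
The inversion is a one-line map algebra expression. Continuing the hypothetical names above:

    import arcpy
    from arcpy.sa import Raster

    arcpy.CheckOutExtension("Spatial")
    suit = Raster("suitability_1to10")
    cost = 11 - suit          # invert the 1-10 scale: most suitable = cheapest to cross
    cost.save("cost_surface")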

With the cost surface raster, the corridor suitability analysis continues by running the Cost Distance tool on each of the two Coronado National Forest black bear habitat feature classes. This outputs cost distance and cost distance backlink rasters.
Coronado N.F. Destination Raster - 1st feature class
The cost distance raster for the northeastern unit of Coronado N.F.
Coronado N.F. Destination Raster - 2nd feature class
The cost distance raster for the southwestern unit of Coronado N.F.

Together, the two cost distance rasters for Coronado National Forest are the inputs to the Corridor geoprocessing tool, which generates the Least-Path Corridor raster. The threshold for determining the best corridor is subjective, so I went with the percentage used in the previous scenario, where cells at or below the minimum destination cost value multiplied by 1.05 represent the optimal corridor. I chose a color scheme based upon the ColorBrewer web site.
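
A sketch of the cost distance and corridor steps with hypothetical habitat layer names; the Con expression keeps only cells within 5% of the corridor's minimum accumulated cost:

    import arcpy
    from arcpy.sa import CostDistance, Corridor, Con

    arcpy.CheckOutExtension("Spatial")
    cd_ne = CostDistance("habitat_ne", "cost_surface")  # northeastern unit
    cd_sw = CostDistance("habitat_sw", "cost_surface")  # southwestern unit

    corridor = Corridor(cd_ne, cd_sw)            # summed accumulative cost
    threshold = float(corridor.minimum) * 1.05   # within 5% of the least cost
    best = Con(corridor <= threshold, corridor)
    best.save("bear_corridor")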

Black Bear Suitability Corridor Analysis
The Least-Path Corridor for a protected Black Bear Corridor between Coronado National Forest units



Sunday, August 4, 2024

Least-Cost Path and Corridor Analysis with GIS

The second half of Module 6 for GIS Applications conducts Least-Cost Path and Corridor Analysis on two scenarios. The first continues working with the Jackson County, Oregon datasets from Scenario 2.

There are several ways that GIS measures distance. Euclidean, the simplest, represents travel along a straight line, or "as the crow flies". Manhattan distance simulates navigating a city street grid, where travel is restricted to north-south and east-west movements. Network analysis models travel in terms of time, where movement is restricted to a road network or transit infrastructure.

Least-Cost Path Analysis models travel across a surface. It determines the single best course, a polyline, with the lowest cost between a given source and destination, which are represented by points. This can be described as routing over a landscape that is not restricted to road networks.

The course through the landscape is modeled as a cost. More specifically, each cell in a cost raster has a value representing the cost of traveling through it.

Typical cost factors are slope and land cover. A cost surface can vary from just a single factor to a combination of them. Even if multiple factors are considered, the analysis only uses a single cost raster.

Least-Cost Path Analysis can be expanded to Corridor Analysis. Instead of producing a single solution represented by a polyline, corridor analysis produces multiple solutions, representing a zone where costs are close to the least cost. The corridor width is somewhat subjective; it is controlled by deciding what range of cost to consider. Values from a few percentage points above the lowest cost to as much as 10% above it are common.

Scenario 3 uses least-cost path analysis on an area of land in planning a potential pipeline. Cost factors include elevation, proximity to rivers and potential crossings of waterways. Datasets for these cost factors include a DEM, a rivers feature class and feature classes marking the source and destination of the proposed pipeline. The analysis proceeds by focusing on each cost factor individually.


Geoprocessing Flowchart for Scenario 3 - Analysis D
Geoprocessing flowchart for least-cost path analysis factoring solely on slope

Focusing first on the DEM, the raster is converted to a slope raster and subsequently reclassified using a cost scale of eight values. The next step utilizes the Cost Distance geoprocessing tool. Using an iterative algorithm, it generates a cost distance raster representing the accumulated cost to reach a given cell from the source location point.

A cost backlink raster is also created, which traces back how to reach a given cell from the source, revealing the actual path taken to obtain the lowest cost. The cell values of the backlink raster represent cardinal and intercardinal directions (NE, NW, etc.) instead of cost. Together, the two output rasters contain every least-cost path solution from the single source to all cells within the study area.

Cost Distance Raster
Cost Distance Raster - Values represent the cost of traveling through a cell
Cost Distance Backlink Raster
Cost Distance Backlink Raster - Values correspond with compass directions

The final step of least-cost path analysis obtains the least-cost path from the source to one or more destinations. The result of the Cost Path geoprocessing tool is a single polyline representing the lowest accumulated cost.
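
The whole slope-based pipeline condenses to a few arcpy calls. A sketch assuming hypothetical layer names and an illustrative eight-class cost scale (the related Cost Path As Polyline tool returns the path as a line feature rather than a raster):

    import arcpy
    from arcpy.sa import (Slope, Reclassify, RemapRange,
                          CostDistance, CostPath)

    arcpy.CheckOutExtension("Spatial")
    slope = Slope("dem", "DEGREE")

    # Steeper slope = higher traversal cost; breaks are illustrative only
    slope_cost = Reclassify(slope, "VALUE",
                            RemapRange([[0, 5, 1], [5, 10, 2], [10, 15, 3],
                                        [15, 20, 4], [20, 25, 5], [25, 30, 6],
                                        [30, 40, 7], [40, 90, 8]]))

    # Accumulated cost from the source, plus the backlink (direction) raster
    cost_dist = CostDistance("source_pt", slope_cost,
                             out_backlink_raster="cost_backlink")
    least_cost = CostPath("destination_pt", cost_dist, "cost_backlink",
                          "BEST_SINGLE")
    least_cost.save("lcp_slope")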

Output least-cost path and DEM for a proposed pipeline in Oregon
The result of Least-Cost Path Analysis solely on slope

Continuing our analysis of a proposed pipeline in Jackson County, Oregon, we factor in river crossings as a cost factor.

Geoprocessing Flowchart for Scenario 3 - Analysis E
Geoprocessing flowchart for least-cost path analysis factoring in both slope and river crossings

Factoring river crossings into the cost analysis reduces potential crossings to five, from the 16 when factoring in slope alone:

Least-Cost Path Analysis factoring in both river crossings and slope

Furthering our analysis, we change from factoring in river crossings to instead factoring in distance to waterways. Using a multiple-ring buffer, cost factors are set high for areas within 100 meters of hydrology and moderate for areas within 500 meters. Distances beyond 500 meters from a waterway are assigned zero, reflecting no added cost.
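
One way to sketch this in arcpy, with hypothetical names and illustrative cost values of 8 (high) and 4 (moderate); the final Con turns NoData beyond the rings into the zero-cost area:

    import arcpy
    from arcpy.sa import Con, IsNull, Raster

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.extent = "dem"  # span the full study area, not just the buffers

    # "distance" is the default ring-value field created by the tool
    arcpy.analysis.MultipleRingBuffer("rivers", "river_rings", [100, 500], "Meters")
    arcpy.conversion.PolygonToRaster("river_rings", "distance", "river_dist")

    rings = Raster("river_dist")
    water_cost = Con(rings == 100, 8, Con(rings == 500, 4))  # high, moderate
    water_cost = Con(IsNull(water_cost), 0, water_cost)      # zero beyond 500 m
    water_cost.save("water_proximity_cost")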

Geoprocessing Flowchart for Scenario 3 - Analysis F
Geoprocessing flowchart for least-cost path analysis factoring slope and proximity to waterways

As the cost factor criteria for the least-cost path analysis are adjusted to better account for natural factors, the least-cost path adjusts accordingly:
Least-Cost Path Analysis with the cost factors of Slope and Proximity to Waterways
The final analysis for the third scenario looks at least-cost path corridor analysis, which includes not only the least-cost path but also a number of near-least-cost alternatives within a corridor determined on a case-by-case basis.
Geoprocessing Flowchart for Scenario 3 - Analysis G
Geoprocessing flowchart for least-cost path corridor analysis

The geoprocessing to develop the least-cost path corridor utilizes the previously generated cost raster factoring in proximity to waterways and slope. Instead of using the source point as the input source data, the Cost Distance tool is run a second time from the destination point feature class.

Together, the two cost distance rasters, one based on the "destination" feature class and the other on the "source" feature class, are input into the Corridor geoprocessing tool. This outputs the Least-Path Corridor raster, in which areas of least-cost paths are symbolized based upon a percentage above the minimum cost value:
Least-Cost Path Corridor Analysis for a proposed pipeline


Thursday, August 1, 2024

Suitability Modeling with GIS

Module 6 for GIS Applications includes four scenarios conducting suitability and least-cost path and corridor analysis. Suitability modeling identifies the most suitable locations based upon a set of criteria. Corridor analysis compiles an array of all the least-cost path solutions from a single source to all cells within a study area.

For a given scenario, suitability modeling commences with identifying the criteria that define the most suitable locations. Parameters specifying such criteria could include percent grade, distance from roads or schools, elevation, etc.

Each criterion next needs to be translated into a map, such as a DEM for elevation. Maps for each criterion are then combined in a meaningful way. Often Boolean logic is applied, where suitable locations are assigned the value true and unsuitable locations false. Boolean suitability modeling overlays the maps for all criteria and determines where every criterion is met. The result is a map showing suitable versus unsuitable areas.
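
In map algebra the Boolean overlay reduces to multiplication, since a product of 1/0 rasters is 1 only where every criterion holds. A minimal sketch with hypothetical raster names:

    import arcpy
    from arcpy.sa import Raster

    arcpy.CheckOutExtension("Spatial")
    # Boolean AND across criteria: only cells meeting all four stay 1
    suitable = (Raster("slope_bool") * Raster("forest_bool")
                * Raster("river_bool") * Raster("highway_bool"))
    suitable.save("suitable_bool")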

Another evaluation system in suitability modeling uses scores or ratings. This approach expresses each criterion as a map showing a range of values from very low suitability to very high, with intervening values in between. Suitability is expressed as a dimensionless score, often computed using Map Algebra on the associated rasters.

Scenario 1 for Lab 6 analyzes a study area in Jackson County, Oregon for the establishment of a conservation area for mountain lions. Four criteria are specified: suitable areas must have slopes exceeding 9 degrees, be covered by forest, be located within 2,500 feet of a river and lie more than 2,500 feet from highways.

Flow Chart outlining the Suitability Modeling
Flowchart outlining input data and geoprocessing steps.

Working with a land cover raster, a DEM and polyline feature classes for rivers and highways, we first implement Boolean suitability modeling in vector. The DEM is converted to a slope raster so that it can be reclassified into a Boolean raster where slopes above 9 degrees are assigned the value 1 (true) and those below, 0 (false). The land cover raster is simply reclassified so that cells assigned to the forest land use class are true in the Boolean.

Buffers were created on the river and highway feature classes, where areas within 2,500 feet of a river are true for suitability and areas within 2,500 feet of a highway are false. Once the rasters were converted to polygons and the buffer feature classes clipped to the study area, a union of the criteria layers was generated using geoprocessing. Suitability is deduced from the Boolean values of that union feature class, and a SQL query selects the final suitable areas.

We repeat this process utilizing Boolean suitability in raster. Using the Euclidean Distance tool in ArcGIS Pro, buffers for the river and highway feature classes were output as rasters where suitability is assigned the value 1 for true and 0 for false. The previously created Boolean rasters for slope and land cover were reused.

Obtaining the suitable selection raster from the four rasters utilizes the Raster Calculator geoprocessing tool. Since the value 1 is true for suitability in each of the four rasters, simply adding the cell values results in a range of 0 to 4, where 4 equates to fully suitable. The final output was a Boolean raster where 4 was reclassified as 1 and all other values were assigned NODATA.
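
A sketch of that sum-and-threshold step, assuming hypothetical names for the four Boolean rasters:

    import arcpy
    from arcpy.sa import Raster, SetNull

    arcpy.CheckOutExtension("Spatial")
    total = (Raster("slope_bool") + Raster("forest_bool")
             + Raster("river_bool") + Raster("highway_bool"))  # 0-4 criteria met
    final = SetNull(total != 4, 1)  # 1 where all four are met, NODATA elsewhere
    final.save("fully_suitable")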

Scenario 2 determines the percentage of a land area suitable for development in Jackson County, Oregon. The suitability criteria rank land comprising meadows or agricultural areas as most optimal. Additional criteria include soil type, slopes of less than 2 degrees, a 1,000-foot buffer from waterways and a location within 1,320 feet of existing roads. Input datasets consist of rasters for elevation and land cover, and feature classes for rivers, roads and soils.

Flowchart showing data input and processes to output a weighted suitability raster
Flowchart of the geoprocessing for Scenario 2

With all five criteria translated into respective maps, we proceed with combining them into a final result. For Scenario 2, however, the Weighted Overlay geoprocessing tool is implemented. This tool applies a percentage influence to each input raster corresponding to that raster's significance among the criteria. The percentages of the raster inputs must total 100 and all rasters must be integer-based.
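
Conceptually, the tool computes a weighted sum of the reclassified integer rasters. A map algebra approximation of the two weighting schemes described in the next paragraph, with hypothetical raster names (the actual tool also rounds the result to integers):

    import arcpy
    from arcpy.sa import Raster

    arcpy.CheckOutExtension("Spatial")
    lc, soil, slp, rd, rv = (Raster(n) for n in
        ("lc_suit", "soil_suit", "slope_suit", "road_suit", "river_suit"))

    equal_wt = 0.2 * (lc + soil + slp + rd + rv)   # 20% influence each
    slope_wt = 0.2 * lc + 0.2 * soil + 0.4 * slp + 0.1 * rd + 0.1 * rv
    slope_wt.save("suitability_weighted")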

Cell values of each raster are multiplied by their percentage influence and the results summed to generate the output raster. The first case evaluated for Lab 6 is an equal-weight scenario, where the five rasters have the same percentage influence (20%). The second assigns heavier weight to slope (40%) while retaining 20% influence for the land cover and soils criteria and decreasing the influence of the road and river criteria to 10%. The final comparison between the two scenarios:

Land Development Suitability Modeling - Jackson County, OR
Opted to symbolize the output rasters using a diverging color scheme from ColorBrewer.

Tuesday, July 30, 2024

Damage Assessment - Hurricane Sandy

Module 5 for GIS Applications continues our focus on Hurricane Sandy and explores damage assessment for the storm's impact in the Garden State.

Our first task was to create a formal hurricane track map showing the path Sandy took from the Caribbean Sea to the Northeastern U.S. The symbology uses custom color-coded coordinate points showing the hierarchy of storm intensity. Maximum sustained winds and barometric pressure are labeled in 12-hour increments to improve legibility.

Map showing the path of Hurricane Sandy.

The next section of Lab 5 was the creation of a damage assessment survey using Survey123. This was a pretty straightforward process, with options to add multiple-choice questions pertaining to the damage to be documented, a field for describing the damage surveyed, an option to include an image or have the mobile device take a photo, and a required location set either through GPS or a map locator. Following the form creation, we determined what the survey application does after a completed survey is submitted, and we set restrictions on what an individual viewing the survey can see.

Our next task was the preparation of raster data from air photos showing an area of New Jersey both before and after Superstorm Sandy. The first mosaic dataset was created through geoprocessing from an array of .SID raster images of pre-storm photos; .JPG images of the post-storm photos were compiled into a second mosaic dataset.
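
A sketch of the mosaic dataset steps, with a hypothetical geodatabase, source folder and spatial reference (UTM zone 18N covers this part of New Jersey):

    import arcpy

    gdb = r"C:\gis\sandy.gdb"           # hypothetical geodatabase
    sr = arcpy.SpatialReference(26918)  # NAD83 / UTM zone 18N

    arcpy.management.CreateMosaicDataset(gdb, "pre_storm", sr)
    arcpy.management.AddRastersToMosaicDataset(
        gdb + r"\pre_storm", "Raster Dataset",
        r"C:\gis\prestorm_sids")        # folder holding the .SID images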

With both mosaic datasets in place, we revisit the Flicker and Swipe tools (located in the Compare group under the Mosaic Layer tab), previously used in the Remote Sensing course, to alternate the display between the pre- and post-storm imagery. Both are fast methods for visually comparing the two sets of imagery.

Example of the Swipe Tool in ArcGIS Pro
An example of the Swipe tool showing pre-storm imagery above and post-storm imagery below

Step 3 of the lab focuses on the creation of data. For this, we revisit the concept of domains, previously covered in Intro to GIS last fall. Attribute domains constrain the values allowed in an attribute for a table or feature class. Domains create a rule set of acceptable attribute values, or in the case of Module 5, a range of integers associated with predefined damage assessment categories:

Domains set for an attribute of a newly created feature class
Helping to ensure data integrity, domains limit the number of acceptable values for a field.

Attribute domains are stored in a geodatabase. They can be utilized by multiple feature classes, tables and subtypes in that geodatabase. Through the Catalog pane in ArcGIS Pro (Data Design > Fields), they can be added to an existing feature class.
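
Domains can also be built programmatically. A sketch with a hypothetical geodatabase, feature class, field and set of damage codes:

    import arcpy

    gdb = r"C:\gis\sandy.gdb"  # hypothetical
    arcpy.management.CreateDomain(gdb, "StructureDamage",
                                  "Degree of structure damage", "SHORT", "CODED")
    for code, desc in [(0, "No damage"), (1, "Affected"), (2, "Minor"),
                       (3, "Major"), (4, "Destroyed")]:
        arcpy.management.AddCodedValueToDomain(gdb, "StructureDamage", code, desc)

    arcpy.management.AssignDomainToField(gdb + r"\damage_points",
                                         "Struc_Damage", "StructureDamage")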

Using the aforementioned air photos showing Seaside Heights, NJ before and after Superstorm Sandy, we were tasked with conducting damage assessment within a study area of parcel data using the preset domains for the various damage categories. Symbolization uses a continuous color scheme from green for no damage to red for destroyed.

Point feature class showing damage assessment for each parcel within a study area
Damage assessment study area for Superstorm Sandy at Seaside Heights, NJ

Given the four domains of Structure Damage, Inundation, Wind Damage and Structure Type, each parcel within a seaside neighborhood was evaluated for damage based upon the two air photos. This was a tedious task due to the relatively low image resolution and long shadows in the post-Sandy aerial imagery. Without in-situ data collection, evaluating parcels for wind damage was impractical, as details such as missing roof shingles could not be discerned.

Expanding our analysis, we aggregated the damage assessment points into zones within 100 meters of the coastline, between 100-200 meters and between 200-300 meters. The Multiple Ring Buffer geoprocessing tool created the three storm surge zones. I then ran a Spatial Join on the structure damage point file with the multiple-ring buffer polygon file, and the Summary Statistics geoprocessing tool tabulated damage type by buffer zone.
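
A sketch of that buffer, join and summarize chain, assuming hypothetical layer and field names:

    import arcpy

    arcpy.analysis.MultipleRingBuffer("coastline", "surge_zones",
                                      [100, 200, 300], "Meters")
    arcpy.analysis.SpatialJoin("damage_points", "surge_zones", "damage_by_zone",
                               "JOIN_ONE_TO_ONE", "KEEP_ALL",
                               match_option="WITHIN")
    # Count records per buffer ring and damage category
    arcpy.analysis.Statistics("damage_by_zone", "damage_summary",
                              [["Struc_Damage", "COUNT"]],
                              case_field=["distance", "Struc_Damage"])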

Hurricane Sandy Damage Assessment - GIS Applications
The final results of our damage analysis confirm that the penetration of Superstorm Sandy's storm surge diminished as the distance from the coastline increased from 100 to 300 meters. Structures facing the ocean were generally pulverized, while buildings located around 300 meters inland fared much better, some with seemingly no visible damage. This damage estimation appears to be consistent along other parts of the barrier island where the elevation and slope are similar. Exceptions were noted, such as further south of the study area in Seaside Heights, New Jersey, where the barrier provided by a boardwalk and two piers protected adjoining neighborhoods from the storm surge.

Wednesday, July 24, 2024

Coastal Flooding Analysis - Storm Surge

Module 4 for GIS Applications performs analyses of coastal flooding and storm surge. Storm surge is generally associated with landfalling tropical storms and hurricanes, but it can also be attributed to extratropical storms, such as Nor'easters along the Eastern Seaboard, or powerful winter storms with low barometric pressure and tight pressure gradients. Coastal flooding can also be driven by spring tides tied to the moon's cycle.

Storm surge from Hurricane Idalia inundated Bayshore Boulevard in Tampa, FL
Storm surge inundating Bayshore Boulevard in Tampa during Hurricane Idalia on August 30, 2023.

The first lab assignment revisits Superstorm Sandy, which made landfall as a hurricane transitioning into a powerful extratropical storm along the New Jersey coastline on October 29, 2012. The second and third part of the lab assignment uses Digital Elevation Models (DEMs) to develop scenarios for a generalized storm surge.

The lab analysis on Hurricane Sandy works with LiDAR data covering a barrier island along the Atlantic Ocean between Mantoloking and Point Pleasant Beach, New Jersey. LAS files were downloaded showing the conditions before the storm's impact and afterward.

Initial work in the lab for Module 4 created DEMs by converting the two LAS files to TIN files using geoprocessing in ArcGIS Pro. The TINs were then converted to rasters with a separate geoprocessing tool that ran upwards of ten minutes.

Comparing the two raster datasets, some pronounced impacts from the hurricane turned extratropical storm were visible. Several clusters of points representing structures along the beach were noticeably missing. Additionally, a wide breach was cut across the island, with several smaller breaches visible further north. It also appears that severe scouring of the sand along the coast occurred, with a wide area of lower returns in the post-Sandy dataset.

Indicative of the large file size of LiDAR data, subtracting the pre-Sandy raster cell values from the post-Sandy raster took 12 minutes and 59 seconds of geoprocessing. The result is a change raster with values ranging from 33.69 to -35.87. Values toward the high end of the range represent the build-up of material, such as sand or debris. Lower values indicate bare-earth returns where significant erosion occurred or a structure was destroyed.
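
A sketch of the conversion and differencing, assuming LAS datasets have been created from the LAS files and that both 3D Analyst (LAS Dataset To TIN, TIN To Raster) and Spatial Analyst are licensed; names and cell size are hypothetical:

    import arcpy
    from arcpy.sa import Raster

    arcpy.CheckOutExtension("3D")
    arcpy.CheckOutExtension("Spatial")

    for tag in ("pre", "post"):   # LAS dataset -> TIN -> raster
        arcpy.ddd.LasDatasetToTin(f"{tag}_sandy.lasd", f"{tag}_tin")
        arcpy.ddd.TinRaster(f"{tag}_tin", f"{tag}_dem",
                            "FLOAT", "LINEAR", "CELLSIZE 1")

    change = Raster("post_dem") - Raster("pre_dem")  # positive = build-up
    change.save("sandy_change")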

The change in the LiDAR point clouds reveals parcels where homes were destroyed and where the barrier island was breached by storm surge. The change raster quantifies the amount of change.


LiDAR before Superstorm Sandy

LiDAR showing a major breach caused by Superstorm Sandy

The difference between the two LiDAR point clouds showing the breach and associated destruction of structures

Recent aerial imagery of Mantoloking, NJ where the breach occurred

The overall impact of Hurricane Sandy on the boroughs of Mantoloking, Bay Head and Point Pleasant Beach in Ocean County, New Jersey:

The raster quantifying the rate of change between the LiDAR datasets before and after Sandy

Output raster using a Boolean

The second analysis for Module 4 utilizes a storm surge DEM for the state of New Jersey. Our task was to reclassify the raster where all cells with values of 2 meters or less constitute areas potentially submerged as a result of Hurricane Sandy. Those cells with values above 2 meters were classified as "no data."

I began the process by adding a new field to the DEM's attribute table for areas flooded by storm surge. Cells where the elevation value was less than or equal to 2 were assigned a flood value of 1, the Boolean true. All other cells, with elevation values above 2, were assigned 0, for false.

With the added field, I used the Reclassify geoprocessing tool to output a raster of the DEM showing potentially flooded areas versus those at higher ground. The mask was set to the feature class of the New Jersey state outline to exclude areas of the DEM outside of the state that were not needed for our analysis.
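
The same Boolean reclassification can be written as a conditional map algebra statement, with the state outline applied as an analysis mask (names hypothetical):

    import arcpy
    from arcpy.sa import Con, Raster

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.mask = "nj_boundary"     # limit processing to the state outline

    dem = Raster("nj_surge_dem")
    flooded = Con(dem <= 2, 1)         # <= 2 m -> 1 (true); all else -> NODATA
    flooded.save("surge_2m")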

Our analysis then focused on Cape May County in South Jersey, where we quantify the percentage of the county potentially inundated by a 2-meter storm surge. The storm surge raster was converted to a polygon and subsequently clipped to the polygon of the Cape May County boundary.

Another issue encountered was that the storm surge data and the county boundary were in different units of measurement. I ended up clipping the storm surge polygon to the county polygon, then comparing the output with the unclipped county boundary for the final percentage. This workaround succeeded because both layers in the comparison used the same units.
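
The final percentage reduces to a ratio of summed polygon areas. A sketch using geometry tokens so both totals come from the same areal units (names hypothetical):

    import arcpy

    arcpy.analysis.Clip("surge_polygons", "cape_may_boundary", "surge_capemay")

    def total_area(fc):
        # Sum feature areas in the dataset's own areal units
        return sum(row[0] for row in arcpy.da.SearchCursor(fc, ["SHAPE@AREA"]))

    pct = 100 * total_area("surge_capemay") / total_area("cape_may_boundary")
    print(f"{pct:.1f}% of Cape May County lies within the 2 m surge zone")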

Clipped feature class of the storm surge polygon over Cape May County, NJ
2-meter storm surge data clipped to Cape May County, NJ

The third analysis for Lab 4 focuses on a potential 1-meter storm surge in Collier County, Florida. Two DEMs are provided, one derived from LiDAR data and another from the standard USGS elevation model. I commenced working with this data by reclassifying each DEM into a new Boolean raster where any elevation of 1 meter or less is considered flooded and anything above is not.

Since we are only interested in storm surge related flooding, any flooded areas shown inland that are entirely disconnected from the tidal basin are omitted from the analysis. I accomplished this using the Region Group geoprocessing tool, where all cells in a raster are reclassified by connected group and assigned a new ObjectID number.

The Region Group tool takes all of the cells within the hydrologic area of open waters extending into the Gulf of Mexico, and all associated bays and waterways seamlessly feeding into it, and assigns them to a single ObjectID. Similarly, the mainland of Florida is assigned an ObjectID as well. Islands, lakes, ponds, etc. that are independent of one another are also assigned unique ObjectID numbers.
Results of Region Group geoprocessing
Region Group assigns a unique ObjectID for each homogenous area of raster cells. The different colors in this sample from Naples show separate groups for each land and hydrologic feature based upon the 1-meter elevation threshold.
Using the Extract by Attributes geoprocessing tool, selecting the hydrologic area comprising the entire tidal basin is straightforward once the ObjectID number is determined. With that, a new raster comprising just the water areas subject to storm surge is output and subsequently converted to a polygon. The polygon feature class was juxtaposed with a feature class of building footprints for quantitative analysis.
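
A sketch of the region grouping and extraction, assuming the tidal basin's region number (12 here) was identified by inspecting the grouped raster:

    import arcpy
    from arcpy.sa import RegionGroup, ExtractByAttributes

    arcpy.CheckOutExtension("Spatial")
    regions = RegionGroup("flooded_1m", "EIGHT", "WITHIN")  # id per connected area
    regions.save("surge_regions")

    tidal = ExtractByAttributes(regions, "Value = 12")      # hypothetical region id
    arcpy.conversion.RasterToPolygon(tidal, "tidal_surge_poly")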

There are a variety of methods in ArcGIS Pro for determining the number of buildings impacted by a 1-meter storm surge. One is Select by Location with the Intersect relationship, which selects records where any part of a building footprint polygon falls within the storm surge polygon. Having pre-added two fields to the buildings feature class using the Boolean convention of 1 = impacted and 0 = unaffected, I used Calculate Field to assign the selected records a value of 1. I repeated the process for both rasters and then proceeded with statistical calculations.
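
A sketch of the select-and-flag step for one of the DEMs, with hypothetical layer and field names:

    import arcpy

    arcpy.management.MakeFeatureLayer("buildings", "bldg_lyr")
    arcpy.management.SelectLayerByLocation("bldg_lyr", "INTERSECT",
                                           "tidal_surge_poly")
    arcpy.management.CalculateField("bldg_lyr", "lidar_flood", 1)  # 1 = impacted
    arcpy.management.SelectLayerByAttribute("bldg_lyr", "CLEAR_SELECTION")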

The final analysis quantified whether a building was located within the storm surge zone for the LiDAR-based DEM, the USGS-based DEM, or both. Errors of omission were tallied where a building was impacted by storm surge in the LiDAR DEM but not in the USGS DEM, with that count divided by the total number of buildings affected in the LiDAR DEM. Errors of commission were tallied from the opposite case, buildings flagged in the USGS DEM but not in the LiDAR DEM, again divided by the total number of buildings affected in the LiDAR DEM. The result tabulates affected buildings by feature type:

Storm surge inundation of areas 1 meter or less in elevation based upon DEMs

Wednesday, July 17, 2024

Visualizing data in 3D with GIS

The course work for GIS Applications Module 3 looks further into 3D visualization, line of sight (LOS) analysis and viewshed analysis with ArcGIS Pro. We are tasked with completing four exercises through ESRI training and reading an article assessing the Lines of Torres Vedras, the defensive lines constructed in 1810 to defend the Portuguese capital of Lisbon against Napoleon, with LOS analysis in GIS.

Presentation of GIS data in three dimensions has a variety of potential benefits for the end user. 3D visualization can help identify patterns that may not be apparent in 2D. 3D data can provide context or a different perspective for a presentation or display of various phenomena. Allowing users to see vertically stacked content, a 3D scene can also go beyond terrain modeling.

3D scene of San Diego, California
Downtown San Diego 3D scene with marker symbols, fill symbol layers and procedural symbology

Furthermore, 3D visualization can spark imagination and enhance understanding while also accenting environmental effects, such as shadowing and depth. Rendering quantitative data in terms of three-dimensional shapes, 3D visualization can also more strongly contrast data ranges.

3D scene of San Diego visualized with a Summer sun angle
San Diego 3D scene with a procedural rule package for buildings and illumination visual enhancement

The ESRI Get Started with Visibility Analysis learning plan covered three-dimensional basics in GIS. Any data that uses a z-value, or vertical coordinate system, can be considered 3D. Z-values do not have to represent elevation or height, and instead can be some numerical value of a particular attribute, such as population or precipitation amount.

Vertical coordinate systems comprise both a linear unit of measurement and a direction. The direction defines whether values are positive up or positive down, with positive down representing depths below a surface. A scene also requires information about where to start measurements, provided by the ground or elevation surface.

A two-dimensional (2D) map is a top-down or orthographic view of data that shows a flat representation of features. A 3D scene is viewable from any angle. A midpoint between the two is a functional surface. While technically not 3D, a functional surface contains a single associated z-coordinate for every x,y coordinate pair. 3D data, on the other hand, may be associated with multiple z-values at a single point or vertex.

A raster dataset can be used to create a 3D rendering. One of the most common is the digital elevation model (DEM), where each cell of the raster stores an elevation value. A more detailed 3D rendering revealing sharp, defined edges is the triangulated irregular network (TIN). Also a type of functional surface, a TIN consists of nonoverlapping, bordering triangles that vary in size and proportion.

Triangulated irregular network (TIN)
Input points for a TIN form the triangle vertices (nodes). These vertices are connected by lines that form the legs of the triangles (edges). Using mathematical interpolation, the elevation of any location on a TIN surface can be obtained. Calculations also produce the slope and aspect for each triangle face, where aspect is the compass direction that the slope faces.

Data in a 3D scene must include a height source or base height (elevation source), which is the height at which it is displayed. This can be stored in a feature's geometry as a z-value or as an attribute value in a table. The elevation type of a layer determines how to draw data in relation to the ground. The surface provides context for the elevation of the data and the basis for layer measurements.

30-meter resolution
3-foot resolution

With a 30-meter resolution, WorldElevation3D/Terrain3D is the default elevation surface service from ArcGIS Online. Some projects may require an elevation dataset with higher resolution to reveal more detail.

You must determine how to draw 2D features in relation to the ground surface when visualizing in a 3D scene. The elevation type for a feature built from the ground up is referenced as on the ground. Features drawn at a consistent or known height above the ground are referenced at an absolute height. Features with variable heights, including subsurface depths, are positioned relative to the ground.

Height variables can be manipulated using cartographic offset and vertical exaggeration among other techniques.

Vertical Exaggeration
Horizontal desert terrain enhanced visually with vertical exaggeration by amplifying relief

Cartographic Offset
Setting the cartographic offset by 50 feet in this scene reveals point locations previously obscured

3D Features

2D features can be displayed in 3D by using extrusion. Extrusion creates a 3D shape using a numerical attribute as a z-value. This could be points on a map representing precipitation amounts, where the height of each column is the amount. It can also be used to display a building footprint as a 3D object if a height value is included in the attribute data.

3D buildings at the UWF Campus from Intro to GIS
3D objects of buildings on the UWF campus from the Geoprocessing lab in Intro to GIS last Fall
Extrusion of parcels based upon property value
The z-value of these extruded polygons in Manhattan, Kansas is based upon the property value of the parcel
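
Extrusion is normally set interactively on a layer's Appearance tab, but attribute-driven heights can also be baked into true 3D features with 3D Analyst. A minimal sketch assuming a hypothetical height field:

    import arcpy

    arcpy.CheckOutExtension("3D")
    # Convert 2D footprints to 3D features using the height attribute as the z
    arcpy.ddd.FeatureTo3DByAttribute("building_footprints", "buildings_3d",
                                     "HEIGHT_FT")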

A common way to create 3D features is to use a mesh, a collection of triangles combined to create models of the real world. Multiple z-coordinates can be associated with an x,y coordinate in a mesh, unlike a TIN, where each x,y pair has only one z-value. Integrated meshes represent discrete objects on the ground surface as a single 3D object and do not have feature attributes.

Multipatch features are a type of 3D geometry representing the outer surfaces, or shells, of features that occupy a discrete area or volume in three-dimensional space. They consist of 3D rings and triangles, which can be used to represent both simple and complex objects. Multipatches can store texture, image, color and transparency within a feature's geometry. Buildings rendered in Google Earth utilize multipatches.

As covered in the previous module, point clouds from LiDAR data also produce 3D scenes. Stored in a LAS file, the mass of coordinates in a point cloud consists of highly accurate x,y,z measurements of the earth's surface.

LiDAR Point Cloud data of an overpass at State College, PA
LiDAR Point Cloud data showing part of I-99 at State College, PA. Structures like overpasses, where the laser pulses cannot reach the ground level, reveal portions of the basemap.

3D Visualization

Within ArcGIS Pro there are three views with which an analyst can start. The map view can contain 3D data but visualizes it in 2D, with all data drawn the same way. Visualizing data in 3D requires converting to one of two 3D viewing modes, which display 3D content from a real-world perspective.

The global scene is used for data covering a large extent where the curvature of the earth is important for analysis. The local scene covers other cases, where the earth's curvature is not necessary for analysis and the content covers a smaller, fixed extent. In choosing a mode, questions to ask include: What is the minimum surface resolution? What is the appropriate basemap? Will thematic styling aid or detract from the GIS information presented? Will the visualization include subsurfaces?

A 3D scene should have multipatch features, thematically symbolized layers, an elevation surface, and either an imagery or topographic layer as a basemap. Structuring the content well in a 3D scene benefits the end user with reduced complexity and increased understanding. Furthermore, defining an area of interest (AOI) for the scene both limits the amount of data to be published (and the geoprocessing time) and focuses on a subset of the data, reducing unneeded data that could distract the end user.

3D scene of Portland, Oregon
3D scene of Portland, Oregon where buildings and trees are multipatch features created from 2D data.

Geoprocessing for a 3D scene results in 3D lines or multipatch features using the 3D display properties of the input feature layer. The multipatch feature data is used to create a scene layer, which can then be saved as part of a scene layer package and published to ArcGIS Online. Depending upon its permission settings, the scene layer package can then be accessed or used in Scene Viewer and ArcGIS Pro.

View my 3D Buildings scene of Portland, Oregon from this week's exercise at https://pns.maps.arcgis.com/home/webscene/viewer.html?webscene=7e10ab3514b04c8796cd1b0f8c07d8f7

Line of Sight Analysis

With the definitions of 3D data established, we can proceed with line of sight (LOS) analysis. A line of sight calculates intervisibility along a straight line between an observer and a target. This calculation factors in obstructions, which can include any combination of rasters (surfaces), TINs, extruded polygons or lines, and multipatch features. If the target is a line or polygon feature rather than a point, a sampling distance can be input to determine the density of sight lines along the target.

The array produced by LOS analysis from an observer point at a fictional parade route
The array of sight lines generated from an observer to a target line feature during the LOS analysis.

The Line of Sight geoprocessing tool in ArcGIS Pro determines the visibility along sight lines. The result includes the TarIsVis attribute field, a Boolean where 1 is visible and 0 is not visible. With these values, sight lines with obstructions can be removed from the analysis.

LOS lines with TarIsVis values of zero (not visible) removed
LOS lines where features with a TarIsVis value of zero have been removed.
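
A sketch of the sight-line workflow with hypothetical layer names: construct the lines, run the tool, then keep only the unobstructed results:

    import arcpy

    arcpy.CheckOutExtension("3D")
    arcpy.ddd.ConstructSightLines("observer_pt", "parade_route", "sight_lines")
    arcpy.ddd.LineOfSight("terrain", "sight_lines", "los_lines")

    # TarIsVis = 1 means the target is visible from the observer
    arcpy.management.MakeFeatureLayer("los_lines", "los_lyr", "TarIsVis = 1")
    arcpy.management.CopyFeatures("los_lyr", "visible_lines")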

Viewshed analysis models the visibility from a vantage point in up to a 360-degree view. The geoprocessing tool models the visibility of the horizon, shadows and lines of sight. The output is a raster showing the areas visible from the specified vantage point.

The Viewshed tool considers the height of the vantage point and the obstructions surrounding it. Parameters factored into the tool control elevation values, vertical offsets, horizontal and vertical scanning angles and distance bounds.
Optional Viewshed Settings
The various parameters in the Viewshed geoprocessing tool controlling vertical offsets, angles and radii

Additionally, while the tool is set by default to model visible light, it can work with other wave-based transmissions, such as radar, by adjusting the refractivity coefficient.
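
A sketch of a Geodesic Viewshed (Viewshed2) call with a few of these parameters set; the observer offset, radius and layer names are hypothetical, and 0.13 is the tool's documented default refractivity coefficient for visible light:

    import arcpy
    from arcpy.sa import Viewshed2

    arcpy.CheckOutExtension("Spatial")
    vshed = Viewshed2("terrain", "observer_pt",
                      analysis_type="FREQUENCY",
                      observer_offset="2 Meters",    # eye height above ground
                      outer_radius="5000 Meters",    # limit of the scan
                      horizontal_start_angle=0,
                      horizontal_end_angle=360,
                      refractivity_coefficient=0.13)
    vshed.save("viewshed_out")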