Monday, May 15, 2017

UAS Mission Plan in Tomah, WI

In the Field

This week the students drove down to some wetlands in Tomah, Wisconsin, where we met up with Peter Menet and the Trimble UX5. Much like last week's lab, we set up the GCPs over a large area of the wetlands. Using the total station, accurate coordinates were recorded wherever a GCP was placed.

The first item assembled for the fixed-wing UX5 was the rail catapult launch system, which extended to roughly 6 to 7 meters. The launcher used a crank and bungee to sling the UX5 into the air before the motor took over.

Figure 1. UX5 setup and installation.

Following the setup of the launcher, the preflight checklist and mission planning for the UX5 were completed. A nice feature of the UX5's built-in software is that before you can even fly, the program on the controller walks you through a preflight checklist; you then input parameters for the area you are flying and the conditions you are flying in. The downside of this software is that there are no manual controls, so in an emergency you cannot simply take over. After roughly thirty to forty minutes of preflight checks and mission planning, the UX5 was ready for takeoff. The launch was as easy as releasing the clamp and letting the program take over once it was in the air. The UX5 flew at roughly 55 mph at an altitude of 400 feet. When the mission was done, the UX5 came in for its automatic landing, which was extremely rough, but it lives to fly another day.

Figure 2. UX5 launcher setup.

The next flight was with the DJI M600, a multi-rotor with six propellers and much larger dimensions. After the M600 was assembled and the preflight checklist was completed, a mission plan was put together.

Figure 3. M600 being set up.

Figure 4. M600 in midflight with the landing gear up.

Wednesday, May 10, 2017

UAS Mission Plan near South Middle School

In the Field

This week's project consisted of using our constructed GCPs and getting familiar with different UAS platforms at the community garden near South Middle School here in Eau Claire. First we took out nine of the constructed GCPs and placed them in a grid pattern inside the garden area. We then used the total station to collect accurate coordinates, as shown in figure 1. This is done by placing the total station directly in the middle of a GCP and obtaining a level position for the best accuracy of the point.
Figure 1. Recording accurate coordinates at the center of a GCP with the total station.


After the nine GCPs were in place and their coordinates established, a preflight checklist was run through with the DJI Phantom 3 Advanced. Part of the preflight checklist included drawing out a mission plan for the platform; several mission planning software packages on the tablet could be used. Two flights with the Phantom 3 Advanced were performed with two separate plans. The first was flown at 70 meters over the community garden. The second was flown at 70 meters over a group of cars with an oblique camera angle of 75 degrees. After both mission plans were completed, the Phantom landed automatically within about 5 feet of where it took off.
Figure 2. Phantom 3 Advanced going through the preflight checklist before take-off.
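As a side note, the 70 meter flying height is what sets the ground sample distance (GSD) of the imagery. Below is a minimal sketch of the standard GSD relationship, using approximate Phantom 3 Advanced camera values; treat the sensor numbers as assumptions, not verified specs.

    # Ground sample distance from flying height and camera geometry.
    # The sensor values are approximate Phantom 3 Advanced figures and
    # should be treated as assumptions, not manufacturer-verified specs.
    SENSOR_WIDTH_MM = 6.17
    FOCAL_LENGTH_MM = 3.61
    IMAGE_WIDTH_PX = 4000

    def gsd_cm_per_px(altitude_m):
        """Approximate GSD in cm/pixel at a given flying height."""
        return (SENSOR_WIDTH_MM * altitude_m * 100) / (FOCAL_LENGTH_MM * IMAGE_WIDTH_PX)

    print(f"GSD at 70 m: {gsd_cm_per_px(70):.1f} cm/pixel")  # roughly 3 cm/pixel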

The last flight used the DJI Inspire, another multi-rotor, slightly larger than the Phantom 3 Advanced. This flight did not have a mission plan, as the platform was only flown to give the students an idea of how it performs. This included rotating, throttling up and down, and turning the platform. It was very smooth in its motions and seemed like it would be very hard to crash.
Figure 3. DJI Inspire assembled and ready for a preflight check.

Results
Figure 4. Oblique view of the flight 1 processed data with GCPs calibrated to the images.


Figure 5. Plan view of flight 1 processed data.

Figure 6. Processed oblique imagery of vehicles during flight 2 with the angled camera and 3D modeling used in Pix4D.

Monday, May 1, 2017

Creating Ground Control Points (GCPs)



Ground control points (GCPs) are targets placed at known locations in the area of interest so that the accuracy of UAS imagery processing in Pix4D can be increased. The GCPs created here will be used for the next couple of projects.

Creating the ground control points was a multi-stage process. The materials we used were cloth blankets for covering the cement, spray paint, plastic sheet material about 1/4 inch thick, and a table saw. Of course, all proper safety gear and equipment were used during the cutting process. First we cut the plastic material into squares using the table saw (figure 1).
Figure 1. Dr. Hupy and two students using the table saw to cut the plastic material into squares.

The next step was to lay a wooden triangle stencil on the plastic material and apply the pink spray paint. The plastic squares were laid on the cloth so the spray paint would not get on the floor. Two triangles were painted on opposite sides of each individual square. On the sides not painted with a pink triangle, a number was spray painted in yellow to keep track of each individual GCP (figure 2).
Figure 2. The GCPs that have been spray painted and set out to dry.

After all the GCPs were spray painted, we set them against the wall to dry. Afterwards we discussed UAS-related material in the shop.

Monday, April 24, 2017

UAS Mission Planning Software

Introduction
This week's lab covers mission planning software that walks us through the proper steps taken prior to flying the UAS platform. The mission plan allows the platform to conduct a safe flight: you define the parameters, or mission settings, for your specific flight plan. Multiple steps need to be taken, not only with the platform and equipment you are using but also with the surroundings you are going to be flying in. This includes knowing your area of interest, the weather, the terrain, and any other possible complications at the scene that may alter your flight planning.

Methods
Before you even leave the office there are some key planning essentials that may come back to haunt you if you're not aware of the situation you are running into. This includes knowing the area being studied: Is the cell signal good? Are there lots of people or traffic? Knowing the vegetation and terrain, such as natural obstacles and land features, can also make a difference in how your flight plan will go. Checking the weather is another good observation to note before leaving the office. Lastly, make sure you have all the right equipment prepared and the batteries fully charged to get the maximum flight time needed.

When you first get into the field, check again to make sure all of your equipment is with you and operational. As far as weather, check the wind speed and direction, the temperature, and the dew point. Also get an idea of the vegetation around you; this may decide where you want to land the UAS for soft and smooth landing conditions. Another point is to assess whether there are any EMI issues in the area, such as power lines, underground cables, or power stations. Next, get the elevation of the launch site and establish the units the team is working in to maintain consistency. Once everything in the surroundings is assessed and it seems good to fly, confirm the cellular network is good and bring your field observations into the pre-flight check and flight log.
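One way to keep these observations consistent from site to site is to record them in a structured flight log. Here is a minimal sketch of that idea in Python; every field name and threshold is illustrative, not taken from any particular mission planning software.

    # Illustrative pre-flight log entry mirroring the field checks above.
    # All field names and the wind threshold are hypothetical examples.
    preflight_log = {
        "site": "community garden",
        "launch_elevation_m": 250.0,   # elevation of the launch site
        "units": "metric",             # agreed on by the whole team
        "wind_speed_mps": 3.5,
        "wind_direction_deg": 270,
        "temperature_c": 18.0,
        "dew_point_c": 7.0,
        "emi_hazards": ["power lines along the road"],
        "cell_signal_ok": True,
        "batteries_fully_charged": True,
    }

    # Hold the launch if any hard requirement fails.
    ready = (preflight_log["cell_signal_ok"]
             and preflight_log["batteries_fully_charged"]
             and preflight_log["wind_speed_mps"] < 10.0)
    print("Go for launch" if ready else "Hold: resolve checklist items")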

Software Demonstration
In this demonstration, Bramor's C3P software is used to quickly set up mission parameters out in the field and on the go. Looking at the map, there are a few symbols that should be known for takeoff and landing purposes. There are four primary symbols: H - Home, T - Takeoff, R - Rally, L - Landing. The mission settings have five parameters that can be set during the pre-flight checklist (figure 1). After the mission settings are set, the Draw tool on the right-hand side is used to define the specified flight route, and the 3D map view displays the flight route over the surrounding terrain in 3D.

In this software, four different experimental flight missions were proposed. The first mission consisted of drawing customized waypoints around the Bramor Test Field while experimenting with different sidelap, overlap, speed, and absolute vs. relative flying altitude settings. These waypoints represent the flight path the UAS will take until the last waypoint is reached and it heads to the rally point and landing position (figures 2 and 3). The second flight mission was a corridor mission that follows a linear feature, such as a road, in the area (figure 4). The corridor flight uses the overlap and sidelap parameters in the mission settings so that the area is covered most efficiently. The third flight mission took into account some of the terrain surrounding the Bramor Test Field (figures 5 and 6). The larger the percentage of overlap in the flight plan, the more waypoints the mission planner establishes, which makes the flight much longer; a rough sketch of how overlap and sidelap translate into photo and flight-line spacing is given below. A problem that arises is the difference between "absolute" and "relative" altitude. Figure 6 displays a planned mission that makes the UAS fly right into the side of a hill (absolute altitude) and a planned mission that follows the topography (relative altitude). Obviously, in this case relative altitude is chosen so the UAS follows the topography and does not crash into the hillside. The last mission was a customized plan located in the backyard of a childhood home of mine (figures 7 and 8). This route flies on both sides of a river that is excellent for fishing. Unfortunately, when I first created the flight plan I didn't realize there were so many trees right next to the river.
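As a rough, software-agnostic sketch of the geometry behind those settings (this is the generic photogrammetric relationship, not C3P's internal planner; the footprint numbers are assumed examples):

    # How forward overlap and sidelap set photo spacing and line spacing.
    # Footprint values are assumed examples, not C3P outputs.
    def spacing(footprint_m, overlap_fraction):
        """Distance between exposures (or flight lines) for a given overlap."""
        return footprint_m * (1.0 - overlap_fraction)

    footprint_along_m = 60.0   # assumed image footprint along track (m)
    footprint_across_m = 90.0  # assumed image footprint across track (m)

    photo_spacing = spacing(footprint_along_m, 0.80)   # 80% forward overlap
    line_spacing = spacing(footprint_across_m, 0.70)   # 70% sidelap

    print(f"Photo every {photo_spacing:.0f} m, flight lines {line_spacing:.0f} m apart")
    # Raising either overlap shrinks the spacing, which adds waypoints and
    # lengthens the flight, matching the behavior described above.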

Figure 1. The Mission Setting parameters for pre-flight.

Figure 2. Waypoint flight plan with custom waypoints and the home, takeoff, rally, and landing points.

Figure 3. 3D view of the custom waypoint flight plan in ArcGIS Earth.

Figure 4. 3D view of the corridor mission flight plan along the nearest road, with the takeoff and landing points shown near the left-hand side of the image.

Figure 5. Plan view of a flight plan with some elevation increase on terrain towards the left side of the image.

Figure 6. 3D view of the terrain flight plan displaying an "absolute" altitude mission (goes into the hill) and a "relative" altitude mission (follows the topography).

Figure 7. Plan view of a customized flight plan of the river near a childhood home.

Figure 8. 3D view of the customized flight plan created near a childhood home.


Mission Planning Essentials Review
The steps taken before the UAS is in the air are arguably just as important as, if not more important than, the actual flight. Without the pre-flight checklist and mission planning essentials, the data cannot be collected nearly as easily or efficiently. The great part about this program is that the mission settings and parameters make it very user friendly and understandable. One of the neatest parts, I thought, is that with different terrain you can choose between an "absolute" altitude and a "relative" altitude. Another great feature is the ability to bring your flight plan into ArcGIS Earth in 3D. Also, having certain icons flash at you lets you know what you are missing before you can take flight, which I believe is pretty important. One issue I found difficult to deal with was the duration of the flight path simulation. Otherwise, I didn't find any glitches or areas where the software was really lacking. Overall, this program was great to use and seems like an awesome mission planning package.

Sunday, April 16, 2017

Processing and Annotating Oblique Imagery

Introduction

This week's lab consisted of taking oblique imagery from a UAS platform that flew in a corkscrew pattern around an object, then annotating the images to produce a better quality 3D model for the finished product. In previous projects we used nadir imagery, meaning the pictures were taken looking straight down at the ground. An advantage of oblique imagery is that the display is more representative of what the human eye sees at the surface. The three objects captured in this project were a front-end loader, a shed, and a truck. The first experiment compares 3D modeling without image annotation to 3D modeling with image annotation. In this project, image annotation is meant to eliminate unwanted background content so that the 3D model is not distorted by surrounding objects in the background.

Study Area
The study area included a front-end loader at the Litchfield mine site, where the background consisted almost entirely of sand. The shed and the truck were both captured at South Middle School in Eau Claire. The shed has the sky, trees, and a running track in the background; the truck has the parking lot and some grass all around it. At all three locations the images appear to have been taken midday, because there are hardly any shadows, except for the shed, which may have been flown in the evening.

Methods
The first part of the lab was to copy all of the oblique imagery into our own folder. When starting the project in Pix4D and selecting the images you want, at the "Processing Options Template" step choose the "3D Models" template, as shown in figure 1.
Figure 1. 3D Model template option in a new project

The next step is to process the images WITHOUT doing any annotation. For this part I selected the front-end loader to process and compare without annotated images. After the processing was done, a video animation of the 3D model was the best way to present the final product. To create a video animation, first select the "New Video Animation" button in the Create tab. Next, set up your camera angles to portray the best possible views of the 3D model; capture each desired angle by selecting "Record Trajectory Waypoint." After the waypoints are selected and finished, select the parameters you want for the final output, as shown in figure 2. The video animation of the 3D model of the front-end loader without annotated images is shown in the video below.
Figure 2. Trajectory waypoints for a video animation of the 3D model


Front-End Loader without annotation.

The next part is to annotate some of the oblique imagery with the "Image Annotation" tool. Annotation is used to remove objects that appear in some of the images, such as the sand in the front-end loader images, the trees in the shed images, or the grass and blacktop in the truck images. First, select the image you want to annotate by going to Cameras > Calibrated Cameras and choosing an image. The image will appear on the right-hand side, where a tool that looks like a pencil is the "Image Annotation" tool (figure 3). For annotating, the Mask setting was used to highlight the background in all of the annotated images (figure 3). Once that is selected, the tedious work of highlighting the background can begin. After about 8 images of the front-end loader were annotated, run processing steps 2 and 3 again to recreate the 3D model with the annotated images. The final product can be seen in the video below.
Figure 3. The window that is displayed while annotating the selected image from the left hand side
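Conceptually, a mask annotation simply flags background pixels so they are ignored during reconstruction. Here is a minimal numpy sketch of that idea; this is not Pix4D's internal format, and the arrays are placeholders.

    import numpy as np

    # image: H x W x 3 RGB array; mask: H x W booleans where True marks
    # background pixels the user painted with the annotation tool.
    image = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder image
    mask = np.zeros((480, 640), dtype=bool)
    mask[:200, :] = True  # e.g., a sky region painted as background

    # Masked pixels contribute nothing to the reconstruction; zeroing
    # them out here just visualizes what the mask removes.
    masked_image = image.copy()
    masked_image[mask] = 0

    print(f"{mask.mean():.0%} of pixels masked as background")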

Front-End Loader with annotation.

The next two projects, the shed and the truck in the parking lot, used the same exact workflow. Each project had about 6 to 8 images annotated at different angles to get the best quality out of processing the 3D model for the final product.

Results
The difference between the front-end loader models with and without annotated images was pretty minute. Some areas that may have changed for the better were spots with gaps or holes through to the background, such as the hydraulic hoses in the front or near the muffler in the back. Otherwise the 3D models did not have many differences. Also, on the front-end loader the bucket was distorted and looked like it had something in it, although the images show otherwise.

The middle school shed honestly turned out pretty badly as far as removing background objects and distortions. The peak of the roof has a white to light green and blue bubble-like shape across the top. I'm assuming this may have to do with all of the objects in the background, such as the trees and sky, that make it difficult to produce the 3D model. Also, one wall of the shed was distorted and appeared to go through the roof, which is obviously impossible. The fly-through animation can be seen in the video Middle School Shed Annotated.

The truck 3D model probably turned out the best as far as quality goes. The parts that got distorted were the areas underneath the truck and the windows. This may have turned out best because the only background that needed to be annotated was the ground, which is completely flat all around the truck. The fly-through animation of the truck can be seen in the video Truck Annotated.

Discussions
The oblique imagery in this project is very helpful for processing a 3D model of an object. However, annotating the images so that background noise can be ignored was a very tedious task. In the front-end loader images, the tracked-up sand made it difficult to mask the sand as a whole. The worst one to annotate, in my opinion, was the truck, because when annotating the blacktop parking lot the pixel values hardly ever match, so masking the background is even more tedious. The easiest annotating was the shed, with the sky as the background: the similar pixel color of the blue sky meant large chunks of sky could be annotated with one click.
To get the best quality from annotation, it is probably best to select images from different angles rather than multiple images from the same angle. This keeps the surroundings from distorting the object of interest. I feel this may have been one of my problems with the final 3D model of the middle school shed: I simply didn't pick enough different angles to annotate.

Sunday, April 9, 2017

Calculating Volumes using Pix4D and ArcMap

Introduction

This lab consists of calculating the volumes of stockpiles in both Pix4D and ArcMap using the Litchfield data from a previous lab. There are three different methods for calculating stockpile volumes with different software: first Pix4D, then ArcMap with a raster clip, and finally ArcMap with a TIN. This is a very quick and efficient way of calculating leftover stockpiles in mines, quarries, or other related applications. Using a UAS is also very accurate, detailed, cost effective, and time efficient for anyone wanting to obtain stockpile volumes.

Operations Utilized

Raster Clip - Clip is a data management tool that cuts a portion of a raster dataset out of the original raster dataset.

Raster to TIN - Raster to TIN is a 3D Analyst tool that converts a raster into a triangulated irregular network (TIN) dataset.

Add Surface Information - Add Surface Information is a 3D Analyst tool that adds attributes to features using spatial information derived from a surface.

Surface Volume - Surface Volume is a 3D Analyst tool that calculates the area and volume of a region between a surface and a reference plane.

Polygon Volume - Polygon Volume is a 3D Analyst tool that calculates the volume and surface area between a polygon and a terrain or TIN surface.

Cut Fill - Cut Fill is a 3D Analyst tool that calculates the volume change between two surfaces. This tool is typically used for cut-and-fill operations.




Methods

Pix4D stockpile volume calculations

This lab starts with creating a new folder for the Litchfield data and copying the data into it. This process takes a fair amount of time because of the large amount of data being copied. After the Litchfield project is opened in Pix4D, the first place to go is the Volumes tab on the left-hand side of the project. Next, select the "Add Volume" button for each of the three stockpiles. You can then digitize each stockpile by clicking around its base, as shown in figure 1. After the digitizing is done, select Calculate to get the volume of each stockpile, also shown in figure 1. The stockpile to the far right has an estimated volume of 978.60 cubic meters, the middle pile is 527.37 cubic meters, and the smallest pile at the top is 16.13 cubic meters.
Figure 1. Pix4D area of all 3 stockpiles used for volumetric analysis
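Under the hood, this kind of estimate amounts to summing the surface height above a base plane fitted to the digitized outline, multiplied by the cell area. Here is a minimal numpy sketch of the idea (not Pix4D's exact algorithm; the arrays and cell size are assumed examples):

    import numpy as np

    # dsm: surface elevations (m) clipped to one stockpile; base: the
    # base-plane elevation from the digitized outline. Example values only.
    cell_area_m2 = 0.05 * 0.05           # assumed 5 cm ground sample distance
    dsm = np.full((400, 400), 285.0)     # placeholder ground surface
    dsm[100:300, 100:300] = 290.0        # a crude "pile" 5 m tall
    base = np.full_like(dsm, 285.0)      # flat base plane for simplicity

    heights = np.clip(dsm - base, 0, None)   # ignore cells below the base
    volume_m3 = heights.sum() * cell_area_m2
    print(f"Estimated pile volume: {volume_m3:.1f} cubic meters")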


ArcMap raster clip stockpile volume calculations

The first part of this method is to open the mosaic from Pix4D and create a new geodatabase with one feature class for each individual stockpile. The next step is to digitize the three piles and clip the raster to each one using the Extract by Mask tool with the correct parameters, as shown in figure 2.
Figure 2. Extract by mask tool with the parameters for stockpile 1


Next, use the Surface Volume tool to calculate the area and volume of each stockpile with the correct parameters, as shown in figure 3. To find a reference plane, use the Identify tool to get the surface elevation at the base of each stockpile from the pixel value field. After the tool is run, a table is created with values for plane height, area, and volume, as shown in figure 4. Figure 5 shows a ModelBuilder data flow of the tools for stockpile 1; the same model can be used for the other two stockpiles, and an arcpy version of the same steps is sketched below the figures.
Figure 3. Surface volume tool inputs

Figure 4. Attribute table after the Surface Volume tool is run.

Figure 5. Data flow model of stockpile one using raster clip
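The same raster clip workflow can be scripted with arcpy instead of ModelBuilder. A minimal sketch follows; the paths and the 285.0 m base elevation are assumed stand-ins for values read with the Identify tool.

    import arcpy
    from arcpy.sa import ExtractByMask

    arcpy.CheckOutExtension("Spatial")
    arcpy.CheckOutExtension("3D")
    arcpy.env.workspace = r"C:\UAS\Litchfield"  # assumed workspace

    # Clip the surface raster to the digitized outline of stockpile 1.
    clip = ExtractByMask("litchfield_dsm.tif", "stockpiles.gdb/stockpile1")
    clip.save("stockpile1_clip.tif")

    # Area and volume ABOVE a reference plane at the pile's base elevation
    # (285.0 m is an assumed value read with the Identify tool).
    arcpy.SurfaceVolume_3d("stockpile1_clip.tif", "stockpile1_volume.txt",
                           "ABOVE", 285.0)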


ArcMap TIN stockpile volume calculations

The first step in finding the volumes with a TIN is, of course, to convert the three raster clips into TINs. This is done using the Raster to TIN tool, as shown in figure 6. Now that there is a TIN for each of the three stockpiles, the next tool to use is Add Surface Information with the correct parameters in place (figure 7). The last tool needed is Polygon Volume, which calculates the volume and surface area between the stockpile polygon and the TIN (figure 8). Figure 9 displays the data flow model for the TIN volume calculations, and an arcpy sketch of the same steps follows the figures.
Figure 6. Raster to TIN tool inputs

Figure 7. Add surface information tool inputs

Figure 8. Polygon Volume tool inputs

Figure 9. Data flow model of the TIN workflow for a stockpile.
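The TIN workflow can be sketched in arcpy as well; the paths and the use of Z_MIN as the base height are assumptions for illustration, not the exact lab inputs.

    import arcpy

    arcpy.CheckOutExtension("3D")
    arcpy.env.workspace = r"C:\UAS\Litchfield"  # assumed workspace

    # Convert the clipped raster for stockpile 1 into a TIN.
    arcpy.RasterTin_3d("stockpile1_clip.tif", "stockpile1_tin",
                       z_tolerance=0.1)

    # Attach the minimum surface elevation to the stockpile polygon; that
    # Z_MIN value can serve as the base height for the volume calculation.
    arcpy.AddSurfaceInformation_3d("stockpiles.gdb/stockpile1",
                                   "stockpile1_tin", "Z_MIN")

    # Volume and surface area between the polygon and the TIN, measured
    # above the base height stored in Z_MIN.
    arcpy.PolygonVolume_3d("stockpile1_tin", "stockpiles.gdb/stockpile1",
                           "Z_MIN", "ABOVE", "Volume", "SArea")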


Results

The table below displays the volume calculations from the three different methods: Pix4D, raster clip, and TIN.

Table 1. Results table for volumetric analysis

Map of the Litchfield mine site with the stockpiles used for volumetric analysis



Discussion
The results show that Pix4D produced lower volume values than both the ArcMap raster and TIN calculations. The Pix4D method was very simple and to the point: just digitize the base of each stockpile and let the program calculate the rest. With the ArcMap raster and TIN calculations, however, multiple tools needed to be used with the correct parameters entered.

Monday, March 27, 2017

Processing Multi-Spectral UAS Imagery for Value-Added Data Analysis

Introduction

The RedEdge sensor is a multispectral camera that captures five distinct spectral bands to generate precise, quantitative information on the vigor and health of crops. The five bands are Blue, Green, Red, Red Edge, and Near Infrared (NIR). Some of the RedEdge's specs include one capture per second across all bands, a ground sample distance of 8.2 cm/pixel at 120 m AGL, a weight of 150 grams, and 5.0 volt power. The RedEdge can provide much greater detail than a standard Red, Green, Blue (RGB) sensor, which makes it useful for gaining a deeper understanding of specific agricultural areas of interest.
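Since GSD scales roughly linearly with height above ground, the published 8.2 cm/pixel at 120 m can be scaled to other flying heights; a quick sketch (the 60 m example altitude is arbitrary):

    # GSD scales roughly linearly with altitude above ground level, so the
    # published RedEdge figure of 8.2 cm/pixel at 120 m AGL can be scaled.
    REF_GSD_CM = 8.2
    REF_ALT_M = 120.0

    def rededge_gsd_cm(altitude_m):
        """Approximate RedEdge GSD (cm/pixel) at a given AGL altitude."""
        return REF_GSD_CM * altitude_m / REF_ALT_M

    print(f"GSD at 60 m AGL: {rededge_gsd_cm(60.0):.1f} cm/pixel")  # ~4.1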

Methods

Copying data in this lab included making a new folder to store all of the RedEdge imagery and any further information or maps that are created throughout this lab. This folder will be used when processing the RedEdge imagery.

The next step is to process the imagery in Pix4D that was copied into the specified folder. When creating the new project, make sure to select the "Ag Multispectral" template (figure 1) rather than the 3D template we used in previous labs. When opening the processing options, select all the necessary options shown in figure 2. Once the correct options are selected, processing can begin. This imagery takes longer to process than in the previous labs.

Figure 1. Ag Multispectral processing template is selected for value added data analysis


 Figure 2. Processing options dialog box


After the imagery is done processing, the next step is to create an image composite from the series of GeoTIFFs, one for each spectral band. First, go into ArcMap or ArcCatalog and create a geodatabase within the newly created folder. Next, in the search box, find the Composite Bands tool (figure 3). In this tool, enter the bands in the correct order: Blue, Green, Red, Red Edge, then NIR. Name and save the composite to the new geodatabase. Once the composite is created, ArcMap can display different properties of the image by altering the RGB composite using the five spectral bands. This is found in the Layer Properties dialog box on the Symbology tab with RGB Composite selected on the left-hand side (figure 4).

Figure 3. Composite Bands tool dialog box from the Data Management toolbox


Figure 4. Layer properties dialog box where the Red, Green, and Blue channels can be manipulated with the 5 different RedEdge sensor spectral bands
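The band-stacking step can also be scripted. Below is a minimal arcpy sketch; the file names are assumed, and a TIFF output is used here only to keep band referencing simple in later steps (the lab itself saves the composite to the geodatabase).

    import arcpy

    arcpy.env.workspace = r"C:\UAS\FallCreek"  # assumed folder

    # Stack the five RedEdge GeoTIFFs in Blue, Green, Red, Red Edge, NIR
    # order, matching the band order entered in the tool dialog.
    bands = ["blue.tif", "green.tif", "red.tif", "rededge.tif", "nir.tif"]
    arcpy.CompositeBands_management(";".join(bands), "rededge_composite.tif")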


The first map is just the regular RGB display with spectral bands 3, 2, and 1. Various other maps are shown below, such as the Normalized Difference Vegetation Index (NDVI), false color IR, and the permeability of the Fall Creek house area of interest. The NDVI required the Image Analysis window, found under the Windows tab. In the Image Analysis window, select the composite raster and, under Processing, select the NDVI button (figure 5). I then created value classes to display vegetation health, as shown in Map 2. The false color IR display moves the higher spectral bands into the Red, Green, and Blue channels: I used Band 5 for Red, Band 3 for Green, and Band 2 for Blue. With this I created a map showing the false color IR view of the vegetation in the area (Map 3).

Figure 5. Image Analysis window that is highlighting the NDVI leaf selection under "Processing"
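For reference, the NDVI button computes the standard ratio NDVI = (NIR - Red) / (NIR + Red). Here is a minimal arcpy sketch of the same calculation on the composite; the band indexes assume the Blue through NIR stacking order above.

    import arcpy
    from arcpy.sa import Raster, Float

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\UAS\FallCreek"  # assumed folder

    # Bands 3 and 5 are Red and NIR in the Blue/Green/Red/RedEdge/NIR stack.
    red = Float(Raster("rededge_composite.tif/Band_3"))
    nir = Float(Raster("rededge_composite.tif/Band_5"))

    # The standard NDVI ratio; Float() avoids integer division artifacts.
    ndvi = (nir - red) / (nir + red)
    ndvi.save("ndvi.tif")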

Map 4 brings in skills from a previous lab on how to obtain permeable and impermeable surface data. First, band extraction is used to distinguish urban features from natural features: the Extract Bands function creates a new image with only three bands. Here, the three bands for the false color IR are 5, 3, and 2, in that order, so the vegetation shows up red while the roofs and roads appear darker green to gray. The next step uses the Segment Mean Shift tool (figure 6), which makes the image easier to classify. Once the image is segmented, the Image Classification toolbar's training sample manager is used to label the different classes of vegetation, roads, driveway, and roof (figure 7). After saving the training samples, the next tool is the Train Support Vector Machine Classifier (figure 8), which produces a classified raster with the features grouped the way you want. After the classified raster is completed, coded values of 0 (impermeable) and 1 (permeable) are assigned to distinguish the impermeable surfaces (house, driveway, roads) from the permeable surfaces (yard and fields).

Figure 6. Segment Mean Shift tool window for inputting the RGB image.

Figure 7. The training sample manager allows you to group similar classes together such as similar vegetation, roads, driveways, or roofs

Figure 8. Train Support Vector Machine Classifier tool window
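Here is a rough arcpy sketch of this segment-then-classify chain; the detail settings, training sample file, and output names are assumptions for illustration.

    import arcpy
    from arcpy.sa import (SegmentMeanShift,
                          TrainSupportVectorMachineClassifier, ClassifyRaster)

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\UAS\FallCreek"  # assumed folder

    # Segment the false color IR image (bands 5, 3, 2) into spectrally
    # similar objects; the detail and size settings are example values.
    segmented = SegmentMeanShift("falsecolor_532.tif", spectral_detail=15.5,
                                 spatial_detail=15, min_segment_size=20)
    segmented.save("segmented.tif")

    # Train an SVM on the digitized training samples, then classify the
    # segmented image with the resulting classifier definition.
    TrainSupportVectorMachineClassifier("segmented.tif",
                                        "training_samples.shp",
                                        "svm_classifier.ecd")
    classified = ClassifyRaster("segmented.tif", "svm_classifier.ecd")
    classified.save("classified.tif")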


Results

The first map shows the regular RGB image with bands 3, 2, and 1 in their correct channel placements. The second and third maps display a clear pattern, with the vegetation a bright green to yellow color in the NDVI map and a red to pink color in the false color IR map. The roads and house display a bright orange to yellow color in the NDVI map, while in the false color IR map they are dark green to bright blue. The fourth map shows the difference between the permeable and impermeable surfaces in the area of interest. There were some difficulties while creating the fourth map, however, so the car in the driveway and a small speck of the driveway show as permeable, which we know is untrue.

Map 1. RGB image of the Fall Creek area of interest

Map 2. NDVI map of the Fall Creek area of interest

Map 3. False color IR map of the Fall Creek area of interest


Map 4. Permeable vs. impermeable surfaces in the Fall Creek area of interest