
Category Archives: ArcGIS

Recall when I went on ad nauseam about my struggles with the Lower Walker River map as I was trying to document (in part) the struggles that the lower Walker River has had in dealing with its shrinking lake? If you missed that fun, experience it here:

Well, I hardly made a peep about this map…mainly because it
was finished earlier and was out of the ‘buffer’ at the time. But now,
it has reached a comparable state of completion.

The twist with this map (which is along the lower Colorado River between Hoover and Davis Dams) is that the river has lost its battle with a lake by virtue of having been dammed downstream. Thus, all of the bluish-greenish units are submerged under the lake.

I was able to map these features with reasonable confidence using
sonar data and large-scale, pre-dam topographic maps of the valley.

Interesting side note: Since getting the map to this point, I have gotten my hot little hands on a pre-dam aerial photo mosaic of the map area.

Revision time!

Posted via email from Fresh Geologic Froth

It was recently brought to my attention that the graphing tool in ArcGIS could be really useful if you had the right type of data (thanks to ND at UO). Well, I spent most of today trying to refine a longitudinal profile of the Owyhee River from my coveted LiDAR data set, and it occurred to me that I had some useful data.

My goal beyond just examining the profile was to indicate the locations of major landslide complexes along the river corridor to investigate how they may influence the river’s gradient. I actually extracted the profile data using a tool in GlobalMapper that I like. I converted the data to an Excel spreadsheet, opened the sheet in Arc, and then exported it into my geodatabase as a feature dataset. Once it was in there, I created a graph of the data (basically the profile) and began to select points on the profile along key reaches that I had mapped. Lo and behold, those points I selected on the map lit up in the profile graph. Sweet. This was huge. It goes both ways as well: select points on the graph, and they light up on the map.





Restrict the displayed points on the graph to those selected on the map and you can export them as a subset of the data. This step comes in really handy for plotting the exact position of the landslide complex-reaches on the overall profile figure. Previously, I had stupidly brute-forced this process. Typical. The result is below:



Also very useful is to plot the profile data in the form of cumulative distance vs. slope of channel segment. This graph immediately indicates important trends and anomalies in the data. Turns out that the anomalously high slope values and negative slope values relate, in this case, mainly to inadvertently collected data from vegetated bars, extremely coarse gravel bars, and even wave trains at some of the rapids. Thus, an important and informed QA step can be taken to clean out the riff-raff. In general, though, you can see how useful this method is for zeroing in on areas of key interest. For example, many of the points on the map below correspond to rapids.
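That QA step is easy to sketch numerically. The snippet below is a minimal example with made-up numbers (the arrays and the 0.05 threshold are hypothetical, not from my data set), assuming the profile has already been exported as arrays of cumulative distance and elevation:

```python
import numpy as np

# Hypothetical profile samples: cumulative downstream distance (m)
# and channel elevation (m) pulled from a LiDAR-derived profile.
distance = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
elevation = np.array([1000.0, 999.5, 999.2, 999.6, 995.0, 994.8])

# Slope of each channel segment: rise over run (negative = downstream drop).
slope = np.diff(elevation) / np.diff(distance)

# QA: flag segments that climb (positive slope, e.g. a wave train or a
# vegetated bar caught by the LiDAR) or that drop anomalously steeply.
suspect = (slope > 0) | (slope < -0.05)
print(slope)
print(suspect)  # the flagged segments are the ones to inspect and cull
```

Select the flagged points on the map, and you know exactly which reaches need a second look.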






It is tortured river season in my office. Lately, I have been tackling Nevada’s mighty Walker River and its shrinking terminal lake (new term is terminus lake…but that is a bit soft); and Oregon’s Owyhee River and its travails with lava and landslides; but now I am back on to the Mighty Bill Williams River of Arizona. You know, the Bill Williams River.

Included below is a snippet of the map I am working on. Shown are 6 generations of lines that document major changes in the channel, most since a dam was finished in the late 60s. One day soon, this map will actually make sense, I promise.


The BWR is a special case. It is a roughly 35 mi stretch of river that traverses the hot desert below the confluence of two rivers that collectively drain more than 5000 square miles of western Arizona. Alamo Dam sits just below the confluence and traps essentially all of the sediment that would otherwise have gone down the BWR and to the Colorado River (well, at least to Lake Havasu). Also important to note is that the pre-dam BWR could attain peak discharges ranging up to 100,000 cfs, whereas the post-dam BWR can hardly exceed 7000 cfs owing to the outlet works of the dam. Thus, large runoff events that would have otherwise blasted through the system in a week or less (Spikes) are now converted to protracted, flat-topped hydrographs that lumber through the channel for up to several weeks to months (Bricks). Recall that these bricks are also sediment-free except for the sediment picked up in the channel below the dam.
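The spike-to-brick conversion is easy to rough out with back-of-the-envelope arithmetic. This sketch uses the discharge figures from the post but assumes a made-up triangular hydrograph shape for the spike, so treat the result as illustrative only:

```python
# Illustrative only: round numbers from the post, not gauge data.
# Pre-dam flood "spike" vs. post-dam "brick" release through Alamo
# Dam's outlet works (capped near 7000 cfs).

peak_cfs = 100_000        # pre-dam peak discharge
spike_days = 7            # a spike blasts through in about a week
cap_cfs = 7_000           # post-dam outlet-works ceiling

# Approximate the spike as a triangular hydrograph to get a flood
# volume, then see how long that volume takes at the capped rate.
seconds_per_day = 86_400
volume_cf = 0.5 * peak_cfs * spike_days * seconds_per_day  # triangle area
brick_days = volume_cf / (cap_cfs * seconds_per_day)

print(brick_days)  # → 50.0 days of flat-topped release
```

Fifty-odd days of steady, sediment-starved flow instead of a week-long blast: that is the brick.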


The result is an interesting experiment in channel change, sediment budgeting, and inadvertent (or otherwise) tamarisk farming.


I won’t be posting daily updates of this map, so don’t worry. Be assured, however, that I will make a lot of noise when I finally finish it. This one is a long, long, long, time coming. Just ask the sponsors.


Some other BWR info:



Sure, I have gone on and on about the amazing visualizations you can get with some tweaking of LiDAR data; however, it turns out that a pretty basic representation is also quite useful…contours. Yes, contours. Sometimes smaller-scale features remain somewhat ambiguous in hillshades or slopeshades, but high-res, short-interval contours from the LiDAR data can eliminate most of the ambiguity. In this case, it is a tiny area that I have struggled with on the Owyhee River. Here, a large landslide entered from the north, shoved the river channel to the south, and the river eventually worked its way back to the north to some extent. The array of surficial deposits in the void that comprises the right-hand side of the image south of the river records this sequence of events as well as subsequent sedimentation by tributary fans. The contours really highlight the fans, and in conjunction with discernible drainage patterns evident in the LiDAR, it is clear what is fan and what is river, right?

2-m contours were generated in GlobalMapper and exported as a shapefile to view in Arc.
Note: Ian Madin (at DOGAMI) gave me the tip on contours, especially as they relate to resolving fan features. He was right…it works!


I created this lake by generating a contour from the LiDAR dataset at an elevation of 1046 m. GlobalMapper does this in about 1.5 minutes. Then I exported the vector as a shapefile, cut out the parts of the line that occur downstream from the dam, stitched the remaining loose ends, and built a poly from the line, and there it is.
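Once the contour is stitched into a closed ring, things like the lake's planar area fall out of the shoelace formula. This is a toy sketch with a stand-in rectangle, not the actual 1046 m outline:

```python
# Sketch only: a closed contour ring reduced to (x, y) vertex pairs.
def shoelace_area(ring):
    """Planar area of a closed ring of (x, y) vertices (shoelace formula)."""
    area = 0.0
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]  # wrap back to the first vertex
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# A toy 2 km x 1 km rectangle standing in for the stitched lake polygon.
ring = [(0, 0), (2000, 0), (2000, 1000), (0, 1000)]
print(shoelace_area(ring))  # → 2000000.0 (m^2, i.e. 200 ha)
```

Any GIS will do this for you, of course, but it is worth knowing how little is under the hood.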

This lake has an interesting topographic correspondence with the old landslides on the south side of the Hole in the Ground as well as the ancient fan remnants that come in from the north side. Don't forget that much of the topography you can see through the lake didn't exist at the time of the lava dam. The valley floor was probably formed on the Bogus Rim lava which forms the flat-topped features that flank the left and right banks of the river near the eastern end of the lake. The top of the Bogus Rim lava is only about 25 m below the surface of this lake. Thus, the link between this lake and the landslides is dubious as there was nowhere for the landslides to slide.


I am late on reporting these useful tidbits and for that I apologize. I learned of these techniques from Ian Madin from DOGAMI while I was at the AASG meeting in Park City way back in June. Ian is my LiDAR hero for the time being. Basically, he showed me some simple tricks that make complete sense in hindsight but struck me as nothing short of revolutionary when I applied them to my data set. Before I get into it, I will say again that LiDAR changes everything. It is a truly revolutionary tool for geologic mapping of any kind, but particularly for surficial geologic mapping.

OK. So you have your LiDAR and you love the super neato hillshade images that it can be used to generate. But, hey, what about those damn shadows in areas of key interest? Well, you can apply a redundant brute force approach to making hillshade images with different solar geometries…but that would be downright nutty. You could crank it up a methodological notch and use GlobalMapper or Surfer to create these images far more quickly and choose your favorite to export…but that would be silly as well (but kind of fun…except for the exporting part).
Step back and think about what you are trying to visualize with the hillshade…wait for it…slopes, right?! So, what you do is effectively create a universal/isotropic (?) hillshade image by using the ‘slope’ tool in the ArcGIS toolbox (3D Analyst > Raster Surface > Slope). Trust me, it works. However, you can’t just go with the default settings. You have to stretch the resulting data (std dev works best for me), invert the grayscale ramp (important), and sit back and take it all in. Sweet! But wait, you need to overlay a slightly transparent color ramp of the elevation data (stretched as well for simplicity) to make it tasty. Now you have it all. For some real fun, change the n value in the standard deviation stretch and see what happens (maybe stay between 1 and 3).
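For the curious, the core of the recipe (slope magnitude, n-sigma stretch, inverted ramp) can be sketched outside of Arc. This is a minimal numpy version run on a tiny synthetic DEM, not the toolbox implementation:

```python
import numpy as np

def slope_shade(dem, cellsize=1.0, n=2.0):
    """Slope-based 'universal hillshade': slope in degrees, std-dev
    stretched to 0-255, then inverted so steep ground plots dark."""
    dy, dx = np.gradient(dem, cellsize)          # per-axis elevation change
    slope = np.degrees(np.arctan(np.hypot(dx, dy)))

    # Standard-deviation stretch: clip to mean +/- n sigma, scale to 0-255.
    mean, std = slope.mean(), slope.std()
    lo, hi = mean - n * std, mean + n * std
    stretched = np.clip((slope - lo) / (hi - lo), 0.0, 1.0) * 255.0

    return 255.0 - stretched  # invert the grayscale ramp (flat = white)

# Toy 5x5 DEM: a cone-shaped hill, just to exercise the function.
y, x = np.mgrid[0:5, 0:5].astype(float)
dem = np.hypot(x - 2, y - 2)
shade = slope_shade(dem)
```

Play with `n` exactly as described above; the clip limits tighten or relax and the mid-slopes pop in and out.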
Click through the photoset for some comparisons and you just may become a believer. Obviously, having all of these visualizations at your immediate disposal is the way to go…the beauty of GIS for geology, no?
Maybe you noticed that the last one has a comfortably smooth contour overlay…how the hell did that get there? Stay tuned for a tip that even took an ESRI LiDAR brainiac by surprise at the Users Conference.


Note from Kyle: A new contributor has entered the froth! I think he is smarter than I am…

We’ve all read Kyle’s rants against paper maps, and we’ve all seen some of the potential for online, digital mapping by using websites like MapQuest or software like Google Earth. So if we want to try and step away from producing flat, paper maps and want to be able to share our digital geologic data with each other, what kind of format are we going to use? The answer is that we’re going to need to learn to use map services.

Your users are desperate to get your data


What is a map service? Think of it this way: Your map and data exists on a computer somewhere that everyone can see over the Internet. Everyone wants to see your map data, but you can’t just open the barn doors and let them all tear into it. Think wolves starved by the longest, harshest New England winter suddenly turned loose inside the barn where your sheep have been warm, toasty and chubby for months. This just won’t do. Instead of a wild feeding frenzy, you need some organization. A map service provides organization by specifying how to ask for the data, and how it will be returned. In these terms, a map service lets the wolves have all the sheep they want, as long as they ask nicely.

This allows for a variety of client applications (for example your web browser, Google Earth, or ArcGIS Desktop) to provide an interface by which people can look at your map data. As long as an application knows how to ask the right questions, and what to do with the answers, it will be able to view the map.

Map services are an important step in moving our geologic maps off of paper and getting them online. They allow us to share both our data and our cartographic work with each other. Ever downloaded a “map package” that consisted of an indecipherable slew of files, databases, interchange formats, acronyms, and cryptic field names? Well, map services are the solution, providing not only an image of your map, complete with all the cartographic work you put into it, but also a simple interface for grabbing the data itself.
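To make "asking nicely" concrete, here is a sketch of what a client request to a map service can look like. The host and service name are hypothetical, and the parameters follow the general pattern of an ArcGIS Server REST export request (extent, image size, format):

```python
from urllib.parse import urlencode

# Hypothetical host and service name -- the pattern, not a live endpoint.
base = "https://example.org/arcgis/rest/services/AZGeology/MapServer/export"

# A client "asks nicely" by spelling out what it wants and how it
# wants it returned; the server renders the map and hands back an image.
params = {
    "bbox": "-114.8,31.3,-109.0,37.0",  # map extent: xmin,ymin,xmax,ymax
    "size": "800,600",                  # output image dimensions in pixels
    "format": "png",
    "f": "image",                       # return the rendered map itself
}
request_url = base + "?" + urlencode(params)
print(request_url)
```

Your browser, Google Earth, or ArcMap builds requests like this for you behind the scenes; the point is that the question-and-answer format is fixed, so any well-behaved wolf can be served.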

At the Arizona Geological Survey, we’ve put together a map service for the geologic map of the State of Arizona. You can view it here:

AZGS Map Services: Geologic Map of Arizona

Quickly and easily view and query the map within ArcMap


In the future we’ll be putting together more and more of these services, and including with them more information – photos, field notes, descriptions of important contacts and structures, etc. Perhaps the most intriguing part of all this though, is how easy it is to create these map services. If you have a functioning web server and have access to ArcGIS Server, then you have absolutely no excuse not to start figuring out how to use it. If you don’t have a web server or ArcGIS Server, well, stay tuned, because the AZGS is also working on putting together a software package, or “stack”, of entirely free, open-source applications that do very similar things to what ArcGIS Server can do. Our goal is to make it possible for everyone to begin exchanging maps and data using map services.

We (I have some partners in crime now) have recently been exploring the application of generalization routines in Arc to one of my excessively detailed published geologic maps. As part of a larger mapping effort (ND2MP: The Nevada Digital Dirt Mapping Project) I am walking the fine line between the rationality of automated generalization and the impracticality of manually generalizing detailed mapping that I have already completed.

A lot of basic concepts of cartography in general and geologic mapping in particular come to the fore when you start visualizing blotch maps (i.e. those based on polygons) at different scales. Some interesting complexities involving the analog to digital map world also arise…those issues will eventually be aired on the ND2MP blog. For now, I will show some of the results of automated generalization routines in Arc.

The detailed map in question is NBMG Map 156, a map of Ivanpah Valley, Nevada that was compiled at ~1:12k but was released in Dead Tree Edition at 1:50k so it would fit on the plotter/tree killer.

After perusing various options, we decided that the ‘aggregate’ generalization tool was the closest to what we wanted…but not exactly what we wanted. This tool melds polys/blotches together on the basis of only a couple of criteria: how close together two like polys can be before they meld into one, and how small the resulting polys (or holes) can be. Both of these concepts involve deciding on a minimum mappable unit (MMU) dimension (a post and discussion for another day fellow mappers).

The map below is an ungeneralized version of a part of the Ivanpah Valley map (in this case the Jean 7.5 Quad) shown at (roughly) 1:150k:

A generalized version wherein two groups of the most intricately mapped surficial units are aggregated is shown below at the same scale (the yellow and red ones):

At face value, the lower map is a bit more legible. In this instance we aggregated like-polys that were less than 40m apart and eliminated polys (in the same group) that were smaller than 5 ha (50,000 sq. meters). We are considering an MMU of 9 ha for a final compilation of Clark County surficial geology to print (yes…I said print) at 1:150k. Note that the centroids of the eliminated polys will be retained as a point data set in case it actually matters that they are gone.
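The MMU half of that aggregation logic is simple to sketch. This toy example uses hypothetical unit labels and areas, with tuples standing in for real polygon geometry:

```python
# Sketch of the area-based half of the generalization step: drop
# polygons under the minimum mappable unit, but keep their centroids
# as an orphan-point data set in case it matters that they are gone.
MMU_M2 = 50_000  # 5 ha, as used above

# (unit, area_m2, centroid) -- hypothetical blotches, not real mapping.
polys = [
    ("Qay", 120_000, (5.0, 5.0)),
    ("Qay", 12_000, (8.0, 2.0)),   # too small: eliminated
    ("Qai", 80_000, (1.0, 7.0)),
    ("Qai", 4_500, (3.0, 9.0)),    # too small: eliminated
]

kept = [p for p in polys if p[1] >= MMU_M2]
orphan_points = [(unit, c) for unit, area, c in polys if area < MMU_M2]

print(len(kept), len(orphan_points))  # → 2 2
```

The distance-based melding criterion (like-polys closer than 40 m become one) is the harder, geometry-heavy half, which is exactly why we lean on the 'aggregate' tool for it.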

The generalization routine shown above essentially eliminated numerous reaches of narrow, active desert washes. We are interested in retaining these for various reasons, but maybe only as lines. If anyone has a suggestion for how to extract the lines from the eliminated wash reaches as part of the generalization process (or has a suggestion for a better generalization routine) please speak up!

Here are the maps side by side for better comparison:

Howdy Dummies. Are you like me? Do you get so wrapped up in mapping lines on high-res imagery that you fail to judiciously attribute them? You know, that ‘oh man, I can just keep mapping this obvious contact until it disappears’ feeling. Do you do the same with label points (you do use label points, right?)? Well, you can control your attention deficit by selecting a key option in Editor>Options interface:

Once you select the correct attribution option, the program will interrogate you about the attributes of each feature you just created. Yes, you will have to make the call then. You really don’t have time for that second, or third, or fourth sweep through the map, do you? Do it right the first time. Be particularly judicious about your label points since those are much harder to formulate well after the fact.

I had no idea this option was available until fairly recently. If you knew of it, way to go. You are less of a dummy than I.

If you make your geologic maps using ArcGIS and work with nicely detailed color imagery, then you already know how useful a stretch is. If not, check this previous post for dummies:

Now that you are back up to speed, I will share a simple trick I figured out by brute force that eliminates areas that may skew your stretch in an inconvenient way, namely large bodies of water. Right now, I am supposed to be finalizing mapping in the Spirit Mtn NW quad, which includes parts of Nevada, Arizona, and Lake Mohave. Mapping along the lakeshore in the field is a joy, whereas compiling along the lakeshore is a pain in the neck…particularly when you use the standard deviation stretch restricted to the ‘current display extent’, which is usually the best option for contrast enhancement. The problem is caused by the black hole of lake pixels that dominate the statistics. The solution? Mask out the lake in a new raster using the ‘extract’ tool:
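The effect of masking the lake on the stretch statistics is easy to demonstrate with a toy raster. The numbers below are invented, but they show how a patch of near-zero lake pixels drags the standard-deviation stretch limits around:

```python
import numpy as np

# Toy band: land pixels with varied brightness plus a "black hole" of
# near-zero lake pixels that would dominate the stretch statistics.
band = np.array([[120., 130., 140.],
                 [125., 2., 1.],
                 [135., 3., 2.]])
lake = band < 10  # stand-in for the lake mask / 'extract' step

def stddev_limits(values, n=2.0):
    """Clip limits for an n-sigma standard deviation stretch."""
    return values.mean() - n * values.std(), values.mean() + n * values.std()

all_lo, all_hi = stddev_limits(band)          # lake pixels included
land_lo, land_hi = stddev_limits(band[~lake]) # land pixels only

print(all_lo, all_hi)
print(land_lo, land_hi)
```

With the lake included, the limits are stretched way down toward zero and the land ends up washed out; mask the lake first and the limits snug right up around the terrain you actually care about.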

Here are the results from my current map area:

Epilogue. Someone with considerably more knowledge in GIS than I once explained to me how I could do this with raster math. I screwed around with that and failed. After numerous scans through Arc Toolbox (haven’t you scanned that stuff over and over looking for something?), I finally found some commands that sounded useful. Remember, this is digital geoscience for dummies.