The Problem with 3D GIS

3D GIS is game changing: it can change the way you view your analysis and provide insights you may have overlooked… but there is a little problem which I may not have shared in my deluge of blogs about how great 3D GIS is…

…Sometimes it can be hard work. What I mean by this is that it can sometimes need a lot more consideration than standard methods. Let’s look at some of the major culprits when using Esri ArcGIS Pro 1.4 & Desktop 10.5, keeping in mind that there aren’t many (or any) other 3D GIS in which you can do this kind of work.

One of the 3D models built of Kendal, UK

Editing

Once you have converted or built your 3D model into a multipatch polygon, you may find yourself struggling to edit or adjust it.

Split

Having built entire cities, one issue that I’ve come across a fair bit is removing parts of multipatch polygons. For example, when adding multiple models, you may find that two features overlap and you need to remove part of one. In a standard 2D GIS, you would simply turn editing on and split the offending polygon.

This doesn’t work in a 3D GIS… Think about it: how does the GIS know where all the planes you want to break along lie within a 3D feature? 3D GIS can make some pretty clever assumptions, but I am yet to find a way to remove part of a complex multipatch polygon.

True, within ArcGIS Pro 1.4 there are some editing tools for 3D multipatch features, but it is early days at the moment. For simple cubes it is quite easy to adjust and manipulate the model a little, but you don’t stand a chance if you have a curved edge.

 

Don’t despair, don’t give up on the 3D just yet; as you know, 3D isn’t new and there are many “workarounds”. One of my favourites is using the ArcGIS Pro “Replace Multipatch” tool. If you want to make multiple edits to a model (feature), you can export the multipatch to COLLADA or Keyhole Markup Language (KML) format, edit it in SketchUp, Blender, MeshLab or any of your favourite modelling suites, and then import it again without affecting the other features in that layer.
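If you prefer to script the export and re-import halves of that round trip (the Replace Multipatch step itself is an interactive edit in Pro, so it isn’t shown here), a rough arcpy sketch might look like the following; the paths are made up and a 3D Analyst licence is assumed:

```python
import arcpy

arcpy.CheckOutExtension("3D")

# Export the multipatch features to COLLADA files (one per feature) for editing
# in SketchUp, Blender, MeshLab, etc. Paths below are hypothetical.
arcpy.MultipatchToCollada_3d(r"C:\data\city.gdb\buildings", r"C:\data\collada_out")

# ...edit the .dae files in your modelling suite of choice...

# Bring an edited model back in as a new multipatch feature class, then use the
# interactive Replace Multipatch tool (or your own workflow) to swap the geometry.
arcpy.Import3DFiles_3d(r"C:\data\collada_out\building_042.dae",
                       r"C:\data\city.gdb\building_042_edited")
```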

 

If you are extruding simple 2D polygons with the intention of creating 3D multipatch polygons, it is a good idea to hold off on the conversion to multipatch until you explicitly need it. This way, you can edit, split and reshape your 2D polygon within normal ArcGIS Desktop without any issues at all, and then draw up and extrude when you are 100% sure it all fits and works okay.
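When that time does come, the extrusion itself can be scripted. A minimal sketch, assuming you have already saved a layer file from a scene with extrusion applied to the footprints (the file names here are hypothetical):

```python
import arcpy

arcpy.CheckOutExtension("3D")

# "footprints_extruded.lyrx" is assumed to be a layer saved from an ArcGIS Pro
# scene with extrusion set on the 2D footprints; Layer 3D To Feature Class
# bakes those display heights into a multipatch feature class.
arcpy.Layer3DToFeatureClass_3d("footprints_extruded.lyrx",
                               r"C:\data\city.gdb\buildings_multipatch")
```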

 

In my experience, pre-planning and clarity about the end goal mean that you can be prepared for these slight niggles in advance.

Using 3D multipatches in ArcGIS Desktop

Overlay analysis 

So, I’ve built out an entire city of 3D buildings and it looks amazing; the last thing to do is clip them by the city boundary… oh, I forgot, the 2D polygon doesn’t truly intersect the multipatch polygon in 3D space, so the “Clip” tool doesn’t work, nor does the “Intersect” tool or the “Merge” tool… in fact, you can forget about using ModelBuilder.

Clip features

I learned the hard way that 3D features work best with 3D features. Unless the boundary polygon is a 3D feature, you won’t be able to use any kind of overlay geoprocessing against your multipatches.

But, there is always a way to trick the system…

Although the multipatch polygon feature is a 3D object, you can still open it in ArcGIS Desktop, meaning that you can use the good ol’ fashioned “Select By Location” tool. No, it shouldn’t work, but it does; furthermore, you can then export your selected data to a new multipatch feature class. It begs the question why you can do this and not the “Clip”, as, in geoprocessing terms, they are much the same thing (the clip tool is essentially the separate steps combined into a single script).

Let’s not get too hung up on it, as it works.
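For the scripting-inclined, the same trick looks roughly like this in arcpy (the paths are made up; the point is the select-then-copy pattern rather than a clip):

```python
import arcpy

buildings = r"C:\data\city.gdb\buildings_multipatch"   # hypothetical paths
boundary = r"C:\data\city.gdb\city_boundary"

# Make a layer, select the multipatches that intersect the 2D boundary in plan,
# then export the selection to a new multipatch feature class.
arcpy.MakeFeatureLayer_management(buildings, "buildings_lyr")
arcpy.SelectLayerByLocation_management("buildings_lyr", "INTERSECT", boundary)
arcpy.CopyFeatures_management("buildings_lyr", r"C:\data\city.gdb\buildings_in_city")
```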

Volumetric Analysis

If you thought that the “Volume” tool was just put there to look clever, think again. It can be a great tool for easily representing floor space or calculating the tonnage of aggregate that needs extracting from the ground to lay a new pipeline.

There is a slight problem though…it can be a little hit and miss, especially when using sourced models.

Calculating Volume

Try adding a model you built in SketchUp, Blender or MeshLab to ArcGIS Pro and then calculating its volume (using the Add Z Information tool). Nothing, right? But why? The reason is that the tool doesn’t like “open” multipatch 3D features, ones that aren’t fully enclosed. Even if there is only a sliver of a gap in the polygon and it isn’t 100% enclosed, the tool cannot calculate the volume.

There are other methods whereby you can use a raster surface, like the “Surface Volume” tool, but this isn’t quite as accurate as using your super detailed vector multipatch.

You could try the “Enclose Multipatch” tool, which closes the multipatch, and then run the volume tool, BUT you need to consider that unless the multipatch is cut to the surface (for example, where a building sits on a hill and the base isn’t perfectly flat) the volume will not be ideal. So please consider using the data as a high-resolution TIN merged with the terrain to provide a more accurate volume result.
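As a rough arcpy sketch of that enclose-then-measure route (hypothetical paths, 3D Analyst assumed, and with the caveat above about the base of the model still applying):

```python
import arcpy

arcpy.CheckOutExtension("3D")

# Close any gaps in the imported model so it becomes a watertight multipatch...
arcpy.EncloseMultiPatch_3d(r"C:\data\city.gdb\model_open",
                           r"C:\data\city.gdb\model_closed")

# ...then write VOLUME (among other 3D properties) into the attribute table.
arcpy.AddZInformation_3d(r"C:\data\city.gdb\model_closed", "VOLUME")
```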

Oh, last point on this – make sure you use a projected coordinate system with metric (or at least linear) units for your data; a geographic coordinate system will leave you with your volume in degrees… is that even possible?!

Which brings me nicely to –

Issues occur when you don’t specify the datum

Vertical referencing

I distinctly remember my first adventures into 3D through Google Earth, trying to model parts of Romsey, UK as 3D buildings using SketchUp. The first hurdle was always figuring out whether the building was “Absolute height”, “On the Ground” or “Relative to the ground”… I mean, what does that mean anyway? I just drew it to sit on the floor; why does it need to ask me more questions about it?

Correct Height Placement

Right now, if you are a regular reader of xyHt or a 3D Geoninja, you will be calling me a muppet. In reality though, you wouldn’t believe how often I am asked about this, especially now that Digital Surface Models (DSMs), Digital Terrain Models (DTMs), bathymetry and other elevation data are so readily available.

Elevation is never easy; worse still, there are either too many or too few options. With Esri ArcGIS Pro, I am 100% confident that I know where my data sits within 3D space, but that is only because I’ve worked with this day in and day out and understand the limitations and data sources.

Let’s consider the Esri “scene” – it’s a cool 3D map, and as you zoom into that lovely globe you can see lovely mountains and valleys all popping out of the surface. My question to you is: what elevation data is it using? What is the resolution of that data? You see, I love that Esri provide a detailed, complete-coverage elevation surface for the entire globe, but the flip side is that you cannot easily know the exact limitations of that surface (the information is provided by Esri, but it is not a simple “point & click” exercise).

My words of advice here are to use your own terrain when placing 3D multipatch features. That way, you are in control of both the vertical datum and the resolution of the heights.

While I’m here, I want to also point out that there isn’t a “snap to ground” feature in the editing tools within ArcGIS Pro either. This becomes an issue when you bring in a model which isn’t vertically referenced or has no vertical datum, because you then need to sit it on the surface yourself. Even when your model is a captured point cloud accurate to 0.5 cm, you have no way to accurately place it on the ground. You can adjust it up and down and sit it by sight, though you cannot “snap” it.

The big takeaways here are, firstly, that you need to be confident in and know your elevation data if you plan to work in the 3D scene views and, secondly, that you need to set up your x, y & z coordinate systems correctly from the start to ensure that all the work you do is as precise as possible.

…and yes, I now know the difference between “absolute”, “relative to the ground” & “on the ground”….maybe an interesting blog for another day, though feel free to contact me if you need quicker answers!

And everything else

There are still many things I have not had a chance to mention, for example the complexities of cartographic representation using 3D models in a GIS, or ways of minimising the clashing of overlapping data, plus other 3D-centric issues such as shadow and light. Maybe a blog for another day?…

Dragons8mycat

 

 

Explain Georeferencing To Me as If I Were a Five-Year-Old

Blog post copied from Adrian Welsh on GeoNet 30/11/2016 – Too good not to share!!


 

I really liked how Denzel Washington used the phrase “explain this to me as if I were a xxx-year-old” in the movie Philadelphia (1993).

Reference: Philadelphia, directed by Jonathan Demme, 1993 (film).

So, I will take it one step further and attempt to explain the concept of georeferencing to an actual five-year-old.

Five-year-old:

Five-year-old engineer says, “I have this PDF of a site plan. I want to put this on a map and have it line up properly.”

Here is my map.

We need to zoom in a little bit closer.

A little bit more.

Open Street Map 1:5,000

Almost there. Zoom in some more so that our site plan will fit better.

Open Street Map 1:1,050

Much better. Now, we need to shrink the site plan to a more usable size. Currently, it’s larger than our map.

Let’s make it a little bit smaller.

Perfect. Now we need to place the site plan on our zoomed in map and adjust it to fit by rotating it and resizing it.

Great! Now, after some quality control of adjustments and transformations, we can rectify this image and call it georeferenced!

OSM 1:1,050 with Image

We can make the georeferenced image transparent so that we can see the basemap behind it.

OSM 1:1,050 with Image, Transparency 50%

Finally, we can add existing linework and other GIS files to give the image a more solid reference.

OSM 1:1,050 with Image, Transparency 50% and Linework
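For the grown-ups: the place, rotate, resize and rectify steps above can also be done programmatically. Here is a minimal sketch using GDAL’s Python bindings, with entirely made-up file names and ground control points, just to show the shape of the workflow (assign control points, then warp/rectify):

```python
from osgeo import gdal

# Hypothetical control points: map x, map y, elevation, pixel column, pixel row.
gcps = [
    gdal.GCP(-1.3970, 50.9050, 0, 120, 215),
    gdal.GCP(-1.3945, 50.9052, 0, 880, 190),
    gdal.GCP(-1.3952, 50.9030, 0, 510, 790),
]

# Attach the control points to the scanned site plan...
gdal.Translate("siteplan_gcps.tif", "siteplan.png",
               GCPs=gcps, outputSRS="EPSG:4326")

# ...then rectify (warp) it so that it lines up with the basemap.
gdal.Warp("siteplan_georeferenced.tif", "siteplan_gcps.tif", dstSRS="EPSG:4326")
```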

Many thanks to Adrian Welsh for letting me share this!

Dragons8mycat

Vertical transformations in GIS

You know those moments where you are sat in a pub, the office or a friend’s house and you say something, then suddenly wonder why you haven’t thought of it before?… I had that very moment in the BrewDog pub near Spitalfields in London, UK a couple of weeks ago. You see, I was discussing the use of web mapping in 3D and writing a specification for the data being hosted, and then it came out…

“I wonder if 3D GIS systems use 3D transformations”…...

Before I even finished the sentence I was starting to get palpitations at the thought of throwing away all my hard work over the last few months. Okay, so maybe that is a bit of an exaggeration. I do take the vertical transformation into consideration when working between datasets and am a bit of a tide datum nerd, though the question was now out there…

3D GIS…a quick reminder

First, let me address the “3D GIS” elephant in the room. Is there such a thing? My current stance is yes. Having used many different GIS over the last few years, I would say that many vendors would tell you that theirs is one, but they would almost all be wrong. Take, for example, CesiumJS: it is a fantastic 3D WebGL viewer, but can you edit the data on screen? Can you transform data between coordinate systems? (The answer you are looking for is no.) Other systems which I have discounted are QGIS, MapInfo, CADCorp, Google Earth, ArcScene, GRASS, OpenGIS and also AutoCAD Map¹, purely because they cannot show 3D data from multiple coordinate systems together on a globe. Yes, I understand that you can pre-process the data so that it is all in the same coordinate system and then apply the datum shifts so that it is all consistent, but NOT on the fly.

There is only one “3D GIS” at the moment, the much overlooked ESRI ArcGIS Pro.

Flight paths for wind farm analysis

Compared to the 3D capabilities of the other GIS, asking it to handle vertical transformations on the globe as well as the most up-to-date horizontal transformations would be a bit of an ask, but you know what?… It is pretty good…

Disclaimer time: I am currently running the beta version of ArcGIS Pro 1.4, though I am told it shouldn’t matter, and I have been talking with a few of the developers about functionality, but this is a 3D GIS which is out on the shelves now and works. For the OCD spatial nerd, it ticks the boxes: I can use the provided international transformations or I can create my own, and, more importantly, I can work with my survey (lidar) data alongside my topographic data, which sits alongside my hydrographic data, so I can easily QA the data and share the results quickly.

For the n00bs, why do you need vertical transformations?

Simply put, a horizontal datum is a reference system for specifying positions on the Earth’s surface, so that the coordinate system is of uniform measure; likewise, a vertical datum is a reference for the height of the surface. All simple at the moment… A datum is based on an ellipsoid/spheroid (these geo people can’t decide what shape of round they want to call it, so they interchange the terms all the time). Each ellipsoid is a best-fit approximation of the Earth’s surface, either globally or for a particular region; you may have seen “Clarke 1866”, “GRS80” or “WGS84” in the coordinate information of your data.

The Clarke 1866, WGS84 and GRS80 ellipsoids

So you can start to see that transforming your 3D data from one coordinate system to another isn’t quite as simple as the 2D Helmert you’ve been using. Relax, I’m not going to go into geoids and equipotential surfaces now (unless you want me to); I am merely pointing out that we need vertical transformations in our lives, otherwise, when you transform your beautiful 3D village model from WGS84 to British National Grid, you will be wondering why it is floating in the air.
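To make the “floating in the air” point concrete, here is a minimal sketch of a true 3D transformation using the pyproj library (not ArcGIS Pro, and assuming PROJ can find the relevant OSTN/geoid grids); the coordinates are made up:

```python
from pyproj import Transformer

# WGS84 3D (EPSG:4979) to British National Grid with ODN heights (EPSG:27700 + EPSG:5701).
# The vertical part only works if the appropriate geoid grid is installed for PROJ to use.
transformer = Transformer.from_crs("EPSG:4979", "EPSG:27700+5701", always_xy=True)

lon, lat, ellipsoidal_height = -2.75, 54.33, 100.0   # a made-up point near Kendal
easting, northing, odn_height = transformer.transform(lon, lat, ellipsoidal_height)

# The ODN height comes out roughly 50 m lower than the ellipsoidal height here;
# ignore the vertical transformation and your village floats by about that much.
print(easting, northing, odn_height)
```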

This might not matter much to the casual 3D GIS dabbler, as most of the work may be small and the effects won’t be large enough to be noticeable, though when it gets to city size, you will certainly start seeing issues with the buildings at the periphery not sitting right (when you edit the centre to sit right).

In summary

Since joining Garsdale Design in January, I have been doing a LOT of 3D GIS and putting my geodetic knowledge to use ensuring things are correct. In the last six months I have had to use ArcGIS Pro more and more, partly due to functionality and partly due to the data formats which clients use. My opinion is that this software is becoming the one-stop shop for 3D GIS and that it is capable of supporting the whole project lifecycle… it could turn out to be the BIM solution that everyone has overlooked. More importantly, though, ESRI have done their homework and are supporting vertical datum transformations… okay, there are a few missing here and there, BUT the list is growing and they are putting these vertical transformations into the geoprocessing tools of ArcGIS Desktop too.

Screenshot of (a small portion of) the vertical transformation list

If you know of another 3D GIS, please let me know, as I’d be eager to try it. Please feel free to either contact me direct or through comment below.

QGIS – What do you do when you move your .qgs file?

What do you do when you move your file location in QGIS and lose all your links? Maybe try this….

So, the situation occurred yesterday where I was giving a workshop and sent out a load of QGIS styles, layer definition files and also a project file (.qgs)… Smugly, I told everyone to open the project file, then realised, as hands raised across the room, that QGIS doesn’t work with relative paths and it also doesn’t do a “map package”. Working with so many different GIS, it’s hard to keep track of which ones do what, but I really should have remembered this one.

Surprisingly, the solution to repairing all the links and getting it all up and running is relatively easy if you are working with file-based databases or vector files (shapefiles, etc.). Just make sure you have a text editor and away you go…

1

First of all, open the rogue .qgs file in your text editor; in the example above I am using Sublime Text, but during the workshop I found Windows Notepad was just as capable. Upon opening, you will see that the project file is just a standard XML file with references to your layers and data sources.

Use the “Find” option in your text editor to find one of the <datasource> tags (as shown above).

It is simply a case of changing the folders within that datasource tag to point to the correct location (most people store their data in a single location).

2

As you can see above, I want the project to read all the data from C:\OS Southampton rather than the G:\Work_Admin_Backup_Nov15\GIS Core Data\OS Southampton location, so using the REPLACE function (sometimes called find/replace in some text editors) we can simply change ALL the locations in one go.

Pretty easy huh? A lot easier than using the interface which is provided by QGIS for updating each file link individually; after all, most of the time we just change folders, we don’t scatter our data around a drive.

I am sure that this sort of functionality (changing the folder referenced by all the links) could be done in bash or added as an extra option within QGIS; if you know how, I look forward to hearing from you!
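In the meantime, here is a rough Python sketch of the same find-and-replace idea (the file names and paths are made up; the .qgs is just XML, so a plain text substitution is all that is going on):

```python
# Rewrite the datasource paths in a copied .qgs project file.
old_folder = "G:/Work_Admin_Backup_Nov15/GIS Core Data/OS Southampton"
new_folder = "C:/OS Southampton"

with open("workshop_project.qgs", "r", encoding="utf-8") as f:
    xml = f.read()

# Every <datasource> entry containing the old folder gets pointed at the new one.
xml = xml.replace(old_folder, new_folder)

with open("workshop_project_fixed.qgs", "w", encoding="utf-8") as f:
    f.write(xml)
```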

Dragons8mycat

Add OSTN15 to QGIS 2.16

As you may be aware, the United Kingdom has a new transformation model, OSTN15… But why? What does it mean for the geospatial community?

Without being too nerdy, tectonic plate movement means that the “model” surface (the geoid) is slowly drifting away from the best fit for the coordinate system. It has been 13 years since Ordnance Survey implemented OSTN02, so the shift since then is enormous… a whole 1 cm horizontally, and vertically it is 2.5 cm. See this article here from Ordnance Survey.

The whole story is that sensors, and our ability to calculate our position relative to the mathematical models, are constantly evolving too. So, just as OSTN02 revolutionised the accuracy of projecting GPS (WGS84) coordinates by using a grid transformation (250 points, compared with the 7 parameters used until 2002), OSTN15 still uses the OS Net of 250 points but has been improved further with 12 zero-order stations with an accuracy of 2 mm horizontal and 6 mm vertical.

So how will this change the way you use your GIS?

If you are already using OSTN02 for your transformations between EPSG:27700 and EPSG:4326, then you will only see a 5 cm improvement over a 1 m area at best, and that is in the worst places in the UK; on average you will only see a 2 cm improvement anywhere in the UK. To put this into context, when you are zoomed in on an A3 map to about 1:100, you are talking about a few pixels on the screen… it won’t be groundbreaking [at the moment].

Currently, as this goes to press, the OSTN15 transformation has only been available for a few weeks and it is still being tested on different software to ensure it works; I am told that ESRI UK have been testing it with their software as this is being written.

As with OSTN02, I’ve created a fix for QGIS and OSTN15, and I will describe how to implement it in this post.

It’s all about the Proj

Proj (Proj.4) is a cartographic projections library based on the work of Gerald Evenden of the USGS, circa 1980. Over time it has evolved to consume grid transformations and is used by GRASS GIS, MapServer, PostGIS, Thuban, OGDI, Mapnik, TopoCad and GDAL/OGR, as well as QGIS.

There are many ways to use Proj; without a GIS you can use it through the command line by defining parameters. QGIS uses the Proj library by way of a SpatiaLite database called srs.db. This is held at .\apps\qgis\resources\srs.db relative to the QGIS install folder on Windows (with an equivalent resources folder on Linux).

The Proj SpatiaLite database is a relational database which, when analysed, holds tables for coordinate systems, EPSG codes & transformations. What is really clever is that it recognises the direction of a transformation.
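If you are curious, you can peek inside srs.db with nothing more than Python’s built-in sqlite3 module; a quick sketch (the path shown is a typical OSGeo4W location and will differ on your machine):

```python
import sqlite3

# srs.db is an ordinary SQLite/SpatiaLite file, so we can list its tables directly.
conn = sqlite3.connect(r"C:\OSGeo4W64\apps\qgis\resources\srs.db")
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"):
    print(name)
conn.close()
```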

Why is direction important?

Most datum transformations are defined from the local geographic coordinate system to the global one, for example EPSG:4277 (OSGB36) to EPSG:4326 (WGS84); OSTN15 bucks the trend and is defined in the reverse direction, from 4326 to 4277.

As I found when I first tested OSTN15 with QGIS, I was getting a uniform 200 m shift in the data being transformed, and I was really confused. After talking with the grid file’s creator, I discovered that the file was defined from ETRS89 to OSGB36, hence the 200 m shift I was getting (the grid was being applied in the wrong direction).

QGIS is awesome; you’ve probably overlooked just how clever it is, and so did I. Next time you run a transformation, or when you try this one, you may notice that there are two fields noted in the columns SRC (source) and DST (destination)… and this is a godsend for solving this issue, as QGIS can apply the grid in either direction.

Transformation selection in QGIS

Show us the magic

So, I talked with Ordnance Survey and found that OSTN15 has been given the EPSG code 7709, and I created a new record in the srs.db which is distributed with the Windows, Linux & Mac releases. To utilise this, all you need to do is download the OSTN15 file from Ordnance Survey (here) and then place the OSTN15_NTv2.gsb file in the shared projections folder at .\share\proj\OSTN15_NTv2.gsb; this has been found to be correct on Mac and Windows (there should be something similar on Linux). You know it is the right folder as there should be other .gsb files in there!
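For reference, the custom definition behind such a record is essentially the standard British National Grid proj string with the OSTN15 grid bolted on via +nadgrids. A hedged sketch of using it directly from Python with pyproj (the grid file must already be somewhere on Proj’s search path, and the coordinates are made up):

```python
from pyproj import Transformer

# Standard EPSG:27700 parameters, plus the OSTN15 grid for the datum shift.
bng_ostn15 = ("+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 "
              "+x_0=400000 +y_0=-100000 +ellps=airy +units=m "
              "+nadgrids=OSTN15_NTv2.gsb +no_defs")

transformer = Transformer.from_crs("EPSG:4326", bng_ostn15, always_xy=True)
print(transformer.transform(-1.47, 50.93))  # made-up point, returns easting/northing
```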

The QGIS folder location

You can download the updated srs.db from here; this should be placed in the resources folder, which can be found at .\apps\qgis\resources\srs.db. I highly recommend renaming the existing srs.db file in this folder to something like srs.db.old before adding the new version, just in case it doesn’t work for your particular set-up, BUT it has been checked on Mac and Windows distributions of QGIS from version 2.12 through to QGIS 2.17.

Enjoy

Dragons8mycat

 

Many thanks to Ordnance Survey for their help

For further reading about the model for Great Britain and OSTN15, I recommend this paper: A Guide to Coordinate Systems in Great Britain.

QGIS CSV & Delimited text Issues

Originally posted on xyHt Magazine 10th August 2016

Last month I was at Maptime in Southampton (UK), helping new QGIS users learn how to join tables and map EU referendum results, when I came across an issue with something in QGIS I hadn’t spotted in the last *ahem* years of using it.

When you drag and drop TXT, CSV or other delimited files into QGIS, the fields all get imported as text. No, I’m not making it up, and it caused a lot of embarrassment when I was giving my demonstration.

 

By dragging and dropping the csv file, you can see that the field type is solely “String”

 

This isn’t written to complain about QGIS but to notify others who are wondering why their joins aren’t working or why their interpolation can’t pick up the value field… You QGIS guys are going to say, “why haven’t you raised this as an issue?” Well, firstly, read Nyall Dawson’s blog post on QGIS issues; secondly, I tried to… it turns out that the process for getting access to submit issues has changed, and even though I’ve asked for help to get access I’ve been waiting a month for a response to my request.

So…why does it happen?

If you add the file through the “Add Delimited Text Layer” button, none of this is an issue; this is due to the way that the software is written. When the file is dragged & dropped, the software relies on OGR to add it as a comprehensible layer, and this just renders all the fields as text (at present, August 2016).

By adding the csv file using the add layer method, you can see the fields are brought in correctly

Why is it an issue?

If you are joining tables and aren’t aware of the issue, you drag and drop a table with a list of numerical values in it and then can’t join it to your spatial data, because you can’t join text to numbers. This can also cause issues with interpolation (which needs to read a numeric value field) and with generating points which need classifying based on numbers.
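A quick way to confirm you have been bitten is to check the field types from the QGIS Python console; a small sketch (API names as in recent QGIS releases, with the dropped table selected in the Layers panel):

```python
# Run from the QGIS Python console with the dropped table as the active layer.
layer = iface.activeLayer()
for field in layer.fields():
    # A numeric column that reports "String" here is why your join is failing.
    print(field.name(), field.typeName())
```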

Getting it fixed…

This is where things get a little tricky, as I don’t think it is entirely a QGIS issue; it is more related to the code which QGIS relies on to parse the information, so until OGR update their code, it might be a bit of a wait.

 

Dragons8mycat

 

Using 3D Web Mapping to Model Offshore Archaeology

Ever since I started working in the renewables industry on offshore wind farms over 8 years ago and had to analyse shipwrecks, I have thought about how much more interactive and informative shipwreck analysis would be in 3D. There are many companies out there at the moment who produce the most amazing visualisations, where there is the ability to move along a fixed track to view a 2.5D wreck, but there is no ability to relate it to anything, no context, and normally the cost is extremely high, even though the data captured is usually geospatial and used within a GIS such as QGIS, ArcGIS or Fledermaus.

Here is an example of the amazing model of the James Eagen Layne created by Fourth Element and the model of the Markgraf Shipwreck by Scapa Flow Wrecks

Please don’t get me wrong, I admire these models and they provide detail and information that would be almost impossible to render in a GIS web map without some serious development and a lot of modelling, but technology has progressed. Five years ago I would have said that creating an offshore 3D web map was the stuff of dreams, whereas today it is a few clicks of the mouse. Using ESRI software, I was able to combine both terrain and bathymetry, adjust for tide datum differences, import a 3D model and then add links and images to the web map (called a ‘scene’).

The most exciting thing we found in developing this was the cost and time involved in implementing such a solution. With the ability to consume data from SketchUp, ESRI 3D models and even Google Earth models, we can reduce the time a scene takes to build from weeks to mere hours; the most time-consuming part is adding the links & getting the colours nice!! Have a look below at what we created:

Wreck of the James Eagen Layne from Garsdale Design Limited on CloudCities: https://cloudciti.es/scenes/SJaI7ZBO

The model can be navigated in a similar manner to Google Earth. The model should also be interactive, with the ability to click on areas of the wreck and have information returned on the right of the screen. If you look at the bottom left there is a set of icons which I will explain.

Overview of the buttons

Camera Button

 

The camera button, highlighted in green, provides access to the scene bookmarks; click on any of these and the scene will move to the view relating to the text. It will also alter the layers shown to provide the best view (according to the creator).

Animation button

The animation button, highlighted in green above, animates the scene by cycling through the bookmarks.

Layers button

The layers button allows access to the information relayed in the scene. By default, the tidal water is turned off and only one model is shown.

Light Simulation

The light simulation button provides the ability to cast shadows and simulate specific times of day. Although not really relevant for an underwater feature, it provides a method for viewing internal features better.

Mobile User Bonus Feature!

For those of you using a mobile device, you will notice one further button:

Cardboard button

Yes, the scene is fully 3D and the viewer fully supports Google Cardboard, so go ahead and have a go!

Future development

This is just the beginning; as you can see, this viewer is extremely lightweight and responsive. Moving forward, we (Garsdale Design Ltd) are looking at adding further information such as nearby wrecks, more detailed bathymetry, and objects which may cause risk, such as anchorages and vessel movements in the area. The potential is immense, and because this is geographic (hit the map button on the right) you can relate it to a real-world location… in future versions we are looking at implementing Admiralty charts and bathymetry maps to view side by side with the site.

Disclaimer

I am not an archaeologist or a diver! Data is sourced from open data sources (INSPIRE, EA lidar, Wikipedia) with the exception of the model(s), which were built by myself from images and multibeam data. Photos were obtained from Promare on the Liberty 70 project – contains public sector information licensed under the Open Government Licence v3.0. This data is not to be used for navigation or diving.

For further information or to ask how Garsdale Design can assist you, please do not hesitate to contact me.