The Problem with 3D GIS

3D GIS is game changing: it can change the way you view your analysis and provide insights you may otherwise have overlooked… but there is a little problem which I may not have shared in my deluge of blogs about how great 3D GIS is…

…Sometimes it can be hard work. What I mean by this is that it can sometimes need a lot more consideration than standard methods. Let’s look at some of the major culprits when using Esri ArcGIS Pro 1.4 & Desktop 10.5, keeping in mind that there aren’t many (or any) other 3D GIS packages you can do this kind of work in.

One of the 3D models built of Kendal, UK

Editing

Once you have converted or built your 3D model into a multipatch polygon you may find yourself struggling to edit or adjust your model.

Split

Having built entire cities, one issue that I’ve come across a fair bit is removing parts of multipatch polygons. For example, when adding multiple models, you may find that two features overlap and you need to remove part of one. In a standard 2D GIS, you would simply turn editing on and split the offending polygon.

This doesn’t work in a 3D GIS… Think about it: how does the GIS know where the planes along which you want to split lie within a 3D feature? 3D GIS can make some pretty clever assumptions, but I have yet to find a way to remove part of a complex multipatch polygon.

True, within ArcGIS Pro 1.4 there are some editing tools for 3D multipatch features, but it is early days at the moment. For simple cubes it is quite easy to adjust and manipulate the model a little, but you don’t stand a chance if you have a curved edge.

 

Don’t despair, don’t give up on the 3D just yet; as you know, 3D isn’t new and there are many “workarounds”. One of my favourites is using the ArcGIS Pro “Replace Multipatch” tool. If you want to make multiple edits to a model (feature), you can export the multipatch to COLLADA or Keyhole Markup Language (KML) format, edit it in SketchUp, Blender, MeshLab or any of your favourite modelling suites, and then import it again without affecting the other features in that layer.
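If you would rather script the export/import round trip than use the interactive tool, a minimal arcpy sketch might look like the following; the paths, field and file names are hypothetical, it assumes a 3D Analyst licence, and you should double-check the exact tool names against your ArcGIS Pro version (the interactive Replace Multipatch tool does the single-feature swap without any code):

    import arcpy

    arcpy.CheckOutExtension("3D")

    # export each multipatch feature to its own COLLADA file, named by the chosen field
    # (Conversion toolbox; all paths here are hypothetical)
    arcpy.conversion.MultipatchToCollada(r"C:\data\city.gdb\buildings",
                                         r"C:\data\collada_out",
                                         "PREPEND_NONE",
                                         "OBJECTID")

    # ... edit the exported .dae files in SketchUp, Blender or MeshLab ...

    # re-import an edited model as a new multipatch feature class (3D Analyst)
    arcpy.ddd.Import3DFiles(r"C:\data\collada_out\12.dae",
                            r"C:\data\city.gdb\building_12_edited")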

 

If you are extruding simple 2D polygons with the intention of creating 3D multipatch polygons, it is a good idea to hold off the conversion to multipatch until you explicitly need it. This way, you can edit, split and reshape your 2D polygon within normal ArcGIS Desktop without any issues at all, and then draw up and extrude when you are 100% sure it all fits and works okay.
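When you do finally need the multipatch, even that last conversion step can be scripted; a hedged sketch, assuming a layer file that already has extrusion defined on it (the paths are hypothetical):

    import arcpy

    arcpy.CheckOutExtension("3D")

    # a layer file saved with extrusion already set on the 2D footprints (hypothetical path)
    extruded_layer = r"C:\data\footprints_extruded.lyrx"

    # convert the extruded layer into a genuine multipatch feature class (3D Analyst)
    arcpy.ddd.Layer3DToFeatureClass(extruded_layer,
                                    r"C:\data\city.gdb\buildings_multipatch")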

 

In my experience, pre-planning and clarity about the end goal mean that you can be prepared for these slight niggles in advance.

Using 3D multipatches in ArcGIS Desktop

Overlay analysis 

So, I’ve built out an entire city of 3D buildings, it looks amazing, and the last thing to do is clip them by the city boundary… oh, I forgot, the 2D polygon doesn’t truly intersect the multipatch polygon in 3D space, so the “Clip” tool doesn’t work, nor does the “Intersect” tool or the “Merge” tool… in fact, you can forget about using ModelBuilder.

Clip features

I learned the hard way that 3D features work best with 3D features. Unless the boundary polygon is a 3D feature, you won’t be able to use any kind of overlay querying.

But, there is always a way to trick the system…

Although the multipatch polygon feature is a 3D object, you can still open it in ArcGIS Desktop, meaning that you can use the good ol’ fashioned “Select By Location” tool. No, it shouldn’t work, but it does; furthermore, you can then export your selected data to a new multipatch feature. It begs the question why you can do this and not the “Clip”, as, in geoprocessing terms, they are much the same thing (the Clip tool is just the separate steps combined into a single script).
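The same trick works in a script; a minimal sketch (the dataset paths are hypothetical):

    import arcpy

    # make a selectable layer from the multipatch feature class (hypothetical paths)
    arcpy.management.MakeFeatureLayer(r"C:\data\city.gdb\buildings", "buildings_lyr")

    # select every building that intersects the 2D boundary polygon...
    arcpy.management.SelectLayerByLocation("buildings_lyr", "INTERSECT",
                                           r"C:\data\city.gdb\city_boundary")

    # ...and export the selection to a new multipatch feature class
    arcpy.management.CopyFeatures("buildings_lyr",
                                  r"C:\data\city.gdb\buildings_clipped")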

Let’s not get too hung up on it, as it works.

Volumetric Analysis

If you thought that the “volume” tool was just put there to look clever, think again. It can be a great tool for easily representing floor space or calculating the tonnage of aggregate that needs extracting from the ground to lay a new pipeline.

There is a slight problem though…it can be a little hit and miss, especially when using sourced models.

Calculating Volume

Try adding a model you built in SketchUp, Blender or MeshLab to ArcGIS Pro and then calculating volume (using the “Add Z Information” tool). Nothing, right? But why? The reason is that the tool doesn’t like “open” multipatch 3D features, ones that aren’t fully enclosed. Even if there is a sliver of a gap in the polygon and it isn’t 100% enclosed, the tool cannot calculate the volume.

There are other methods whereby you can use a raster surface, like the “Surface Volume” tool, but this isn’t quite as accurate as using your super-detailed vector multipatch.

You could try the “Enclose Multipatch” tool, which closes the multipatch, and then run the volume tool, BUT you need to consider that unless the multipatch is cut to the surface (for example, where a building sits on a hill and the base isn’t perfectly flat), the volume will not be ideal. So please consider using the data as a high-resolution TIN which is merged with the terrain to provide a more accurate volume result.
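For what it’s worth, the enclose-then-measure sequence scripts neatly; a hedged arcpy sketch, assuming the 3D Analyst extension (paths hypothetical):

    import arcpy

    arcpy.CheckOutExtension("3D")

    # close up any gaps so the multipatch becomes a watertight solid (hypothetical paths)
    arcpy.ddd.EncloseMultiPatch(r"C:\data\city.gdb\buildings",
                                r"C:\data\city.gdb\buildings_enclosed")

    # write a Volume attribute onto each (now enclosed) multipatch feature
    arcpy.ddd.AddZInformation(r"C:\data\city.gdb\buildings_enclosed", "VOLUME")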

Oh, last point on this: make sure you use a projected coordinate system with metric units for your data; a geographic coordinate system will leave you with your volume in degrees… is that even possible?!

Which brings me nicely to –

Issues occur when you don’t specify the datum

Vertical referencing

I distinctly remember my first adventures into 3D through Google Earth, trying to create some of Romsey, UK as 3D buildings using Sketchup. The first hurdle was always figuring out whether the building was “Absolute height”, “On the Ground” or “Relative to the ground”…I mean, what does that mean anyway? I just drew it to sit on the floor, why does it need to ask me more questions about it?

Correct Height Placement

Right now, if you are a regular reader of xyHt or a 3D Geoninja, you will be calling me a muppet. In reality, though, you wouldn’t believe how often I am asked about this, especially now that Digital Surface Models (DSMs), Digital Terrain Models (DTMs), bathymetry and other elevation data are so readily available.

Elevation is never easy; worse still, there are either too many or too few options. With Esri ArcGIS Pro, I am 100% confident that I know where my data sits within 3D space, but that is only because I’ve worked with this day in and day out and understand the limitations and data sources.

Let’s consider the Esri “scene”: it’s a cool 3D map, and as you zoom into that lovely globe you can see lovely mountains and valleys all popping out of the surface. My question to you is: what elevation data is it using? What is the resolution of that data? You see, I love that Esri provide a detailed, complete-coverage elevation surface for the entire globe, but the flip side is that you cannot easily know the exact limitations of that surface (the information is provided by Esri, but it is not a simple “point & click” exercise).

My words of advice here are to use your own terrain when placing 3D multipatch features. That way you are in control of both the vertical datum and the resolution of the heights.

While I’m here, I want to also point out that there isn’t a “snap to ground” feature in the editing tools within ArcGIS Pro either. This becomes an issue when you bring in a model which isn’t vertically referenced and has no vertical datum, because you then need to sit it on the surface yourself. Even when your model is a captured point cloud accurate to 0.5cm, you have no way to accurately place it on the ground. You can adjust it up and down and sit it by sight, though you cannot “snap” it.

The big takeaway here is, firstly, that you need to be confident in and know your elevation data if you plan to work in the 3D scene views, and secondly, that you need to set up your x, y & z coordinate systems correctly from the start to ensure that all the work you do is as precise as possible.

…and yes, I now know the difference between “absolute”, “relative to the ground” & “on the ground”….maybe an interesting blog for another day, though feel free to contact me if you need quicker answers!

And everything else

There are still many things I have not had a chance to mention, for example the complexities of cartographic representation using 3D models in a GIS, ways of minimising the clashing of overlapping data, plus other 3D-centric issues such as shadow and light. Maybe a blog for another day?…

Dragons8mycat

QGIS – What do you do when you move your .qgs file?

What do you do when you move your file location in QGIS and lose all your links? Maybe try this….

So, the situation occurred yesterday where I was giving a workshop and sent out a load of QGIS styles, layer definition files and also a project file (.qgs)… Smugly, I told everyone to open the project file, then realised, as hands were raised across the room, that QGIS doesn’t work with relative paths and it also doesn’t do a “map package”. Working with so many different GIS packages, it’s hard to keep track of which ones do what, but I really should have remembered this one.

Surprisingly, the solution to repairing all the links and getting everything up and running is relatively easy if you are working with disconnected databases or vector files (shapefiles etc). Just make sure you have a text editor and away you go…

[Screenshot 1: the .qgs file open in a text editor, showing a <datasource> tag]

First of all, open the rogue .qgs file in your text editor; in the example above I am using Sublime Text, but during the workshop I found Windows Notepad was just as capable. Upon opening, you will see that the project file is just a standard XML file with references to all of its layers and their datasources.

Use the “Find” option in your text editor to find one of the <datasource> tags (as shown above).

It is then simply a case of changing the folder paths within that datasource tag to point to the correct location (most people store their data in a single location).

[Screenshot 2: using find/replace to update all the datasource paths in one go]

As you can see above, I want the project to read all the data from C:\OS Southampton rather than the G:\Work_Admin_Backup_Nov15\GIS Core Data\OS Southampton location, so using the REPLACE function (sometimes called find/replace in some text editors) we can simply change ALL the locations in one go.

Pretty easy, huh? A lot easier than using the interface provided by QGIS for updating each file link individually; after all, most of the time we just change folders, we don’t scatter our data around a drive.

I am sure that this sort of functionality (changing the folder referenced by all the links) could be done in bash or added as an extra option within QGIS; if you know how, I look forward to hearing from you!
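In the meantime, here is a rough Python sketch of the same find/replace trick; the project and folder names are hypothetical, and take a copy of your .qgs first, as this rewrites it in place:

    # rewrite the datasource paths in a .qgs project file (a .qgs is just XML)
    old_root = r"G:\Work_Admin_Backup_Nov15\GIS Core Data\OS Southampton"  # hypothetical
    new_root = r"C:\OS Southampton"                                        # hypothetical

    with open("project.qgs", encoding="utf-8") as f:
        xml = f.read()

    with open("project.qgs", "w", encoding="utf-8") as f:
        f.write(xml.replace(old_root, new_root))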

Dragons8mycat

Add OSTN15 to QGIS 2.16

As you may be aware, the United Kingdom has a new transformation model, OSTN15… But why? What does it mean to the geospatial community?

Without being too nerdy: tectonic plate movement means that the “model” surface (the geoid) is slowly drifting from the best fit for the coordinate system. It has been 13 years since Ordnance Survey implemented OSTN02, so the shift since then is enormous… a whole 1cm horizontally, and 2.5cm vertically. See this article from Ordnance Survey.

The whole story is that sensors, and our ability to calculate our position relative to the mathematical models, are constantly evolving too. So, just as OSTN02 revolutionised the accuracy of projecting GPS (WGS84) coordinates by using a grid transformation (250 points, over the 7 parameters used until 2002), OSTN15 uses the same OS Net points but has been improved further with 12 zero-order stations with accuracies of 2mm horizontally and 6mm vertically.

So how will this change the way you use your GIS?

If you are already using OSTN02 for your transformations between EPSG 27700 and EPSG 4326, then you will only see a 5cm improvement over a 1m area at best, and that is based on the worst places in the UK; on average you will only see a 2cm improvement anywhere in the UK. To put this into context, when you are zoomed in on an A3 map to about 1:100, you are talking about a few pixels on the screen… it won’t be groundbreaking [at the moment].

As this goes to press, the OSTN15 transformation has only been available for a few weeks and it is still being tested on different software to ensure it works; I am told that ESRI UK have been testing it with their software as this is being written.

As with OSTN02, I’ve created a fix for QGIS for OSTN15, and I will describe how to implement it below.

It’s all about the Proj

Proj (Proj.4) is a cartographic projections library based on the work of Gerald Evenden of USGS, dating back to circa 1980. Over time it has evolved to consume grid transformations, and it is used by GRASS GIS, MapServer, PostGIS, Thuban, OGDI, Mapnik, TopoCad, GDAL/OGR as well as QGIS.

There are many ways to use Proj; without a GIS, you can use it through a command line by defining parameters. QGIS uses the Proj library by accessing a SpatiaLite database called srs.db. This is held at .\apps\qgis\resources\srs.db in Windows and Linux.
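As a taste of the parameter-driven route, here is a hedged sketch using pyproj (a Python wrapper around Proj, assumed installed); the proj string is the standard British National Grid definition with the OSTN15 grid shift applied via +nadgrids, and it assumes OSTN15_NTv2.gsb sits on Proj’s search path:

    from pyproj import Transformer

    # OSGB36 / British National Grid, with the OSTN15 grid shift applied
    bng_ostn15 = ("+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 "
                  "+x_0=400000 +y_0=-100000 +ellps=airy +units=m "
                  "+nadgrids=OSTN15_NTv2.gsb +no_defs")

    # transform a WGS84 longitude/latitude pair into eastings/northings
    transformer = Transformer.from_crs("EPSG:4326", bng_ostn15, always_xy=True)
    easting, northing = transformer.transform(-1.54, 55.5)
    print(easting, northing)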

The Proj SpatiaLite database is a relational database which, when analysed, holds tables for coordinate systems, EPSG codes & transformations. What is really clever is that it recognises the direction of a transformation.

Why is direction important?

Most coordinate transformations go from the projected coordinate system to the geographic coordinate system, for example EPSG 4277 to EPSG 4326; OSTN15 bucks the trend and works in the reverse direction, from 4326 to 4277.

When I first tested OSTN15 with QGIS, I was getting a uniform 200m shift in the translated data and I was really confused. After talking with the grid file’s creator, I discovered that the file was created from ETRS89 to OSGB36, hence the 200m shift I was getting.

QGIS is awesome; you’ve probably overlooked just how clever it is, and so did I. Next time you run a transformation, or when you try this one, you may notice that there are two fields noted in the columns SRC (source) and DST (destination)… and this is a godsend for solving this issue, as QGIS can read the transformation in both directions.

[Screenshot: the transformation dialog in QGIS]

Show us the magic

So, I talked with Ordnance Survey and found that OSTN15 has been given the EPSG code 7709, and I created a new record within the srs.db which is distributed with the Windows, Linux & Mac releases. To utilise this, all you need to do is download the OSTN15 file from Ordnance Survey (here) and then place the OSTN15_NTv2.gsb file in the shared projections folder, .\share\proj\OSTN15_NTv2.gsb; this has been found to be correct on Mac and Windows (there should be a similar location in Linux). You know it is the right folder as there should be other .gsb files in there!

[Screenshot: the QGIS folder location]

You can download the updated srs.db from here; this should be placed in the resources folder, which can be found at .\apps\qgis\resources\srs.db. I highly recommend renaming the existing srs.db in this folder to something like srs.db.old before adding the new version, just in case it doesn’t work for your particular set-up, BUT it has been checked on Mac and Windows distributions of QGIS from version 2.12 through to QGIS 2.17.

Enjoy

Dragons8mycat

 

Many thanks to Ordnance Survey for their help

For further reading about the model for Great Britain and OSTN15, I recommend this paper: A Guide to Coordinate Systems in Great Britain.

QGIS CSV & Delimited text Issues

Originally posted on xyHt Magazine 10th August 2016

Last month I was at Maptime in Southampton (UK), helping new QGIS users join tables and map EU referendum results, when I came across an issue with QGIS that I hadn’t spotted in the last *ahem* years of using it.

When you drag and drop txt, csv or other delimited files into QGIS, the fields automatically get converted to text format. No, I’m not making it up, and it caused a lot of embarrassment when I was giving my demonstration.

 

By dragging and dropping the csv file, you can see that the field type is solely “String”

 

This isn’t written to complain about QGIS but to notify others who are wondering why their joins aren’t working or why their interpolation can’t pick up the value field… You QGIS guys are going to ask “why haven’t you raised this as an issue?”; well, firstly, read Nyall Dawson’s blog post on QGIS issues, and secondly, I tried to… it turns out that the way you get access to submit issues has changed, and even though I’ve asked for help to get access, I’ve been waiting a month for a response to my request.

So…why does it happen?

If you add the file through the “add delimited file” button, none of this is an issue; this is due to the way that the software is written. When the file is “dragged & dropped”, the software relies on OGR to add it as a comprehensible layer, and this just renders all the fields as text (at present, August 2016).

By adding the csv file using the add layer method, you can see the fields are brought in correctly

Why is it an issue?

If you are joining tables and aren’t aware of the issue, you drag and drop a table with a list of numerical values in it, and then can’t join it to spatial data, as you can’t join text to numbers. This could also cause issues with interpolation (reading a value field) and with generating points which need classification based on numbers.

Getting it fixed…

This is where things get a little tricky, as I don’t think it is entirely a QGIS issue; it’s more related to the code which QGIS uses to parse the information, so until OGR update their code, it might be a bit of a wait.
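In the meantime, one workaround worth knowing about is OGR’s .csvt “sidecar” file: save a file alongside your CSV with the same name but a .csvt extension, containing a single line of quoted field types, and OGR should use those types rather than guessing. For a hypothetical results.csv holding an area code, an area name and a numeric vote count, results.csvt would contain:

    "String","String","Integer"

Valid types include "Integer", "Real", "String" and "Date".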

 

Dragons8mycat

 

Using 3D Web Mapping to Model Offshore Archaeology

Ever since I started working in the renewables industry on offshore wind farms over 8 years ago and had to analyse shipwrecks, I have thought about how much more interactive and informative shipwreck analysis would be in 3D. There are many companies out there at the moment who produce the most amazing visualisations, where there is the ability to move along a fixed track to view a 2.5D wreck, but there is no ability to relate it to anything, no context, and normally the cost is extremely high, even though the data captured is normally geospatial and used within a GIS such as QGIS, ArcGIS or Fledermaus.

Here are two examples: the amazing model of the James Eagen Layne created by Fourth Element, and the model of the Markgraf shipwreck by Scapa Flow Wrecks.

Please don’t get me wrong, I admire these models and they provide detail and information that would be almost impossible to render in a GIS web map without some serious development and a lot of modelling, but technology has progressed. Five years ago I would have said that creating an offshore 3D web map was the stuff of dreams, whereas today it is a few clicks of the mouse. Using ESRI software, I was able to combine both terrain and bathymetry, adjust for tide datum differences, import a 3D model and then add links and images to the web map (called a “scene”).

The most exciting thing we found in developing this was the cost and time of implementing such a solution. With the ability to consume data from SketchUp, ESRI 3D models and even Google Earth models, we can reduce the time which a scene takes to build from weeks to mere hours; the most time-consuming part is adding the links & getting the colours nice!! Have a look below at what we created:

Wreck of the James Eagen Layne from Garsdale Design Limited on CloudCities: https://cloudciti.es/scenes/SJaI7ZBO

The model can be navigated in a similar manner to Google Earth, and it should also be interactive, with the ability to click on areas of the wreck and have information returned on the right of the screen. If you look at the bottom left there is a set of icons, which I will explain.

Overview of the buttons

Camera Button

 

The camera button, highlighted in green, provides access to the scene bookmarks; click on any of these and the scene will move to the view relating to the text. It will also alter the layers shown to provide the best view (according to the creator).

Animation button

The animation button, highlighted green above, animates the scene by cycling through the bookmarks

Layers button

The layers button allows access to the information relayed on the scene. By default, the tidal water is turned off and only one model is shown.

Light Simulation

The light simulation button provides the ability to cast shadows and simulate specific times of day. Although not really relevant for an underwater feature, it provides a method for viewing internal features better.

Mobile User Bonus Feature!

For those of you using a mobile device, you will notice one further button:

Cardboard button

Yes, the scene is fully 3D and the viewer fully supports Google Cardboard, so go ahead and have a go!

Future development

This is just the beginning; as you can see, this viewer is extremely lightweight and responsive. Moving forward, we (Garsdale Design Ltd) are looking to add further information such as nearby wrecks, more detailed bathymetry, and objects which may cause risk, such as anchorages and vessel movement in the area. The potential is immense, and where this is geographic (hit the map button on the right) you can relate this to a real-world location… in future versions we are looking at implementing Admiralty charts and bathymetry maps to view side by side with the site.

Disclaimer

I am not an archaeologist or a diver! Data is sourced from open data sources (Inspire, EA Lidar, Wikipedia), with the exception of the model(s), which were built by myself from images and multibeam data. Photos were obtained from Promare, on the Liberty 70 project. Contains public sector information licensed under the Open Government Licence v3.0. This data is not to be used for navigation or diving.

For further information or to ask how Garsdale Design can assist you, please do not hesitate to contact me.

How to Grayscale ArcGIS Pro Vector Symbology

Most of the time, ESRI software is great; it does [mostly] what you ask of it, and as long as you aren’t doing anything too crazy, it behaves. We all know that it has its ‘unique-ness’ about it; after using it for a few years you start to ask “why don’t they do this…” or “how come I can’t do that…”. Well, a lot of this is being addressed in ArcGIS Pro; it has already answered the question of why we needed three different GIS applications (ArcGIS Desktop, ArcScene & ArcGlobe) by bundling them all up into one package. Now (with 1.3) we are starting to see other features which we always wanted in ArcGIS Desktop coming into ArcGIS Pro; case in point, converting symbology to grayscale.

Today, while creating a basemap, I discovered that ESRI have implemented a couple of neat little touches. Firstly: RGB VALUES ON HOVER.

Hovering the mouse provides RGB values

Although this isn’t groundbreaking, it is a nice little touch which, for us cartophiles and OCD cartographers, provides a quick and easy bit of feedback.

The other discovery was the option to grayscale the symbology. The new ArcGIS Pro can be a little tricky to get your head around, so it is understandably not obvious, but I went to change the RGB values on a piece of road and found another option: GRAYSCALE.

Grayscale dropdown

Selecting “Grayscale” takes you to this menu:

Grayscale removes colour while retaining its presence.

 

Okay, so this isn’t groundbreaking, BUT having played with Photoshop a little, I’ve found that the RGB value which is automatically given is almost a perfect match for what you get if you desaturate the colour.
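If you fancy sanity-checking that for yourself, here is a rough Python sketch of the two common conversions (my assumption about the formulae at play, not Esri’s documented method): Photoshop-style desaturation keeps the lightness, (max + min) / 2, while the popular luminosity method weights the channels by perceived brightness.

    def desaturate(r, g, b):
        # Photoshop-style desaturation: lightness = (max + min) / 2
        grey = round((max(r, g, b) + min(r, g, b)) / 2)
        return (grey, grey, grey)

    def luminosity(r, g, b):
        # weighted channels, approximating perceived brightness
        grey = round(0.299 * r + 0.587 * g + 0.114 * b)
        return (grey, grey, grey)

    print(desaturate(200, 120, 40))  # (120, 120, 120)
    print(luminosity(200, 120, 40))  # (135, 135, 135)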

What does all this mean? It means that you can easily and confidently convert your vector symbology to grayscale without guesswork! Creating alternative grayscale maps should now be a lot easier! Now, the question is, will this ever make it to ArcGIS Desktop?!

Dragons8mycat

Do your work right and you can be smart too

Originally published in xyHt

Ever since I first saw the phrase “smart city”, I have cringed. Not because of the term itself but because of what it alludes to. To me it says that we (geospatial experts) haven’t done our work right… let me explain.

From Wikipedia:

“A smart city is an urban development vision to integrate multiple information and communication technology (ICT) solutions in a secure fashion to manage a city’s assets”

Now, my understanding, as a person who uses a GIS on a daily basis, was that a GIS is used to overlay and integrate multiple layers of information to gain insight and manage a project more efficiently… so, in reality, these two aren’t too dissimilar. In fact, when you look into it further, the [smart] platform is pretty much a GIS which links to live data and to data which is structured to be interlinked [each dataset is linked to all the others]… oh, and of course, there is some form of asset management, usually in the form of a CMS [Content Management System].

[Chart: smart city graph]

I guess my point is that I’m frustrated that many of us geospatial experts aren’t being “smart” with our data, and, hands up, at times I can be one of you: I download a load of data, put it in my geodatabase and don’t think twice about it until someone asks for it.

Here is a great example. I was working on site analysis for wind farms, and the job pretty much involved loading in all the environmental, physical and topographical constraints, overlaying them and finding the gaps; the way it has been done for generations. Except I woke up one day and thought, “why am I doing this?”… so I looked at the data I was using and started to build a model (in ESRI ModelBuilder). What the model did was take all the files, spatially join them (merging them with their attributes intact) and then run a few tasks to turn the gaps in the data into polygons. I then made a centroid from each polygon and THEN did another spatial join on the data using the nearby setting.

What I ended up with was a fully automated way to find the best sites for a wind farm which also reported back (in a spreadsheet) what the nearest constraints were. Over time I found there was other data I could build into it, like land use, Land Registry land type (freehold/leasehold) and even some analysis to provide slope, average sun and aspect. Yes, 3 years ago I was working “smart”… unfortunately too smart for the company, as this new-fangled technology wasn’t deemed as good as having somebody rummage through by hand to find the best locations (even though the best sites were the ones the computer picked!).

Let’s have a look at the principle behind this :

Knowing that we were trying to find areas suitable for wind farms: the area needs to be unbuilt land, not within 250m of a building; it shouldn’t be closer than 40km to an airport (though it could be); it shouldn’t be anywhere too steep or next to an existing wind farm; and obviously it shouldn’t be in any of the environmentally sensitive areas.

Most of the data is open data –

Environmental constraints: Natural England

Wind farms: The Crown Estate and Restats

Land Registry land type: Land Registry

Land Use (rough):

Topography (buildings, terrain): Ordnance Survey vectormap, Strategi & Open Map

And (curiously enough) farms, restaurants, business parks and other points of interest were taken from my SatNav (extracted as a csv)

The model would then look a little like this:

[Flowchart: the wind farm site-finding model]
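For flavour, here is a hedged arcpy sketch of the core of that model (every dataset name is hypothetical, and the real model had far more inputs):

    import arcpy

    arcpy.env.workspace = r"C:\data\windfarm.gdb"  # hypothetical workspace

    # 1. combine every constraint layer into a single footprint
    arcpy.management.Merge(["env_constraints", "building_buffers", "steep_slopes"],
                           "constraints_all")
    arcpy.management.Dissolve("constraints_all", "constraints_dissolved")

    # 2. the gaps (candidate sites) are the study area minus the constraints
    arcpy.analysis.Erase("study_area", "constraints_dissolved", "candidate_sites")

    # 3. a centroid for each candidate polygon
    arcpy.management.FeatureToPoint("candidate_sites", "candidate_points", "INSIDE")

    # 4. a CLOSEST spatial join reports the nearest constraint for each site
    arcpy.analysis.SpatialJoin("candidate_points", "constraints_all",
                               "sites_with_nearest", match_option="CLOSEST")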

But there are other ways to be smart

The former method uses a spatial join technique whereby features which lie in the same location are combined into one large dataset which can be interrogated. Another technique is to join tables of related information to enhance the data about a location; this is quite commonly used in demographics but can be used anywhere.

A great example of this would be the neighbourhood statistics websites, which provide information about your locality… let’s have a look at how this can be done with openly available data.

If we download the Super Output Areas (average population approx. 1,000) from National Statistics, we can then join most of their data based on the Super Output Area [SOA] ID.

Output area types

By joining the area code to the area code in the table we can extract informative data

As you can see, this can be used to create much more informative data; some software vendors might even call it “enriched” data, and it is extremely easy to do.
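As a sketch of just how easy, here is the same join done in Python with geopandas and pandas (the file and column names are hypothetical):

    import geopandas as gpd
    import pandas as pd

    # the SOA boundaries, carrying an area-code column (hypothetical files/columns)
    soa = gpd.read_file("super_output_areas.shp")
    stats = pd.read_csv("ons_statistics.csv")

    # join the statistics table to the geometry on the shared SOA code
    enriched = soa.merge(stats, on="SOA_CODE", how="left")
    enriched.to_file("soa_enriched.shp")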

….and then you realise that you can THEN spatially join this data to buildings, political boundaries, offices and all other types of data to extract SMART data about the locations.

My challenge to you today is to “enrich” the next data you use, if only for your own satisfaction: add some demographic data to it, add some Wikipedia data to it, spatially join it with the INSPIRE Land Registry polygons (while you can)… go on, do it… that sense of satisfaction, THAT is why you do GIS.

 

Nick D