The Problem with 3D GIS

3D GIS is game changing: it can change the way you view your analysis and provide insights you may have overlooked….but there is a little problem which I may not have shared in my deluge of blogs about how great 3D GIS is…..

…Sometimes it can be hard work. What I mean by this is that it can sometimes need a lot more consideration than standard methods. Let’s look at some of the major culprits when using Esri ArcGIS Pro 1.4 & Desktop 10.5, keeping in mind that there aren’t many (or any) other 3D GIS packages you can do this kind of work in.

One of the 3D models built of Kendal, UK

Editing

Once you have converted or built your 3D model into a multipatch polygon you may find yourself struggling to edit or adjust your model.

Split

Having built entire cities, one issue that I’ve come across a fair bit is removing parts of multipatch polygons. For example, when adding multiple models, you may find that two features overlap and you need to remove part of one. In a standard 2D GIS, you would simply turn editing on and split the offending polygon.

This doesn’t work in a 3D GIS…Think about it: how does the GIS know where the planes you want to break along lie within a 3D feature? 3D GIS can make some pretty clever assumptions, but I have yet to find a way to remove part of a complex multipatch polygon.

True, within ArcGIS Pro 1.4 there are some editing tools for 3D multipatch features, but it is early days at the moment. For simple cubes it is quite easy to adjust and manipulate the model a little, but you don’t stand a chance if you have a curved edge.


Don’t despair, don’t give up on the 3D just yet; as you know, 3D isn’t new and there are many “workarounds”. One of my favourites is the ArcGIS Pro “Replace Multipatch” tool. If you want to make multiple edits to a model (feature), you can export the multipatch to COLLADA or Keyhole Markup Language (KML) format, edit it in SketchUp, Blender, MeshLab or any of your favourite modelling suites, and then import it again without affecting the other features in that layer.
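
The export step can even be scripted with the “Multipatch To COLLADA” geoprocessing tool. A minimal sketch, assuming a 3D Analyst licence; the layer name, output folder and field name here are hypothetical:

```python
import arcpy

arcpy.CheckOutExtension("3D")

# Export the multipatch features to COLLADA files for editing in
# SketchUp/Blender/MeshLab; once edited, bring them back in with the
# interactive Replace Multipatch tool in ArcGIS Pro.
arcpy.MultipatchToCollada_3d(
    in_features="buildings_to_fix",        # hypothetical multipatch layer
    output_folder=r"C:\models\collada",    # hypothetical folder
    prepend_source="PREPEND_NONE",
    field_name="BuildingID")               # names the output files
```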


If you are extruding simple 2D polygons with the intention of creating 3D multipatch polygons, it is a good idea to hold off on the conversion to multipatch until you explicitly need it. This way, you can edit, split and reshape your 2D polygon within normal ArcGIS Desktop without any issues at all, and then draw up and extrude when you are 100% sure it all fits and works okay.
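
If you do script that final step, the “Layer 3D To Feature Class” tool is one way to bake an extruded layer into a multipatch. A rough sketch, under the assumption that you have saved a layer file from a scene with extrusion already set on a height field (path and names are hypothetical):

```python
import arcpy

arcpy.CheckOutExtension("3D")

# The layer file must carry 3D display properties (extrusion on a height
# field); the tool converts what is drawn in 3D into a multipatch class.
arcpy.Layer3DToFeatureClass_3d(
    in_feature_layer=r"C:\work\footprints_extruded.lyrx",
    out_feature_class="buildings_multipatch")
```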


In my experience, pre-planning and clarity about the end goal mean that you can be prepared for these slight niggles in advance.

Using 3D multipatches in ArcGIS Desktop

Overlay analysis 

So, I’ve built out an entire city of 3D buildings and it looks amazing; the last thing to do is clip them by the city boundary….oh, I forgot, the 2D polygon doesn’t truly intersect the multipatch polygon in 3D space, so the “Clip” tool doesn’t work, nor do the “Intersect” or “Merge” tools…in fact, you can forget about ModelBuilder.

Clip features

I learned the hard way that 3D features work best with 3D features. Unless the boundary polygon is a 3D feature, you won’t be able to run any kind of overlay query.

But, there is always a way to trick the system…

Although the multipatch polygon feature is a 3D object, you can still open it in ArcGIS Desktop, meaning that you can use the good ol’ fashioned “select by location” tool. No, it shouldn’t work, but it does; furthermore, you can then export your selected data to a new multipatch feature. It begs the question why you can do this and not the “Clip”, as, in geoprocessing terms, they are much the same thing (the Clip tool is just the separate steps combined into a single script).
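
Scripted, the trick is just a selection followed by a copy. A minimal sketch; the layer names are hypothetical:

```python
import arcpy

# Multipatch buildings and a plain 2D boundary polygon (hypothetical names).
arcpy.MakeFeatureLayer_management("city_buildings", "bldg_lyr")
arcpy.SelectLayerByLocation_management("bldg_lyr", "INTERSECT", "city_boundary")
# Export the selection to a new multipatch feature class - the "clip" we wanted.
arcpy.CopyFeatures_management("bldg_lyr", "buildings_in_city")
```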

Let’s not get too hung up on it, as it works.

Volumetric Analysis

If you thought that the “volume” tool was just put there to look clever, think again. It can be a great tool for easily representing floor space or calculating the tonnage of aggregate that needs extracting from the ground to lay a new pipeline.

There is a slight problem though…it can be a little hit and miss, especially when using sourced models.

Calculating Volume

Try adding a model you built in SketchUp, Blender or MeshLab to ArcGIS Pro and then calculating the volume (using the Add Z Information tool). Nothing, right? But why? The reason is that the tool doesn’t like “open” multipatch 3D features, where the model isn’t fully enclosed. Even if there is a sliver of a gap in the polygon and it isn’t 100% enclosed, the tool cannot calculate the volume.

There are other methods, such as using a raster surface with the “Surface Volume” tool, but this isn’t quite as accurate as using your super-detailed vector multipatch.

You could try the “Enclose Multipatch” tool, as this closes the multipatch, and then run the volume tool, BUT you need to consider that unless the multipatch is cut to the surface (for example, where a building sits on a hill and the base isn’t perfectly flat), the volume will not be ideal. So please consider using the data as a high-resolution TIN, merged with the terrain, to provide a more accurate volume result.
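
In script form the two steps chain together like this; a sketch under the assumption that your multipatch just needs its gaps closing (layer names are hypothetical):

```python
import arcpy

arcpy.CheckOutExtension("3D")

# Close any gaps so the feature becomes a measurable solid.
arcpy.EncloseMultiPatch_3d("buildings_open", "buildings_closed", 0.05)
# Add Z Information writes a Volume field onto each closed multipatch.
arcpy.AddZInformation_3d("buildings_closed", "VOLUME")
```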

Oh, last point on this – make sure you use a projected coordinate system with metric units for your data; a geographic coordinate system will leave you with your volume in degrees….is that even possible?!

Which brings me nicely to –

Issues occur when you don’t specify the datum

Vertical referencing

I distinctly remember my first adventures into 3D through Google Earth, trying to create some of Romsey, UK as 3D buildings using SketchUp. The first hurdle was always figuring out whether the building was “Absolute height”, “On the Ground” or “Relative to the ground”…I mean, what does that mean anyway? I just drew it to sit on the floor; why does it need to ask me more questions about it?

Correct Height Placement

Right now, if you are a regular reader of xyHt or a 3D Geoninja, you will be calling me a muppet. In reality though, you wouldn’t believe how often I am asked about this, especially now that Digital Surface Models (DSMs), Digital Terrain Models (DTMs), bathymetry and other elevation data are so readily available.

Elevation is never easy; worse still, there are either too many or too few options. With Esri ArcGIS Pro, I am 100% confident that I know where my data sits within 3D space, but that is only because I’ve worked with this day in and day out and understand the limitations and data sources.

Let’s consider the Esri “scene” – it’s a cool 3D map, and as you zoom into that lovely globe you can see lovely mountains and valleys all popping out of the surface. My question to you is: what elevation data is it using? What is the resolution of that data? You see, I love that Esri provide a detailed, complete-coverage elevation surface for the entire globe, but the flip side is that you cannot easily know the exact limitations of that surface (the information is provided by Esri, but it is not a simple “point & click” exercise).

My words of advice here are to use your own terrain when placing 3D multipatch features. That way you are in control of both the vertical datum and the resolution of the heights.

While I’m here, I want to also point out that there isn’t a “snap to ground” feature in the editing tools within ArcGIS Pro either. This becomes an issue when you bring in a model which isn’t vertically referenced and has no vertical datum, because you then need to sit it on the surface yourself. Even when your model is a captured point cloud accurate to 0.5cm, you have no way to accurately place it on the ground. You can adjust it up and down and sit it by sight, though you cannot “snap” it.

The big takeaway here is, firstly, that you need to be confident in and know your elevation data if you plan to work in 3D scene views, and secondly, that you need to set up your x, y & z coordinate systems correctly from the start to ensure that all the work you do is as precise as possible.
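
On that second point, the safest habit I know is to define both the horizontal and vertical systems explicitly when you create the data. A minimal sketch (the geodatabase path is hypothetical; 27700 is British National Grid and 5701 is Ordnance Datum Newlyn heights):

```python
import arcpy

# Compound spatial reference: horizontal CRS + vertical CRS.
sr = arcpy.SpatialReference(27700, 5701)
arcpy.CreateFeatureclass_management(
    r"C:\data\city.gdb", "buildings_3d", "MULTIPATCH",
    has_z="ENABLED", spatial_reference=sr)
```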

…and yes, I now know the difference between “absolute”, “relative to the ground” & “on the ground”….maybe an interesting blog for another day, though feel free to contact me if you need quicker answers!

And everything else

There are still many things I have not had a chance to mention, for example the complexities of cartographic representation using 3D models in a GIS, ways of minimising the clashing of overlapping data, plus other 3D-centric issues such as shadow and light. Maybe a blog for another day?….

Dragons8mycat



Explain Georeferencing To Me as If I Were a Five-Year-Old

Blog post copied from Adrian Welsh on GeoNet 30/11/2016 – Too good not to share!!


I really liked how Denzel Washington used the phrase “explain this to me as if I were a xxx-year-old” in the movie Philadelphia (1993).

Reference: Philadelphia, directed by Jonathan Demme, 1993. Film.

So, I will take it one step further and attempt to explain the concept of georeferencing to an actual five-year-old.

Five-year-old engineer says, “I have this PDF of a site plan. I want to put this on a map and have it line up properly.”

Here is my map.

We need to zoom in a little bit closer.

A little bit more.

Open Street Map 1:5,000

Almost there. Zoom in some more so that our site plan will fit better.

Open Street Map 1:1,050

Much better. Now, we need to shrink the site plan to a more usable size. Currently, it’s larger than our map.

Let’s make it a little bit smaller.

Perfect. Now we need to place the site plan on our zoomed in map and adjust it to fit by rotating it and resizing it.

Great! Now, after some quality control of adjustments and transformations, we can rectify this image and call it georeferenced!

OSM 1:1,050 with Image

We can make the georeferenced image transparent so that we can see the basemap behind it.

OSM 1:1,050 with Image, Transparency 50%

Finally, we can add existing linework and other GIS files to give the image a more solid reference.

OSM 1:1,050 with Image, Transparency 50% and Linework
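
For the grown-ups: the same place-rotate-resize-rectify steps can be scripted with GDAL by supplying ground control points (pixel positions matched to map coordinates). A sketch with made-up file names and coordinates, just to show the shape of the workflow:

```python
from osgeo import gdal

# Ground control points: GCP(map_x, map_y, z, pixel, line). These numbers
# are invented for illustration - yours come from clicking matching points.
gcps = [
    gdal.GCP(447810.0, 101230.0, 0, 120, 95),
    gdal.GCP(448040.0, 101230.0, 0, 1480, 90),
    gdal.GCP(448040.0, 101020.0, 0, 1485, 1350),
    gdal.GCP(447810.0, 101020.0, 0, 125, 1355),
]
# Attach the GCPs, then warp (resample) the scan into map coordinates.
gdal.Translate("siteplan_gcps.tif", "siteplan_scan.tif",
               GCPs=gcps, outputSRS="EPSG:27700")
gdal.Warp("siteplan_georeferenced.tif", "siteplan_gcps.tif",
          dstSRS="EPSG:27700", resampleAlg="bilinear")
```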

Many thanks to Adrian Welsh for letting me share this!

Dragons8mycat

Add OSTN15 to QGIS 2.16

As you may be aware, the United Kingdom has a new transformation model called OSTN15…..But why? What does it mean to the geospatial community?

Without being too nerdy: tectonic plate movement means that the “model” surface (the geoid) is slowly drifting from the best fit for the coordinate system. It has been 13 years since Ordnance Survey implemented OSTN02, so the shift since then is enormous…..a whole 1cm horizontally, and vertically it is 2.5cm. See this article here from Ordnance Survey.

The whole story is that sensors, and our ability to calculate our position relative to the mathematical models, are constantly evolving too. So, just as OSTN02 revolutionised the accuracy of projecting GPS (WGS84) coordinates by using a grid transformation (a grid of 250 points, over the 7 parameters used until 2002), OSTN15 uses the same OS Net of 250 points but has been improved further with 12 zero-order stations with accuracies of 2mm horizontal and 6mm vertical.

So how will this change the way you use your GIS?

If you are already using OSTN02 for your transformations between EPSG 27700 and EPSG 4326, then you will see at best a 5cm improvement, and that is in the worst-affected places in the UK; on average you will only see a 2cm improvement anywhere in the UK. To put this into context, when you are zoomed in on an A3 map to about 1:100, you are talking about a few pixels on the screen….it won’t be groundbreaking [at the moment].

Currently, as this goes to press, the OSTN15 transformation has only been available for a few weeks and it is still being tested on different software to ensure it works; I am told that ESRI UK have been testing it with their software as this is being written.

As with OSTN02, I’ve created a fix for QGIS, this time for OSTN15; I will describe how to implement it below.

It’s all about the Proj

Proj (Proj.4) is a cartographic projections library based on the work of Gerald Evenden of USGS, circa 1980. Over time it has evolved to consume grid transformations and is used by GRASS GIS, MapServer, PostGIS, Thuban, OGDI, Mapnik, TopoCad, GDAL/OGR as well as QGIS.

There are many ways to use proj; without a GIS, you can use it through the command line by defining parameters. QGIS uses the proj library by accessing a spatialite database called srs.db. This is held at .\apps\qgis\resources\srs.db on Windows, with an equivalent location on Linux.
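
If you prefer Python to the raw command line, the same library is exposed through pyproj; a quick sketch using today’s pyproj bindings (the coordinates are just an example):

```python
from pyproj import Transformer

# WGS84 longitude/latitude -> British National Grid (EPSG:27700).
transformer = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)
easting, northing = transformer.transform(-2.7459, 54.3280)  # roughly Kendal
print(round(easting), round(northing))
```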

The proj spatialite database is a relational database which, when analysed, holds tables for coordinate systems, EPSG codes & transformations. What is really clever is that it recognises the direction of transformation.

Why is direction important?

Most coordinate transformations are defined from the local coordinate system to the global geographic coordinate system, for example EPSG 4277 to EPSG 4326. OSTN15 bucks the trend and is defined in the reverse direction, from 4326 to 4277.

When I first tested OSTN15 with QGIS, I was getting a uniform 200m shift in the translated data and I was really confused. After talking with the grid file creator, I discovered that the file was created from ETRS89 to OSGB36 – hence the 200m shift I was getting.

QGIS is awesome; you’ve probably overlooked just how clever it is, and so did I. Next time you run a transformation, or when you try this one, you may notice that there are two fields noted in the columns SRC (source) and DST (destination)…and this is a godsend for solving this issue, as QGIS can read the transformation in both directions.

Transformation selection in QGIS

Show us the magic

So, I talked with Ordnance Survey, found that OSTN15 has been given the EPSG code 7709, and created a new record in the srs.db which is distributed with the Windows, Linux & Mac releases. To utilise this, all you need to do is download the OSTN15 file from Ordnance Survey (here) and then place the OSTN15_NTv2.gsb file in the shared projections folder, .\share\proj\OSTN15_NTv2.gsb; this has been found to be correct on Mac and Windows (there should be something similar on Linux). You know it is the right folder as there should be other .gsb files in there!

QGIS folder location

You can download the updated srs.db from here; this should be placed in the resources folder, which can be found at .\apps\qgis\resources\srs.db. I highly recommend renaming the srs.db file in this folder to something like srs.db.old before adding the new version, just in case it doesn’t work for your particular set-up, BUT it has been checked on Mac and Windows distributions of QGIS from version 2.12 through to QGIS 2.17.
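
To sanity-check that the grid file is actually being picked up, you can force a transformation through it directly with a proj string. A sketch using pyproj; the tmerc parameters are the standard British National Grid definition, and it assumes OSTN15_NTv2.gsb is on proj’s search path (the share\proj folder above):

```python
from pyproj import CRS, Transformer

# British National Grid, but forced through the OSTN15 grid shift file.
bng_ostn15 = CRS.from_proj4(
    "+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 "
    "+x_0=400000 +y_0=-100000 +ellps=airy +units=m "
    "+nadgrids=OSTN15_NTv2.gsb +no_defs")
transformer = Transformer.from_crs("EPSG:4326", bng_ostn15, always_xy=True)
# Compare the result against a known OS test point; a large uniform shift
# suggests the grid was not found or was applied in the wrong direction.
print(transformer.transform(-2.0, 52.0))
```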

Enjoy

Dragons8mycat

 

Many thanks to Ordnance Survey for their help

For further reading about the model for Great Britain and OSTN15, I recommend this paper: A guide to coordinate systems in Great Britain.

Using 3D Web Mapping to Model Offshore Archaeology

Ever since I started working in the renewables industry on offshore wind farms over 8 years ago and had to analyse shipwrecks, I have thought about how much more interactive and informative shipwreck analysis would be in 3D. There are many companies out there at the moment who produce the most amazing visualisations, where there is the ability to move along a fixed track to view a 2.5D wreck, but there is no ability to relate it to anything, no context, and normally the cost is extremely high, even though the data captured is usually geospatial and used within a GIS such as QGIS, ArcGIS or Fledermaus.

Here are examples of the amazing model of the James Eagen Layne created by Fourth Element and the model of the Markgraf shipwreck by Scapa Flow Wrecks.

Please don’t get me wrong, I admire these models; they provide detail and information that would be almost impossible to render in a GIS web map without some serious development and a lot of modelling. But technology has progressed. Five years ago I would have said that creating an offshore 3D web map was the stuff of dreams, whereas today it is a few clicks of the mouse. Using ESRI software, I was able to combine both terrain and bathymetry, adjust for tide datum differences, import a 3D model and then add links and images to the web map (called a ‘scene’).

The most exciting thing we found in developing this was the cost and time of implementing such a solution. With the ability to consume data from SketchUp, ESRI 3D models and even Google Earth models, we can reduce the time a scene takes to build from weeks to mere hours; the most time-consuming part is adding the links & getting the colours nice!! Have a look below at what we created:

Wreck of the James Eagen Layne from Garsdale Design Limited on CloudCities: https://cloudciti.es/scenes/SJaI7ZBO

The model can be navigated in a similar manner to Google Earth, and it should also be interactive, with the ability to click on areas of the wreck and have information returned on the right of the screen. If you look at the bottom left there is a set of icons, which I will explain below.

Overview of the buttons

Camera Button


The camera button, highlighted in green, provides access to the scene bookmarks; click on any of these and the scene will move to the view relating to the text. It will also alter the layers shown to provide the best view (according to the creator).

Animation button

The animation button, highlighted green above, animates the scene by cycling through the bookmarks.

Layers button

The layers button allows access to the information relayed in the scene. By default, the tidal water is turned off and only one model is shown.

Light Simulation

The light simulation button provides the ability to cast shadows and simulate specific times of day. Although not really relevant for an underwater feature, it provides a method for viewing internal features better.

Mobile User Bonus Feature!

For those of you using a mobile device, you will notice one further button:

Cardboard button

Yes, the scene is fully 3D and the viewer fully supports Google Cardboard, so go ahead and have a go!

Future development

This is just the beginning; as you can see, this viewer is extremely lightweight and responsive. Moving forward, we (Garsdale Design Ltd) are looking to add further information such as nearby wrecks, more detailed bathymetry, objects which may cause risk such as anchorages, and vessel movement in the area. The potential is immense, and where this is geographic (hit the map button on the right) you can relate it to a real-world location….in future versions we are looking at implementing Admiralty charts and bathymetry maps to view side by side with the site.

Disclaimer

I am not an archaeologist or diver! Data is sourced from open data sources (INSPIRE, EA Lidar, Wikipedia), with the exception of the model(s), which were built by myself from images and multibeam data. Photos were obtained from Promare on the Liberty 70 project. Contains public sector information licensed under the Open Government Licence v3.0. This data is not to be used for navigation or diving.

For further information or to ask how Garsdale Design can assist you, please do not hesitate to contact me.

Pokémon Go leads the AR Revolution

Originally posted on xyHt Magazine 12/7/2016


Six months ago I claimed that augmented reality was the future of GIS and geospatial services, and it was met with a few sniggers. This week has seen the arrival of Pokémon Go, one of the most popular games ever to hit the mobile phone market….and yes, it is augmented reality, and yes, it is geospatial. It could well be the turning point for many geospatial companies.

Pokémon Go

What is Pokémon Go?

In case you have been hidden in a cave for the last 25 years: Pokémon is a game that first appeared on the Nintendo Game Boy (circa 1995), in which players openly walk around the world and capture mystical creatures (in a white and red ball) which they then train to fight other creatures in battle arenas called “gyms”. If their creature (called a Pokémon – there are hundreds of breeds) wins its battle, the player earns a badge. The aim is to collect all the badges. Simple, right?

Since the mid-90s there have been 18 manga books, 19 films and about 17 games….yes, this thing is HUGE. Pokémon Go is the new generation, and it was inevitable that it would become a geocaching game.

“Geocaching /ˈdʒiːoʊˌkæʃɪŋ/ is an outdoor recreational activity, in which participants use a Global Positioning System (GPS) receiver or mobile device and other navigational techniques to hide and seek containers, called “geocaches” or “caches”, anywhere in the world.” – Wikipedia

Pokémon Encounter

Why Pokémon Go is perfect for AR

Let’s look at the concept of the game again: the user walks around the globe looking for Pokémon to capture and then train…..if this isn’t the definition of geocaching then I will eat my shorts (thanks, Bart). Though any thought that this game was designed as a geocache promotion is quickly quelled when you realise that the first geocache was only placed around the millennium (by Dave Ulmer, Oregon).

For years we have struggled to get our children interested in mapping and geography, not realising that this was sat under our noses the whole time! Ironically, it is all due to an April Fools’ Day joke in 2014 in which it was claimed that Google had developed an augmented reality app…..Niantic saw that this could be a reality and worked with Google, and the rest is history.

How does it work?

After logging into the app for the first time, the player creates their avatar. The player can choose the avatar’s style, hair, skin, and eye colour, and can choose from a limited number of outfits. Once the avatar is created, it is displayed at the player’s current location along with a map of the player’s immediate surroundings. Features on the map may include a number of landmarks where Pokémon may be found, and Pokémon gyms (places where you battle your Pokémon).

As players travel the real world, the avatar moves along the game’s map. Different Pokémon live in different areas of the world; for example, water-type Pokémon are generally found near water. When a player encounters a Pokémon, they may view it either in augmented reality mode or with a pre-rendered background. AR mode uses the camera and gyroscope on the player’s mobile device to display an image of a Pokémon as though it were in the real world. Players can also take pictures, using an in-game camera, of the Pokémon that they encounter both with and without the AR mode activated.

There is a fantastic article on how Google chose the locations for the Pokémon/geocaches here (by Mashable), which explains how safety was the primary concern. Caches were chosen based on open places that had some significance, so that players wouldn’t be chasing a Pikachu (a type of Pokémon) across a train track.

Pokémon Battle

Why will it change geo things?

Already there is a wave of companies looking at how they can use marketing to get a piece of the pie (see this & this); it is evident that people are aware that this is, excuse the pun, a game changer. It really isn’t hard to see that this is going to be popular….so this might well be the turning point for geospatial, AR & VR. If people are comfortable using AR through this game, then they will start expecting it for their mapping, bringing back apps like LAYAR which augmented real-world information.

AR on Mobile

The applications are immense and exciting for the geospatial industry; the ability to overlay real-world issues and information on what the user sees through the camera would be the definitive mapping system. Even if the accuracy isn’t that amazing (mobile GPS – think about it), there is the potential to use clever imagery and presentation to overcome most issues. Imagine sending the worst member of your team to site: with an AR map you could be 90% sure that they would be able to find the correct building, versus using a 2D map. Or think about how easy it could be to identify potential points of weakness or contamination around a site by just looking through your device, all set up by someone sat at a desk on the other side of the world.

Of course this is speculation, but consider the rise of VR, which is now all around us; soon we will all be fully immersed watching TV, playing games and riding roller coasters. 3D GIS has seen a rise over the last few years too, with many geospatial providers offering 3D add-ons or 3D alternatives. Furthermore, the conferences were rife with talk and demonstrations of 3D and VR. Of course, all this innovation is led by CEOs and project managers who have seen their kid playing with some game and asked that all-important question….

“Why can’t our company do that?”

You’d be a fool to think that this is all going to just disappear; the future of geospatial is now, and we are seeing the evolution occur in front of our eyes. Just as the late Roger Tomlinson evolved the paper map into digital GIS in the 60s, we are seeing 2D move to the 3D real world. I am all for it: it will bring new challenges, better accuracy and more interaction with the user…..though I draw the line at Pokémon Go myself.

Dragons8mycat

How to Grayscale ArcGIS Pro Vector Symbology

Most of the time, ESRI software is great: it does [mostly] what you ask of it, and as long as you aren’t doing anything too crazy it behaves. We all know that it has its ‘unique-ness’ about it; after using it for a few years you start to ask “Why don’t they do this….” or “How come I can’t do that…..”. Well, a lot of this is being addressed in ArcGIS Pro. Already it has answered the question of why we needed three different GIS applications (ArcGIS Desktop, ArcScene & ArcGlobe) by bundling them all up into one package. Now (with 1.3) we are starting to see other features which we always wanted in ArcGIS Desktop coming into ArcGIS Pro; case in point, converting symbology to grayscale.

Today, while creating a basemap, I discovered that ESRI have implemented a couple of neat little touches. Firstly: RGB VALUES ON HOVER.

Hovering the mouse provides RGB values

Although this isn’t ground-breaking, it is a nice little touch which, for us cartophiles and OCD cartographers, provides a quick and easy bit of feedback.

The other discovery was the option to grayscale the symbology. The new ArcGIS Pro can be a little tricky to get your head around, so it is understandably not obvious, but I went to change the RGB values on a piece of road and found another option: GRAYSCALE.

Grayscale dropdown

Selecting “Grayscale” takes you to this menu:

Grayscale removes colour while retaining its presence.


Okay, so this isn’t groundbreaking, BUT having played with Photoshop a little, I’ve found that the RGB value which is automatically given is almost a perfect match for what you get if you desaturate the colour.
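
If you want to check (or reproduce) the conversion yourself, the two desaturation formulas you’ll most often meet are the perceptual “luma” weighting and Photoshop-style lightness. This little helper is mine, not Esri’s documented algorithm:

```python
def to_gray(r, g, b, method="luma"):
    """Convert an RGB colour to its gray equivalent."""
    if method == "luma":
        # ITU-R BT.601 perceptual weighting: green counts most.
        value = 0.299 * r + 0.587 * g + 0.114 * b
    else:
        # "Lightness": what Photoshop's Desaturate command uses.
        value = (max(r, g, b) + min(r, g, b)) / 2
    v = int(round(value))
    return (v, v, v)

print(to_gray(214, 92, 40))   # a muted orange -> (123, 123, 123)
```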

What does all this mean? It means that you can easily and confidently convert your vector symbology to grayscale without guesswork! Creating alternative grayscale maps should now be a lot easier! Now, the question is, will this ever make it to ArcGIS Desktop?!

Dragons8mycat

Do your work right and you can be smart too

Originally published in xyHt

Ever since I first saw the phrase “smart city”, I have cringed. Not because of the term but because of what it alludes to. To me it says that we (geospatial experts) haven’t done our work right….let me explain.

From Wikipedia:

“A smart city is an urban development vision to integrate multiple information and communication technology (ICT) solutions in a secure fashion to manage a city’s assets”

Now, my understanding, as a person who uses a GIS on a daily basis, was that a GIS was used to overlay and integrate multiple layers of information to gain insight and manage a project more efficiently….so, in reality, these two aren’t too dissimilar. In fact, when you look into it further, the [smart] platform is pretty much a GIS which links to live data and to data which is structured to be interlinked [each dataset is linked to all the others]…oh, and of course, there is some form of asset management, usually in the form of a CMS [Content Management System].

Smart city graph

I guess my point is that I’m frustrated that many of us geospatial experts aren’t being “smart” with our data, and, hands up, at times I can be one of you. I download a load of data, put it in my geodatabase and don’t think twice about it until someone asks for it.

Here is a great example. I was working on site analysis for wind farms, and the job pretty much involved loading in all the environmental constraints, physical and topographical constraints, overlaying them and finding the gaps. The way it has been done for generations. Except I woke up one day and thought, “Why am I doing this?”….so I looked at the data I was using and started to build a model (in ESRI ModelBuilder). What the model did was take all the files, spatially join them (merging them with their attributes intact) and then run a few tasks to turn the gaps in the data into polygons. I then made centroids from the polygons and THEN did another spatial join on the data using the nearby setting.

What I ended up with was a fully automated way to find the best sites for a wind farm and also to report back (in a spreadsheet) what the nearest constraints were. Over time I found there were other data I could build into it, like land use, Land Registry land type (freehold/leasehold) and even some analysis to provide slope, average sun and aspect. Yes, 3 years ago I was working “smart”….unfortunately too smart for the company, as this new-fangled technology wasn’t seen as being as good as having somebody rummage through by hand to find the best locations (even though the best sites were the ones the computer picked!).

Let’s have a look at the principle behind this:

Knowing that we were trying to find areas suitable for wind farms: the area needs to be unbuilt land, must not be within 250m of a building, shouldn’t be closer than 40km to an airport (though it could be), shouldn’t be anywhere too steep or next to an existing wind farm, and obviously shouldn’t be in any of the environmentally sensitive areas.

Most of the data is open data –

Environmental constraints: Natural England

Wind farms: The Crown Estate and Restats

Land Registry land type: Land Registry

Land Use (rough):

Topography (buildings, terrain): Ordnance Survey vectormap, Strategi & Open Map

And (curiously enough) farms, restaurants, business parks and other points of interest were taken from my SatNav (extracted as a csv)

The model would then look a little like this:

Wind farm model flowchart
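
In arcpy terms, the heart of that flowchart is only a handful of geoprocessing calls. A rough sketch with hypothetical layer names (and note that tools like Erase and Feature To Point typically need an Advanced licence):

```python
import arcpy

# Combine every constraint layer into one footprint of "no-go" land.
arcpy.Union_analysis(["env_constraints", "building_buffers_250m",
                      "steep_slopes", "airport_buffers"], "all_constraints")
# The gaps - land outside every constraint - become the candidate sites.
arcpy.Erase_analysis("study_area", "all_constraints", "candidate_sites")
arcpy.FeatureToPoint_management("candidate_sites", "site_centroids", "INSIDE")
# Spatial join using the nearby setting: each centroid picks up the
# attributes of its closest constraint, ready to report in a spreadsheet.
arcpy.SpatialJoin_analysis("site_centroids", "all_constraints",
                           "sites_reported", match_option="CLOSEST",
                           search_radius="5 Kilometers")
```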

But there are other ways to be smart

The former method uses a spatial join technique whereby features which lie in the same location are combined into one large dataset which can be interrogated. Another technique is to join tables of related information to enhance the data about a location; this is quite commonly used in demographics but can be used anywhere.

A great example of this would be the neighbourhood statistics websites, which provide information about your locality…let’s have a look at how this can be done with openly available data:

If we download the Super Output Areas (average population approx. 1,000) from National Statistics, we can then join most of their data based on the Super Output Area [SOA] ID.

Output area types

By joining the area code to the area code in the table we can extract informative data

As you can see, this can be used to create much more informative data – some software vendors might even call it “enriched” data – and it is extremely easy to do.
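
As a sketch of just how easy: with open Python tools (geopandas here; the file and column names are hypothetical, yours will differ) the whole join is a few lines:

```python
import geopandas as gpd
import pandas as pd

soas = gpd.read_file("super_output_areas.shp")   # boundaries with a code column
stats = pd.read_csv("ons_statistics.csv")        # statistics keyed on the same code
# Attribute join on the SOA code - the "enrichment" step.
enriched = soas.merge(stats, on="soa_code", how="left")
enriched.to_file("soas_enriched.shp")
```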

….and then you realise that you can THEN spatially join this data to buildings, political boundaries, offices and all other types of data to extract SMART data about the locations.

My challenge to you today is to “enrich” the next data you use, if only for your own satisfaction, add some demographic data to it, add some wikipedia data to it, spatially join it with the INSPIRE Land Registry polygons (while you can)….go on, do it……that sense of satisfaction, THAT is why you do GIS.


Nick D