So, the situation occurred yesterday: I was giving a workshop and had sent out a load of QGIS styles, layer definition files and a project file (.qgs). Smugly, I told everyone to open the project file, then realised, as hands raised across the room, that QGIS doesn't work with relative paths and it doesn't do a "map package" either. Working with so many different GIS packages, it's hard to keep track of which ones do what, but I really should have remembered this one.
Surprisingly, the solution to repairing all the links and getting everything up and running is relatively easy if you are working with disconnected databases or vector files (shapefiles etc.). Just make sure you have a text editor and away you go.
First of all, open the rogue .qgs file in your text editor. In the example above I am using Sublime Text, but during the workshop I found Windows Notepad was just as capable. Upon opening, you will see that the project file is just a standard XML file with references to your data sources.
Use the "Find" option in your text editor to locate one of the <datasource> tags (as shown above).

It is then simply a case of changing the folder paths within that datasource tag to point at the correct location (most people store their data in a single place).
As you can see above, I want the project to read all the data from C:\OS Southampton rather than the G:\Work_Admin_Backup_Nov15\GIS Core Data\OS Southampton location, so using the REPLACE function (sometimes called find/replace in some text editors) we can simply change ALL the locations in one go.
Pretty easy huh? A lot easier than using the interface provided by QGIS for updating each file link individually; after all, most of the time we just change folders, we don't scatter our data around a drive.
I am sure that this sort of functionality (changing the folder referenced by all the links) could be done in bash or as an extra option within QGIS; if you know how, I look forward to hearing from you!
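For anyone who fancies scripting it, here is a minimal sketch of the same find/replace trick in Python rather than a text editor. The paths are the examples from this post and the project file name is hypothetical.

```python
from pathlib import Path

OLD = r"G:\Work_Admin_Backup_Nov15\GIS Core Data\OS Southampton"
NEW = r"C:\OS Southampton"

def repoint_project(qgs_file, old, new):
    """Replace every occurrence of one folder path in a .qgs project file."""
    path = Path(qgs_file)
    text = path.read_text(encoding="utf-8")
    path.write_text(text.replace(old, new), encoding="utf-8")

# repoint_project("workshop.qgs", OLD, NEW)
```

Because a .qgs file is plain XML, a straight text substitution of the folder prefix fixes every datasource tag at once, just like the REPLACE function in an editor.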
As you may be aware, the United Kingdom has a new transformation model, OSTN15. But why? What does it mean to the geospatial community?
Without being too nerdy: tectonic plate movement means that the "model" surface (the geoid) is slowly drifting from the best fit for the coordinate system. It has been 13 years since Ordnance Survey implemented OSTN02, so the shift since then is enormous... a whole 1cm horizontally, and 2.5cm vertically. See this article from Ordnance Survey.
The whole story is that sensors, and our ability to calculate our position relative to the mathematical models, are constantly evolving too. So, just as OSTN02 revolutionised the accuracy of projecting GPS (WGS84) coordinates by using a grid transformation (250 points, rather than the 7 parameters used until 2002), OSTN15 also uses the OS Net of 250 points but has been improved further with 12 zero-order stations with an accuracy of 2mm horizontally and 6mm vertically.
So how will this change the way you use your GIS?
If you are already using OSTN02 for your transformations between EPSG 27700 and EPSG 4326, then you will only see a 5cm improvement over a 1m area at best, and that is based on the worst places in the UK; on average you will only see a 2cm improvement anywhere in the UK. To put this into context, when you are zoomed in to an A3 map at about 1:100, you are talking about a few pixels on the screen... it won't be groundbreaking [at the moment].
As this goes to press, the OSTN15 transformation has only been available for a few weeks and it is still being tested on different software to ensure it works; I am told that ESRI UK have been testing it with their software as this is being written.

As with OSTN02, I've created a fix for QGIS and OSTN15; I will describe how to implement it further on in this post.
There are many ways to use proj; without a GIS you can use it through the command line by defining parameters. QGIS uses the proj library by accessing a SpatiaLite database called srs.db. This is held at .\apps\qgis\resources\srs.db in Windows, with a similar location in Linux.

The srs.db SpatiaLite database is a relational database which, when analysed, holds tables for coordinate systems, EPSG codes and transformations. What is really clever is that it recognises the direction of transformation.
Why is direction important?
Most coordinate transformations go from the projected coordinate system towards the geographic coordinate system, for example EPSG 4277 to EPSG 4326; OSTN15 bucks the trend and works in the reverse direction, from 4326 to 4277.

When I first tested OSTN15 with QGIS, I was getting a uniform 200m shift in the translated data and I was really confused. After talking with the grid file's creator, I discovered that the file was created from ETRS89 to OSGB36, hence the 200m shift I was getting.
QGIS is awesome; you've probably overlooked just how clever it is, and so did I. Next time you run a transformation, or when you try this one, you may notice that there are two fields noted in the columns SRC (source) and DST (destination)... and this is a godsend for solving this issue, as QGIS can read the transformation in both directions.
Show us the magic
So, I talked with Ordnance Survey, found that OSTN15 has been given the EPSG code 7709, and created a new record within the srs.db which is distributed with the Windows, Linux and Mac releases. To utilise this, all you need to do is download the OSTN15 file from Ordnance Survey (here) and then place the OSTN15_NTv2.gsb file in the shared projections folder .\share\proj\OSTN15_NTv2.gsb; this has been found to be correct on Mac and Windows (there should be something similar on Linux). You know it is the right folder as there should be other .gsb files in there!
You can download the updated srs.db from here; this should be placed in the resources folder, which can be found at .\apps\qgis\resources\srs.db. I highly recommend renaming the srs.db file in this folder to something like srs.db.old before adding the new version, just in case it doesn't work for your particular set-up, BUT it has been checked on Mac and Windows distributions of QGIS from version 2.12 through to QGIS 2.17.
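If you want to sanity-check a replacement srs.db before dropping it in, srs.db is an ordinary SQLite file, so you can open it with Python's built-in sqlite3 module and list its tables. The file path in the comment is a placeholder for wherever your QGIS install keeps it.

```python
import sqlite3

def list_tables(db_path):
    """Return the table names held in a SQLite database such as srs.db."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]

# print(list_tables(r".\apps\qgis\resources\srs.db"))
```

A quick look at the tables (and a row count or two) confirms the download isn't corrupt before you rename the original out of the way.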
Last month I was at Maptime in Southampton (UK), showing new QGIS users how to join tables and map EU referendum data, when I came across an issue in QGIS I hadn't spotted in the last *ahem* years of using it.

When you drag and drop txt, csv or other delimited files into QGIS, the fields automatically get converted to text format. No, I'm not making it up, and it caused a lot of embarrassment when I was giving my demonstration.
This isn't written to complain about QGIS but to notify others who are wondering why their joins aren't working or why their interpolation can't pick up the value field. You QGIS guys are going to say "why haven't I raised this as an issue?" Well, firstly, read Nyall Dawson's blog post on QGIS issues; secondly, I tried to... it turns out that the way you get access to submit issues has changed, and even though I've asked for help to get access I've been waiting a month for a response.
So…why does it happen?
If you add the file through the "add delimited file" button, none of this is an issue; this is down to the way that the software is written. When the file is dragged and dropped, the software relies on OGR to add it as a comprehensible layer, and this just renders all the fields as text (at present, August 2016).
Why is it an issue?
If you are joining tables and aren't aware of the issue, you drag and drop a table with a list of numerical values in it and then can't join it to spatial data, because you can't join text to numbers. This could also cause issues with interpolation (reading a value field) and with generating points which need classification based on numbers.
Getting it fixed…
This is where things get a little tricky, as I don't think it is entirely a QGIS issue; it is more related to the code which QGIS uses to parse the information, so until OGR update their code it might be a bit of a wait.
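In the meantime there is a workaround worth knowing: OGR's CSV driver looks for a ".csvt" sidecar file (same name as the .csv, one quoted type per column) declaring the field types, so the values arrive as numbers rather than text. The file names and values below are invented for illustration.

```python
import csv

rows = [("E01000001", 523), ("E01000002", 611)]  # made-up SOA codes + counts

# Write the delimited data itself.
with open("votes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["soa_id", "leave_votes"])
    writer.writerows(rows)

# Write the sidecar: one comma-separated type per column, same basename.
with open("votes.csvt", "w") as f:
    f.write('"String","Integer"\n')
```

With votes.csvt sitting next to votes.csv, OGR should pick up the declared types when the file is added, so the join has a real integer field to match on.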
Ever since I started working in the renewables industry on offshore wind farms over eight years ago and had to analyse shipwrecks, I have thought about how much more interactive and informative shipwreck analysis would be in 3D. There are many companies out there at the moment who produce the most amazing visualisations, with the ability to move along a fixed track to view a 2.5D wreck, but there is no ability to relate it to anything, no context, and normally the cost is extremely high, even though the data captured is normally geospatial and used within a GIS such as QGIS, ArcGIS or Fledermaus.
Please don't get me wrong: I admire these models, and they provide detail and information that would be almost impossible to render in a GIS web map without some serious development and a lot of modelling, but technology has progressed. Five years ago I would have said that creating an offshore 3D web map was the stuff of dreams, whereas today it is a few clicks of the mouse. Using ESRI software, I was able to combine both terrain and bathymetry, adjust for tide datum differences, import a 3D model and then add links and images to the web map (called a 'scene').
The most exciting thing we found in developing this was the cost and time of implementing such a solution. With the ability to consume data from SketchUp, ESRI 3D models and even Google Earth models, we can reduce the time a scene takes to build from weeks to mere hours; the most time-consuming part is adding the links and getting the colours nice!! Have a look below at what we created:

The model can be navigated in a similar manner to Google Earth. The model is also interactive, with the ability to click on areas of the wreck and have information returned on the right of the screen. If you look at the bottom left there is a set of icons, which I will explain.
Overview of the buttons
The camera button, highlighted in green, provides access to the scene bookmarks; click on any of these and the scene will move to the view relating to the text. It will also alter the layers shown to provide the best view (according to the creator).

The animation button, highlighted green above, animates the scene by cycling through the bookmarks.

The layers button allows access to the information relayed on the scene. By default, the tidal water is turned off and only one model is shown.

The light simulation button provides the ability to cast shadows and simulate specific times of day. Although not really relevant for an underwater feature, it provides a method for viewing internal features better.
Mobile User Bonus Feature!
For those of you using a mobile device, you will notice one further button:
Yes, the scene is fully 3D and the viewer fully supports Google Cardboard, so go ahead and have a go!
This is just the beginning; as you can see, this viewer is extremely lightweight and responsive. Moving forward, we (Garsdale Design Ltd) are looking to add further information such as nearby wrecks, more detailed bathymetry, and objects which may cause risk such as anchorages and vessel movement in the area. The potential is immense and, where this is geographic (hit the map button on the right), you can relate it to a real-world location... in future versions we are looking at implementing Admiralty charts and bathymetry maps to view side by side with the site.
I am not an archaeologist or a diver! Data is sourced from open data sources (INSPIRE, EA Lidar, Wikipedia) with the exception of the model(s), which I built myself from images and multibeam data. Photos were obtained from Promare on the Liberty 70 project. Contains public sector information licensed under the Open Government Licence v3.0. This data is not to be used for navigation or diving.
For further information or to ask how Garsdale Design can assist you, please do not hesitate to contact me.
Most of the time, ESRI software is great: it does [mostly] what you ask of it, and as long as you aren't doing anything too crazy it behaves. We all know that it has its 'unique-ness' about it; after using it for a few years you start to ask "why don't they do this..." or "how come I can't do that...". Well, a lot of this is being addressed in ArcGIS Pro; it has already answered the question of why we needed three different GIS packages (ArcGIS Desktop, ArcScene and ArcGlobe) by bundling them all up into one. Now (with 1.3) we are starting to see other features which we always wanted in ArcGIS Desktop coming into ArcGIS Pro; case in point, converting symbology to grayscale.

Today I discovered, while creating a basemap, that ESRI have implemented a couple of neat little touches. Firstly: RGB VALUES ON HOVER.
Although this isn’t ground breaking, it is a nice little touch which, for us cartophiles and OCD cartographers, provides a quick and easy bit of feedback.
The other discovery was the option to grayscale the symbology. The new ArcGIS Pro can be a little tricky to get your head around, so it is understandably not obvious, but I went to change the RGB values on a piece of road and found another option: GRAYSCALE.
Selecting “Grayscale” takes you to this menu:
Okay, so this isn't groundbreaking, BUT having played with Photoshop a little, I've found that the RGB value which is automatically given is almost a perfect match for what you get if you desaturate the colour.
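As a rough check of the idea, here is one common way to reduce an RGB colour to a single grey level, using the classic ITU-R BT.601 luma weights. Note that different tools use slightly different recipes (Photoshop's desaturate, for instance, averages the lightest and darkest channels), which is why the match is "almost" rather than pixel-perfect.

```python
def to_gray(r, g, b):
    """Return a 0-255 grey level using luminosity (BT.601) weighting."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# Pure red comes out as a mid-dark grey, pure white stays white.
print(to_gray(255, 0, 0))    # 76
print(to_gray(255, 255, 255))  # 255
```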
What does all this mean? It means that you can easily and confidently convert your vector symbology to grayscale without guesswork! Creating alternative grayscale maps should now be a lot easier! Now, the question is, will this ever make it to ArcGIS Desktop?!
“A smart city is an urban development vision to integrate multiple information and communication technology (ICT) solutions in a secure fashion to manage a city’s assets”
Now, my understanding, as a person who uses a GIS on a daily basis, was that a GIS is used to overlay and integrate multiple layers of information to gain insight and manage a project more efficiently... so, in reality, these two aren't too dissimilar. In fact, when you look into it further, the [smart] platform is pretty much a GIS which links to live data and to data which is structured to be interlinked [each dataset is linked to all the other data]... oh, and of course there is some form of asset management, usually in the form of a CMS [Content Management System].
I guess my point is that I’m frustrated that many of us, geospatial experts, aren’t being “smart” with our data and hands up, at times I can be one of you. I download a load of data, put it in my geodatabase and don’t think twice about it until someone asks for it.
Here is a great example. I was working on site analysis for wind farms, and the job pretty much involved loading in all the environmental, physical and topographical constraints, overlaying them and finding gaps: the way it has been done for generations. Except I woke up one day and thought, "why am I doing this?"... so I looked at the data I was using and started to build a model (in ESRI ModelBuilder). The model took all the files, spatially joined them (merging them with their attributes intact) and then ran a few tasks to turn the gaps in the data into polygons. I then made a centroid from each polygon and THEN did another spatial join on the data using the nearby setting.

What I ended up with was a fully automated way to find the best sites for a wind farm which also reported back (in a spreadsheet) what the nearest constraints were. Over time I found there was other data I could build into it, like land use, Land Registry land type (freehold/leasehold), and even some analysis to provide slope, average sun and aspect. Yes, three years ago I was working "smart"... unfortunately too smart for the company, as this new-fangled technology wasn't considered as good as having somebody rummage through by hand to find the best locations (even though the best sites were the ones the computer picked!).
Let’s have a look at the principle behind this :
Knowing that we were trying to find areas suitable for wind farms: the area needs to be unbuilt land, not within 250m of a building; it shouldn't be closer than 40km to an airport (though it could be); it shouldn't be anywhere too steep or next to an existing wind farm; and obviously it shouldn't be in any of the environmentally sensitive areas.
Most of the data is open data –
Environmental constraints: Natural England
Wind farms: The Crown Estate and Restats
Land Registry land type: Land Registry
Land Use (rough):
Topography (buildings, terrain): Ordnance Survey vectormap, Strategi & Open Map
And (curiously enough) farms, restaurants, business parks and other points of interest were taken from my SatNav (extracted as a csv)
The model would then look a little like this:
But there are other ways to be smart
The former method uses a spatial join technique, whereby features which lie in the same location are combined into one large dataset which can be interrogated. Another technique is to join tables of related information to enhance the data about a location; this is quite commonly used in demographics but can be used anywhere.

A great example of this would be the neighbourhood statistics websites, which provide information about your locality... let's have a look at how this can be done with openly available data:
If we download the Super Output Areas (average population approx. 1,000) from National Statistics, we can then join most of their data based on the Super Output Area [SOA] ID.
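Here is the table join in miniature, with invented SOA codes and values: attach statistics to output areas by matching on the shared SOA ID, exactly as you would in the QGIS join dialog.

```python
# Hypothetical output areas and a statistics lookup keyed by SOA ID.
areas = [
    {"soa_id": "E01017140", "name": "Sedbergh 001A"},
    {"soa_id": "E01017141", "name": "Sedbergh 001B"},
]
stats = {
    "E01017140": {"population": 1043},
    "E01017141": {"population": 987},
}

def join_by_id(areas, stats, key="soa_id"):
    """Merge a statistics lookup into each area record on the shared ID."""
    return [{**area, **stats.get(area[key], {})} for area in areas]

enriched = join_by_id(areas, stats)
```

Each enriched record now carries both the geography's own attributes and the joined statistics, ready to symbolise or analyse.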
As you can see, this can be used to create much more informative data, some software vendors might even call it “enriched” data and it is extremely easy to do.
….and then you realise that you can THEN spatially join this data to buildings, political boundaries, offices and all other types of data to extract SMART data about the locations.
My challenge to you today is to "enrich" the next dataset you use, if only for your own satisfaction: add some demographic data to it, add some Wikipedia data to it, spatially join it with the INSPIRE Land Registry polygons (while you can)... go on, do it... that sense of satisfaction, THAT is why you do GIS.
No, you didn’t misread that, you can now export your QGIS 2.5D maps straight to a web map thanks to the genius(es) Tom Chadwin & Luca Casagrande who developed the QGIS2Web plugin. Before I run away with myself and start talking 2.5D and cool effects, I think it’s best I clarify a few things.
Firstly, I realise that I told a bit of a fib when I wrote the xyHt article on web mapping, where I said that there wasn't an easier way to build a web map. There is: it is QGIS2Web, and the only things you need are your own website and QGIS. You simply make your map and hit the plugin button... voila! Not only a preview of the web map but also options for measure tools, popups, a scalebar and even basemaps. It is truly a thing of beauty. The only technical knowledge you need is how to copy and paste your folder onto your web host.
It sounds too good to be true, which is why I feel guilty for ignoring it for so long... Having been beaten by the geospatial industry for well over a decade, I naturally assumed it was some scam whereby I would have to buy into something or pay for a subscription, but no, this is the real deal. This is what ArcGIS Online should have been: you front the cost (or not; have another look at GitHub Pages *ahem*) for your website and the rest is free. You can host as many maps as you want with whatever style and data... and of course, with a little know-how, you can even link them to other sources using hyperlinks in your fields.
So, let’s clarify what 2.5D is
2D is the everyday "flat" map which you would generate in QGIS/ArcGIS/MapInfo/CadCorp (add your own here); although features are shown in relation to one another, there is no depth. Buildings and trees appear as "top-down" flat objects.
In true Dragons8mycat style, the best way to describe this is using a 2D image of Super Mario Bros:
2.5D adds depth, though it is not full 3D; it cannot be explored like Google Earth, rather it gives an illusion of depth.
This can be seen in the image below of Mario in 2.5D
3D provides true depth and the ability to explore that depth; this would be the Google Earth buildings or the CityEngine web scenes. Unlike a fixed view of 3D features, they allow full movement around the real-world feature. Here is Super Mario once again to show what 3D is:

Is this the right time to discuss 4D?... No?... Briefly then: 4D adds the medium of time. If we were to add a time slider to the 3D Super Mario above, so that we could move around not only the features but also through time, then it would be 4D (temporal).
Out of interest I have had many discussions over what 3.5D is and so far the general consensus is that it would be a 2.5D map with temporal capability, so think of a 2.5D map which showed change over time….could this (below) be 3.5D?
Back to the future (qgis2web)
The QGIS2Web plugin provides the ability to create 2.5D features on a web map; there are options to adjust and change colours, and you can add functionality. What really surprised me was how easy it was.
Of course there are some minor things which I found as I worked through but I am sure that these will be fixed before this post goes live as the developers of this plugin are right on the ball.
Tip 1 – Use geojson files.
Although the plugin uses any QGIS data (WYSIWYG), I found it was quicker and gave a more accurate representation with GeoJSON files.
Tip 2 – Forget your shadows
I was told that the plugin DOES honour the shadow effects, but I found in my experience that the shapeburst fills and shadow effects didn't work, and in some cases they caused the map to hang when switched on.
Tip 3 – You need to remove the OSM maps in the code.
There is a plethora of basemaps available, and I'm not going to knock that, BUT there is no option to turn them all off or to have them off by default. If you want to remove them, find the layers.js file and remove the baselayers reference: var layersList = [baselayers,lyr_LandParcelsSedbergh,lyr_Trees]; should be changed to var layersList = [lyr_LandParcelsSedbergh,lyr_Trees];