This post marks the 1000th entry for the Spaced-OoooO-Out blog. My first post was on 12 September 2005, outlining a talk I gave at the Society of Cartographers annual meeting in Cambridge on the Journal of Maps. I then went on to outline the current debates between open data and the walled garden of the Ordnance Survey - speaking at the conference were (then CTO at OS) Ed Parsons and (then OpenGeodata) Jo Walsh. Oh how times have changed, with Ed jumping to Google and OS opening up much of their data (and continuing to do so).
I never intended to spend 10 years blogging, much less write 1000 blog posts. That averages out at 100 per year, or about two a week. I’m not the most prolific poster, but I have been consistent in delivery, style and content…. very much focused on my teaching and research in GIS, remote sensing and geomorphology, with heavy smatterings of general IT around this.
Blogs have been around in various guises since people could leave publicly readable messages on the internet; however, the evolution from regularly updated static web pages to bespoke server platforms designed for blogging didn’t really happen until the late 1990s, when their popularity started to spread. By 2004/5 blogging had hit the mainstream along with the rise of Web 2.0 technologies, of which it was definitely a part.
So why blog? Well, from a personal perspective I was a relatively new lecturer at Kingston University at the time, blogging was very much an area of “buzz” and I wanted to raise my profile. However there is more to it… I really like the comments Donald Clark made about the (lack of) use of blogging in education. Drawn both from his list and from further thoughts of my own, these points in particular struck a chord with me when I started:
(1) Get Better at Writing: to improve at writing you need to practise, and what better way than to do something practical and useful. Blogging has helped me improve the quality of my writing, which has fed directly back into my academic work.
(2) Organising Thoughts: any kind of writing forces you to organise your thoughts into meaningful content.
(3) Improve Understanding: writing strongly reinforces my understanding by forcing me to (re)think about topics.
(4) Sharing: opportunity to share useful (and not so useful!) information with readers.
(5) Debate: whilst I don’t get too many comments, it allows at least a one-way, and sometimes two-way, conversation to develop.
(6) Notes to Self: my blog is an invaluable self-published repository of information. There are some things which are only small snippets, but just so darn useful. Where do I store such information? Well, I could put it into my GtD archive, or blog it, share it and make it easy to find in the future!
(7) Indexed: what I write about is crawled and indexed by search engines and much of it can easily be found. For example, I find it amazing that my blog entry on the NERC FSF is on the first page of Google hits for it!
(8) READING blogs saves time: OK, this isn’t about writing but reading; however, I follow 57 blogs at the moment using The Old Reader (and GReader on Android) as my feed aggregator of choice. It’s an invaluable time saver for keeping up with important snippets of information.
Whilst I am a BIG fan of blogging, micro-blogging (aka Twitter) is just not for me for two reasons:
(1) Vast amounts of drivel: I am NOT interested in the minutiae of a person’s day. I want “signal” above “noise” in life, and you drown in noise on Twitter. There is undoubtedly signal, but finding it can be difficult.
(2) Time: it is considerably more time-consuming to stay on top of the constant drip of information…. don’t get me started about Facebook!
As a footnote to that… Twitter has its place and I do use it occasionally. Very occasionally….
Finally a technical note - I have always used the uber-cool Blosxom blogging engine, which runs as a CGI script on my own server with all posts stored as text files. It’s ultra-reliable and portable, which is often not the case for more complex database-driven sites.
I’m off to the 6th Argentine Congress on Quaternary Geomorphology (or the rather handy Google Translate version!) shortly, so have been prepping various academic and travel things ready for the trip. One thing I stumbled across which might be useful to other (UK) travellers is paying abroad - credit cards are obviously dead handy in this regard but usually charge a foreign transaction fee. Not so the Halifax Clarity Credit Card, which is free of foreign transaction fees and, indeed, free on cash withdrawals. If you pay off your card monthly then this is a great deal.
Although a little dated (it was produced using v2.0 and we are currently on v2.8.1), Lex Berman’s QGIS Workshop is a very easy-to-use intro and primer to QGIS, with a smattering of useful links. Worth looking through for hints and tips.
I wanted to highlight the 2015 International Data Rescue Award in the Geosciences, which is run by IEDA and Elsevier. As they say on the site, IDRA was created to raise awareness of the importance of securing access to science’s older research data, particularly those with a poor preservation outlook or fragile storage conditions, and to urge efforts towards creating robust electronic datasets that can be shared globally.
This is something I have long had an interest in, going back to the terrain modelling I undertook for my MSc degree. In particular, it was a focus of my PhD, where I looked at a range of published and unpublished materials on the former Irish ice sheet. Some time after my PhD (!) I realised there was a dataset of striae observations of considerable size, and this led to the compilation, mapping and publication, along with subsequent interpretation, of the data. This then formed one of the examples used in my recent paper on data rescue in geomorphology.
It’s worth looking at the introductory section of the GeoResJ paper (see below) as it covers some more general ground about what we consider to be data rescue (and something I also blogged about)… I’m not going to repeat it here, but it’s salient to note that it’s anything we lose “access” to. For example, I blogged about trying to make PDFs of my MSc thesis available and how, in the space of 20 years, this particular file format is nearly obsolete (but not quite unreadable). Flipping this on its head, what formats should we be storing data in? Within the context of spatial data, I blogged about this a little while ago and much of it remains pertinent today. Indeed, the topic of preservation is so important that research council projects need to have a data deposition plan - however, this is often file-format agnostic, and really a well-conceived plan should take this into consideration as well. At Wageningen University, all research students need to come up with a data management plan as part of their research - an important element.
The takeaway… if nothing else, consider how you might use the data collected as part of your research in the future - and that means both the physical media it is stored on and the format it is stored in.
OS recently provided an update on their OpenData products as a reminder of what is available, along with some new products. Indeed, take a look at their main OpenData page, the products page and the download page. There are some really good products here, including Meridian (medium-scale vector), Terrain 50 (medium-scale DEM), CodePoint (postcodes), BoundaryLine (administrative boundaries) and a range of raster products. Very good for a range of mapping projects, and all using the very flexible Open Government Licence. Enjoy!
Part of a note to self…. I wanted to burn a DVD of an mp4 I had downloaded and started looking around for an easy and quick way to do this. And you’d have thought it would be simples…. but no! Which surprises me, because all you need to do is transcode the video into mp2, create the DVD file directory structure and then burn it to the disc. All things for which there is open source software. So after some false starts with InfraRecorder, cdrtfe and ImgBurn and, after a little bit of DuckDuckGoing, I ended up coming back to Windows DVD Maker which… errrrr… didn’t quite work!! A couple of gotchas….
1. It doesn’t work with mp4…. so I quickly loaded TEncoder and converted it using the DVD_Player_avi settings, *but* changed the audio codec to Wmav2
2. When I burnt the disc - there was no audio!! A quick DuckDuckGo later and this page was useful. In short, try using WMA audio instead of AC3 (hence the point above) and then TURN OFF any filters. To do this click on “Options” (bottom right of DVD Maker screen) and go to the Compatibility tab and untick the “AVI Decompressor”.
Once I had done this things worked perfectly. As with much in the Microsoft (and Apple!) world, if you do it their way it works well.
…. has now been created! This was a great project over at the BBC’s StarGazing Live 2015, getting the public to submit photos of Orion to create it. The clever bit is combining the photos together which, from the description, looks to use image-matching algorithms of the kind my PhD student James O’Connor is utilising in his research. It first matches each image to a known constellation to calculate the area of sky it covers - if appropriate, it’s accepted for processing, along with every other image of the same region. With the end of submissions, these are then all matched against one another, overlaid and combined together. This is, again, an image-matching process, although I’d be interested to know what they did for the combination.
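The BBC haven’t said exactly how they combined the overlapping frames, but the simplest plausible approach is a per-pixel statistic across the registered images. Below is a minimal sketch in Python/NumPy of that idea - `combine_frames` is a hypothetical name of my own, the frames are assumed to be already aligned to a common grid, and this is not a claim about the project’s actual pipeline:

```python
import numpy as np

def combine_frames(frames):
    """Combine pre-aligned exposures by taking the per-pixel median.

    frames: list of 2-D numpy arrays, all the same shape, already
    registered to a common sky grid. A median suppresses outliers
    (satellite trails, hot pixels, clouds) better than a plain mean.
    """
    stack = np.stack(frames, axis=0)   # shape: (n_frames, rows, cols)
    return np.median(stack, axis=0)

# Three toy "exposures" of the same patch of sky, one with an outlier
a = np.full((2, 2), 10.0)
b = np.full((2, 2), 12.0)
c = np.full((2, 2), 11.0)
c[0, 0] = 255.0                        # simulated satellite trail
combined = combine_frames([a, b, c])
print(combined[0, 0])                  # 12.0 -- the 255 outlier is rejected
print(combined[1, 1])                  # 11.0 -- plain median of 10, 12, 11
```

Whether they used a median, a mean, or something cleverer (sigma-clipping is common in astrophotography stacking) is exactly the detail I’d like to know.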
A great example of remote sensing, citizen science and the way image processing cross-fertilises across disciplines.
Thought I’d kick off an occasional series of blog posts highlighting nice features in QGIS….. this is my go-to app for working with spatial data as it’s fast and reliable.
I’m currently completing work with my colleague Niels Anders on manipulating some digitised vector data. The work is in Python and produces shapefile outputs - so QGIS is being used to view the data, manipulate the attribute table and symbolise some of the outputs for map production.
One of the processes I have to do is extract a subset of vector data in a shapefile and save it to a new one… so great feature number 1 is the “Paste features as” menu item. As the screenshot below shows, I can automatically paste features to a new vector layer with the same projection as the QGIS project. Very handy and…. just makes life easier!
Feels appropriate, given the eclipse across northern Europe (very nearly now!), to post about the topic….. and point people to the very good Eclipse Maps (run by an Esri employee by day, eclipse fanatic by night). Below is an example from a couple of years ago - really nicely produced and very clear. Worldwide eclipses 2001-2020 are shown here…. the gallery has a good range of maps, both historical and predicted.
It’s a strange situation now - in the past, monochrome (or B&W, or panchromatic) photos were the standard images produced. You had a choice of…. B&W film, and that was it!! B&W remained the mainstay of (particularly professional) photography right the way through to the 1970s. Whilst the idea of colour projection (and photography) dates back to James Maxwell in 1855, it wasn’t until the launch of Kodachrome in 1935 that there was a viable commercial product available… at a price. The 1970s was when colour finally dropped to “consumer” prices.
Since the 1980s we have had the rise of digital, which works completely differently. Whilst film has three layers, each sensitive to different wavelengths of light, a digital sensor is inherently monochromatic….. it only records light (a greyscale value) incident upon the sensor. On top of the sensor sits a colour filter array (CFA) which filters either red, green or blue light. The *arrangement* of filters in the array is critical and typically a Bayer arrangement is used. This has 50% green, 25% red and 25% blue filters, meaning that the sensor records a matrix (or patchwork) of values for different wavelengths of light - this single layer is then demosaiced into three new layers representing red, green and blue light.
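The Bayer arrangement is easy to illustrate with a few lines of NumPy. This is just a sketch of the standard RGGB layout to show the 50/25/25 split of photosites, not a model of any particular sensor:

```python
import numpy as np

def bayer_mask(rows, cols):
    """Return boolean masks for an RGGB Bayer colour filter array.

    Even rows alternate red/green; odd rows alternate green/blue,
    so green occupies 50% of the photosites and red/blue 25% each.
    """
    r = np.zeros((rows, cols), dtype=bool)
    g = np.zeros((rows, cols), dtype=bool)
    b = np.zeros((rows, cols), dtype=bool)
    r[0::2, 0::2] = True   # red on even rows, even columns
    b[1::2, 1::2] = True   # blue on odd rows, odd columns
    g[0::2, 1::2] = True   # green fills the remaining sites
    g[1::2, 0::2] = True
    return r, g, b

r, g, b = bayer_mask(4, 4)
print(r.sum(), g.sum(), b.sum())   # 4 8 4 -> 25% red, 50% green, 25% blue
```

Every photosite is covered by exactly one filter, which is why the raw file off the sensor is a single patchwork layer rather than three full-resolution ones.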
The obvious point is: if you only want a monochrome image, what can you do? The digital sensor is inherently recording in a single layer. The sub-sampling imposed by the CFA requires interpolation to a colour image which, if you want to produce a B&W image, means you then convert back to a single layer! Crazy!! One solution is to buy a monochrome camera - yes, at least one manufacturer now makes a B&W camera, and that’s the Leica M Monochrom. Nice camera, but a little pricey at £4,500. One alternative is to **remove** the Bayer array (debayer) from an existing camera - a few people appear to have done this, but there are (as far as I’m aware) no commercial services as it’s a risky business. The array is bonded to the sensor and you need to scrape it off, but clearly people have successfully completed the task.
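The mosaic-to-colour-to-mono round trip can be sketched in a few lines. This toy version uses a crude half-resolution demosaic (real algorithms interpolate to full resolution) and the standard luma weights for the final conversion; the function names are my own, and the point is simply that two interpolation steps sit between the sensor and a greyscale image that, on a flat scene, the sensor had already recorded directly:

```python
import numpy as np

def naive_demosaic(mosaic):
    """Half-resolution demosaic of an RGGB mosaic.

    Each 2x2 cell collapses to one RGB pixel: R from the top-left
    site, B from the bottom-right, G averaged from the two green
    sites. Real demosaicing interpolates to full resolution, but
    the round trip is the same in spirit.
    """
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0
    b = mosaic[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

def to_mono(rgb):
    """Standard luma weights: collapse RGB straight back to one layer."""
    return rgb @ np.array([0.299, 0.587, 0.114])

# A flat grey scene: every photosite records the same value, so the
# demosaic/mono round trip just hands that value back
mosaic = np.full((4, 4), 100.0)
mono = to_mono(naive_demosaic(mosaic))
print(mono)        # all 100.0 -- two processing steps for nothing
```

A debayered (or natively monochrome) sensor skips both steps, which is where the sharpness advantage comes from.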
Besides shooting in monochrome, what are the advantages? Well, with no demosaicing process to go through, the images should be sharper - a fact critical for photogrammetry, where colour is less important. I think we’ll see more of this over the next few years.