A good article over at EdSurge on how Ardusat has raised money in a funding round to support the use of micro-satellites by schools for learning. Real science for real kids - inspiring.
With the scanning of Paolozzi's studio outlined earlier, I promised to show the external camera processing we completed. So, with no further delay, below is the pano from the top of Paolozzi's cabin bed. All seven ScanStations are visible here.
Nice pick-up by Go-Geo from the Exelis blog (the makers of ENVI) on a range of resources for access to free remotely sensed imagery. It's actually in the PDF linked off the blog page. Very handy resource.
Clearly Amazon are very keen on the whole drone/UAV delivery thing…. the BBC reports on a recent FAA draft ruling that requires operators to have "line of sight" with their kit. This pretty much puts paid to delivery for the time being. However, expect this (air!)space to become crowded, as a number of delivery operators will push for this. I guess to start with we might see extended trials in some regions…. one to watch, as it will clearly affect the wider UAV debate. And take a look at the Small UAV Coalition…. the only thing "small" is the UAV. Some big backers (see the members) behind this advocacy group… smiley happy website though!!
With the scan complete for the Paolozzi Studio of Objects project, our starting point was the 41.7GB of compressed data from the P20 in the form of .bin files which required post-processing. The tasks required sounded fairly simple: import the data into Cyclone, import and attach the cube map images using the 'texture map browser', register the scanworlds to produce one point cloud, then export and send it to Touchpress as a PTX file (a fairly standard point cloud format). However, for a number of reasons it was not as simple as we had envisaged!
The first problem related to the multiple returns issue noted earlier. Information on optimal import settings and post-processing techniques to help remove these points had been requested from Leica before the Edinburgh scan, but it was a while before they were able to respond.
As post-processing started, it was discovered that Cyclone v8 (installed on the network at Kingston) couldn't import the P20 data; v9 was required. This had not been an issue during testing, as post-processing had been carried out with a single install of v9. However, this was not possible following the Edinburgh scan due to the amount of data collected. Cyclone was finally up and running just after New Year!
More problems were to follow… after numerous unsuccessful attempts to import the data, it was discovered that the existing workstations within the university did not meet the system requirements for Cyclone 9.0, and a new workstation was built. Originally this was accessed remotely, but the amount of data being transferred over the network made this impossible, and so a new desk was arranged in order to access the workstation locally. However, the problems continued.
During this time we were in constant contact with Leica's helpdesk, who provided faultless assistance for what must have seemed like an endless series of issues! Leica offered to import the data themselves and return it to us as an IMP file ready to be accessed by Cyclone. This offer was duly accepted and a flash drive dispatched to Leica's Milton Keynes offices. Leica's second-tier support team (in Germany) then became involved once again. After remotely accessing the workstation, they established that the import was failing because the Cyclone software had been installed by the IT department with the settings used for previous Cyclone versions. These allowed the log and data files to be mapped over a network; however, in Cyclone 9.0 these files have to be mapped locally. With the settings altered the import was finally successful, and the flash drive duly arrived from Leica just as the issue had been resolved!
Once the scans could be viewed it was evident that the multiple return issue was still a problem despite adopting the import settings recommended by Leica. Furthermore, the tools Leica suggested could potentially eliminate this issue had little effect. This was the advice received:
“As you have seen, applying some of the filters - have slightly improved the noise levels but when applying full filters - this removes too much information. Our second support have gone through the workflows and have confirmed that the only way to remove these type of extraneous data, is to do manual fencing. This obviously means more manual work. There is currently no automated functionality that could solve your issue.”
Following this confirmation, and given the amount of manual work needed to remove the points, it was decided to produce a point cloud immediately, then review any potential automated solutions and assess the time the manual editing would require.
So mapping the cube map images began. The images are imported by right-clicking the image folder found under the 'Known Coordinates' section of each project. In order to align the cube maps to the point clouds, matching points have to be manually picked from the image and the point cloud. This process is carried out in the modelspace for each scan using the 'texture map browser' found under 'Edit Object - Appearance'. Only three matching picks are required for each cube map, but the more picks you have, the better the alignment is likely to be. After selecting matching points, Cyclone computes the picks to provide an estimated pixel error for each pair of picks and an average error for all of the picks. Any pair of picks with a large error can then be removed and the picks recomputed until there is a satisfactory number of picks with a low average pixel error.

The texture map is then confirmed by selecting 'create multi-image from cube map' in the texture map browser. This adds the images under the correct scanworld in the navigator window. Right-clicking on the multiImage folder within the scanworld provides the option to 'Apply MultiImage', which burns the texture map to the point cloud. This can also be completed in batch mode by right-clicking on the project folder and selecting 'Batch Apply MultiImages'. The original images can then be deleted from the 'Known Coordinates' folder so the next cube map can be imported in order to texture map the next point cloud. This is a time-consuming process, but worth the effort to produce a well-aligned result.
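Cyclone's exact error computation isn't published, but conceptually the pixel error it reports is a reprojection error: project each picked 3D point into the image and measure how far it lands from the pixel you picked. A minimal numpy sketch under that assumption (the function name and the use of a simple 3x4 pinhole camera matrix as a stand-in for the cube-map face projection are mine):

```python
import numpy as np

def pixel_errors(picks_3d, picks_px, P):
    """Project 3D pick points with a 3x4 camera matrix P and compare
    to the hand-picked pixel coordinates. Returns per-pick errors and
    their average (the two numbers Cyclone reports per cube map)."""
    X = np.hstack([np.asarray(picks_3d, float), np.ones((len(picks_3d), 1))])
    uvw = (P @ X.T).T                      # homogeneous image coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide
    errs = np.linalg.norm(uv - np.asarray(picks_px, float), axis=1)
    return errs, errs.mean()
```

A pair of picks with a large individual error is exactly the kind you would delete and re-pick until the average comes down.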
Once all of the point clouds had been texture mapped, they were 'stitched', or registered, together to produce one point cloud containing the data from all the scans. This is a process made easy by our use of targets during the scanning process in Edinburgh. Once the required scanworlds had been selected, the 'Auto Add Constraints' function was used, which produced a registered point cloud with only a 2mm RMS error.
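'Auto Add Constraints' does this work internally, but the core of target-based registration is the classic rigid-alignment problem (the Kabsch/Horn solution): find the rotation and translation that best map one scanworld's targets onto another's, with the residual distances giving the RMS error Cyclone reports. A minimal numpy sketch, assuming matched targets are already paired up by name (function names are mine, not Cyclone's):

```python
import numpy as np

def register_targets(src, dst):
    """Best-fit rigid transform (R, t) mapping src target positions
    onto dst, via SVD of the cross-covariance (Kabsch algorithm)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def rms_error(src, dst, R, t):
    """RMS residual distance between transformed src and dst targets."""
    resid = (R @ np.asarray(src, float).T).T + t - np.asarray(dst, float)
    return np.sqrt((resid ** 2).sum(axis=1).mean())
```

With well-placed HDS targets the residuals stay tiny, which is why the full registration came out at only 2mm RMS.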
The registered and texture-mapped point cloud was then exported as a PTX file, which includes the RGB data as well as XYZ coordinates and intensity for each point. The export process is a long one (allowing time to write blog entries!), with the resulting file 140GB uncompressed and 27GB compressed, containing around 2 billion points! Posted via USB to Touchpress….
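For anyone unfamiliar with it, PTX is a plain-text format: as I understand the layout, each scan starts with a header (column count, row count, scanner position, three scanner axis vectors, then a 4x4 transform, one row per line), followed by one `x y z intensity [r g b]` line per point. A minimal reader sketch under that assumption (the function name is mine; a real 140GB file would obviously need streaming rather than anything fancier):

```python
def read_ptx(path):
    """Minimal PTX reader: yields (x, y, z, intensity, r, g, b) tuples.
    Assumes the usual per-scan header of 10 lines: cols, rows, scanner
    position, 3 axis vectors, then a 4x4 registration matrix."""
    with open(path) as f:
        while True:
            line = f.readline()
            if not line:
                break                      # end of file
            cols = int(line)
            rows = int(f.readline())
            for _ in range(8):             # position + axes + 4 matrix rows
                f.readline()
            for _ in range(cols * rows):
                parts = f.readline().split()
                x, y, z, i = map(float, parts[:4])
                # RGB columns are optional in PTX
                rgb = tuple(map(int, parts[4:7])) if len(parts) >= 7 else (0, 0, 0)
                yield (x, y, z, i) + rgb
```

Because each point is a full text line, the uncompressed file balloons - hence 140GB for ~2 billion points, and why it compresses so well.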
I’ve already talked about the initial objectives of the Studio of Objects project - and I say objectives, but if we could boil it down it would simply be to “recreate Paolozzi’s studio.” I’ve already outlined the technical requirements and how we tested them - so on a cold and bright weekend in October we headed up to the Scottish National Gallery of Modern Art. I say for the weekend…. I wasn’t involved in any of the preparation of the studio itself; that fell to our partners at hijack and Dacapo (Gilly and Ceri), who worked very closely with SNGoMA. In practice this meant liaising with the gallery to give us access to the studio on the Saturday and Sunday, installing significantly brighter lighting in both the roof and under the cabin bed, renting the Nikon D810 (and lens), arranging delivery of the Leica P20 (Leica very kindly donated scanner time to the project), arranging the (airbnb) accommodation and panicking about any last-minute glitches! All I can say is that when Adam and I turned up on Saturday morning everything looked perfect! Given that the studio is being preserved, its preservation (i.e. that it remains undamaged!) is paramount, so both lighting installation and laser scanning are higher-risk activities. We’d really like to thank Kirstie Meehan at SNGoMA, who spent the whole weekend making sure we didn’t damage anything! If you look at the studio space you will see that it is very cramped, and it was important to minimise the number of people in it - we only ever had a maximum of two people (myself and Adam), and only when needed. Otherwise it was just myself.
One of the most important things to do when running a scan is - DOCUMENT EVERYTHING. The mists of time will change what you remember, so it is critical that you note down every decision, step and procedure undertaken. Below is the field sketch Adam created for the studio, along with the location (and name) of each of our scan locations. An accompanying table then notes down each individual scan at each location and the settings for that scan. We also noted down the Nikon camera settings and the exact measurements on the Nodal Ninja.
This setup process took much longer than we anticipated, partly because we had to check everything was there and working, and then decide exactly where we would have the “scan stations” and the locations of the HDS targets within the studio. We were also concerned about trying to minimise the problem with multiple returns noted in the scan testing. However, our rationale was to scan excessively and redundantly, but at the lowest “quality” setting, which is significantly faster and didn’t affect (for our purposes) the data collected. Each full-dome 360 degree scan took about 3 minutes to complete, meaning the workflow became pretty slick - move and level the tripod, attach the scanner (which is “always on”), identify targets in the studio (and scan them), then set the scanner off on a full-dome scan (rapidly exiting the studio!). At this point we switched to the D810, which meant attaching the Nodal Ninja and taking the 48/12 photos for the 24mm/16mm panoramas. In fact by the end of the day we were pretty tired and had only finished two scan stations - we retired to basecamp and left Adam’s laptop importing the scan data from the first scan, which took 2 hours. We were satisfied that it had been collected satisfactorily, although the multiple return issue still remained….
Day Two dawned bright and early and a brisk walk to SNGoMA set us on our way - with the workflow optimised and scan stations decided, we very rapidly worked our way through the remaining locations. The trickiest element was scanning from the top of the bed. This is about 2m high and contains a fairly small bed with very little space around it. We could only have one person on the platform (me), which meant very carefully running through the workflow above in a constrained space with very limited access…. whilst 2m off the ground!! Safety was the priority, which meant consciously being aware of where the edge of the platform was at all times. Obtaining scans of the HDS targets was the trickiest part, as in both scanner locations the on-board screen was very difficult to access and had poor viewing angles. Then, when it came to the scan itself, I hid behind the bed or under the tripod whilst on the platform!!
With the main scans complete, we then looked at “in-filling” the data we had. When you look at the studio you realise there are many nooks-and-crannies. With so much “stuff” there will always be shadows with line-of-sight scanning - the more scan locations there are, the more in-filling you can do. With that in mind we did four “fast” (noted as (F) on the sketch) scans at slightly lower resolution, but designed to add a slightly different perspective. In total we had 11 scan locations and 20 separate scans, at a mix of 1.6mm and 50mm spacing.
We didn’t think we’d be pressed for time, but with everyone leaving at different times via different transport it ended up being slightly rushed….. we had to make sure that all rental equipment was accounted for, packed and ready to go back on Monday morning. We also wanted to make sure that all data was backed up, and in particular that all data was off the scanner - in the end we collected 40GB of compressed scan data on the P20 and about 40GB of RAW camera imagery from the D810 (split between the spherical panoramas and the photogrammetry James undertook). As a final note, the first thing we did once back was to coalesce all the data (from different media) into one location as a master copy, and then mirror that to network storage at Kingston as part of the archival strategy.
With data collection complete, processing then began….. I’ve already noted the processing workflow for the spherical panoramas. Next up will be the laser scan data processing!
(Kirstie, Adam, Gilly, James, Mike, Chris)
In an earlier post I talked about the spherical panoramas we have created for the Studio of Objects project. These are humungous 295MP images which, even when compressed, are pretty big files. Obviously with a (spherical) panorama, rather than just panning from left to right, it would be good to be able to rotate fully around it (as if you were standing in the middle), in the same way you can in a point cloud. Well, there are a few viewers for spherical panos, my favourite being FSP Viewer, which is fast and easy to use.
(and once I’ve set up the online image viewing I’ll post a pano there)
Adam Goddard sent me this link to a recreation of The Doves Type…. this story is told in the fabulous little layman’s book entitled Just My Type that I recommend to my cartography students. There’s no point in me recounting the story; just read about the font’s recreation and buy a digital copy of it. It’s a beautiful font and will set apart any publication you use it in.
A tender, haunting and very personal project photographing death - it’s hard not to feel a sense of attachment and loss at the very personal photographic storytelling of the end of a person’s life…. it comes to all of us, yet we feel so unprepared for it…