Pietersberg

My colleague Tom was playing around with WebGL this morning to visualise 3D data in a browser. As a test area he is using the only ‘mountain’ in the Netherlands: the Pietersberg. It’s especially nice to see how the ENCI quarry is eating away at the mountain. The big spikes are the chimneys of the quarry.

 

If you want to see it for yourself: http://model.geodan.nl/main/d3test/webgl1.html (it only works in Chrome for now, and the model will most likely be refined).

Concurrent Online WebGIS

Last year we were playing around with websockets and we:

“…figured that an interesting use-case would be to have a multi-user GIS where you can actually see where the other guy is, what he is seeing and together edit the map; think google-docs for map editing.”

We showed a first version of our application ‘cow’ (concurrent online webgis). Since then we have been expanding its possibilities. An obvious one is being able to add, edit and delete objects on the map: symbols, lines and polygons. We created a version where you could mark on a portable device where a wildfire started; that location would be shared over websockets, and the wildfire model would calculate the spread of the fire and return the resulting time-polygons to all connected devices.
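The exchange is plain JSON over the websocket connection. A minimal sketch of what the messages could look like (the field names, the coordinates and the drawTimePolygon helper are illustrative assumptions, not the actual cow protocol):

var ws = new WebSocket('wss://host/', 'connect');

// Mark the ignition point and share it with everyone (hypothetical message shape)
ws.onopen = function() {
    ws.send(JSON.stringify({
        type: 'ignition',
        geometry: { type: 'Point', coordinates: [5.687, 50.837] }
    }));
};

// The fire model's time-polygons come back to every connected device
ws.onmessage = function(event) {
    var msg = JSON.parse(event.data);
    if (msg.type === 'spread') {
        msg.polygons.forEach(drawTimePolygon); // hypothetical map-drawing helper
    }
};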

Another important possibility we explored was connecting Phoenix with websockets. This way you can expand Phoenix’ ‘same place, same time’ multi-user capabilities with ‘different place, same time’ collaboration. This could mean that someone in the field can see the results of discussions around the table on his smartphone as they are drawn into Phoenix, or that different groups of experts in different countries can all contribute to the same map using multiple instances of Phoenix. What you see below is a combination of Phoenix with cow’s webviewer on a Windows 8 all-in-one PC, an Android tablet and an Android smartphone.

[YouTube video]

Websocket based 3D viewer

Our French intern Nils created a nifty extension to Phoenix: the Google Earth connector. It allows a user to control a seemingly unlimited number of 3D views from within Phoenix. The user has a control showing the current position in 3D space and the point in the middle of the 3D view (the viewpoint). The user can change the 3D view by dragging either the viewpoint, the control or the map. You can also load 3D models in Phoenix and position them on the map.

[YouTube video]

If you connect more than one 3D view, they will all be connected to the initial control. However, dragging a screen away from the control will create a new control, allowing for multiple views with independent control. This way multiple users can control their own 3D view, or an object or area can be shown from different angles in an intuitive way.

The setup works with a websocket server which receives the coordinates, models, etc. from Phoenix and passes them on to the connected webclients. The clients run the Google Earth plugin, which is controlled by some javascript and the commands received through the websocket connection. If a new client connects or an existing one disconnects, the server informs Phoenix.
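On the client side this boils down to translating incoming websocket messages into Google Earth plugin calls. A minimal sketch of the idea (the message format is an assumption; only the plugin calls are the actual Google Earth API):

// Assumes the Google Earth plugin script has been loaded and
// that there is a div with id 'map3d' to hold the 3D view
var ge;
google.earth.createInstance('map3d', function(instance) {
    ge = instance;
    ge.getWindow().setVisibility(true);
}, function(error) { console.log(error); });

var ws = new WebSocket('wss://host/', 'connect');
ws.onmessage = function(event) {
    var msg = JSON.parse(event.data); // hypothetical message shape
    if (msg.type === 'view' && ge) {
        // fly the 3D view to the position Phoenix sent us
        var lookAt = ge.createLookAt('');
        lookAt.set(msg.lat, msg.lon, 0, ge.ALTITUDE_RELATIVE_TO_GROUND,
                   msg.heading, msg.tilt, msg.range);
        ge.getView().setAbstractView(lookAt);
    }
};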

Geodan’s AR glasses

(as an exception to the rule, this post originally appeared in Dutch, especially for the readers of our magazine, Geodata)

Below you can see a short video that gives an impression of the possibilities of Augmented Reality (AR) (in Dutch: Toegevoegde Realiteit, TR) as a way to work with geo-information on location, within the full field of view. Or click here to see the video in a higher resolution.

[YouTube video]

 

AR eyewear shows underground infrastructure

It has been a while since our last blog on the subject of Augmented Reality (AR), but now we can give a first look at the progress that has been made, largely thanks to Nils, our intern from Toulouse. He is a true IT wizard!

Our video eyewear has been reworked into a platform that can make the unseen data cloud that surrounds us all visible. See the short video (below, or go to YouTube for a higher resolution view) to get an impression of what it is like to walk around wearing the future of spatial awareness.

[YouTube video]

You will probably notice that the system is experimental. Some paper and tape were needed to block direct sunlight from the eyes. Also, the system is not as portable as it could be: we use a laptop and a smartphone to make the system work, while in theory just one small but powerful portable device should be sufficient. And, as can be expected, there is a fair amount of calibration and tweaking to be done. Nevertheless, we think that the system in its present state already makes a fine proof-of-concept.

About that concept: we are aiming at a uniform approach to interacting with all kinds of data. The basic function of the glasses is to display the world in front of the user without any augmentation. It does so with a stereoscopic effect, the same effect that you see when you watch a 3D movie. All visual extras can be switched on or off with voice commands. All you need to control the eyewear is a small microphone. (In the video you see Nils wearing a headset with earphones, but that is just because we had no wearable microphone without earphones lying around. Although it is possible to imagine the system using audio to give information, we do not intend to make use of that option.) Voice commands can not only be used to switch visual elements on or off, they can also be used to change the behaviour of a single element. For example, voice commands can be used to zoom in or out on the inset map.
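Internally this amounts to little more than a mapping from recognised phrases to actions. A toy sketch of the idea (the phrases, the insetMap object and the handlers are made up for illustration, they are not our actual command set):

// Hypothetical stand-in for one of the eyewear's visual elements
var insetMap = { visible: true, zoom: 10 };

// Dispatch table: recognised phrase -> action
var commands = {
    'show map': function() { insetMap.visible = true; },
    'hide map': function() { insetMap.visible = false; },
    'zoom in':  function() { insetMap.zoom += 1; },
    'zoom out': function() { insetMap.zoom -= 1; }
};

// Called by the speech recogniser with the recognised phrase
function onVoiceCommand(phrase) {
    var action = commands[phrase];
    if (action) action();
}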

Some of the visual extras have their own designated space in the wearer’s view. Things like the current time, the compass direction and the index map will always be shown in the same place in the field of view. Other information can appear anywhere within the field of vision, because it has a real-world location. For example, the user can switch on the locations of aircraft in the sky, or points of interest in the immediate vicinity.

When spatial objects like buildings, aircraft or underground pipes are visualised, the wearer of the glasses can select an object by bringing it into the centre of the field of vision – by looking straight at it. That object will then be highlighted and extra information can be displayed. In other words, looking at an object is enough to identify it.
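Selection by gaze can be as simple as picking the object whose direction is closest to the centre of view. A sketch of that idea, assuming each object has a known bearing and elevation angle relative to the wearer (an illustration of the principle, not our actual implementation):

// Pick the object nearest to the view direction, within a small cone
function selectByGaze(objects, viewHeading, viewPitch, maxAngle) {
    var best = null, bestAngle = maxAngle;
    objects.forEach(function(obj) {
        // angular distance between view direction and object direction
        var dh = Math.abs(((obj.bearing - viewHeading + 540) % 360) - 180);
        var dv = Math.abs(obj.elevation - viewPitch);
        var angle = Math.sqrt(dh * dh + dv * dv);
        if (angle < bestAngle) { bestAngle = angle; best = obj; }
    });
    return best; // null when nothing is near the centre of vision
}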

Sometimes it may be desirable to track a selected object and to keep the information about that object (which may change) visible. In that case it is not necessary to keep the object in the centre of vision. Again, a voice command like ‘lock focus’ is sufficient. And of course a vocalised instruction like ‘unlock focus’ deselects the object.

We hope to show you some more video demonstrations of our AR experiments soon. Meanwhile – as always – we would like to know what you think, so don’t hesitate to comment!

Building a websocket based multi-user map

Websockets are the next big thing for full-duplex communication on the web. At least, that is what people have been saying for a while now. Actual implementations are hard to find and documentation is a bit sparse. Still, the idea of full-duplex communication is intriguing. The most used example is a webchat client: no more polling, long-polling, keep-alives and other tricks. Since there is already an overabundance of webchat clients, that is not a really strong use-case. We figured that an interesting use-case would be to have a multi-user GIS where you can actually see where the other guy is, what he is seeing and together edit the map; think google-docs for map editing.

Websockets are really easy to start with, especially if you use node.js. On the client it is:

var url = 'ws://host/';
var ws = new WebSocket(url, 'connect'); // 'connect' is the subprotocol name
ws.onopen = onOpen;
ws.onmessage = onMessage;
ws.onclose = onClose;
ws.onerror = onError;

On the server the idea is much the same: you start a websocket server and define what it has to do when people connect, send messages and disconnect again. This worked like a charm. In no time we were sending view-extents, features and the like around on the local network.
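A minimal sketch of such a server, assuming the ws npm package (our actual server code differed, but the shape is the same):

var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080 });

wss.on('connection', function(socket) {
    // relay every incoming message (view-extents, features, ...)
    // to all the other connected clients
    socket.on('message', function(message) {
        wss.clients.forEach(function(client) {
            if (client !== socket) client.send(message);
        });
    });
});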

The problem, however, is that websockets are really, really new. There is not that much information about them and hardly any tooling to work with them. Most of the hardware of the internet does not know how to deal with them. We already ran into a problem when the node.js server was moved onto a different network within the company: the gateway didn’t understand the websocket packets and dropped them. There is a nice article explaining what is happening and what you should do. Basically, the way forward is not using websockets (ws://) but secure websockets (wss://). So while websockets are new and not too well documented, secure websockets are a complete disaster in that respect.

Setting up a secure websocket with node.js is also very easy: just use wss:// on the client side, and on the server side use:

var https = require('https');
var fs = require('fs');

var options = {
    key: fs.readFileSync('ssl/socket-key.pem'),
    cert: fs.readFileSync('ssl/socket-cert.pem')
};
var server = https.createServer(options, function(request, response) {
    // empty 200 response, see below
    response.writeHead(200);
    response.end();
});
// ...etc

Since I am using self-signed certificates, I needed to get the user to accept the certificate, otherwise the wss:// connection fails silently. So I added an empty HTTP 200 response, so that the browser would show an untrusted-certificate warning. This worked nicely through the company gateways. However, it was still impossible to access the server from the outside, because none of the proxy servers we use understand websockets or secure websockets. So while I had made a little progress within the different networks of the company, I still couldn’t access the socket server from outside the network.

It took me a while, but with the help of this article I figured out the trick. The author uses stunnel to decrypt the secure websocket traffic, but I prefer a one-stop shop, so: you need to set up a proxy facing the internet using the latest haproxy (with SSL support) and configure it as such:

global
        log 127.0.0.1   local0 info
        maxconn 4096
        user haproxy
        group haproxy
        daemon
        #debug
        #quiet

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        option redispatch
        option http-server-close
        maxconn 2000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

backend ws
        balance roundrobin
        option forwardfor # This sets X-Forwarded-For
        timeout connect 86400000s
        timeout server 86400000s
        server ws1 localhost:8080 weight 1 maxconn 1024 check

backend ww
        balance roundrobin
        option forwardfor # This sets X-Forwarded-For
        timeout connect 10s
        timeout server 30s
        server ww1 localhost

frontend https_proxy
        bind *:443 ssl crt /etc/ssl/cert.key_pem
        mode http
        acl is_websocket hdr(Upgrade) -i WebSocket
        acl is_websocket hdr_beg(Host) -i ws
        use_backend ws if is_websocket
        timeout client 86400000s
        default_backend ww

What this does is provide an https ‘server’ on the internet, filter out the wss requests, decrypt them and pass them on to the node.js instance. http://afitnerd.com really did a great job explaining their setup; only two things weren’t that clear. The first is that the client uses the wss:// protocol, so your client code looks like this:

var url = 'wss://host/';
var ws = new WebSocket(url, 'connect');
ws.onopen = onOpen;
ws.onmessage = onMessage;
ws.onclose = onClose;
ws.onerror = onError;

The second is that, behind the proxy, the server still provides a non-secure websocket service. So you need to use var http = require('http'); and not var https = require('https');.
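In other words, once haproxy terminates the SSL, the node.js side reduces to something like this (a sketch; port 8080 matches the ws backend in the haproxy config above):

// plain http: haproxy has already decrypted the traffic for us
var http = require('http');
var server = http.createServer(function(request, response) {
    response.writeHead(200);
    response.end();
});
server.listen(8080);

Now we have secure websockets working through all the proxies, gateways and routers between you and the ws server, so without further ado: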

Please accept the self-signed certificate, use a websocket enabled browser and click here.

OpenJDK7 vs. Oracle JDK7 with Geoserver

Introduction:

After installing a new server with Ubuntu 64-bit, I noticed that only OpenJDK packages are available nowadays. A quick search taught me that, though it is still possible, installing Oracle’s JDK is no longer advisable: it is hard to keep up to date, and people even claim that Oracle’s JDK is a security risk in itself. However, the GeoServer developers recommend the Oracle (Sun) JDK over OpenJDK. This is based on a test they did in September 2010 with Sun JDK 1.6 and OpenJDK 6.

Test setup:

Development has continued since then, and Java 7 is now available in both Oracle and OpenJDK flavours. I asked on twitter whether it still matters which version you use. Apparently it is not entirely clear, so I did a quick test with two virtual machines on an ESX cluster[*] I had handy. On one I installed the standard OpenJDK that comes with Ubuntu (7u9-2.3.3-0ubuntu1~12.04.1); on the other I installed the latest Oracle JDK and JRE (jdk7u9).

On the database server the Dutch top10 topographic map was loaded and styled with the SLD from NLextract. I used GeoServer’s layer preview function to get an idea of how fast the two JDKs are and how the results look. I first zoomed in step by step from the entire Netherlands to village level (8 steps), then changed the map size to 1280×1024 and zoomed back out. Obviously I had atop running on all three servers, to prevent accidentally starting a request before the servers were ready.
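Each zoom step is essentially a timed WMS GetMap request against GeoServer. Such a request can be timed with a few lines of node.js like this (the layer name, extent and SRS below are illustrative assumptions, not my exact setup):

var http = require('http');
// hypothetical GetMap request for the top10 layer at the large map size
var url = 'http://geoserver:8080/geoserver/wms?service=WMS&version=1.1.0' +
          '&request=GetMap&layers=top10&styles=&srs=EPSG:28992' +
          '&bbox=10000,300000,280000,620000&width=1280&height=1024' +
          '&format=image/png';
var start = Date.now();
http.get(url, function(res) {
    res.on('data', function() {}); // drain the image data
    res.on('end', function() {
        console.log('render time: ' + (Date.now() - start) / 1000 + ' s');
    });
});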

Results:

This produced the following graph:

Interestingly, Oracle’s render time tops out around 48 seconds, while OpenJDK gets a DNF at the lower zoom levels. In general you can say that Oracle is faster at parsing the millions of features from the database. At the more detailed zoom levels the differences disappear:

Render times in seconds per zoom level (– = no result):

                 z0      z1      z2      z3      z4      z5      z6      z7      z8
openjdk          60      41.3    11.97   3.65    1.44    0.552   0.294   0.183   –
openjdk-2        60      41.35   11.98   3.92    1.35    0.598   0.272   0.213   –
oracle           48.49   48.32   26.06   8       2.49    0.9     0.363   0.228   0.197
oracle-2         48.32   48.44   27.25   7.99    2.44    0.701   0.384   0.484   0.197
openjdk-large    60      28.47   9.3     3.53    1.62    0.802   –       –       –
openjdk-large-2  –       28.59   9.22    3.18    1.4     0.784   –       –       –
oracle-large     48.25   48.53   48.84   31.58   11.81   3.75    1.83    1.01    0.721
oracle-large-2   48.48   48.13   48.83   31.43   12.21   3.78    1.85    1.1     0.76

The render quality was on par, though. This is with OpenJDK:

and this with Oracle:

On zoom level 2, OpenJDK:

and Oracle:

Conclusion:

This quick and dirty test shows that a reasonably configured OpenJDK-based GeoServer instance is slower than an Oracle JDK-based one when rendering a lot of features out of a database. Since the image quality seems to be the same (no labels here), it shouldn’t be too much of a problem if you use GeoServer as the backend of a tiling server: the millions of tiles at the deeper zoom levels contain fewer features.

[*] Test machine specs, all running Ubuntu 12.04 64-bit:
Database server ‘cumulus’:
16 GB RAM, 4 CPUs (2.67 GHz Xeon X5650), PostgreSQL 9.1, PostGIS 2.0, 2 GB shared_buffers

GeoServer machines:
8 GB RAM, 4 CPUs (2.67 GHz Xeon X5650), Tomcat 7, GeoServer 2.2.1

 

 


OpenLayers and XKCD

Today Randall of XKCD created a most astonishing comic: a giant side view of a world with stuff in the air and underground. Looking at the comic I realised that it is a tiled map, though with a slightly odd tile scheme. Instead of a simple x/y it uses a more complex n/s, e/w numbering that starts from the middle, so the centre tile is 1n1e, the tile to its right is 1n2e and the tile to its left is 1n1w, and similarly when going up or down.

OpenLayers to the rescue: being open source, it is very easy to extend and modify its tiling scheme to work with the slightly weird XKCD system. See it in action here: http://research.geodan.nl/sites/xkcd

The slightly horrible code:

OpenLayers.Layer.XKCD = OpenLayers.Class(OpenLayers.Layer.XYZ, {
    getXYZ: function(bounds) {
        var res = this.getServerResolution();
        // regular tile column/row numbers
        var x = Math.round((bounds.left - this.maxExtent.left) /
                           (res * this.tileSize.w));
        var y = Math.round((this.maxExtent.top - bounds.top) /
                           (res * this.tileSize.h));
        // translate to the n/s, e/w scheme counting outward from the middle
        if (x >= 50) x = (x - 49) + 'e';
        else x = (50 - x) + 'w';
        if (y > 50) y = (y - 50) + 's';
        else y = (51 - y) + 'n';
        var z = this.getServerZoom();
        return {'x': x, 'y': y, 'z': z};
    }
});
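Using the layer is then just like using any other XYZ layer. A usage sketch (the tile URL template and the options are assumptions, not the actual demo configuration):

var map = new OpenLayers.Map('map');
var xkcd = new OpenLayers.Layer.XKCD(
    'Click and Drag',
    'http://example.com/tiles/${z}-${x}-${y}.png', // hypothetical tile URL
    { numZoomLevels: 4 }
);
map.addLayer(xkcd);
map.zoomToMaxExtent();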


 

Phoenix is finished

After several years of doing research with touch tables and GIS, we’ve finally built an application which is so intuitive that even children can use it:

[YouTube video]

This application (Phoenix) is a spatial discussion platform where people can discuss issues while standing around an interactive map. Different ideas we tried out in earlier prototypes have been polished into a consistent application which is extensible with plugins. I made a teaser movie for those who are not in the neighbourhood to play with it themselves:

[YouTube video]