Last year we were playing around with websockets and we:
“…figured that an interesting use-case would be to have a multi-user GIS where you can actually see where the other guy is, what he is seeing and together edit the map; think google-docs for map editing.”
We showed a first version of our application ‘cow’ (concurrent online webgis). Since then we’ve been expanding its possibilities. An obvious one is being able to add, edit and delete objects on the map: symbols, lines and polygons. We created a version where you could mark where a wildfire started with a portable device; that location would be shared over websockets, and the wildfire model would calculate the spread of the fire and return the resulting time polygons to all connected devices.
Another important possibility we explored was connecting Phoenix with websockets. This way you can expand Phoenix’ ‘same place, same time’ multi-user capabilities with ‘different place, same time’ collaboration. This could mean that someone in the field can see the results of discussions around the table as they are drawn into Phoenix on his smartphone, or that different groups of experts in different countries can all contribute to the same map using multiple instances of Phoenix. What you see below is a combination of Phoenix with cow’s webviewer on a Windows 8 all-in-one PC, an Android tablet and an Android smartphone.
Our French intern Nils created a nifty extension to Phoenix: the Google Earth connector. This allows a user to control a seemingly unlimited number of 3D views from within Phoenix. The user has a control showing the current position in the 3D space and the point in the middle of the 3D view (the viewpoint). The user can change the 3D view by dragging either the viewpoint, the control or the map. You can also load 3D models into Phoenix and position them on the map.
If you connect more than one 3D view, they will all be connected to the initial control. However, dragging a screen away from the control will create a new control, allowing for multiple views with independent controls. This way multiple users can control their own 3D view, or an object or area can be shown from different angles in an intuitive way.
After several years of doing research with touch tables and GIS we’ve finally built an application which is so intuitive that even children can use it:
This application (Phoenix) is a spatial discussion platform where people can discuss issues while standing around an interactive map. Different ideas we tried out in earlier prototypes have been polished into a consistent application which is extensible with plugins. I made a teaser movie for those who are not in the neighbourhood to play with it themselves:
HTML5 provides the geolocation API. It is commonly used to move the map to the position of the user. Quite a useful feature, but obviously it is not limited to this. I made a quick HTML5 site which gives the height of your location according to the AHN, the Dutch height model.
It is very simple: the geolocation API provides a coordinate, and I use that coordinate to do a GetFeatureInfo request on the AHN service of EduGIS. To make it slightly more interesting, it shows a cow if you’re above sea level, a nice coral reef if you’re below, and a foggy picture if the app doesn’t know where you are or doesn’t have a height for that location.
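The lookup boils down to building a GetFeatureInfo request around the coordinate the geolocation API hands you. A minimal sketch, assuming a WMS 1.1.1 endpoint; the base URL and layer name here are placeholders, not the actual EduGIS values:

```javascript
// Build a GetFeatureInfo URL for a single coordinate by querying a
// 1x1-pixel map around it. Base URL and layer name are placeholders.
function buildGetFeatureInfoUrl(baseUrl, lon, lat) {
  const d = 0.0001; // tiny bounding box around the position
  const params = new URLSearchParams({
    SERVICE: 'WMS',
    VERSION: '1.1.1',
    REQUEST: 'GetFeatureInfo',
    LAYERS: 'ahn',          // placeholder layer name
    QUERY_LAYERS: 'ahn',
    SRS: 'EPSG:4326',
    BBOX: [lon - d, lat - d, lon + d, lat + d].join(','),
    WIDTH: '1',
    HEIGHT: '1',
    X: '0',                 // query the single pixel we requested
    Y: '0',
    INFO_FORMAT: 'text/plain',
  });
  return baseUrl + '?' + params.toString();
}

// In the browser, the geolocation API supplies the coordinate:
// navigator.geolocation.getCurrentPosition(pos => {
//   const url = buildGetFeatureInfoUrl('https://example.org/wms',
//                                      pos.coords.longitude,
//                                      pos.coords.latitude);
//   fetch(url).then(/* parse the height out of the response */);
// });
```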
Obviously this only works for the Netherlands, since the AHN only provides heights for the Netherlands.
You can find your own height here: http://research.geodan.nl/sites/hoogte/
After someone saw my BAG building data movie, he asked if it would be possible to create an interactive map of the entire Netherlands. This made me think, since creating the movie had been very time-consuming. The problem is that there are about 6 million buildings in the BAG database, which makes the data a bit unwieldy to use directly in the browser. The old-fashioned way to do time series on maps involves creating a new layer for each moment in time (each year, in this case). That would mean over 150 layers to be loaded on the map, switching between them for the ‘time sliding’ effect. Apart from the hideous task of setting up 150 nearly identical layers, it would end up with too many images for a browser to handle.
However, modern browsers have the <canvas> element, which allows for the manipulation of single pixels. So I figured that if I could encode the building dates in a PNG and use canvas to display only those pixels which represent a building older than the given date, it should be possible to time-slide through the buildings. Fortunately the fancy new mapping library Leaflet.js has a canvas tile layer built in. The BAG data is already available through EduGIS, so I only needed to encode the data differently in the PNGs.
The encoding is very simple: per pixel there are 4 values available: red, green, blue and alpha. Since I only needed to encode 200-odd values, I used just the red value. The years before 1850 are encoded as groups, since the data is so sparse; after 1850 each year is encoded individually, which means that from 1850 onwards the red value increases by 1 per year. The client retrieves the encoded PNGs as normal tiles, which look like this:
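The mapping between years and red values can be sketched like this. Note that the actual group boundaries before 1850 are not given in the post, so the ones below are made up for illustration:

```javascript
// Year <-> red-value encoding sketch. Red 0 means "no building";
// the pre-1850 group boundaries are assumed, not the real ones.
const BASE_YEAR = 1850;
const PRE_1850_GROUPS = [1600, 1700, 1800, 1850]; // assumed group upper bounds

function encodeYear(year) {
  if (year >= BASE_YEAR) {
    // 1850 gets the first value after the groups, then +1 per year
    return PRE_1850_GROUPS.length + (year - BASE_YEAR) + 1;
  }
  // sparse early years fall into coarse groups: red 1..groups.length
  return PRE_1850_GROUPS.findIndex(bound => year < bound) + 1;
}

function decodeYear(red) {
  if (red === 0) return null;                // no building at this pixel
  if (red <= PRE_1850_GROUPS.length) {
    return PRE_1850_GROUPS[red - 1];         // group upper bound
  }
  return BASE_YEAR + (red - PRE_1850_GROUPS.length - 1);
}
```

With four pre-1850 groups and one value per year after that, roughly 170 distinct values are needed, comfortably within the 0–255 range of a single band.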
The image appears grey because I kept the green and blue values of the PNG the same as the red. This image is loaded into canvas, and the imageData is retrieved using ctx.getImageData(0, 0, 256, 256) and stored as a jQuery data object on the canvas. This is important, since for the visual effect we will manipulate the imageData on the canvas and we want to keep track of the original values. Once the imageData is attached to the canvas, the colors are calculated: the code takes the original values, compares them to the current year and decides whether or not to show the pixel, and in which color.
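The per-pixel decision can be sketched as a pure function over the RGBA byte array. For brevity this sketch assumes the red value encodes the year as simply 1850 + red, leaving out the pre-1850 groups; the function name and display color are illustrative, not from the original code:

```javascript
// Given the original imageData bytes (RGBA, 4 bytes per pixel) and the
// current slider year, produce the bytes to display: pixels whose encoded
// build year is at or before the slider year get the display color,
// everything else stays fully transparent.
function applyYearFilter(original, sliderYear, color) {
  const out = new Uint8ClampedArray(original.length); // starts all zeros
  for (let i = 0; i < original.length; i += 4) {
    const red = original[i];
    const buildYear = red === 0 ? null : 1850 + red; // 0 = no building
    if (buildYear !== null && buildYear <= sliderYear) {
      out[i] = color[0];
      out[i + 1] = color[1];
      out[i + 2] = color[2];
      out[i + 3] = 255; // fully opaque
    }
    // else: leave the pixel at 0,0,0,0 (transparent)
  }
  return out;
}
```

In the browser the result would be written back to the tile with ctx.putImageData, while the untouched original array (stored on the canvas) is reused for the next slider position.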
Since only the grey tile is needed, the actual sliding through time is really fast, because no additional data needs to be retrieved. With 4 bands of 255 values each you can encode an insane amount of data into a PNG, readily available through canvas for direct manipulation. Apart from time sliding, detailed representation of DEM data is an obvious use case.
Spatial data has traditionally been displayed using maps and tables. Maps, though good at showing the spatial extent, are not always optimal for showing aggregated data. The human mind can easily be tricked into believing that a bigger country has a bigger share of the pie. Tables show the individual records, but still lack the overview of aggregated data; tables are also notoriously hard to read.
To view the information in an aggregated form one has to build complex queries. These are often slow and do not scale well, and the user either has to interpret the resulting map or compare numbers in a table. The first is not very precise, the second not very intuitive.
Within EuroGeoSource, a cross-European project that allows users to identify, access, use and reuse aggregated geographical information on geo-energy and mineral resources, we have come up with a new way. The users are not typical GIS experts and do not have the knowledge to build custom spatial queries. Instead, we have determined the most important types of aggregation (e.g. by country, by deposit type, etc.).
The user can search for one or more commodities. Using MapQuery and gRaphael, an SVG chart library, we then present these commodities aggregated in (pie) charts. The user can use the charts as a selection method; for instance, clicking on a country shows all the occurrences of the commodity in that country on the map.
The result is a fast and intuitive way to search for aggregated data, providing an overview of the distribution of data across multiple domains while still giving access to detailed data in the traditional GIS manner.
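The aggregation feeding the pie charts amounts to counting occurrences per key. A hypothetical sketch (the data, field names and function name are made up for illustration; the real implementation sits behind MapQuery and gRaphael):

```javascript
// Count commodity occurrences per aggregation key (country, deposit
// type, ...); the resulting counts become the pie-chart slices, and
// clicking a slice filters the map by that key value.
function aggregateBy(occurrences, key) {
  const counts = {};
  for (const occ of occurrences) {
    const k = occ[key];
    counts[k] = (counts[k] || 0) + 1;
  }
  return counts;
}

// Made-up example data:
const occurrences = [
  { commodity: 'lignite', country: 'NL', depositType: 'sedimentary' },
  { commodity: 'lignite', country: 'DE', depositType: 'sedimentary' },
  { commodity: 'lignite', country: 'DE', depositType: 'sedimentary' },
];
// aggregateBy(occurrences, 'country') -> { NL: 1, DE: 2 }
```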
Climate models have several scenarios to calculate different futures, depending on some starting parameters. The outcomes of these scenarios can be visualized with maps. Comparing maps of the same variable in different scenarios is one way to quickly grasp the impact of the starting parameters.
We’ve built a tool for the Surface to browse these different maps with a flick of the wrist: the wiekwiek. You select a variable from the list and a map showing the current situation appears. By turning the wiekwiek one can display the different scenarios and visually compare their impact.
We created a control on the surface which allows the user to view 360° photos on the secondary screen. You place the control object on the map and it will load the nearest photo onto the secondary screen. When you rotate the object the view will rotate with it.
Cyclorama viewer on the Surface
We created a short movie to show how it works:
A few months ago we experimented with controlling the Surface with real objects. This was generally seen as a success, and there was a need for more objects. Our initial objects were real objects which we equipped with a Surface tag. However, it was difficult to find objects which were more or less the right size and were a nice metaphor for the action they were controlling.
This summer we had an intern who had learned how to create models from modeling foam. We thought of some metaphors to be created, bought the foam and let her loose. She created a whole set of objects:
Influenced by Martijn, we use quite a lot of OpenStreetMap data in various projects. We use it more and more as background maps for our web applications, but also use the raw data for analysis in projects like Tripod. We created a short movie to show off various strengths of OpenStreetMap and how and why we use it at Geodan Research.