Last year we were playing around with websockets and we:
“…figured that an interesting use-case would be to have a multi-user GIS where you can actually see where the other guy is, what he is seeing and together edit the map; think Google Docs for map editing.”
We showed a first version of our application ‘cow’ (concurrent online webgis). Since then we’ve been expanding its possibilities. An obvious one is being able to add, edit and delete objects on the map: symbols, lines and polygons. We created a version where you could mark the starting point of a wildfire with a portable device; that location would be shared over websockets, and the wildfire model would calculate the spread of the fire and return the resulting time polygons to all connected devices.
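The pattern behind this (one client reports an event, the server runs a model and pushes the result to every connected client) can be sketched as follows. This is an illustrative, in-memory sketch of the broadcast flow, not the actual cow code: the `Hub` class and `spreadModel` function are made-up names, and the model is a stand-in that simply grows rings around the ignition point.

```javascript
// Minimal sketch of the broadcast pattern: a field device reports an
// ignition point, the server runs the spread model and pushes the
// resulting time polygons to every connected client.
class Hub {
  constructor() { this.clients = new Set(); }
  connect(onMessage) {
    const client = { send: onMessage };
    this.clients.add(client);
    return client;
  }
  broadcast(message) {
    for (const client of this.clients) client.send(message);
  }
}

// Stand-in for the wildfire model: one ring of increasing radius
// (in degrees) around the ignition point per ten-minute time step.
function circlePolygon([lon, lat], radius, segments = 16) {
  const ring = [];
  for (let s = 0; s <= segments; s++) {
    const a = (2 * Math.PI * s) / segments;
    ring.push([lon + radius * Math.cos(a), lat + radius * Math.sin(a)]);
  }
  return ring;
}

function spreadModel(ignition, steps = 3) {
  return Array.from({ length: steps }, (_, i) => ({
    minutes: (i + 1) * 10,
    polygon: circlePolygon(ignition, 0.01 * (i + 1)),
  }));
}

const hub = new Hub();
const received = [];
hub.connect((msg) => received.push(msg)); // e.g. the tablet in the field
hub.connect((msg) => received.push(msg)); // e.g. the web viewer indoors

// The field device marks the ignition; every client gets the polygons.
const polygons = spreadModel([5.3, 52.1]);
hub.broadcast({ type: 'fire-spread', polygons });
```

In the real application the `Hub` would sit behind an actual websocket server, but the fan-out logic is the same.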
Another important possibility we explored was connecting Phoenix with websockets. This way you can expand Phoenix’s ‘same place, same time’ multi-user capabilities with ‘different place, same time’ collaboration. This could mean that someone in the field sees the results of discussions around the table on their smartphone as they are drawn into Phoenix, or that groups of experts in different countries can all contribute to the same map using multiple instances of Phoenix. What you see below is a combination of Phoenix with cow’s web viewer on a Windows 8 all-in-one PC, an Android tablet and an Android smartphone.
Our French intern Nils created a nifty extension to Phoenix: the Google Earth connector. It allows a user to control a seemingly unlimited number of 3D views from within Phoenix. The user has a control showing the current position in 3D space and the point in the middle of the 3D view (the viewpoint), and can change the 3D view by dragging the viewpoint, the control or the map. You can also load 3D models into Phoenix and position them on the map.
If you connect more than one 3D view, they are all linked to the initial control. Dragging a screen away from the control, however, creates a new control, allowing for multiple views with independent controls. This way multiple users can control their own 3D view, or an object or area can be shown from different angles in an intuitive way.
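The geometry behind such a control is straightforward: from the camera’s position on the map and the viewpoint it looks at, you can derive the heading and range that a Google Earth style LookAt needs. The sketch below is an illustrative flat-earth approximation (fine for the short distances a drag covers); the function and field names are assumptions, not Phoenix’s actual API.

```javascript
// Derive heading (degrees, 0 = north) and range (metres) for a
// LookAt from two [lon, lat] map points: the camera position and
// the viewpoint. Flat-earth approximation for small distances.
const M_PER_DEG = 111320; // metres per degree of latitude, roughly

function lookAtFrom(camera, viewpoint) {
  const [camLon, camLat] = camera;
  const [vpLon, vpLat] = viewpoint;
  const dLat = vpLat - camLat;
  // shrink longitude differences by cos(latitude)
  const dLon = (vpLon - camLon) * Math.cos((camLat * Math.PI) / 180);
  const headingRad = Math.atan2(dLon, dLat); // east of north
  const heading = ((headingRad * 180) / Math.PI + 360) % 360;
  const range = Math.hypot(dLat, dLon) * M_PER_DEG;
  return { heading, range };
}

// Dragging the viewpoint slightly east of the camera over Amsterdam:
const view = lookAtFrom([4.90, 52.37], [4.95, 52.37]);
```

Each drag of the viewpoint, the control or the map then just recomputes this pair and sends it to the connected 3D views.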
After several years of doing research with touch tables and GIS, we’ve finally built an application which is so intuitive that even children can use it:
This application (Phoenix) is a spatial discussion platform where people can discuss issues while standing around an interactive map. Ideas we tried out in earlier prototypes have been polished into a consistent application that is extensible with plugins. I made a teaser movie for those who are not in the neighbourhood to play with it themselves:
Climate models have several scenarios to calculate different futures, depending on some starting parameters. The outcomes of these scenarios can be visualized with maps. Comparing maps of the same variable across scenarios is one way to quickly grasp the impact of the starting parameters.
We’ve built a tool for the Surface to browse these different maps with a flick of the wrist: the wiekwiek. You select a variable from the list and a map showing the current situation appears. By turning the wiekwiek you can display the different scenarios and visually compare their impact.
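The core of a rotation-driven selector like this is mapping the wiekwiek’s angle onto one of N maps: divide the circle into equal sectors and pick the map for the sector the current angle falls in. A minimal sketch, with made-up scenario names for illustration:

```javascript
// Map a rotation angle (degrees, may be negative or > 360) onto one
// of `count` equal sectors of the circle.
function scenarioIndex(angleDeg, count) {
  const sector = 360 / count;
  const normalized = ((angleDeg % 360) + 360) % 360;
  return Math.floor(normalized / sector);
}

// Illustrative list: the current situation plus three scenario maps.
const maps = ['current situation', 'scenario 1', 'scenario 2', 'scenario 3'];

// A bit more than a quarter turn selects the second map.
const shown = maps[scenarioIndex(95, maps.length)];
```

Normalizing with the double modulo keeps the index stable when the user turns the wiekwiek backwards past zero.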
Influenced by Martijn, we use quite a lot of OpenStreetMap data in various projects. We increasingly use it for background maps in our web applications, but also as raw data for analysis in projects like Tripod. We created a short movie to show off various strengths of OpenStreetMap and how and why we use it at Geodan Research.
We’ve been experimenting with controlling applications on the Surface with tangible objects. Our first result uses a compass to change the orientation of the map. It relies on the fiducial tags that come with the Surface: if you place the compass on the Surface, it is recognized as a control device, and its orientation is applied to the NASA World Wind globe.
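The glue between tag and globe is a small conversion: tag trackers typically report a tag’s orientation in radians, while a globe view wants a heading in degrees in [0, 360). A hypothetical sketch of that handler, where the `globe` object merely stands in for the World Wind view:

```javascript
// Stand-in for the globe view; only the heading matters here.
const globe = {
  heading: 0,
  setHeading(deg) { this.heading = deg; },
};

// Convert a tag's orientation (radians) to a normalized heading.
function headingFromTag(orientationRad) {
  const deg = (orientationRad * 180) / Math.PI;
  return ((deg % 360) + 360) % 360; // clamp into [0, 360)
}

// Compass placed on the table, rotated a quarter turn anticlockwise:
globe.setHeading(headingFromTag(-Math.PI / 2));
```

Every orientation update from the recognized tag then re-runs this conversion, so turning the physical compass turns the globe with it.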