Using your hand to visualise earthquake data in 3D
Using UMAJIN, here we can see a month of earthquake strikes over the continental US, plotted by x/y location and depth, with the radius of each point representing the intensity of the earthquake. A 3D camera acts as the input device: as I move my hand left and right, I move through time, which highlights in magenta the earthquakes nearest to the current time. When I close my hand, I can rotate and zoom the earthquake points.
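To make that mapping concrete, here is a minimal sketch in plain Python (not the UMAJIN API) of how each earthquake record might be turned into a 3D point; the field names and the sizing curve are assumptions for illustration only.

```python
# A minimal sketch, assuming a simple per-quake record. Not the UMAJIN API;
# field names and the sizing curve are illustrative.
from dataclasses import dataclass

@dataclass
class Quake:
    lon: float        # degrees, mapped to x
    lat: float        # degrees, mapped to y
    depth_km: float   # mapped to z (below the surface)
    magnitude: float  # drives the point radius
    time: float       # seconds since the start of the month
    highlighted: bool = False

def to_point(q: Quake, scale: float = 1.0):
    """Return (x, y, z, radius) for one earthquake."""
    x = q.lon * scale
    y = q.lat * scale
    z = -q.depth_km * scale            # deeper strikes sit further down
    radius = 0.5 + q.magnitude * 0.3   # hypothetical sizing curve
    return x, y, z, radius
```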
One of the really interesting things about this style of interaction is that it uses quite a natural grab-and-drag gesture when you want to manipulate the camera, while moving your hand with your fingers relaxed lets you navigate the temporal space.
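That split between the two gestures amounts to a simple mode switch. The sketch below is illustrative Python, not UMAJIN's actual event model; the hand and camera objects, their fields, and the one-hour highlight window are all assumptions.

```python
# A rough sketch of the two-mode interaction, assuming a hand-tracking frame
# that reports a normalised palm position, per-frame deltas and a grab flag.
# Hypothetical objects, not UMAJIN's event model.

def update(hand, camera, quakes, month_seconds: float, window: float = 3600.0):
    if hand is None:
        return  # no hand in view: leave the scene as it is

    if hand.is_grabbing:
        # Closed hand: drag to rotate, push/pull to zoom.
        camera.orbit(hand.delta_x, hand.delta_y)
        camera.zoom(hand.delta_z)
    else:
        # Relaxed hand: left/right position scrubs through the month.
        current_time = hand.palm_x * month_seconds   # palm_x assumed in [0, 1]
        for q in quakes:
            # Highlight quakes within an hour of the scrubbed time in magenta.
            q.highlighted = abs(q.time - current_time) < window
```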
Having that natural control of the 3D space turns out to be important: visualising the depth of each strike would otherwise have required a different visual trick (such as opacity), yet with the fluid rotation provided by the gesture the human brain can easily distinguish the deep earthquakes from the shallow ones.
It would be interesting to let the user dynamically change what the ‘spatial axis’ represents, gesturing up and down through a series of options such as position, relative size, time and depth, to hunt for patterns in the way the data is organised.
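One way to picture that idea: an up or down flick cycles the attribute bound to the chosen axis, and the scene re-sorts itself around the new mapping. The sketch below is a hypothetical Python outline of such a selector, not anything implemented in the demo; the option list and quake fields are assumptions.

```python
# A sketch of the proposed 'spatial axis' remapping: an up/down gesture cycles
# through which attribute drives one axis of the plot. Purely illustrative.

AXIS_OPTIONS = ["position", "relative_size", "time", "depth"]

class AxisSelector:
    def __init__(self):
        self.index = 0

    def on_vertical_gesture(self, direction: int) -> str:
        """direction is +1 for an upward flick, -1 for a downward flick."""
        self.index = (self.index + direction) % len(AXIS_OPTIONS)
        return AXIS_OPTIONS[self.index]

def axis_value(q, option: str) -> float:
    # Map the selected option to a value from a quake record (fields assumed).
    if option == "position":
        return q.lon
    if option == "relative_size":
        return q.magnitude
    if option == "time":
        return q.time
    return q.depth_km
```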