
3D Stereoscopic Interactive User Interfaces

In the last few years, stereoscopic display devices have entered the consumer market. Together with the consumer release of stereoscopic head-mounted displays such as the Oculus Rift, this has renewed interest in technologies capable of presenting three-dimensional content to the user. These displays work by simulating the way the human brain generates the perception of depth: they render two computer-generated pictures of the scene from slightly different viewpoints and ensure, by means of shutter glasses or two separate screens, that each picture is seen only by the corresponding eye. The 3STARS project pursued research in this domain by designing new interaction techniques and investigating how stereoscopic displays can support interactive experiences.
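The two-viewpoint rendering described above can be illustrated with a minimal sketch. The function below (names and the 6.4 cm interpupillary distance are illustrative assumptions, not values from the project) offsets a tracked head position along the viewer's right vector to obtain the two per-eye camera positions from which the stereo pair is rendered:

```python
import numpy as np

def stereo_eye_positions(head_pos, right_vec, ipd=0.064):
    """Offset the head position by half the interpupillary distance (IPD)
    along the viewer's right vector to obtain per-eye camera positions.
    ipd=0.064 m is a commonly cited average; real values vary per user."""
    head_pos = np.asarray(head_pos, dtype=float)
    right_vec = np.asarray(right_vec, dtype=float)
    right_vec = right_vec / np.linalg.norm(right_vec)
    half = 0.5 * ipd
    left_eye = head_pos - half * right_vec
    right_eye = head_pos + half * right_vec
    return left_eye, right_eye

# A viewer 1.7 m tall looking down the -z axis, right vector along +x:
left, right = stereo_eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
```

Each eye's image is then rendered from its own position (typically with an off-axis projection), and the display routes each image to the matching eye.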

In the first stage of the project, the researcher surveyed the state of the art in stereoscopic 3D user interfaces. Following this literature review, he set out to address issues commonly associated with stereoscopic visualisation. Specialist equipment explicitly designed to manipulate 3D content in the full six degrees of freedom does exist, but reliance on such equipment has contributed to 3D user interfaces remaining confined to research laboratories and industry professionals. The combination of multitouch input and 3D displays, for example, has received a lot of attention in recent years, thanks to the richness of gestural interaction and its applicability to the analysis of scientific data. However, the requirement of touching the screen in order to interact leads directly to the “vergence-accommodation conflict”: stereoscopic content can be perceived to float in front of the screen or to sit behind it, yet users must still touch the display surface to express an input. This introduces a mismatch between the real depth of the screen and the perceived depth of the content, so that by fixating on their fingers users risk losing the stereoscopic effect altogether.
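The depth mismatch at the heart of this conflict follows from simple similar-triangle geometry. The sketch below (a simplified model with assumed eye separation and viewing distance, not parameters from the studies) computes where a stereoscopic point is perceived given its on-screen parallax; a point drawn with negative parallax is perceived in front of the screen, which is exactly where touching it becomes impossible:

```python
def perceived_depth(parallax, eye_sep=0.064, screen_dist=0.6):
    """Perceived distance from the eyes to a stereoscopic point, given the
    on-screen parallax (right-eye image x minus left-eye image x, in metres).
    Positive parallax -> behind the screen, negative -> in front, zero -> on it.
    Derived from similar triangles: z = D * e / (e - p)."""
    if parallax >= eye_sep:
        raise ValueError("parallax >= eye separation: the eye rays diverge")
    return screen_dist * eye_sep / (eye_sep - parallax)
```

For example, with these defaults a parallax of -0.064 m places the point 0.3 m from the eyes, so the eyes converge at 0.3 m while still focusing (accommodating) on the screen at 0.6 m: the vergence-accommodation conflict.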

To mitigate these issues, the researcher proposed an indirect multitouch interaction paradigm. A tablet can be considered an indirect device when it is used to manipulate content shown on another display, such as a stereoscopic 3D one. A distinct advantage over direct 3D multitouch installations is ubiquity: any tablet can potentially be turned into a device capable of full 6DOF input. Furthermore, because users do not touch the display area itself, they do not risk missing information by occluding it with their own hands. Indirect techniques also retain the richness of multitouch interaction and are less prone to context switches than the traditional mouse-and-keyboard combination. Through a series of user studies based on a 3D docking task, the researcher compared two indirect techniques designed for the tablet with two state-of-the-art direct touch techniques. Results showed that the indirect techniques are as effective as the best of the direct touch techniques, while being reported as less tiring and providing a better quality of experience. In a navigation task, the indirect technique led to a significant improvement of 30% fewer errors. Results of these studies were published at the IEEE Symposium on 3D User Interfaces in 2015 and 2016.
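One way such an indirect paradigm can work is to translate gesture deltas measured on the tablet into manipulations of the object on the stereoscopic display. The sketch below is a hypothetical mapping, not the techniques evaluated in the studies: a one-finger drag translates the object in the view plane, a pinch moves it along the view axis, and a two-finger twist rotates it. The gain values are illustrative.

```python
def map_indirect_touch(dx, dy, pinch_delta, twist_delta,
                       pan_gain=0.002, depth_gain=0.001, twist_gain=1.0):
    """Map tablet gesture deltas to a 3D manipulation on a remote display.
    dx, dy: one-finger drag in tablet pixels -> view-plane translation (m).
    pinch_delta: change in finger spread (pixels) -> view-axis translation (m).
    twist_delta: two-finger rotation (radians) -> rotation about the view axis.
    Gains are illustrative; a real system would tune them per task."""
    translation = (pan_gain * dx, -pan_gain * dy, depth_gain * pinch_delta)
    rotation_z = twist_gain * twist_delta
    return translation, rotation_z
```

Because the tablet is only a controller, the user's hands never occlude the stereoscopic content and never have to reach toward a point perceived off the screen plane.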

A side project focusing on an indirect stereoscopic 3DUI based on a foot tracker was accepted to the 2014 IEEE Symposium on 3D User Interfaces. This work used a tracking system installed below the user’s desk, which captured the user’s feet movements and mapped them to actions affecting the 3D environment. The researcher was invited to present this work in Minneapolis on 30 March 2014. Long-term results include the potential of using commonplace multitouch devices as 3D input devices and the design of better interaction techniques for this class of devices.
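A foot-to-action mapping of the kind described above might, for instance, turn a foot's horizontal displacement into a navigation velocity. The following sketch is an illustrative example under assumed parameters, not the mapping used in the published system; the dead zone filters out small postural shifts that a seated user makes unintentionally:

```python
import math

def foot_to_velocity(foot_pos, rest_pos, dead_zone=0.03, gain=2.0):
    """Map the tracked foot's horizontal displacement (metres) from its
    rest position to a 2D navigation velocity. Displacements smaller than
    the dead zone are ignored; beyond it, speed grows linearly with gain."""
    dx = foot_pos[0] - rest_pos[0]
    dz = foot_pos[1] - rest_pos[1]
    mag = math.hypot(dx, dz)
    if mag < dead_zone:
        return (0.0, 0.0)
    scale = gain * (mag - dead_zone) / mag
    return (scale * dx, scale * dz)
```

A mapping like this leaves both hands free for other input, which is one of the appeals of feet as an auxiliary modality for desktop 3D interaction.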

Substitutional Reality

In the second part of the research fellowship, the rise in popularity of stereoscopic head-mounted displays prompted the researcher to appraise their research potential. A promising research direction emerged, and the researcher decided to pursue this timely project with the goal of being at the forefront of this branch of 3D stereoscopic interfaces.

Today’s interactive experiences that simulate reality through stereoscopic head-mounted displays suffer from two main issues. First, these experiences require suitable physical spaces if users need to walk around. Second, interacting with objects requires complex augmentation of either the environment or the user. Users who want to participate in these experiences in their own homes are unlikely to have these conditions available. The researcher therefore proposed an investigation of the concept of “Substitutional Reality” in the context of Virtual Reality. The idea builds on “passive haptics”, the notion of matching the 3D objects users see in the virtual environment with real physical props, so that when users reach for a virtual object they receive passive touch feedback from touching a real one. Past research used this concept to enhance the sense of really feeling “present” in a virtual environment. Substitutional Reality instead adapts the virtual environment to the layout of the physical environment, so that real objects are substituted by virtual equivalents. The researcher, together with a colleague, formalised this concept and investigated the extent to which a mismatch between a physical object and its virtual representation can be tolerated before it breaks the illusion of being in a virtual environment. Perfect matches are rarely practical; on the other hand, a tolerable mismatch enables designers to create a wide variety of Substitutional Environments.

The results helped to identify which physical properties (such as shape, size, material and temperature) had the most impact in supporting or breaking this illusion. We collected our findings into a set of design guidelines to support future designers of Substitutional Reality experiences. These results have been accepted, at the time of writing, for inclusion in the programme of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2015), held in Seoul, South Korea, in April 2015. Long-term results could include the further exploration of this design space, eventually resulting in systems able to use the user’s physical environment to dynamically alter the Virtual Environment the user is immersed in. This would allow users to participate in Virtual Reality experiences in their own homes.
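A system that substitutes physical objects dynamically would need some way of ranking virtual candidates against a physical object. The sketch below is a hypothetical heuristic over the properties mentioned above; the property encoding (values normalised to [0, 1]) and the weights are illustrative assumptions, not values derived from the study:

```python
def substitution_score(physical, virtual, weights=None):
    """Heuristic mismatch score between a physical object and a virtual
    candidate; lower is better. Each property is assumed to be encoded as a
    value normalised to [0, 1]. Weights are illustrative, not empirical."""
    weights = weights or {"shape": 0.4, "size": 0.3,
                          "material": 0.2, "temperature": 0.1}
    return sum(w * abs(physical[prop] - virtual[prop])
               for prop, w in weights.items())

def best_substitute(physical, candidates):
    """Pick the virtual candidate with the smallest mismatch score."""
    return min(candidates, key=lambda v: substitution_score(physical, v))
```

Weighting shape and size more heavily reflects the intuition that geometric mismatches are the ones users notice first when they reach out and touch a prop, though a real system would calibrate such weights empirically.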

In addition to these research activities, the researcher was invited to give job talks at the Universities of Portsmouth, York and Birmingham. The University of Portsmouth subsequently offered him a Lectureship. Following the end of his contract with Lancaster University, he is continuing the research projects started during the fellowship at the University of Portsmouth, where he is also engaged in teaching activities.



A.L. Simeone.
Indirect Touch Manipulation for Interaction with Stereoscopic Displays
In Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI 2016). IEEE, pp. 13-22.


A.L. Simeone.
Substitutional Reality: Towards a Research Agenda
In Proceedings of the 1st Workshop on Everyday Virtual Reality (WEVR 2015). IEEE, pp. 19-22.


A.L. Simeone, E. Velloso, and H. Gellersen.
Substitutional Reality: Using the Physical Environment to Design Virtual Reality Experiences
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2015). ACM, pp. 3307-3316.


A.L. Simeone and H. Gellersen.
Comparing Direct and Indirect Touch in a Stereoscopic Interaction Task
In Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI 2015). IEEE, pp. 105-108.


A.L. Simeone, E. Velloso, J. Alexander, and H. Gellersen.
Feet Movement in Desktop 3D Interaction
In Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI 2014). IEEE, pp. 71-74.

