Category Archives: Publications

“Altering User Movement Behaviour in Virtual Environments” accepted at IEEE VR 2017 and in TVCG!

This paper explores how we can alter Virtual Environments to affect the trajectories users take through them. See the video:

Publications

↑ Jump to Top
PDF

A.L. Simeone, I. Mavridou, and W. Powell.
Altering User Movement Behaviour in Virtual Environments
IEEE Transactions on Visualization and Computer Graphics, vol. 19(5), 2017. IEEE, in print.

PDF Version of Document | Video

In immersive Virtual Reality systems, users tend to move in a Virtual Environment as they would in an analogous physical environment. In this work, we investigated how user behaviour is affected when the Virtual Environment differs from the physical space. We created two sets of four environments each, plus a virtual replica of the physical environment as a baseline. The first set focused on aesthetic discrepancies, such as a water surface in place of solid ground. The second focused on mixing immaterial objects with objects paired to tangible ones, for example by barring an area with walls or obstacles. We designed a study where participants had to reach three waypoints, laid out so as to prompt a decision on which path to follow based on the conflict between the mismatching visual stimuli and their awareness of the real layout of the room. We analysed their performance to determine whether their trajectories deviated significantly from the shortest route. From the results obtained and our observations, we derive guidelines on how to alter user movement behaviour in Virtual Environments.
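The core measurement in this kind of study is how far a walked trajectory deviates from the shortest route between waypoints. A minimal sketch of such a comparison (function names and the polyline representation are illustrative assumptions, not the paper's actual analysis code):

```python
import math

def path_length(points):
    """Total length of a walked trajectory, given as a list of (x, y) samples."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def deviation_from_shortest(points):
    """Relative detour of a trajectory: 0.0 means the user walked the
    straight line between the first and last waypoint."""
    shortest = math.dist(points[0], points[-1])
    return path_length(points) / shortest - 1.0

# A right-angle detour around an obstacle is ~41% longer than the diagonal.
detour = deviation_from_shortest([(0, 0), (0, 1), (1, 1)])
```

A significant positive deviation on environments with, say, a virtual water surface would then indicate that the visual stimulus altered the chosen path.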

“Three-Point Interaction: Combining Bi-Manual Direct Touch with Gaze” paper accepted at AVI 2016

This paper explores the use of eye gaze as a third channel supporting bimanual touch interaction. We explored whether users can perform three actions in parallel and how effective they are. Results show that for 13% of the time, users acted in complete parallelism. We discuss how user interfaces could benefit from this third channel.

I will be presenting this paper at the ACM Advanced Visual Interfaces Conference to be held in Bari, Italy this June (7-10), for which I was also chosen to serve as a Program Committee member.


A.L. Simeone, A. Bulling, J. Alexander, and H. Gellersen.
Three-Point Interaction: Combining Bi-Manual Direct Touch with Gaze
In Proceedings of Advanced Visual Interfaces 2016 (AVI 2016). ACM, pp. 168-175.

PDF Version of Document | Video

The benefits of two-point interaction for tasks that require users to simultaneously manipulate multiple entities or dimensions are widely known. Two-point interaction has become common, e.g., when zooming or pinching using two fingers on a smartphone. We propose a novel interaction technique that implements three-point interaction by augmenting two-finger direct touch with gaze as a third input channel. We evaluate two key characteristics of our technique in two user studies. In the first, we used the technique for object selection. In the second, we evaluated it in a 3D matching task that requires simultaneous continuous input from the fingers and the eyes. Our results show that in both cases participants learned to interact with three input channels without cognitive overload. Participants' performance tended towards fast selection times in the first study and exhibited parallel interaction in the second. These results are promising and show that there is scope for additional input channels beyond two-point interaction.
@inproceedings{Simeone2016ThreePoint,
author={Simeone, Adalberto L. and Bulling, Andreas and Alexander, Jason and Gellersen, Hans},
booktitle={Advanced Visual Interfaces 2016},
series = {AVI 2016},
title={Three-Point Interaction: Combining Bi-Manual Direct Touch with Gaze},
year={2016},
pages = {168--175},
numpages = {8},
doi = {10.1145/2909132.2909251},
month={June},
organization={ACM},
}
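As a sketch of how the three channels could be fused per input frame (the event type and fusion function below are illustrative assumptions, not the paper's implementation):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Point:
    x: float
    y: float

@dataclass
class ThreePointEvent:
    """One frame of three-point input: two direct-touch points plus gaze."""
    touch_a: Point
    touch_b: Point
    gaze: Point

def fuse_inputs(touches: List[Point], gaze: Point) -> Optional[ThreePointEvent]:
    """Pair the two active touch points with the current gaze sample;
    returns None until both fingers are down."""
    if len(touches) < 2:
        return None
    return ThreePointEvent(touches[0], touches[1], gaze)

# Two fingers pinch on one object while the eyes point at a third location.
event = fuse_inputs([Point(100, 200), Point(180, 210)], Point(400, 120))
```

Treating gaze as just another per-frame input point is what lets the two studies measure whether users actually drive all three channels in parallel.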

“Indirect Touch Interaction for Manipulation with Stereoscopic Displays” paper accepted at 3DUI 2016

My full paper on using Indirect Touch multi-touch interaction techniques for stereoscopic displays has been accepted at the IEEE Symposium on 3D User Interfaces 2016, to be held in Greenville, SC, USA.


A.L. Simeone.
Indirect Touch Manipulation for Interaction with Stereoscopic Displays
In Proceedings of 3D User Interfaces 2016 (3DUI 2016). IEEE, pp. 13-22.

PDF Version of Document | Presentation Slides | Video

Research on 3D interaction has explored the application of multi-touch technologies to 3D stereoscopic displays. However, the ability to perceive 3D objects at different depths (in front of or behind the screen surface) conflicts with the necessity of expressing inputs on the screen surface. Touching the screen increases the risk of triggering the vergence-accommodation conflict, which can lead to the loss of the stereoscopic effect or cause discomfort. In this work, we present two studies evaluating a novel approach based on the concept of indirect touch interaction via an external multi-touch device. We compare indirect touch techniques to two state-of-the-art 3D interaction techniques: DS3 and the Triangle Cursor. The first study offers a quantitative and qualitative comparison of direct and indirect interaction on a 4 DOF docking task. The second presents a follow-up experiment focusing on a 6 DOF docking task. Results show that indirect touch interaction provides a more comfortable viewing experience than both direct techniques. They also show that there are no drawbacks when switching to indirect touch, as performances in terms of net manipulation times are comparable.
@inproceedings{Simeone2016Indirect,
author={Simeone, Adalberto L.},
booktitle={3D User Interfaces 2016},
series = {3DUI 2016},
title={Indirect Touch Manipulation for Interaction with Stereoscopic Displays},
pages={13--22},
doi = {10.1109/3DUI.2016.7460025},
year={2016},
month={March},
organization={IEEE},
}
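A minimal sketch of what an indirect mapping from tablet input to 3D manipulation might look like (the one-finger/pinch split and the scale factor are assumptions for illustration, not the technique evaluated in the paper):

```python
def translate_xy(dx, dy, scale=0.01):
    """One-finger drag on the external tablet moves the object on the
    plane parallel to the stereoscopic screen, leaving the display
    itself unoccluded and untouched."""
    return (dx * scale, dy * scale, 0.0)

def translate_z(pinch_prev, pinch_now, scale=0.01):
    """A change in two-finger pinch distance moves the object in depth,
    in front of or behind the screen plane."""
    return (0.0, 0.0, (pinch_now - pinch_prev) * scale)

# Dragging 50 px right, then opening the pinch by 30 px:
step = translate_xy(50, 0)      # in-plane translation
depth = translate_z(100, 130)   # depth translation
```

Because all input lands on the external device, the user never has to converge their eyes on the screen surface, which is what preserves the stereoscopic viewing comfort measured in the studies.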

Substitutional Reality: bringing Virtual Reality home

The ACM XRDS magazine for students published, in its fall issue, our article on recent developments in Substitutional Reality. The rest of the issue also contains several interesting articles and perspectives on Virtual Reality!



A.L. Simeone and E. Velloso.
Substitutional Reality: Bringing virtual reality home
XRDS: Crossroads, The ACM Magazine for Students, vol. 22(1), November 2015. ACM, pp. 24-29.


Now that virtual reality headsets are finally reaching the wider consumer market, how can we merge the physical and virtual worlds to create a unified multi-sensory experience?
@article{simeone2015VRHome,
title={Substitutional Reality: Bringing virtual reality home},
author={Simeone, Adalberto L. and Velloso, Eduardo},
journal={XRDS: Crossroads, The ACM Magazine for Students},
volume={22},
number={1},
pages={24--29},
year={2015},
publisher={ACM}
}

Substitutional Reality: Towards a Research Agenda (WEVR 2015)

The workshop paper on Substitutional Reality published at the IEEE 1st Workshop on Everyday Virtual Reality is now available on the IEEE Xplore library and here (preprint version).


A.L. Simeone.
Substitutional reality: Towards a research agenda
In Proceedings of 1st Workshop on Everyday Virtual Reality (WEVR 2015). IEEE, pp. 19-22.

PDF Version of Document

In our previous work on Substitutional Reality, we presented an exploration of a class of Virtual Environments where every physical object surrounding the user is associated with an appropriate virtual counterpart. Unlike "passive haptics", Substitutional Reality assumes the existence of a discrepancy in this association. That previous work explored how far this mismatch can be pushed and its impact on the believability of the experience. In this paper we discuss three main research directions for Substitutional Reality. Firstly, the design space is largely unexplored, as the initial investigation focused on the mismatch between real and virtual objects. Secondly, the development of systems enabling a dynamic substitution process represents a key challenge. Thirdly, we discuss the meta-design process of these experiences.
@inproceedings{Simeone2015SubRealAgenda,
title={Substitutional reality: Towards a research agenda},
author={Simeone, Adalberto L.},
booktitle={1st Workshop on Everyday Virtual Reality (WEVR)},
pages={19--22},
year={2015},
organization={IEEE}
}

“Select & Apply: Understanding How Users Act Upon Objects Across Devices” accepted at Personal and Ubiquitous Computing

Our paper “Select & Apply: Understanding How Users Act Upon Objects Across Devices” was just accepted at the Personal and Ubiquitous Computing journal.


A.L. Simeone, M.K. Chong, C. Sas, and H. Gellersen.
Select & Apply: Understanding How Users Act Upon Objects Across Devices
Personal and Ubiquitous Computing, vol. 19(5), Feb. 2015. Springer, pp. 1-16.

PDF Version of Document

As our interactions increasingly cut across diverse devices, we often encounter situations where we find information on one device but wish to use it on another device, for instance a phone number spotted on a public display but wanted on a mobile. We conceptualise this problem as Select & Apply and contribute two user studies where we presented participants with eight different scenarios involving different device combinations, applications and data types. In the first, we used a think-aloud methodology to gain insights on how users currently accomplish such tasks and how they ideally would like to accomplish them. In the second, we conducted a focus group study to investigate which factors influence their actions. Results indicate shortcomings in present support for Select & Apply and contribute a better understanding of which factors affect cross-device interaction.
@article{Simeone2015SelectApply,
year={2015},
issn={1617-4909},
journal={Personal and Ubiquitous Computing},
doi={10.1007/s00779-015-0836-1},
title={Select \& Apply: understanding how users act upon objects across devices},
url={http://dx.doi.org/10.1007/s00779-015-0836-1},
publisher={Springer London},
author={Simeone, Adalberto L. and Chong, Ming Ki and Sas, Corina and Gellersen, Hans},
pages={1-16},
language={English}
}

Preprint of CHI 2015 paper on Substitutional Reality now available

A preprint version of our recent paper on “Substitutional Reality” is now available. Additional video material will become available shortly.


A.L. Simeone, E. Velloso, and H. Gellersen.
Substitutional Reality: Using the Physical Environment to Design Virtual Reality Experiences
In Proceedings of SIGCHI Conference on Human Factors in Computing Systems (CHI 2015). ACM, pp. 3307-3316.

PDF Version of Document | Presentation Slides | Video

Experiencing Virtual Reality in domestic and other uncontrolled settings is challenging due to the presence of physical objects and furniture that are not usually defined in the Virtual Environment. To address this challenge, we explore the concept of Substitutional Reality in the context of Virtual Reality: a class of Virtual Environments where every physical object surrounding a user is paired, with some degree of discrepancy, to a virtual counterpart. We present a model of potential substitutions and validate it in two user studies. In the first study we investigated factors that affect participants' suspension of disbelief and ease of use. We systematically altered the virtual representation of a physical object and recorded responses from 20 participants. The second study investigated users' levels of engagement as the physical proxy for a virtual object varied. From the results, we derive a set of guidelines for the design of future Substitutional Reality experiences.
@inproceedings{Simeone2015SubstitutionalReality,
author={Simeone, Adalberto L. and Velloso, Eduardo and Gellersen, Hans},
title={Substitutional Reality: Using the Physical Environment to Design Virtual Reality Experiences},
booktitle={Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems},
series = {CHI 2015},
year={2015},
pages = {3307--3316},
numpages = {10},
url = {http://doi.acm.org/10.1145/2702123.2702389},
doi = {10.1145/2702123.2702389},
publisher = {ACM}}
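The substitution model pairs each tracked physical object with a virtual counterpart at some level of discrepancy. A sketch of the kind of mapping such an environment could be built on (the data structure and the numeric mismatch scale are illustrative assumptions, not the model from the paper):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Substitution:
    physical: str   # tracked physical object in the room
    virtual: str    # the virtual counterpart rendered in its place
    mismatch: int   # degree of discrepancy: 0 = faithful replica, higher = larger mismatch

def build_environment(pairs: List[Tuple[str, str, int]]) -> Dict[str, Substitution]:
    """Index substitutions by physical object so the renderer can look up
    which virtual model to draw wherever a tracked object stands."""
    return {phys: Substitution(phys, virt, m) for phys, virt, m in pairs}

# The desk becomes a command console; the umbrella a lightsaber.
room = build_environment([
    ("desk", "command console", 2),
    ("umbrella", "lightsaber", 3),
])
```

The studies then amount to varying the mismatch value for a given pairing and measuring where suspension of disbelief breaks down.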

Paper on 3D indirect touch accepted at 3DUI 2015

Our paper titled “Comparing Direct and Indirect Touch in a Stereoscopic Interaction Task” has been accepted for inclusion in the 3DUI 2015 program.

In this work we investigated the effect that occluding the screen has on direct touch 3D interaction techniques. Interacting with 3D content on a multi-touch screen causes the user to occlude large parts of the display with their arms or hands. We designed a 3D interaction task in which users had to move a biplane in a 3D environment while avoiding collisions with gold spheres, which counted as errors. In one condition, participants interacted on a 3D multi-touch screen; in the other, we adapted the interaction technique for a tablet device. Results of our user study showed that in the indirect condition (with the tablet) participants made 30% fewer errors.


A.L. Simeone and H. Gellersen.
Comparing Direct and Indirect Touch in a Stereoscopic Interaction Task
In Proceedings of 3D User Interfaces 2015 (3DUI 2015). IEEE, pp. 105-108.

PDF Version of Document

In this paper we studied the impact that the directness of touch interaction has on a path-following task performed on a stereoscopic display. The richness of direct touch interaction comes with the potential risk of occluding parts of the display area in order to express one's interaction intent. In scenarios where attention to detail is of critical importance, such as browsing a 3D dataset or navigating a 3D environment, important details might be missed. We designed a user study in which participants were asked to move an object within a 3D environment while avoiding a set of static distractor objects. Participants used an indirect touch interaction technique on a tablet and a direct touch technique on the screen. Results of the study show that in the indirect touch condition, participants caused 30% fewer collisions with the distractor objects.
@inproceedings{Simeone2015Occlusion,
author={Simeone, Adalberto L. and Gellersen, Hans},
booktitle={3D User Interfaces 2015},
series = {3DUI 2015},
title={Comparing Direct and Indirect Touch in a Stereoscopic Interaction Task},
year={2015},
month={March},
pages={105--108},
organization={IEEE},
}

Paper on Substitutional Reality accepted at CHI 2015!

I’m delighted to report that the full paper I wrote with my colleagues Eduardo Velloso and Hans Gellersen while at Lancaster University has been officially accepted for inclusion in the CHI 2015 program.

Substitutional Reality

The paper provides an investigation of the concept of “Substitutional Reality” in the context of Virtual Reality. In a Substitutional Virtual Environment, every physical object is substituted by a different virtual object: while we are looking at our desk, in such an immersive environment it could appear as a futuristic command console; our umbrella could appear as a lightsaber, and so on. In this way we can use the passive feedback from the physical environment to substantiate more engaging Virtual Reality experiences.

The paper explores this novel design space with two user studies investigating increasing levels of mismatch between the physical proxy and the virtual equivalent. The goal was to determine when this mismatch would negatively affect the suspension of disbelief and level of engagement, so that future designers of Substitutional Reality content could build on our findings.

Presentation at MUM 2013

I presented my work titled “A Cross-Device Drag-and-Drop Technique” at MUM 2013 in Luleå, Sweden. See the video and download all resources related to the short paper and demo below.


A.L. Simeone, J. Seifert, D. Schmidt, P. Holleis, E. Rukzio, and H. Gellersen.
A Cross-Device Drag-and-Drop Technique
In Proceedings of Mobile and Ubiquitous Multimedia 2013 (MUM 2013). ACM, article 10.

PDF Version of Document | Presentation Slides | Video

A.L. Simeone, J. Seifert, D. Schmidt, P. Holleis, E. Rukzio, and H. Gellersen.
Technical Framework Supporting a Cross-Device Drag-and-Drop Technique
Demo at Mobile and Ubiquitous Multimedia 2013 (MUM 2013). ACM, article 40.

PDF Version of Document

Many interactions naturally extend across smartphones and devices with larger screens. Indeed, data might be received on the mobile but more conveniently processed with an application on a larger device, or vice versa. Such interactions require spontaneous data transfer from a source location on one screen to a target location on the other device. We introduce a cross-device Drag-and-Drop technique to facilitate these interactions involving multiple touchscreen devices, with minimal effort for the user. The technique is a two-handed gesture, where one hand is used to suitably align the mobile phone with the larger screen, while the other is used to select and drag an object between devices and choose which application should receive the data.
@inproceedings{Simeone2013DragAndDrop,
author = {Simeone, Adalberto L. and Seifert, Julian and Schmidt, Dominik and Holleis, Paul and Rukzio, Enrico and Gellersen, Hans},
title = {A Cross-device Drag-and-drop Technique},
booktitle = {Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia},
series = {MUM '13},
year = {2013},
isbn = {978-1-4503-2648-3},
location = {Lule{\aa}, Sweden},
pages = {10:1--10:4},
articleno = {10},
numpages = {4},
url = {http://doi.acm.org/10.1145/2541831.2541848},
doi = {10.1145/2541831.2541848},
acmid = {2541848},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {data transfer, drag-and drop, mobile devices},
}
We present the technical framework supporting a cross-device Drag-and-Drop technique designed to facilitate interactions involving multiple touchscreen devices. This technique supports users who need to transfer information received or produced on one device to another device that might be more suited to process it. Furthermore, it does not require any additional instrumentation. The technique is a two-handed gesture where one hand is used to suitably align the mobile phone with the larger screen, while the other is used to select and drag an object from one device to the other, where it can be applied directly onto a target application. We describe the implementation of the framework that enables spontaneous data transfer between a mobile device and a desktop computer.
@inproceedings{Simeone:2013:TechDragDrop,
author = {Simeone, Adalberto L. and Seifert, Julian and Schmidt, Dominik and Holleis, Paul and Rukzio, Enrico and Gellersen, Hans},
title = {Technical Framework Supporting a Cross-device Drag-and-drop Technique},
booktitle = {Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia},
series = {MUM '13},
year = {2013},
isbn = {978-1-4503-2648-3},
location = {Lule{\aa}, Sweden},
pages = {40:1--40:3},
articleno = {40},
numpages = {3},
url = {http://doi.acm.org/10.1145/2541831.2541879},
doi = {10.1145/2541831.2541879},
acmid = {2541879},
publisher = {ACM},
address = {New York, NY, USA},
}
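The spontaneous transfer at the heart of the framework can be sketched as a serialised message handed from the source device to the target once the drop gesture completes (the field names and JSON encoding below are assumptions for illustration; the paper's actual wire format is not described here):

```python
import json

def make_transfer_message(source_device, target_app, payload, mime_type):
    """Serialise a dragged object as a message sent from the source
    device to the application chosen on the target device."""
    return json.dumps({
        "source": source_device,
        "target_app": target_app,
        "mime": mime_type,
        "data": payload,
    })

def receive_transfer(message):
    """Decode the message on the target device and hand the payload to
    the chosen application."""
    fields = json.loads(message)
    return fields["target_app"], fields["data"]

# Drag a URL from the phone and drop it onto the desktop browser.
msg = make_transfer_message("phone", "browser", "https://example.org", "text/plain")
app, data = receive_transfer(msg)
```

Tagging the payload with a MIME-style type is what would let the receiving side offer only the applications able to process the dropped data.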