“Select & Apply: Understanding How Users Act Upon Objects Across Devices” accepted at Personal and Ubiquitous Computing

Our paper was just accepted at the Personal and Ubiquitous Computing journal. As our interactions increasingly cut across diverse devices, we often encounter situations where we find information on one device but wish to use it on another; for instance, a phone number spotted on a public display but wanted on a mobile phone. We conceptualise this problem as Select & Apply and contribute two user studies in which we presented participants with eight different scenarios involving different device combinations, applications and data types. In the first, we used a think-aloud methodology to gain insights into how users currently accomplish such tasks and how they would ideally like to accomplish them. In the second, we conducted a focus group study to investigate which factors influence their actions. Results indicate shortcomings in present support for Select & Apply and contribute a better understanding of which factors affect cross-device interaction.
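For readers who like things concrete, here is a minimal sketch of how a Select & Apply transfer could be modelled in Python. This is my own illustration rather than code from the paper, and all names are hypothetical: an object selected on a source device is applied through a compatible action on a target device.

from dataclasses import dataclass

@dataclass
class SelectedObject:
    """A piece of data selected on a source device."""
    data_type: str       # e.g. "phone-number", "image", "url"
    payload: str
    source_device: str

class Device:
    """Hypothetical target device exposing type-specific apply actions."""
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions   # maps a data_type to a callable

    def can_apply(self, obj):
        return obj.data_type in self.actions

    def apply(self, obj):
        if not self.can_apply(obj):
            raise ValueError(f"{self.name} cannot handle {obj.data_type}")
        self.actions[obj.data_type](obj.payload)

# A number spotted on a public display, applied on a mobile phone.
phone = Device("mobile", {"phone-number": lambda n: print(f"Dialling {n}")})
selection = SelectedObject("phone-number", "+44 1524 000000", "public-display")
if phone.can_apply(selection):
    phone.apply(selection)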

Publications


A.L. Simeone, M.K. Chong, C. Sas, and H. Gellersen.
Select & Apply: Understanding How Users Act Upon Objects Across Devices
Personal and Ubiquitous Computing, vol. 19(5), Feb. 2015. Springer, pp. 1-16.

PDF Version of Document

@article{Simeone2015SelectApply,
year={2015},
issn={1617-4909},
journal={Personal and Ubiquitous Computing},
doi={10.1007/s00779-015-0836-1},
title={Select \& Apply: understanding how users act upon objects across devices},
url={http://dx.doi.org/10.1007/s00779-015-0836-1},
publisher={Springer London},
author={Simeone, Adalberto L. and Chong, Ming Ki and Sas, Corina and Gellersen, Hans},
pages={1-16},
language={English}
}

Preprint of CHI 2015 paper on Substitutional Reality now available

A preprint version of our recent paper on “Substitutional Reality” is now available. Additional video material will become available shortly.

Publications


A.L. Simeone, E. Velloso, and H. Gellersen.
Substitutional Reality: Using the Physical Environment to Design Virtual Reality Experiences
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2015). ACM, pp. 3307-3316.

PDF Version of Document · Presentation Slides · Video

Experiencing Virtual Reality in domestic and other uncontrolled settings is challenging due to the presence of physical objects and furniture that are not usually defined in the Virtual Environment. To address this challenge, we explore the concept of Substitutional Reality in the context of Virtual Reality: a class of Virtual Environments where every physical object surrounding a user is paired, with some degree of discrepancy, to a virtual counterpart. We present a model of potential substitutions and validate it in two user studies. In the first study we investigated factors that affect participants' suspension of disbelief and ease of use. We systematically altered the virtual representation of a physical object and recorded responses from 20 participants. The second study investigated users' levels of engagement as the physical proxy for a virtual object varied. From the results, we derive a set of guidelines for the design of future Substitutional Reality experiences.
@inproceedings{Simeone2015SubstitutionalReality,
author={Simeone, Adalberto L. and Velloso, Eduardo and Gellersen, Hans},
title={Substitutional Reality: Using the Physical Environment to Design Virtual Reality Experiences},
booktitle={Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems},
series = {CHI 2015},
year={2015},
pages = {3307--3316},
numpages = {10},
url = {http://doi.acm.org/10.1145/2702123.2702389},
doi = {10.1145/2702123.2702389},
publisher = {ACM}}

Paper on 3D indirect touch accepted at 3DUI 2015

Our paper titled “Comparing Direct and Indirect Touch in a Stereoscopic Interaction Task” has been accepted for inclusion in the 3DUI 2015 program.

In this work we investigated the effect that occluding the screen has on direct touch 3D interaction techniques. Interacting with 3D content on a multi-touch screen causes the user to occlude large parts of the display with their arms or hands. We designed a 3D interaction task in which users had to move a biplane through a 3D environment while avoiding collisions with a set of gold spheres, which counted as errors. In one condition, participants interacted on a 3D multi-touch screen; in the other, we adapted the interaction technique for a tablet device. Results of our user study showed that in the indirect condition (with the tablet), participants made 30% fewer errors.
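To give a flavour of what the indirect mapping might look like, here is a small sketch under my own assumptions; it is not the implementation used in the study, and the gain value and function names are invented. Drags on the tablet are scaled into translations of the controlled object, with a two-finger drag mapped to depth.

# Sketch of an indirect touch mapping: one-finger drags translate the
# object in the screen plane; two-finger vertical drags control depth.
# GAIN and all names are assumptions, not taken from the paper.

GAIN = 0.01  # scene units per pixel of tablet movement

def apply_drag(position, dx_px, dy_px, two_finger=False):
    """Return the new 3D position after a tablet drag."""
    x, y, z = position
    if two_finger:
        return (x, y, z + dy_px * GAIN)                # push/pull in depth
    return (x + dx_px * GAIN, y - dy_px * GAIN, z)     # screen-plane move

# One-finger drag right and up, then a two-finger push away.
pos = (0.0, 0.0, 0.0)
pos = apply_drag(pos, 120, -80)
pos = apply_drag(pos, 0, 50, two_finger=True)
print(pos)  # (1.2, 0.8, 0.5)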

Publications


A.L. Simeone and H. Gellersen.
Comparing Direct and Indirect Touch in a Stereoscopic Interaction Task
In Proceedings of 3D User Interfaces 2015 (3DUI 2015). IEEE, pp. 105-108.

PDF Version of Document

In this paper we studied the impact that the directedness of touch interaction has on a path following task performed on a stereoscopic display. The richness of direct touch interaction comes with the potential risk of occluding parts of the display area in order to express one's interaction intent. In scenarios where attention to detail is of critical importance, such as browsing a 3D dataset or navigating a 3D environment, important details might be missed. We designed a user study in which participants were asked to move an object within a 3D environment while avoiding a set of static distractor objects. Participants used an indirect touch interaction technique on a tablet and a direct touch technique on the screen. Results of the study show that in the indirect touch condition, participants made 30% fewer collisions with the distractor objects.
@inproceedings{Simeone2015Occlusion,
author={Simeone, Adalberto L. and Gellersen, Hans},
booktitle={3D User Interfaces 2015},
series = {3DUI 2015},
title={Comparing Direct and Indirect Touch in a Stereoscopic Interaction Task},
year={2015},
month={March},
pages={105-108},
publisher={IEEE},
}

Paper on Substitutional Reality accepted at CHI 2015!

I’m delighted to report that the full paper I wrote with my colleagues Eduardo Velloso and Hans Gellersen while at Lancaster University has been officially accepted for inclusion in the CHI 2015 program.

Substitutional Reality

The paper provides an investigation of the concept of “Substitutional Reality” in the context of Virtual Reality. In a Substitutional Virtual Environment, every physical object is substituted by a different virtual object: while we are looking at our desk, in such an immersive environment it could appear as a futuristic command console; our umbrella could appear as a lightsaber, and so on. In this way we can use the passive feedback from the physical environment to substantiate more engaging Virtual Reality experiences.

The paper explores this novel design space with two user studies investigating increasing levels of mismatch between the physical proxy and its virtual equivalent. The goal was to determine when this mismatch would negatively affect the suspension of disbelief and the level of engagement, so that future designers of Substitutional Reality content can build on our findings.
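As a toy illustration of the pairing idea, here is a sketch in which each physical object is assigned the candidate virtual counterpart with the lowest mismatch score. The objects, scores and threshold are invented for illustration; they are not taken from the study.

# Each physical object has candidate virtual stand-ins with a mismatch
# score in [0, 1]: 0 = near-identical shape and size, 1 = no resemblance.
CANDIDATES = {
    "desk":     [("command console", 0.3), ("rock slab", 0.6)],
    "umbrella": [("lightsaber", 0.2), ("walking stick", 0.1)],
    "mug":      [("potion flask", 0.25)],
}

def substitute(physical_objects, max_mismatch=0.5):
    """Pair each physical object with its best virtual counterpart,
    skipping pairings likely to break the suspension of disbelief."""
    pairing = {}
    for obj in physical_objects:
        options = [(v, m) for v, m in CANDIDATES.get(obj, []) if m <= max_mismatch]
        if options:
            pairing[obj] = min(options, key=lambda vm: vm[1])[0]
    return pairing

print(substitute(["desk", "umbrella", "mug"]))
# {'desk': 'command console', 'umbrella': 'walking stick', 'mug': 'potion flask'}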


WEVR 2015

We will be organising a Workshop on Everyday Virtual Reality (WEVR) at the IEEE Conference on Virtual Reality, which will be held in France next year. The upcoming release of consumer-grade VR head-mounted displays will require a rethinking of the way we design and implement VR interactive systems. The workshop will convene researchers and industry practitioners to explore this emerging domain and define a new research agenda.

For further information, have a look at the WEVR 2015 website.


Research Fellow position at the University of Portsmouth

The University of Portsmouth is advertising a Research Fellow position to work in AR / VR / Serious games. Here are the details:

Faculty of Creative & Cultural Industries 
School of Creative Technologies in collaboration with School of Biological Sciences
Research Fellow

Employment type:  Fixed term (6 months contract)
Employment basis:  Full time
Salary:  £33,242 – £36,309
Position number:  10013619
Date published:  14 November 2014
Closing date:  2 December 2014

Interview date:  12 December 2014

An opportunity for a Virtual Reality or Augmented Reality developer to work on an exciting new interdisciplinary project funded by InnovateUK in the areas of Molecular Bioscience and Virtual and Augmented Reality. The project will involve software development of a new research tool to bring complex 3D molecular structures to life. The project is due to start on 5 January 2015.

You must have a PhD or equivalent industrial experience in one or more of the following areas: 3D modelling and programming for simulations or games; design of Virtual Reality applications, Augmented Reality programming or Serious Games design. You will be expected to have a strong practical knowledge of several industry-standard tools in these areas, such as Unreal, Unity, Vizard, PyMol, 3DS Max, Vuforia or Metaio. A strong research profile in a related area is desirable.

The successful candidate will be expected to contribute to funding applications to extend the project beyond this initial Phase I contract.

Please contact Dr Wendy Powell (wendy.powell@port.ac.uk) and Dr Vaughan Powell (vaughan.powell@port.ac.uk) for further details.

Apply at https://port.engageats.co.uk/ (job number 10013619).


Joined the University of Portsmouth

On the first of October I joined the University of Portsmouth as a Lecturer, where I will continue my research on 3D Interaction and Virtual Reality.


Review of “Direct3D Rendering Cookbook”

Direct3D Rendering Cookbook (by Packt Publishing) is the only book I am aware of that covers the combination of C# programming and the DirectX 11 API. This is the book I would have liked to have when I taught myself how to use the DirectX API.

The book is mainly aimed at those who would like to start learning a 3D graphics library and already have a sound C# programming background. It starts by giving the reader the basics of setting up a DirectX application. Each chapter follows a clear structure, focusing on a particular topic; source code is given and then explained in “How it works” sections. Once the basics are covered, the remaining topics are very much up to date with the features of modern 3D engines. Indeed, aside from the fundamental core concepts of rendering meshes, it also covers advanced topics such as physics, deferred rendering and multithreading.

This book is, however, not about 3D engine architecture, so those expecting a start-to-finish guide to the development of a 3D application or game are likely to be disappointed. Faithful to its name, the book presents a set of rendering techniques; concepts such as input or sound are not covered. The techniques described contain the minimum amount of code necessary for their implementation, so unless one wants to build standalone demos, a lot more work is needed to put everything together into a meaningful application or game. Considering the target audience, I think the book missed an opportunity to introduce the fundamental concepts of 3D engine architecture.

In conclusion, I believe the book is an excellent match for those who wish to learn how to use C# to build 3D applications. SharpDX also has a good community around it, so it is easy to find support once you become ready to work on your own.


Direct3D Rendering Cookbook

I am currently reviewing a new book on Direct3D as used in .NET with SharpDX, called “Direct3D Rendering Cookbook”. It is the only book I know of on this specific combination. Here is what the publisher says about it:

Direct3D Rendering Cookbook provides detailed .NET examples covering a wide range of advanced 3D rendering techniques available in Direct3D 11.2. With this book, you will learn how to use the new Visual Studio 2012 graphics content pipeline, how to perform character animation, how to use advanced hardware tessellation techniques, how to implement displacement mapping, perform image post-processing, and how to use compute shaders for general-purpose computing on GPUs.


Paper accepted at 3DUI 2014

Our paper titled “Feet Movement in Desktop 3D Interaction” has been officially accepted at the 9th IEEE Symposium on 3D User Interfaces (3DUI 2014). The conference will be held in Minneapolis, Minnesota (USA), March 29-30.

This research explores the use of a feet-tracking system to support fundamental 3D interaction tasks. By using a Kinect placed under the desk, users are able to perform tasks such as camera navigation, object manipulation, selection and system control with their feet. This leaves the hands free for tasks requiring higher precision.
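As a rough sketch of how such a mapping could work (hypothetical; not the system described in the paper, and the dead zone and gain values are assumptions), the foot's offset from a calibrated rest position can act like a joystick driving camera navigation:

# Hypothetical sketch of under-desk foot tracking driving navigation.
DEAD_ZONE = 0.03   # metres; ignore jitter around the rest position
GAIN = 2.0         # camera speed per metre of foot displacement

def foot_to_velocity(foot_pos, rest_pos):
    """Convert a tracked foot position (x, z on the floor plane, in
    metres) into a (forward, strafe) camera velocity."""
    dx = foot_pos[0] - rest_pos[0]   # sideways offset -> strafe
    dz = foot_pos[1] - rest_pos[1]   # forward offset  -> move
    strafe = dx * GAIN if abs(dx) > DEAD_ZONE else 0.0
    forward = dz * GAIN if abs(dz) > DEAD_ZONE else 0.0
    return forward, strafe

rest = (0.10, 0.40)                          # calibrated resting position
print(foot_to_velocity((0.10, 0.55), rest))  # roughly (0.3, 0.0)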

Feet Movement in Desktop 3D Interaction

Publications


A.L. Simeone, E. Velloso, J. Alexander, and H. Gellersen.
Feet Movement in Desktop 3D Interaction
In Proceedings of 3D User Interfaces 2014 (3DUI 2014). IEEE, pp. 71-74.

PDF Version of Document

In this paper we present an exploratory work on the use of foot movements to support fundamental 3D interaction tasks. Depth cameras such as the Microsoft Kinect are now able to track users’ motion unobtrusively, making it possible to draw on the spatial context of gestures and movements to control 3D UIs. Whereas multitouch and mid-air hand gestures have been explored extensively for this purpose, little work has looked at how the same can be accomplished with the feet. We describe the interaction space of foot movements in a seated position and propose applications for such techniques in three-dimensional navigation, selection, manipulation and system control tasks in a 3D modelling context. We explore these applications in a user study and discuss the advantages and disadvantages of this modality for 3D UIs.
@inproceedings{Simeone:2014:Feet3D,
author={Simeone, A.L. and Velloso, E. and Alexander, J. and Gellersen, H.},
booktitle={3D User Interfaces 2014},
series = {3DUI 2014},
title={Feet movement in desktop 3D interaction},
year={2014},
month={March},
pages={71-74},
keywords={gesture recognition;human computer interaction;image sensors;solid modelling;3D UI;3D modelling context;Microsoft Kinect;desktop 3D interaction;feet movement;foot movements;fundamental 3D interaction tasks;mid-air hand gestures;seated position;three-dimensional navigation;Cameras;Foot;Legged locomotion;Mice;Navigation;Three-dimensional displays;Tracking},
doi={10.1109/3DUI.2014.6798845},}