Publications

Journal Articles

A.L. Simeone, I. Mavridou, and W. Powell.
Altering User Movement Behaviour in Virtual Environments
IEEE Transactions on Visualization and Computer Graphics, 2017. IEEE, in press.

A.L. Simeone, M.K. Chong, C. Sas, and H. Gellersen.
Select & Apply: Understanding How Users Act Upon Objects Across Devices
Personal and Ubiquitous Computing, vol. 19(5), Feb. 2015. Springer, pp. 1-16.

Conference Proceedings

A.L. Simeone, A. Bulling, J. Alexander, and H. Gellersen.
Three-Point Interaction: Combining Bi-Manual Direct Touch with Gaze
In Proceedings of Advanced Visual Interfaces 2016 (AVI 2016). ACM, pp. 168-175.

A.L. Simeone.
Indirect Touch Manipulation for Interaction with Stereoscopic Displays
In Proceedings of 3D User Interfaces 2016 (3DUI 2016). IEEE, pp. 13-22.

A.L. Simeone, E. Velloso, and H. Gellersen.
Substitutional Reality: Using the Physical Environment to Design Virtual Reality Experiences
In Proceedings of SIGCHI Conference on Human Factors in Computing Systems (CHI 2015). ACM, pp. 3307-3316.

A.L. Simeone and H. Gellersen.
Comparing Direct and Indirect Touch in a Stereoscopic Interaction Task
In Proceedings of 3D User Interfaces 2015 (3DUI 2015). IEEE, pp. 105-108.

A.L. Simeone, E. Velloso, J. Alexander, and H. Gellersen.
Feet Movement in Desktop 3D Interaction
In Proceedings of 3D User Interfaces 2014 (3DUI 2014). IEEE, pp. 71-74.

A.L. Simeone, J. Seifert, D. Schmidt, P. Holleis, E. Rukzio, and H. Gellersen.
A Cross-Device Drag-and-Drop Technique
In Proceedings of Mobile and Ubiquitous Multimedia 2013 (MUM 2013). ACM, article 10.

J. Seifert, A.L. Simeone, D. Schmidt, C. Reinartz, P. Holleis, M. Wagner, H. Gellersen, and E. Rukzio.
MobiSurf: Improving Co-located Collaboration through Integrating Mobile Devices and Interactive Surfaces
In Proceedings of Interactive Tabletops and Surfaces 2012 (ITS 2012). ACM, pp. 51-60.

J. Vermeulen, F. Kawsar, A.L. Simeone, G. Kortuem, K. Luyten, and K. Coninx.
Informing the Design of Situated Glyphs for a Care Facility
In Proceedings of Visual Languages and Human-Centric Computing 2012 (VL/HCC 2012). IEEE, pp. 89-96.

Best Student Paper Award

P. Buono and A.L. Simeone.
Video Abstraction and Detection of Anomalies by Tracking Movements
In Proceedings of Advanced Visual Interfaces 2010 (AVI 2010). ACM, pp. 249-252.

C. Ardito, P. Buono, M.F. Costabile, R. Lanzilotti, A. Piccinno, and A.L. Simeone.
Analysis of the UCD process of a web-based system
In Proceedings of Distributed Multimedia Systems 2010 (DMS 2010). Knowledge Systems Institute, pp. 180-185.

C. Ardito, P. Buono, M.F. Costabile, R. Lanzilotti, and A.L. Simeone.
Comparing Low Cost Input Devices for Interacting with 3D Virtual Environments
In Proceedings of Human System Interactions 2009 (HSI 2009). IEEE, pp. 292-297.

P. Buono, A.L. Simeone, C. Ardito, and R. Lanzilotti.
Visualizing Data to Support Tracking in Food Supply Chain
In Proceedings of Distributed Multimedia Systems 2009 (DMS 2009). Knowledge Systems Institute, pp. 10-15.

P. Buono, T. Cortese, F. Lionetti, M. Minoia, and A.L. Simeone.
A Simulation of a Fire Accident in Second Life
In Proceedings of Presence 2008. Temple, pp. 183-190.

P. Buono, C. Plaisant, A.L. Simeone, A. Aris, B. Shneiderman, G. Shmueli, and W. Jank.
Similarity-Based Forecasting with Simultaneous Previews: a River Plot Interface for Time Series Forecasting
In Proceedings of Information Visualization 2007 (IV 2007). IEEE, pp. 191-196.

Book Chapters

A.L. Simeone and C. Ardito.
EUD Software Environments in Cultural Heritage: A Prototype
Lecture Notes in Computer Science vol. 6654, 2011 (ISEUD 2011). Springer, pp. 313-318.

P. Buono and A.L. Simeone.
An Experience About User Involvement for Successful Design
Information Systems, 2010 (ItAIS 2010). Springer, pp. 503-510.

C. Ardito, P. Buono, M.F. Costabile, R. Lanzilotti, and A.L. Simeone.
An Information Visualization Approach to Hospital Shifts Scheduling
Lecture Notes in Computer Science vol. 5613, 2009 (HCII 2009). Springer, pp. 439-447.

Workshops, Demos and Posters

A.L. Simeone and E. Velloso.
Substitutional Reality: Bringing virtual reality home
XRDS: Crossroads, The ACM Magazine for Students, vol. 22(1), November 2015. ACM, pp. 24-29.

A.L. Simeone, J. Seifert, D. Schmidt, P. Holleis, E. Rukzio, and H. Gellersen.
Technical Framework Supporting a Cross-Device Drag-and-Drop Technique
Demo at Mobile and Ubiquitous Multimedia 2013 (MUM 2013). ACM, article 40.

G. Kortuem, F. Kawsar, P. Scholl, M. Beigl, A.L. Simeone and K. Smith.
A Miniaturized Display Network for Situated Glyphs
Demo at Pervasive Computing 2011 (Pervasive 2011). Springer.

Best Demo Award

C. Ardito, M.F. Costabile, R. Lanzilotti, and A.L. Simeone.
Combining Multimedia Resources for an Engaging Experience of Cultural Heritage
In Proceedings of Workshop on Social, Adaptive and Personalized Multimedia Interaction and Access at ACM Multimedia, 2010 (SAPMIA 2010). ACM, pp. 45-48.

C. Ardito, M.F. Costabile, R. Lanzilotti, and A.L. Simeone.
Multitouch Wall Screen Challenges in Cultural Heritage and Tourism
In Proceedings of Workshop on Natural User Interfaces: the prospect and challenge of touch and gestural computing at CHI, 2010 (NUI 2010).

C. Ardito, M.F. Costabile, R. Lanzilotti, and A.L. Simeone.
Sharable Multitouch Screens in Cultural Heritage and Tourism Applications
Poster at Visual Languages and Human-Centric Computing 2010 (VL/HCC 2010). IEEE, pp. 271-272.

A.L. Simeone and P. Buono.
Evacuation Traces Mini Challenge: User Testing to Obtain Consensus
In Proceedings of Visual Analytics Science and Technology, 2008 (VAST 2008). IEEE.

VAST Challenge Award

P. Buono and A.L. Simeone.
Interactive Shape Specification for Pattern Search in Time Series
Demo at Advanced Visual Interfaces, 2008 (AVI 2008). ACM, pp. 480-481.

National Conferences

C. Ardito, P. Buono, M.F. Costabile, R. Lanzilotti, A. Piccinno, and A.L. Simeone.
Exploring Archaeological Parks by Playing Games on Mobile Devices
In Proceedings of CHItaly 2009.

C. Ardito, P. Bottoni, M.F. Costabile, R. Lanzilotti, and A.L. Simeone.
Servizi adattivi su dispositivi mobili per la fruizione di beni culturali [Adaptive services on mobile devices for experiencing cultural heritage]
In Proceedings of Associazione Italiana per l'Informatica ed il Calcolo Automatico (AICA 2009).

Abstracts and BibTeX

In immersive Virtual Reality systems, users tend to move in a Virtual Environment as they would in an analogous physical environment. In this work, we investigated how user behaviour is affected when the Virtual Environment differs from the physical space. We created two sets of four environments each, plus a virtual replica of the physical environment as a baseline. The first focused on aesthetic discrepancies, such as a water surface in place of solid ground. The second focused on mixing immaterial objects together with those paired to tangible objects, for example by barring an area with walls or obstacles. We designed a study where participants had to reach three waypoints laid out in such a way as to prompt a decision on which path to follow, based on the conflict between the mismatching visual stimuli and their awareness of the real layout of the room. We analysed their performance to determine whether their trajectories deviated significantly from the shortest route. From the results obtained and our observations, we derive guidelines on how to alter user movement behaviour in Virtual Environments.
As our interactions increasingly cut across diverse devices, we often encounter situations where we find information on one device but wish to use it on another: for instance, a phone number spotted on a public display but wanted on a mobile. We conceptualise this problem as Select & Apply and contribute two user studies where we presented participants with eight different scenarios involving different device combinations, applications and data types. In the first, we used a think-aloud methodology to gain insights on how users currently accomplish such tasks and how they would ideally like to accomplish them. In the second, we conducted a focus group study to investigate which factors influence their actions. Results indicate shortcomings in present support for Select & Apply and contribute a better understanding of which factors affect cross-device interaction.
@article{Simeone2015SelectApply,
year={2015},
issn={1617-4909},
journal={Personal and Ubiquitous Computing},
doi={10.1007/s00779-015-0836-1},
title={Select \& Apply: understanding how users act upon objects across devices},
url={http://dx.doi.org/10.1007/s00779-015-0836-1},
publisher={Springer London},
author={Simeone, Adalberto L. and Chong, Ming Ki and Sas, Corina and Gellersen, Hans},
pages={1-16},
language={English}
}
The benefits of two-point interaction for tasks that require users to simultaneously manipulate multiple entities or dimensions are widely known. Two-point interaction has become common, e.g., when zooming or pinching using two fingers on a smartphone. We propose a novel interaction technique that implements three-point interaction by augmenting two-finger direct touch with gaze as a third input channel. We evaluate two key characteristics of our technique in two multi-participant user studies. In the first, we used the technique for object selection. In the second, we evaluate it in a 3D matching task that requires simultaneous continuous input from fingers and the eyes. Our results show that in both cases participants learned to interact with three input channels without cognitive or mental overload. Participants' performance tended towards fast selection times in the first study and exhibited parallel interaction in the second. These results are promising and show that there is scope for additional input channels beyond two-point interaction.
@inproceedings{Simeone2016ThreePoint,
author={Simeone, Adalberto L. and Bulling, Andreas and Alexander, Jason and Gellersen, Hans},
booktitle={Advanced Visual Interfaces 2016},
series = {AVI 2016},
title={Three-Point Interaction: Combining Bi-Manual Direct Touch with Gaze},
year={2016},
pages = {168--175},
numpages = {8},
doi = {10.1145/2909132.2909251},
month={June},
organization={ACM},
}
Research on 3D interaction has explored the application of multi-touch technologies to 3D stereoscopic displays. However, the ability to perceive 3D objects at different depths (in front or behind the screen surface) conflicts with the necessity of expressing inputs on the screen surface. Touching the screen increases the risk of causing the vergence-accommodation conflict which can lead to the loss of the stereoscopic effect or cause discomfort. In this work, we present two studies evaluating a novel approach based on the concept of indirect touch interaction via an external multi-touch device. We compare indirect touch techniques to two state-of-the-art 3D interaction techniques: DS3 and the Triangle Cursor. The first study offers a quantitative and qualitative study of direct and indirect interaction on a 4 DOF docking task. The second presents a follow-up experiment focusing on a 6 DOF docking task. Results show that indirect touch interaction techniques provide a more comfortable viewing experience than both techniques. It also shows that there are no drawbacks when switching to indirect touch, as their performances in terms of net manipulation times are comparable.
@inproceedings{Simeone2016Indirect,
author={Simeone, Adalberto L.},
booktitle={3D User Interfaces 2016},
series = {3DUI 2016},
title={Indirect Touch Manipulation for Interaction with Stereoscopic Displays},
pages={13--22},
doi = {10.1109/3DUI.2016.7460025},
year={2016},
month={March},
organization={IEEE},
}
Experiencing Virtual Reality in domestic and other uncontrolled settings is challenging due to the presence of physical objects and furniture that are not usually defined in the Virtual Environment. To address this challenge, we explore the concept of Substitutional Reality in the context of Virtual Reality: a class of Virtual Environments where every physical object surrounding a user is paired, with some degree of discrepancy, to a virtual counterpart. We present a model of potential substitutions and validate it in two user studies. In the first study we investigated factors that affect participants' suspension of disbelief and ease of use. We systematically altered the virtual representation of a physical object and recorded responses from 20 participants. The second study investigated users' levels of engagement as the physical proxy for a virtual object varied. From the results, we derive a set of guidelines for the design of future Substitutional Reality experiences.
@inproceedings{Simeone2015SubstitutionalReality,
author={Simeone, Adalberto L. and Velloso, Eduardo and Gellersen, Hans},
title={Substitutional Reality: Using the Physical Environment to Design Virtual Reality Experiences},
booktitle={Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems},
series = {CHI 2015},
year={2015},
pages = {3307--3316},
numpages = {10},
url = {http://doi.acm.org/10.1145/2702123.2702389},
doi = {10.1145/2702123.2702389},
publisher = {ACM}}
In this paper we studied the impact that the directedness of touch interaction has on a path following task performed on a stereoscopic display. The richness of direct touch interaction comes with the potential risk of occluding parts of the display area in order to express one's interaction intent. In scenarios where attention to detail is of critical importance, such as browsing a 3D dataset or navigating a 3D environment, important details might be missed. We designed a user study in which participants were asked to move an object within a 3D environment while avoiding a set of static distractor objects. Participants used an indirect touch interaction technique on a tablet and a direct touch technique on the screen. Results of the study show that in the indirect touch condition, participants made 30% fewer collisions with the distractor objects.
@inproceedings{Simeone2015Occlusion,
author={Simeone, Adalberto L. and Gellersen, Hans},
booktitle={3D User Interfaces 2015},
series = {3DUI 2015},
title={Comparing Direct and Indirect Touch in a Stereoscopic Interaction Task},
year={2015},
month={March},
pages={105--108},
organization={IEEE},
}
In this paper we present an exploratory work on the use of foot movements to support fundamental 3D interaction tasks. Depth cameras such as the Microsoft Kinect are now able to track users’ motion unobtrusively, making it possible to draw on the spatial context of gestures and movements to control 3D UIs. Whereas multitouch and mid-air hand gestures have been explored extensively for this purpose, little work has looked at how the same can be accomplished with the feet. We describe the interaction space of foot movements in a seated position and propose applications for such techniques in three-dimensional navigation, selection, manipulation and system control tasks in a 3D modelling context. We explore these applications in a user study and discuss the advantages and disadvantages of this modality for 3D UIs.
@inproceedings{Simeone:2014:Feet3D,
author={Simeone, A.L. and Velloso, E. and Alexander, J. and Gellersen, H.},
booktitle={3D User Interfaces 2014},
series = {3DUI 2014},
title={Feet movement in desktop 3D interaction},
year={2014},
month={March},
pages={71-74},
keywords={gesture recognition;human computer interaction;image sensors;solid modelling;3D UI;3D modelling context;Microsoft Kinect;desktop 3D interaction;feet movement;foot movements;fundamental 3D interaction tasks;mid-air hand gestures;seated position;three-dimensional navigation;Cameras;Foot;Legged locomotion;Mice;Navigation;Three-dimensional displays;Tracking},
doi={10.1109/3DUI.2014.6798845},}
Many interactions naturally extend across smartphones and devices with larger screens. Indeed, data might be received on the mobile but more conveniently processed with an application on a larger device, or vice versa. Such interactions require spontaneous data transfer from a source location on one screen to a target location on the other device. We introduce a cross-device Drag-and-Drop technique to facilitate these interactions involving multiple touchscreen devices, with minimal effort for the user. The technique is a two-handed gesture, where one hand is used to suitably align the mobile phone with the larger screen, while the other is used to select and drag an object between devices and choose which application should receive the data.
@inproceedings{Simeone2013DragAndDrop,
author = {Simeone, Adalberto L. and Seifert, Julian and Schmidt, Dominik and Holleis, Paul and Rukzio, Enrico and Gellersen, Hans},
title = {A Cross-device Drag-and-drop Technique},
booktitle = {Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia},
series = {MUM '13},
year = {2013},
isbn = {978-1-4503-2648-3},
location = {Lule{\aa}, Sweden},
pages = {10:1--10:4},
articleno = {10},
numpages = {4},
url = {http://doi.acm.org/10.1145/2541831.2541848},
doi = {10.1145/2541831.2541848},
acmid = {2541848},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {data transfer, drag-and drop, mobile devices},
}
One of the most popular scenarios for advertising interactive surfaces in the home is their support for solving co-located collaborative tasks. Examples include joint planning of events (e.g., holidays) or deciding on a shared purchase (e.g., a present for a common friend). However, this usually implies that all interactions with information happen on the common display. This is in contrast to the current practice of using personal devices and, further, to most people's tendency to constantly switch between individual and group phases, because people have differing search strategies, preferences, etc. We therefore investigated how the combination of personal devices and a simple way of exchanging information between these devices and an interactive surface changes the way people solve collaborative tasks, compared to an existing approach of using personal devices. Our study results clearly indicate that the combination of personal and shared devices allows users to fluently switch between individual and group work phases, and that users take advantage of both device classes.
@inproceedings{Seifert2012Mobisurf,
author = {Seifert, Julian and Simeone, Adalberto and Schmidt, Dominik and Holleis, Paul and Reinartz, Christian and Wagner, Matthias and Gellersen, Hans and Rukzio, Enrico},
title = {MobiSurf: Improving Co-located Collaboration Through Integrating Mobile Devices and Interactive Surfaces},
booktitle = {Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces},
series = {ITS '12},
year = {2012},
isbn = {978-1-4503-1209-7},
location = {Cambridge, Massachusetts, USA},
pages = {51--60},
numpages = {10},
url = {http://doi.acm.org/10.1145/2396636.2396644},
doi = {10.1145/2396636.2396644},
acmid = {2396644},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {co-located collaboration, interactive surfaces, smart phones},
}
Informing caregivers by providing them with contextual medical information can significantly improve the quality of patient care activities. However, information flow in hospitals is still tied to traditional manual or digitised lengthy patient record files that are often not accessible while caregivers are attending to patients. Leveraging the proliferation of pervasive awareness technologies (sensors, actuators and mobile displays), recent studies have explored this information presentation aspect borrowing theories from context-aware computing, i.e., presenting subtle information contextually to support the activity at hand. However, the understanding of the information space (i.e., what information should be presented) is still fairly abstruse, which inhibits the deployment of such real-time activity support systems. To this end, this paper first presents situated glyphs, a graphical entity to encode situation-specific information, and then presents our findings from an in-situ qualitative study addressing the information space tailored to such glyphs. Applying technology probes using situated glyphs and different glyph display form factors, the study aimed at uncovering the information space pertaining to both primary and secondary medical care. Our analysis has resulted in a large set of information types and has given us deeper insight into the principles for designing future situated glyphs. We report our findings in this paper, which we expect will provide a solid foundation for designing future assistive systems to support patient care activities.
@INPROCEEDINGS{Vermeulen:2012:Glyphs,
author={Vermeulen, J. and Kawsar, F. and Simeone, A.L. and Kortuem, G. and Luyten, K. and Coninx, K.},
booktitle={Visual Languages and Human-Centric Computing (VL/HCC), 2012},
title={Informing the Design of Situated Glyphs for a Care Facility},
year={2012},
pages={89-96},
keywords={health care;medical computing;ubiquitous computing;care facility;context-aware computing;contextual medical information;glyph display form factors;graphical entity;information flow;information presentation aspect borrowing theories;patient care activity quality;patient record files;pervasive awareness technologies;primary medical care;real-time activity support systems;secondary medical care;situated glyph design;situation specific information;Conferences;Electronic mail;Hospitals;Medical diagnostic imaging;Probes;Visualization},
doi={10.1109/VLHCC.2012.6344490},
ISSN={1943-6092},}
The increasing adoption of video surveillance makes it possible to watch over sensitive areas and identify people responsible for damage, theft and violence. However, when such events are not detected immediately, the subsequent video analysis can be a long and tedious task. The aim of this paper is to present a technique that allows a human investigator to focus only on those parts of a video showing the event as it unfolds, thus helping to save on the time needed to identify and understand how it happened. The presented technique creates a single interactive image of the whole video that shows everything that happened in the scene. The human investigator can then select an area of interest, and those parts of the video related to that specific area will start to play.
@inproceedings{Buono:2010:Video,
author = {Buono, Paolo and Simeone, Adalberto L.},
title = {Video Abstraction and Detection of Anomalies by Tracking Movements},
booktitle = {Proceedings of the International Conference on Advanced Visual Interfaces},
series = {AVI '10},
year = {2010},
isbn = {978-1-4503-0076-6},
location = {Roma, Italy},
pages = {249--252},
numpages = {4},
url = {http://doi.acm.org/10.1145/1842993.1843036},
doi = {10.1145/1842993.1843036},
acmid = {1843036},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {video abstraction, video analysis, visual analytics},
}
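
A loose illustration of the motion-accumulation idea behind this video abstraction technique: the sketch below condenses a whole video into a single per-pixel activity map by counting inter-frame changes. It is a minimal sketch only, assuming OpenCV (cv2) and NumPy are installed; it stands in for, and does not reproduce, the paper's actual tracking pipeline.

import cv2
import numpy as np

def motion_heatmap(video_path, threshold=25):
    # Condense a video into one image: each pixel counts how often it
    # changed noticeably between consecutive frames.
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("cannot read video: " + video_path)
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    heat = np.zeros(prev.shape, dtype=np.float64)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        heat += cv2.absdiff(gray, prev) > threshold  # count "active" pixels
        prev = gray
    cap.release()
    return heat / max(heat.max(), 1.0)  # normalised activity map

An interactive viewer built on such a map would link each high-activity region back to the frame intervals that produced it, so that selecting a region replays only the relevant parts of the video.
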
This paper analyzes how User-Centred Design (UCD) has been carried out in the creation of a web-based system, whose aim is monitoring air quality for sustainable industrial development. This distributed multimedia system has been commissioned by the Puglia region and it is used primarily by industries and regional government experts. Several lessons are learned from this analysis and hints about the effective application of UCD and the fruitful involvement of users for creating usable systems are derived.
@INPROCEEDINGS{Ardito:2010:UCD,
author={Ardito, C. and Buono, P. and Costabile, M.F. and Lanzilotti, R. and Piccinno, A. and Simeone, A.L.},
booktitle={Proceedings of Distributed Multimedia Systems 2010, DMS '10},
title={Analysis of the UCD process of a web-based system},
series={DMS '10},
publisher={Knowledge Systems Institute},
year={2010},
pages={180-185},
}
Interaction with 3D Virtual Environments has always suffered from a lack of widely available and low cost input devices. Recently, thanks to the diffusion of gaming systems such as the Microsoft Xbox 360 or the Nintendo Wii, new input devices have come onto the market at a relatively cheap price. This paper describes a study whose aim is to compare input devices in order to identify effective alternatives to the mouse and keyboard in settings where their use is not advisable or feasible, e.g., museums and other public areas. This study has been carried out using a 3D Virtual Environment in which the participants were required to perform three canonical 3D interaction tasks. Two different groups participated in the test: the first group was involved in a pilot study to check the test environment; the second group performed the test.
@INPROCEEDINGS{Ardito:2009:LowCost3D,
author={Ardito, C. and Buono, P. and Costabile, M.F. and Lanzilotti, R. and Simeone, A.L.},
booktitle={Proceedings of Human System Interactions, 2009. HSI '09},
title={Comparing Low Cost Input Devices for Interacting with 3D Virtual Environments},
year={2009},
pages={292-297},
keywords={virtual reality;3D Virtual Environments;3D interaction tasks;Microsoft XBox 360;Nintendo Wii;gaming systems;low cost input devices;Costs;Keyboards;Laboratories;Mice;Performance evaluation;Testing;User interfaces;Virtual environment;Virtual reality;Wheels;3D virtual environments;human-computer interaction;input devices;user studies},
doi={10.1109/HSI.2009.5090995},}
Food tracking has become an important issue in recent times. In light of recent events such as the various outbreaks of food-related epidemics, both governmental regulations and increased customer attention push the demand for systems able to provide knowledge about a supply chain. Tools supporting the traceability and tracking of foods can go a long way towards preventing these unfortunate events or, at the very least, help in minimizing the damage that may occur, thanks to improved and faster intervention possibilities. The study presented in this paper concerns the development of a web application which visualizes supply chain data collected from various companies that are part of the supply chain.
@INPROCEEDINGS{Buono:2009:Food,
author={Buono, P. and Simeone, A.L. and Ardito, C. and Lanzilotti, R.},
booktitle={Proceedings of Distributed Multimedia Systems 2009, DMS '09},
title={Visualizing Data to Support Tracking in Food Supply Chain},
series={DMS '09},
publisher={Knowledge Systems Institute},
year={2009},
pages={10-15},
}
Simulating the evacuation of an office building can be helpful to better prepare the potential occupants in the event of fire. Virtual environments are the ideal candidates for this type of simulation because they allow testing of numerous scenarios with minimal costs. We also needed a multiplayer collaborative environment for our experiment and, for this reason, our choice fell on Second Life. In our study, numerous tests were conducted on various groups of users to analyze their behavior and reactions, as experienced through a virtual environment, during a dangerous situation. In this paper we describe how our experiment was enacted and the results and observations made after the tests.
@inproceedings{buono2008simulation,
title={A Simulation of a Fire Accident in Second Life},
author={Buono, Paolo and Cortese, Tiziana and Lionetti, Fabrizio and Minoia, Marco and Simeone, Adalberto},
booktitle={Proceedings of the 11th Conference on Presence},
pages={183--190},
series = {Presence '08},
year={2008}
}
Time-series forecasting has a large number of applications. Users with a partial time series for auctions, new stock offerings, or industrial processes desire estimates of the future behavior. We present a data driven forecasting method and interface called similarity-based forecasting (SBF). A pattern matching search in an historical time series dataset produces a subset of curves similar to the partial time series. The forecast is displayed graphically as a river plot showing statistical information about the SBF subset. A forecasting preview interface allows users to interactively explore alternative pattern matching parameters and see multiple forecasts simultaneously. User testing with 8 users demonstrated advantages and led to improvements.
@INPROCEEDINGS{Buono:2007:TimeSeries,
author={Buono, P. and Plaisant, C. and Simeone, A. and Aris, A. and Shneiderman, B. and Shmueli, G. and Jank, W.},
booktitle={Information Visualization, 2007. IV '07. 11th International Conference},
title={Similarity-Based Forecasting with Simultaneous Previews: A River Plot Interface for Time Series Forecasting},
year={2007},
pages={191-196},
keywords={data visualisation;graphical user interfaces;time series;data driven forecasting method;forecasting preview interface;historical time series dataset;new stock offerings;partial time series;pattern matching search;river plot interface;similarity-based forecasting;time series forecasting;Data visualization;Economic forecasting;Laboratories;Pattern matching;Predictive models;Rivers;Smoothing methods;Technological innovation;Testing;Weather forecasting},
doi={10.1109/IV.2007.101},
ISSN={1550-6037},}
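
The matching step at the heart of similarity-based forecasting fits in a few lines. The following is a minimal sketch under simplifying assumptions (plain Euclidean distance over raw values and a fixed horizon; the actual interface exposes further pattern-matching parameters), where history is a hypothetical list of 1-D NumPy arrays and partial is the observed prefix as a 1-D NumPy array:

import numpy as np

def sbf_forecast(history, partial, k=10, horizon=5):
    # Match the partial series against every window of the historical
    # dataset, then forecast from the continuations of the k closest.
    m = len(partial)
    candidates = []
    for series in history:
        for start in range(len(series) - m - horizon + 1):
            dist = np.linalg.norm(series[start:start + m] - partial)
            candidates.append((dist, series[start + m:start + m + horizon]))
    candidates.sort(key=lambda c: c[0])
    matches = np.array([c[1] for c in candidates[:k]])
    # A river plot would draw these percentile bands over the horizon.
    return np.percentile(matches, [25, 50, 75], axis=0)
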
This paper describes the prototype of a framework for designing interactive applications for cultural heritage sites by following an end-user development approach. The framework is devoted to all design stakeholders, i.e. software engineers, HCI experts, cultural heritage experts and visitors, and provides them with tailored design environments for contributing their expertise to shaping the final applications. The design is guided by application templates, which provide the rules for assembling the basic components, called building blocks, whose result is the final application.
@incollection{Simeone:2011:EUD,
year={2011},
isbn={978-3-642-21529-2},
booktitle={End-User Development},
volume={6654},
series={Lecture Notes in Computer Science},
editor={Costabile, MariaFrancesca and Dittrich, Yvonne and Fischer, Gerhard and Piccinno, Antonio},
doi={10.1007/978-3-642-21530-8_32},
title={EUD Software Environments in Cultural Heritage: A Prototype},
url={http://dx.doi.org/10.1007/978-3-642-21530-8_32},
publisher={Springer Berlin Heidelberg},
keywords={end-user development; meta-design; cultural heritage},
author={Simeone, Adalberto L. and Ardito, Carmelo},
pages={313-318}
}
This paper describes the experience in designing and developing the CET system according to user-centred and participatory approaches. CET is a web-based system used by industries and experts of the regional government that monitor air quality. With CET, industries can officially declare their pollutant emissions in the atmosphere, while air quality experts can easily visualize how the industries are distributed in the regional territory, the type and quantity of emissions coming from their production processes and other important information to support their decision-making process. The experience provides hints about proper user involvement for designing successful systems.
@incollection{Buono:2010:CET,
year={2010},
isbn={978-3-7908-2147-5},
booktitle={Information Systems: People, Organizations, Institutions, and Technologies},
editor={D'Atri, Alessandro and Saccà, Domenico},
doi={10.1007/978-3-7908-2148-2_58},
title={An Experience About User Involvement for Successful Design},
url={http://dx.doi.org/10.1007/978-3-7908-2148-2_58},
publisher={Physica-Verlag HD},
author={Buono, P. and Simeone, A.L.},
pages={503-510},
language={English}
}
Scheduling staff shift work in a hospital ward is a well-known problem in the operations research field but, as such, it is very often studied from the algorithmic point of view and seldom from the human-computer interaction perspective. In most cases, the automatic solutions that operations research may provide do not satisfy the people involved. After discussing the inconveniences of an automatic approach with physicians, we have designed a staff scheduling system that combines an expert system with an information visualization (IV) system; in this way the schedule generated by the expert system is presented through the IV system to the schedule manager, who can modify the results if last minute changes are necessary, by directly manipulating the visualized data and obtaining immediate feedback about the changes made.
@incollection{Ardito:2009:Hospital,
year={2009},
isbn={978-3-642-02582-2},
booktitle={Human-Computer Interaction. Interacting in Various Application Domains},
volume={5613},
series={Lecture Notes in Computer Science},
editor={Jacko, JulieA.},
doi={10.1007/978-3-642-02583-9_48},
title={An Information Visualization Approach to Hospital Shifts Scheduling},
url={http://dx.doi.org/10.1007/978-3-642-02583-9_48},
publisher={Springer Berlin Heidelberg},
keywords={Information Visualization; Shift Scheduling},
author={Ardito, Carmelo and Buono, Paolo and Costabile, MariaF. and Lanzilotti, Rosa and Simeone, AdalbertoL.},
pages={439-447}
}
Now that virtual reality headsets are finally reaching the wider consumer market, how can we merge the physical and virtual worlds to create a unified multi-sensory experience?
@article{simeone2015VRHome,
title={Substitutional Reality: Bringing virtual reality home},
author={Simeone, Adalberto L. and Velloso, Eduardo},
journal={XRDS: Crossroads, The ACM Magazine for Students},
volume={22},
number={1},
pages={24--29},
year={2015},
publisher={ACM}
}
We present the technical framework supporting a cross-device Drag-and-Drop technique designed to facilitate interactions involving multiple touchscreen devices. This technique supports users that need to transfer information received or produced on one device to another device which might be more suited to process it. Furthermore, it does not require any additional instrumentation. The technique is a two-handed gesture where one hand is used to suitably align the mobile phone with the larger screen, while the other is used to select and drag an object from one device to the other where it can be applied directly onto a target application. We describe the implementation of the framework that enables spontaneous data-transfer between a mobile device and a desktop computer.
@inproceedings{Simeone:2013:TechDragDrop,
author = {Simeone, Adalberto L. and Seifert, Julian and Schmidt, Dominik and Holleis, Paul and Rukzio, Enrico and Gellersen, Hans},
title = {Technical Framework Supporting a Cross-device Drag-and-drop Technique},
booktitle = {Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia},
series = {MUM '13},
year = {2013},
isbn = {978-1-4503-2648-3},
location = {Lule{\aa}, Sweden},
pages = {40:1--40:3},
articleno = {40},
numpages = {3},
url = {http://doi.acm.org/10.1145/2541831.2541879},
doi = {10.1145/2541831.2541879},
acmid = {2541879},
publisher = {ACM},
address = {New York, NY, USA},
}
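
To give a flavour of the spontaneous data transfer this framework enables, here is a deliberately minimal client/server sketch. Every name, the port and the JSON payload are invented for illustration; the real framework additionally handles device pairing, alignment and target-application selection.

import json
import socket

def send_dropped_object(host, mime, data, port=5005):
    # Sender side: push the dragged object's payload to the paired device.
    msg = json.dumps({"mime": mime, "data": data}).encode("utf-8")
    with socket.create_connection((host, port)) as conn:
        conn.sendall(msg)

def receive_dropped_object(port=5005):
    # Receiver side: accept one incoming drop, then hand the payload to
    # whichever application the user selected as the target.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            chunks = b""
            while True:
                buf = conn.recv(4096)
                if not buf:
                    break
                chunks += buf
    return json.loads(chunks.decode("utf-8"))
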
We demonstrate a novel approach for building situated information systems using wirelessly connected miniaturized displays. These displays are spatially distributed in a physical work environment and present situated glyphs - human-readable abstract graphical signs - to provide activity centric notification and feedback. The demo will showcase how such miniaturized display networks can be used in dynamic workplaces, e.g., a hospital to support complex activities.
ICT technologies have great potential not only for preserving and increasing awareness about cultural heritage, but also for allowing people to better experience this huge legacy. Various application tools have already been developed which provide different types of multimedia resources, such as 3D representations of objects and places, videos, graphics and sounds, in order to augment the physical context with virtual, location-specific information, so that people can experience some aspects of ancient life which would otherwise be very difficult to figure out. The effort spent to create multimedia resources is considerable; therefore, it is worth reusing them to produce applications suited to other types of visitors. In this paper, we present our ongoing work to provide tailored applications that support different types of visitors. Such applications are developed according to a model that describes how multimedia resources can be combined, also depending on the type of users and devices. Examples of these solutions are briefly illustrated.
@inproceedings{Ardito:2010:Multimedia,
author = {Ardito, Carmelo and Costabile, Maria Francesca and Lanzilotti, Rosa and Simeone, Adalberto Lafcadio},
title = {Combining Multimedia Resources for an Engaging Experience of Cultural Heritage},
booktitle = {Proceedings of the 2010 ACM Workshop on Social, Adaptive and Personalized Multimedia Interaction and Access},
series = {SAPMIA '10},
year = {2010},
isbn = {978-1-4503-0171-8},
location = {Firenze, Italy},
pages = {45--48},
numpages = {4},
url = {http://doi.acm.org/10.1145/1878061.1878077},
doi = {10.1145/1878061.1878077},
acmid = {1878077},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {cultural heritage, multimedia},
}
In this paper we address some challenges that emerged during the design of software applications for multitouch wall displays in the domains of cultural heritage and tourism. These challenges relate to the collaboration possibilities offered by such displays, to the identification of who is touching a certain point of the wall display, and to social aspects of public interaction spaces.
Cultural heritage assets keep alive the history of a territory and of its inhabitants. Several systems have been developed to support people during their visits to historical sites and museums, with the goal of improving the overall user experience. In many cases, people travelling together would appreciate the possibility of collaborating in gathering information and planning a personalized itinerary. Large sharable multitouch screens may offer this possibility. This paper is about ongoing research that is investigating possible applications of large multitouch screens in cultural heritage and tourism. In particular, an application is described, which aims at allowing tourists to get information about a territory and create itineraries for their visits by interacting together on a large screen.
@INPROCEEDINGS{Ardito:2010:Sharable,
author={Ardito, C. and Costabile, M.F. and Lanzilotti, R. and Simeone, A.L.},
booktitle={Visual Languages and Human-Centric Computing (VL/HCC), 2010 IEEE Symposium on},
title={Sharable Multitouch Screens in Cultural Heritage and Tourism Applications},
year={2010},
pages={271-272},
keywords={touch sensitive screens;travel industry;cultural heritage;personalized itinerary;sharable multitouch screens;tourism application;user experience;Cities and towns;Collaboration;Cultural differences;Electronic mail;Mobile handsets;Prototypes;USA Councils},
doi={10.1109/VLHCC.2010.54},
ISSN={1943-6092},}
The adoption of visual analytics methodologies in security applications is an approach that could lead to interesting results. Usually, the data to be analyzed, such as spatial or temporal relationships, lends itself naturally to graphical representation. Due to the nature of these applications, it is very important that key details are made easy to identify. In the context of the VAST 2008 Challenge, we developed a visualization tool that graphically displays the movement of 82 employees of the Miami Department of Health (USA). We also asked 13 users to identify potential suspects and observe what happened during an evacuation of the building caused by an explosion. In this paper we explain the results of the user testing we conducted and how the users interpreted the event under consideration.
@INPROCEEDINGS{Simeone:2008:Terrorist,
author={Simeone, A.L. and Buono, P.},
booktitle={IEEE Symposium on Visual Analytics Science and Technology, 2008. VAST '08.},
title={Evacuation traces mini challenge: User testing to obtain consensus discovering the terrorist},
year={2008},
keywords={data visualisation;emergency services;security;terrorism;user interfaces;evacuation traces mini challenge;graphical representation;security applications;terrorist;user testing;visual analytics;Animation;Data security;Data visualization;Discrete event simulation;Information processing;Psychology;System testing;Visual analytics;H.1.2 [Models and principles]: User/Machine Systems—Human information processing;H.1.2 [Models and principles]: User/Machine Systems—Software psychology;casualties detection;evacuation;user testing;visual analytics},
doi={10.1109/VAST.2008.4677390},}
Time series analysis is a process whose goal is to understand phenomena. The analysis often involves the search for a specific pattern, and finding patterns is one of the fundamental steps in time series observation or forecasting. The way in which users can specify a pattern for querying a time series database is still a challenge. We propose an enhancement of the SearchBox, a widget used in TimeSearcher, a well-known tool developed at the University of Maryland that allows users to find patterns similar to the one of interest.
@inproceedings{Buono:2008:Shape,
author = {Buono, Paolo and Simeone, Adalberto Lafcadio},
title = {Interactive Shape Specification for Pattern Search in Time Series},
booktitle = {Proceedings of the Working Conference on Advanced Visual Interfaces},
series = {AVI '08},
year = {2008},
isbn = {978-1-60558-141-5},
location = {Napoli, Italy},
pages = {480--481},
numpages = {2},
url = {http://doi.acm.org/10.1145/1385569.1385666},
doi = {10.1145/1385569.1385666},
acmid = {1385666},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {information visualization, interactive system, interactive visualization, visual querying},
}
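
The query step that such a shape-specification widget drives, ranking windows of a long series by similarity to a user-drawn pattern, can be sketched as follows. This is a minimal, hypothetical illustration using z-normalised Euclidean distance, not the SearchBox implementation itself:

import numpy as np

def find_similar_patterns(series, sketch, top_n=5):
    # Slide the drawn sketch over the series and rank every window by
    # Euclidean distance after z-normalisation, so that the shape of the
    # pattern matters rather than its absolute level.
    def znorm(x):
        s = x.std()
        return (x - x.mean()) / s if s > 0 else x - x.mean()
    q = znorm(np.asarray(sketch, dtype=float))
    m = len(q)
    scores = []
    for start in range(len(series) - m + 1):
        w = znorm(np.asarray(series[start:start + m], dtype=float))
        scores.append((np.linalg.norm(w - q), start))
    scores.sort()
    return scores[:top_n]  # (distance, start index) of the best matches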