3DUI 2016: Greenville, SC, USA
- Bruce H. Thomas, Rob Lindeman, Maud Marchal:
2016 IEEE Symposium on 3D User Interfaces, 3DUI 2016, Greenville, SC, USA, March 19-20, 2016. IEEE Computer Society 2016, ISBN 978-1-5090-0842-1
Keynote
- Steven K. Feiner:
Keynote speaker: Getting real. xi
Papers 1: 3D Interaction
- Merwan Achibet, Géry Casiez, Maud Marchal:
DesktopGlove: A multi-finger force feedback interface separating degrees of freedom between hands. 3-12
- Adalberto Lafcadio Simeone:
Indirect touch manipulation for interaction with stereoscopic displays. 13-22
- Jérémy Lacoche, Thierry Duval, Bruno Arnaldi, Eric Maisel, Jérôme Royan:
D3PART: A new model for redistribution and plasticity of 3D user interfaces. 23-26
- Zhixin Yan, Robert W. Lindeman, Arindam Dey:
Let your fingers do the walking: A unified approach for efficient short-, medium-, and long-distance travel in VR. 27-30
- Swaroop K. Pal, Marriam Khan, Ryan P. McMahan:
The benefits of rotational head tracking. 31-38
Papers 2: Navigation
- Ryohei Tanaka, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose:
Guidance field: Potential field to guide users to target locations in virtual environments. 39-48
- Markus Zank, Andreas M. Kunz:
Eye tracking for locomotion prediction in redirected walking. 49-58
- Ryohei Tanaka, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose:
Motive compass: Navigation interface for locomotion in virtual environments constructed with spherical images. 59-62
- Mahdi Azmandian, Timofey Grechkin, Mark T. Bolas, Evan A. Suma:
Automated path prediction for redirected walking using navigation meshes. 63-66
- Sebastian Freitag, Benjamin Weyers, Torsten W. Kuhlen:
Automatic speed adjustment for travel through immersive virtual environments based on viewpoint quality. 67-70
- Shahidul Islam, Bogdan Ionescu, Cristian Gadea, Dan Ionescu:
Full-body tracking using a sensor array system and laser-based sweeps. 71-80
Papers 3: Multimodal & Multisensory
- Granit Luzhnica, Jörg Simon, Elisabeth Lex, Viktoria Pammer:
A sliding window approach to natural hand gesture recognition using a custom data glove. 81-90
- Victor Adriel de Jesus Oliveira, Luciana Porcher Nedel, Anderson Maciel:
Proactive haptic articulation for intercommunication in collaborative virtual environments. 91-94
- Mi Feng, Arindam Dey, Robert W. Lindeman:
An initial exploration of a multi-sensory design space: Tactile support for walking in immersive virtual environments. 95-104
- Keigo Matsumoto, Yuki Ban, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose:
Curvature manipulation techniques in redirection using haptic cues. 105-108
- Sebastian Pick, Andrew S. Puika, Torsten W. Kuhlen:
SWIFTER: Design and evaluation of a speech-based text input metaphor for immersive virtual environments. 109-112
Papers 4: User Studies
- Daniel Zielasko, Sven Horn, Sebastian Freitag, Benjamin Weyers, Torsten W. Kuhlen:
Evaluation of hands-free HMD-based navigation techniques for immersive data analysis. 113-119
- Sharif Mohammad Shahnewaz Ferdous, Imtiaz Muhammad Arafat, John Quarles:
Visual feedback to improve the accessibility of head-mounted displays for persons with balance impairments. 121-128
- Jian Ma, Prathamesh Potnis, Alec G. Moore, Ryan P. McMahan:
VUME: The voluntary-use methodology for evaluations. 129-132
- David J. Zielinski, Marc A. Sommer, Hrishikesh M. Rao, Lawrence G. Appelbaum, Nicholas D. Potter, Regis Kopper:
Evaluating the effects of image persistence on dynamic target acquisition in low frame rate virtual environments. 133-140
- Soma Kawamura, Ryugo Kijima:
Effect of HMD latency on human stability during quiescent standing on one foot. 141-144
- Andrea Bönsch, Benjamin Weyers, Jonathan Wendt, Sebastian Freitag, Torsten W. Kuhlen:
Collision avoidance in the presence of a virtual agent in small-scale virtual environments. 145-148
Papers 5: Augmented Reality
- Benjamin Nuernberger, Kuo-Chin Lien, Tobias Höllerer, Matthew A. Turk:
Interpreting 2D gesture annotations in 3D augmented reality. 149-158
- Kenneth R. Moser, J. Edward Swan II:
Evaluation of user-centric optical see-through head-mounted display calibration using a leap motion controller. 159-167
- Kevin Ponto, Daniel Lisowski, Shuxing Fan:
Designing extreme 3D user interfaces for augmented live performances. 169-172
- Kohei Oshima, Kenneth R. Moser, Damien Constantine Rompapas, J. Edward Swan II, Sei Ikeda, Goshiro Yamamoto, Takafumi Taketomi, Christian Sandor, Hirokazu Kato:
SharpView: Improved clarity of defocused content on optical see-through head-mounted displays. 173-181
- Alex Stamm, Patrick Teall, Guillermo Blanco Benedicto:
Augmented virtuality in real time for pre-visualization in film. 183-186
Papers 6: Perception
- Themis Omirou, Asier Marzo Pérez, Sriram Subramanian, Anne Roudaut:
Floating charts: Data plotting using free-floating acoustically levitated representations. 187-190
- Sabah Boustila, Antonio Capobianco, Dominique Bechmann, Olivier Génevaux:
A hybrid projection to widen the vertical field of view with large screens to improve the perception of personal space in architectural project review. 191-200
- Ajoy S. Fernandes, Steven K. Feiner:
Combating VR sickness through subtle dynamic field-of-view modification. 201-210
- Eike Langbehn, Gerd Bruder, Frank Steinicke:
Scale matters! Analysis of dominant scale estimation in the presence of conflicting cues in multi-scale collaborative virtual environments. 211-220
- J. Adam Jones, Darlene Edewaard, Richard A. Tyrrell, Larry F. Hodges:
A schematic eye for virtual environments. 221-230
Posters
- Aryabrata Basu, Catherine Ball, Benjamin Manning, Kyle Johnsen:
Effects of user physical fitness on performance in virtual reality. 233-234
- Elham Ebrahimi, Sabarish V. Babu, Christopher C. Pagano, Sophie Jörg:
Towards a comparative evaluation of visually guided physical reach motions during 3D interactions in real and virtual environments. 237-238
- Max Ehrlich, Philippos Mordohai:
Discriminative hand localization in depth images. 239-240
- Juliano Franz, Aline Menin, Luciana P. Nedel:
3D gesture mouse: Being multitask without losing the focus. 241-242
- Paul S. Haynes, Eckart Lange:
In-situ flood visualisation using mobile AR. 243-244
- Joseph Isaac, Sabarish V. Babu:
Supporting computational thinking through gamification. 245-246
- Florian Jeanne, Yann Soullard, Indira Thouvenin:
What is wrong with your gesture? An error-based assistance for gesture training in virtual environments. 247-248
- Ryugo Kijima, Kento Miyajima:
Looking into HMD: A method of latency measurement for head mounted display. 249-250
- Hyung-il Kim, Woontack Woo:
Smartwatch-assisted robust 6-DOF hand tracker for object manipulation in HMD-based augmented reality. 251-252
- Ryota Kondo, Keisuke Goto, Katsuya Yoshiho, Yasushi Ikei, Koichi Hirota, Michiteru Kitazaki:
Rhythmic vibrations to heels and forefeet to produce virtual walking. 253-254
- Wallace Santos Lages, Gustavo A. Arango, David H. Laidlaw, John J. Socha, Doug A. Bowman:
Designing capsule, an input device to support the manipulation of biological datasets. 255-256
- Nicholas G. Lipari, Christoph W. Borst:
Toward vibrotactile rendering for irregular 2D tactor arrays. 257-258
- Jia Luo, Patrick Kania, P. Pat Banerjee, Shammema Sikder, Cristian J. Luciano, William G. Myers:
A part-task haptic simulator for ophthalmic surgical training. 259-260
- Hitomi Matsuki, Shohei Mori, Sei Ikeda, Fumihisa Shibata, Asako Kimura, Hideyuki Tamura:
Considerations on binocular mismatching in observation-based diminished reality. 261-262
- Jérémy Plouzeau, Aida Erfanian, Cynthia Chiu, Frédéric Mérienne, Yaoping Hu:
Navigation in virtual environments: Design and comparison of two anklet vibration patterns for guidance. 263-264
- Sharif Shahnewaz, Imtiaz Afarat, Tanvir Irfan, Gayani Samaraweera, Mikael Dallaire-Cote, David R. Labbé, John Quarles:
Gaitzilla: A game to study the effects of virtual embodiment in gait rehabilitation. 265-266
- Patrick Saalfeld, Sylvia Glaßer, Oliver Beuing, Mandy Grundmann, Bernhard Preim:
3D sketching on interactively unfolded vascular structures for treatment planning. 267-268
- Elliott Tanner, Siddharth Savadatti, Benjamin Manning, Kyle Johnsen:
Usability and cognitive benefits of a mobile tracked display in virtual laboratories for engineering education. 269-270
Contest
- Naëm Baron:
CollaborativeConstraint: UI for collaborative 3D manipulation operations. 273-274
- Marcio Cabral, Gabriel Roque, Mario Nagamura, Andre Montes, Eduardo Zilles Borba, Celso Setsuo Kurashima, Marcelo Knörich Zuffo:
Batmen - Hybrid collaborative object manipulation using mobile devices. 275-276
- Morgan Le Chénéchal, Thierry Duval, Jérémy Lacoche, Valérie Gouranton, Jérôme Royan, Bruno Arnaldi:
When the giant meets the ant: An asymmetric approach for collaborative object manipulation. 277-278
- Jerônimo G. Grandi, Iago U. Berndt, Henrique Galvan Debarba, Luciana P. Nedel, Anderson Maciel:
Collaborative 3D manipulation using mobile phones. 279-280
- Wallace Santos Lages:
Ray, camera, action! A technique for collaborative 3D manipulation. 281-282
- Leonardo Pavanatto Soares, Thomas Volpato de Oliveira, Vicenzo Abichequer Sangalli, Márcio Sarroglia Pinho, Regis Kopper:
Collaborative hybrid virtual environment. 283-284