Difeng Yu | 俞迪枫

Difeng is a third-year Ph.D. student in the Human-Computer Interaction Group at The University of Melbourne, advised by Dr. Tilman Dingler, Dr. Eduardo Velloso, and Dr. Jorge Goncalves. He received his BSc degree in Computer Science from Xi’an Jiaotong-Liverpool University in 2018 and was a research assistant at the X-CHI Lab directed by Dr. Hai-Ning Liang. He was a research intern at Meta Reality Labs in 2021. His recent research in Human-Computer Interaction (HCI) focuses on 1) designing interaction techniques for augmented and virtual reality systems and 2) investigating, analyzing, and modeling user behavior in 3D virtual environments. He is also interested in computer graphics, sensing techniques, and machine learning.

Selected Publications

Gaze-Supported 3D Object Manipulation in Virtual Reality
D. Yu, X. Lu, R. Shi, H. N. Liang, T. Dingler, E. Velloso, J. Goncalves (CHI '21) [PDF] [Video]
This work investigates integration, coordination, and transition strategies of gaze and hand input for 3D object manipulation in VR. Specifically, we aim to understand whether incorporating gaze input can benefit VR object manipulation tasks and how it should be combined with hand input for improved usability and efficiency. We designed and compared four techniques that leverage different combination strategies. For example, ImplicitGaze allows the transition between gaze and hand input to happen without any trigger mechanism like button pressing.
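As a rough illustration of the implicit-transition idea (a minimal sketch under assumed thresholds and update logic, not the technique's actual implementation), the snippet below lets gaze coarsely place an object until deliberate hand motion silently takes over:

```python
# Minimal sketch of an implicit gaze-to-hand handover (illustrative only):
# gaze coarsely places the object; once hand motion exceeds a small
# threshold, fine-grained control passes to the hand with no explicit
# trigger such as a button press. The threshold value is an assumption.
import numpy as np

HAND_MOTION_THRESHOLD = 0.01  # meters per frame; illustrative value

class ImplicitGazeManipulator:
    def __init__(self, obj_pos):
        self.obj_pos = np.asarray(obj_pos, dtype=float)
        self.prev_hand_pos = None
        self.hand_has_control = False

    def update(self, gaze_hit, hand_pos):
        """Advance one frame; return the object's new position."""
        hand_pos = np.asarray(hand_pos, dtype=float)
        if self.prev_hand_pos is not None:
            delta = hand_pos - self.prev_hand_pos
            if np.linalg.norm(delta) > HAND_MOTION_THRESHOLD:
                self.hand_has_control = True  # implicit transition
            if self.hand_has_control:
                self.obj_pos += delta  # fine-grained hand control
            else:
                # Coarse gaze placement before the hand takes over.
                self.obj_pos = np.asarray(gaze_hit, dtype=float)
        self.prev_hand_pos = hand_pos
        return self.obj_pos
```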
Fully-Occluded Target Selection in Virtual Reality
D. Yu, Q. Zhou, J. Newn, T. Dingler, E. Velloso, J. Goncalves (TVCG '20) [PDF] [Video]
🏅 Best Paper Nominee at ISMAR 2020
The presence of fully-occluded targets is common within virtual environments, ranging from a virtual object located behind a wall to a data point of interest hidden in a complex visualization. However, efficient input techniques for locating and selecting these targets remain largely underexplored in VR systems. In this research, we developed ten techniques for fully-occluded target selection in VR and evaluated their performance through two user studies. We further demonstrated how the techniques can be applied to real application scenarios.
Engaging Participants during Selection Studies in Virtual Reality
D. Yu, Q. Zhou, B. Tag, T. Dingler, E. Velloso, J. Goncalves (VR '20) [PDF]
Selection studies are prevalent and indispensable for VR research. However, due to the tedious and repetitive nature of many such experiments, participants can become disengaged during the study, which is likely to impact the results and conclusions. In this work, we investigate participant disengagement in VR selection experiments, evaluate the usefulness of four strategies for keeping participants engaged during such studies, and examine how these strategies impact user performance.
Modeling Endpoint Distribution of Pointing Selection Tasks in Virtual Reality Environments
D. Yu, H. N. Liang, X. Lu, K. Fan, B. Ens (TOG '19) [PDF] [Video]
Understanding the endpoint distribution of pointing selection tasks can reveal underlying patterns in how users tend to acquire a target. We introduce EDModel, a novel endpoint distribution model that predicts how endpoints are distributed when selecting targets with different characteristics (width, distance, and depth) in virtual reality (VR) environments. We demonstrate three applications of EDModel and open-source our experiment data for future research.
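To give a flavor of what an endpoint-distribution model enables (a toy sketch with invented coefficients, not the published EDModel), one can assume normally distributed endpoints whose spread depends on target width, distance, and depth, then estimate error rates by sampling:

```python
# Toy endpoint-distribution sketch in the spirit of EDModel (illustrative
# only): endpoints on each axis are assumed Gaussian, with a spread that
# grows with movement distance and target depth. All coefficients are
# invented and would need to be fit to real selection data.
import numpy as np

def sample_endpoints(width, distance, depth, n=10000, seed=0):
    rng = np.random.default_rng(seed)
    # Hypothetical linear model of the per-axis standard deviation.
    sigma = 0.05 * width + 0.002 * distance + 0.004 * depth
    # Assume endpoints center on the target at the origin.
    return rng.normal(0.0, sigma, size=(n, 3))

def estimated_error_rate(width, distance, depth):
    """Fraction of sampled endpoints landing outside a spherical target."""
    endpoints = sample_endpoints(width, distance, depth)
    misses = np.linalg.norm(endpoints, axis=1) > width / 2
    return misses.mean()
```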
DepthMove: Leveraging Head Motions in the Depth Dimension to Interact with Virtual Reality Head-Worn Displays
D. Yu, H. N. Liang, X. Lu, T. Zhang, W. Xu (ISMAR '19) [PDF]
We explore the potential of a new approach, called DepthMove, that allows interactions based on head motions along the depth dimension. With DepthMove, a user can interact with a VR system proactively by moving the head forward or backward, perpendicular to the plane of the VR head-worn display (HWD). Through two user studies, we investigate, model, and optimize DepthMove, taking into consideration user performance, subjective responses, and social acceptability. We demonstrate four application scenarios of DepthMove in a third study.
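A minimal sketch of how such depth input could be detected (an illustration under an assumed dead-zone threshold, not the paper's implementation) is to project the head's displacement from a reference position onto the headset's viewing axis:

```python
# Hypothetical DepthMove-style detection (illustrative only): the head's
# offset from a reference pose is projected onto the headset's forward
# vector, and a forward/backward event fires once it leaves a dead zone.
import numpy as np

DEAD_ZONE = 0.04  # meters; assumed threshold

def detect_depth_move(ref_pos, head_pos, forward):
    """Return 'forward', 'backward', or None for the current head pose."""
    forward = np.asarray(forward, dtype=float)
    forward /= np.linalg.norm(forward)
    # Signed head displacement along the viewing axis only.
    offset = float(np.dot(np.asarray(head_pos) - np.asarray(ref_pos), forward))
    if offset > DEAD_ZONE:
        return "forward"
    if offset < -DEAD_ZONE:
        return "backward"
    return None
```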
Design and Evaluation of Visualization Techniques of Off-Screen and Occluded Targets in Virtual Reality Environments
D. Yu, H. N. Liang, K. Fan, H. Zhang, C. Fleming, K. Papangelis (TVCG '19) [PDF]
Locating targets of interest in a 3D environment often becomes difficult when the targets reside outside the user’s view or are occluded by other objects (e.g., buildings) in the environment. In this research, we explored the design and evaluation of five visualization techniques, which we call 3DWedge, 3DArrow, 3DMinimap, Radar, and 3DWedge+. Based on the results of two user studies, we provide a set of recommendations for the design of visualization techniques for off-screen and occluded targets in 3D virtual environments.
PizzaText: Text Entry for Virtual Reality Systems Using Dual Thumbsticks
D. Yu, K. Fan, H. Zhang, D. V. Monteiro, W. Xu, H. N. Liang (TVCG '18) [PDF] [Video]
PizzaText is a circular keyboard layout technique for text entry in virtual reality systems that uses the dual thumbsticks of a hand-held game controller. By rotating the two thumbsticks, users can enter text with this circular layout simply, easily, and efficiently, even as novices. Our results show that novice users achieve an average of 8.59 words per minute (WPM), while expert users reach 15.85 WPM with just two hours of training.
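As a rough sketch of the thumbstick-to-slice mapping (the slice count and dead zone here are assumptions, not the published PizzaText layout), one stick's angle can index a slice of the circular keyboard, with the second stick then picking a character within that slice:

```python
# Hypothetical thumbstick-to-slice mapping for a circular keyboard
# (illustrative only; slice count and dead-zone radius are assumptions).
import math

NUM_SLICES = 8  # assumed number of "pizza" slices

def thumbstick_to_slice(x, y, dead_zone=0.3):
    """Map thumbstick axes (x, y in [-1, 1]) to a slice index, or None."""
    if math.hypot(x, y) < dead_zone:
        return None  # stick near center: nothing selected
    angle = math.atan2(y, x) % (2 * math.pi)
    return int(angle // (2 * math.pi / NUM_SLICES))
```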
View full publications

Side Projects

EmoDrone: VR Haptic Drone [Video]
ShadowDancXR [Video]
RL for StarCraft 2 [Video]
On-Pet Interaction [Report]
VRHome [Video]
CopyQues [GitHub]
Rest Reminder