Difeng YU | 俞迪枫

Difeng is a first-year Ph.D. student at the Interaction Design Lab, The University of Melbourne, advised by Dr. Jorge Goncalves, Dr. Tilman Dingler, and Dr. Eduardo Velloso. He received his BSc degree in Computer Science from Xi’an Jiaotong-Liverpool University in 2018 and was a research assistant at the VR Lab led by Dr. Hai-Ning Liang. His recent research in Human-Computer Interaction (HCI) focuses on 1) designing novel interaction techniques for augmented and virtual reality systems and 2) investigating, analyzing, and modeling user behavior in 3D environments. He is also interested in computer vision, sensing techniques, and machine learning.

Recent Publications

Modeling Endpoint Distribution of Pointing Selection Tasks in Virtual Reality Environments
D. Yu, H. N. Liang, X. Lu, K. Fan, B. Ens (ACM TOG '19) [PDF]
Understanding the endpoint distribution of pointing selection tasks can reveal underlying patterns in how users acquire targets. We introduce EDModel, a novel endpoint distribution model that predicts how endpoints are distributed when selecting targets of different characteristics (width, distance, and depth) in virtual reality (VR) environments. We demonstrate three applications of EDModel and open-source our experimental data for future research.
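For intuition only, here is a minimal sketch (my own illustration, not the paper’s actual EDModel) of the idea of characterizing selection endpoints: fit a per-axis Gaussian to endpoints recorded relative to the target center, then estimate the hit rate for a target of a given width. The data and parameter values below are made up.

    import numpy as np

    # Hypothetical endpoints (x, y, z) relative to the target center,
    # as might be recorded in one condition of a VR pointing study.
    endpoints = np.random.default_rng(0).normal(
        loc=[0.002, -0.001, 0.004],   # slight systematic bias
        scale=[0.010, 0.012, 0.020],  # the depth axis is noisier
        size=(200, 3),
    )

    # A per-axis Gaussian fit: mean = systematic bias, std = spread.
    mu = endpoints.mean(axis=0)
    sigma = endpoints.std(axis=0, ddof=1)

    # Crude hit-rate estimate for a cubic target of width w (meters).
    w = 0.03
    hits = np.all(np.abs(endpoints) <= w / 2, axis=1)
    print(f"bias={mu}, spread={sigma}, hit rate={hits.mean():.2%}")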
DepthMove: Leveraging Head Motions in the Depth Dimension to Interact with Virtual Reality Head-Worn Displays
D. Yu, H. N. Liang, X. Lu, T. Zhang, W. Xu (IEEE ISMAR '19) [PDF]
We explore the potential of a new approach, called DepthMove, that allows interactions based on head motions along the depth dimension. With DepthMove, a user can interact with a VR system proactively by moving the head forward or backward, perpendicular to the VR head-worn display (HWD). We conducted two user studies to investigate, model, and optimize DepthMove, taking into consideration user performance, subjective response, and social acceptability. A third study demonstrates four application scenarios of DepthMove.
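As a rough sketch of the core detection idea (my assumption, not the published implementation), a DepthMove-style gesture can be recognized by projecting the head’s displacement onto the HWD’s forward axis and thresholding the signed distance; the threshold value here is invented.

    import numpy as np

    def detect_depth_move(head_pos, rest_pos, forward, threshold=0.10):
        """Classify head movement as a forward/backward DepthMove-style
        gesture. `forward` is the HWD's unit view direction; `threshold`
        is an assumed activation distance in meters."""
        displacement = np.asarray(head_pos) - np.asarray(rest_pos)
        depth = float(np.dot(displacement, forward))  # signed depth motion
        if depth > threshold:
            return "forward"
        if depth < -threshold:
            return "backward"
        return "neutral"

    # Example: the head moved 12 cm along the view axis from its rest pose.
    print(detect_depth_move([0.0, 1.6, 0.12], [0.0, 1.6, 0.0], [0.0, 0.0, 1.0]))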
Design and Evaluation of Visualization Techniques of Off-Screen and Occluded Targets in Virtual Reality Environments
D. Yu, H. N. Liang, K. Fan, H. Zhang, C. Fleming, K. Papangelis (IEEE TVCG '19) [PDF]
Locating targets of interest in a 3D virtual environment (VE) becomes difficult when the targets reside outside the user’s view or are occluded by other objects (e.g., buildings) in the environment. In this research, we explored the design and evaluation of five visualization techniques (3DWedge, 3DArrow, 3DMinimap, Radar, and 3DWedge+). Based on the results of two user studies, we provide a set of recommendations for designing visualization techniques for off-screen and occluded targets in 3D VEs.
DMove: Directional Motion-based Interaction for Augmented Reality Head-Mounted Displays
W. Xu, H. N. Liang, Y. Zhao, D. Yu, and D. V. Monteiro (ACM CHI '19) [PDF]
We present DMove, a directional motion-based interaction approach for Augmented Reality (AR) Head-Mounted Displays (HMDs) that is both hands- and device-free. With DMove, users can interact with virtual objects through directional body movements (such as directional walking). We conducted two user studies to optimize the DMove design and evaluate its performance. We envision that DMove could be used for a range of applications, including menu selection, remote control, and exergames.
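A toy sketch of the underlying mapping (assumed for illustration; the paper’s design differs in specifics such as sector count and activation distance): classify the user’s horizontal displacement into one of eight directional sectors.

    import math

    DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

    def classify_direction(dx, dz, min_distance=0.3):
        """Map a horizontal displacement (dx, dz) in meters to one of
        eight directions. `min_distance` is an assumed minimum step
        length before a movement counts as a selection."""
        if math.hypot(dx, dz) < min_distance:
            return None  # movement too small to trigger a selection
        angle = math.degrees(math.atan2(dz, dx)) % 360
        return DIRECTIONS[int((angle + 22.5) // 45) % 8]  # 45-degree sectors

    print(classify_direction(0.4, 0.4))  # a diagonal step -> "NE"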
Evaluating Engagement Level and Analytical Support of Interactive Visualizations in Virtual Reality Environments
F. Lu, D. Yu, H. N. Liang, W. Chen, K. Papangelis, and N. M. Ali (IEEE ISMAR '18) [PDF]
Recent years have seen an explosion of commercial virtual reality (VR) head-mounted displays (HMDs). These devices are meant to offer high levels of engagement and improve users’ analytical exploration of the displayed content. However, given their rapid market introduction, how well VR supports users’ exploration of interactive visualizations remains largely underexplored. We attempt to fill this gap with an empirical study of an interactive visualization tool that we developed for a VR HMD system.
PizzaText: Text Entry for Virtual Reality Systems Using Dual Thumbsticks
D. Yu, K. Fan, H. Zhang, D. V. Monteiro, W. Xu, and H. N. Liang (IEEE TVCG '18) [PDF]
PizzaText is a circular-keyboard text entry technique for virtual reality systems that uses the dual thumbsticks of a hand-held game controller. By rotating the two thumbsticks, users can enter text with the circular keyboard layout simply, easily, and efficiently, even as novices. Our results show that novice users achieve an average of 8.59 words per minute (WPM), while expert users reach 15.85 WPM after just two hours of training.
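A simplified sketch of how such a mapping could work (my reconstruction, not the published implementation; the real layout and selection rules may differ): one thumbstick’s angle picks a slice of the circular keyboard, and the other picks a character within that slice.

    import math
    import string

    # Assumed layout: 26 letters split into 7 "pizza slices" of up to
    # 4 characters each.
    SLICES = [string.ascii_lowercase[i:i + 4] for i in range(0, 26, 4)]

    def stick_to_char(left_x, left_y, right_x, right_y):
        """Map two thumbstick deflections to a character: the left
        stick's angle chooses a slice, the right stick's angle chooses
        a character inside that slice."""
        slice_angle = math.degrees(math.atan2(left_y, left_x)) % 360
        chars = SLICES[int(slice_angle // (360 / len(SLICES)))]
        char_angle = math.degrees(math.atan2(right_y, right_x)) % 360
        return chars[int(char_angle // (360 / len(chars)))]

    print(stick_to_char(0.0, 1.0, 1.0, 0.0))  # -> "e" under this layout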
View full publications

Misc.

  • My first academic conference was IEEE ISMAR 2018 in Munich, Germany.
  • I've spent more than 40 days in Bulgaria.
  • I took a seven-day motorcycle tour in the Himalayas with my Brazilian friend.
  • I've delivered two speeches to more than 800 students in India.
  • I released my first piano album, "十日弹", when I was 18; unfortunately, the album was not for sale.

Latest News

  • One paper conditionally accepted to SIGGRAPH Asia '19. July 28
  • Embarking on my Ph.D. at UniMelb! SUPER THRILLED! July 15
  • One paper conditionally accepted to ISMAR '19. May 29
  • Accepted the Ph.D. offer from UniMelb, Australia! May 13
  • Glad to have helped with NSFC proposal writing. Mar 18
  • The 3DWedge paper was finally accepted by IEEE TVCG! Mar 3
  • IEEE VR doesn't like me :( Nov 16
  • Attended ISMAR 2018 in Munich, Germany. Oct 14
  • Obtained decent scores on the TOEFL and GRE. Sep 28
  • Graduated from XJTLU. Aug 1

Fun Projects

VRHome [Demo Video]
CopyQues [GitHub]
RestReminder [Personal Use]
PhD Application [PS] [CV]