Rediscovering Affordance: A Reinforcement Learning Perspective [CHI ’22, 14-page Paper]
We propose an integrative theory of affordance formation based on the theory of reinforcement learning in the cognitive sciences. The key assumption is that users learn to associate promising motor actions with percepts via experience when reinforcement signals (success/failure) are present. They also learn to categorize actions (e.g., “rotating” a dial), giving them the ability to name and reason about affordances. Upon encountering novel widgets, their ability to generalize these actions determines their ability to perceive affordances. We implement this theory in a virtual robot model, which demonstrates human-like adaptation of affordances in interactive widget tasks. While its predictions align with trends in human data, humans adapt affordances faster, suggesting the existence of additional mechanisms.
In Proc. CHI ’22 // [Project Page], [Paper], [Video], [Full Video].
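The core learning loop described above, associating motor actions with percepts from success/failure reinforcement, can be sketched as a tabular reinforcement-learning toy. The widgets, actions, and reward function below are invented for illustration and are not the paper's model:

```python
import random

# Toy sketch: an agent learns which motor action each widget type affords,
# from binary success/failure reinforcement alone.
# (Hypothetical widgets and actions, chosen only for this demo.)
AFFORDS = {"dial": "rotate", "button": "press", "slider": "drag"}
ACTIONS = ["rotate", "press", "drag"]

Q = {(w, a): 0.0 for w in AFFORDS for a in ACTIONS}  # action values
alpha, epsilon = 0.5, 0.1  # learning rate, exploration rate

random.seed(0)
for _ in range(2000):
    widget = random.choice(list(AFFORDS))            # percept
    if random.random() < epsilon:                    # explore
        action = random.choice(ACTIONS)
    else:                                            # exploit current estimate
        action = max(ACTIONS, key=lambda a: Q[(widget, a)])
    reward = 1.0 if action == AFFORDS[widget] else 0.0
    Q[(widget, action)] += alpha * (reward - Q[(widget, action)])

# The greedy action per widget should recover each widget's affordance.
learned = {w: max(ACTIONS, key=lambda a: Q[(w, a)]) for w in AFFORDS}
print(learned)
```

A generalization test in this toy would present a held-out widget and reuse the action category with the highest learned value, mirroring the abstract's point that generalizing learned action categories is what lets users perceive affordances in novel widgets.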
Investigating Positive and Negative Qualities of Human-in-the-Loop Optimization for Designing Interaction Techniques [CHI ’22, 13-page Paper, Honorable Mention]
In this paper, we study Bayesian optimization as an algorithmic method to guide the design optimization process. It operates by proposing to a designer which design candidate to try next, given previous observations. We report observations from a comparative study with 40 novice designers who were tasked with optimizing a complex 3D touch interaction technique. The optimizer helped designers explore larger proportions of the design space and arrive at a better solution; however, they reported lower agency and expressiveness. Designers guided by the optimizer also reported lower mental effort but felt less creative and less in charge of the progress. We conclude that human-in-the-loop optimization can support novice designers in cases where agency is not critical.
In Proc. CHI ’22 // [Project Page], [Paper], [Video], [Full Video].
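The propose-observe loop described above can be sketched in a few dozen lines: a Gaussian-process surrogate over a single design parameter proposes the next candidate via an upper-confidence-bound (UCB) rule, and the "designer" rates each proposal. Everything below, the 1-D parameter, the simulated designer preference, and the constants, is invented for this demo and is not the system studied in the paper:

```python
import math

def designer_rating(x):
    """Stand-in for the human designer's judgment (invented preference)."""
    return math.exp(-(x - 0.62) ** 2 / 0.02)   # peak preference near x = 0.62

def rbf(a, b, length=0.15):
    """Squared-exponential kernel over the 1-D design parameter."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Solve A v = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (M[r][n] - sum(M[r][c] * v[c] for c in range(r + 1, n))) / M[r][r]
    return v

def propose(X, y, grid, beta=2.0, noise=1e-6):
    """Return the grid point maximizing the GP upper confidence bound."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    weights = solve(K, y)                      # (K + noise*I)^-1 y
    best, best_ucb = None, -float("inf")
    for x in grid:
        ks = [rbf(x, xi) for xi in X]
        mean = sum(k * w for k, w in zip(ks, weights))
        kinv_ks = solve(K, ks)
        var = max(1e-12, 1.0 - sum(k * w for k, w in zip(ks, kinv_ks)))
        ucb = mean + beta * math.sqrt(var)     # optimism drives exploration
        if ucb > best_ucb:
            best, best_ucb = x, ucb
    return best

grid = [i / 100 for i in range(101)]
X = [0.0, 0.5, 1.0]                            # initial design candidates
y = [designer_rating(x) for x in X]
for _ in range(10):                            # ten designer trials
    x_next = propose(X, y, grid)               # optimizer proposes a candidate
    X.append(x_next)
    y.append(designer_rating(x_next))          # designer tries it and rates it

best_design = X[max(range(len(y)), key=lambda i: y[i])]
print(round(best_design, 2))
```

The loop captures the trade-off the study observed: the optimizer, not the designer, decides what to try next, which is efficient but leaves the human with little control over the trajectory.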
Button Simulation and Design via FDVV Models [CHI ’20, 10-page Paper]
Designing a push-button with desired sensation and performance is challenging because the mechanical construction must have the right response characteristics. In this paper, we extend the typical force-displacement (FD) modeling to include vibration (V) and velocity-dependence (V) characteristics. The resulting FDVV models better capture the tactility characteristics of buttons. They increase the range of simulated buttons and the perceived realism relative to FD models. The paper also demonstrates methods for obtaining these models, editing them, and simulating accordingly. Our approach enables the analysis, prototyping, and optimization of buttons, and supports exploring designs that would be hard to implement mechanically.
In Proc. CHI ’20 // [Project Page], [Paper], [30s Video], [Full Video].
Dwell+: Multi-Level Mode Selection Using Vibrotactile Cues [UIST ’17, 10-page Paper]
This paper presents Dwell+, a method that boosts the effectiveness of typical dwell selection by augmenting the passive dwell duration with active haptic ticks, which rapidly advance mode switches perceived through the user’s skin. Dwell+ thus enables multi-level dwell selection: to select a mode from a button, users dwell-touch the button until the desired mode is haptically prompted. We demonstrate applications of Dwell+ across different interfaces, ranging from vibration-enabled touchscreens to non-vibrating interfaces.
In Proc. UIST ’17 // [Project Page], [Paper], [30s Video], [Full Video].
Outside-In: Visualizing Out-of-Sight Regions-of-Interest in a 360° Video Using Spatial Picture-in-Picture Previews [UIST ’17, 9-page Paper]
We propose Outside-In, a visualization technique which re-introduces off-screen regions-of-interest (ROIs) into the main screen as spatial picture-in-picture (PIP) previews. The geometry of the preview windows further encodes the ROIs’ relative directions to the main screen view, allowing for effective navigation.
In Proc. UIST ’17 // [Project Page], [Paper], [Video].
EdgeVib: Effective Alphanumeric Character Output Using a Wrist-Worn Tactile Display [UIST ’16, 6-page Paper]
Yi-Chi Liao, Yi-Ling Chen, Jo-Yu Lo, Rong-Hao Liang, Liwei Chan, Bing-Yu Chen
Transferring rich spatiotemporal tactile messages while retaining high recognition rates has been a major challenge in the development of tactile displays. We present EdgeVib, a set of multistroke alphanumeric patterns based on EdgeWrite. Learning these patterns takes a period comparable to learning Graffiti (15 min), while recognition rates achieve 85.9% for letters and 88.6% for digits, respectively.
In Proc. UIST ’16 // [Project Page], [Paper], [Video].
ThirdHand: Wearing a Robotic Arm to Experience Rich Force Feedback [SIGGRAPH Asia ’15 Emerging Technologies]
Yi-Chi Liao, Shun-Yao Yang, Rong-Hao Liang, Liwei Chan, Bing-Yu Chen
ThirdHand is a wearable robotic arm that provides 5-DOF force feedback to enrich the mobile gaming experience. Compared to traditional environment-mounted force-feedback devices such as the Phantom, ThirdHand offers higher mobility due to its wearable form. Compared to muscle-propelled and gyro-effect solutions, our approach enables more accurate control with stronger forces.
In Proc. SIGGRAPH Asia ’15 Emerging Technologies // [Project Page], [Paper], [Video].
Facebook Reality Labs, 2022 (starting May 2022).
ACM CHI 2022 Video Preview Chair
ACM IUI 2022 Student Volunteer Chair
ACM CHI 2021, 2022 Late-Breaking Work
ACM TEI 2022 Work-in-Progress
IEEE Transactions on Haptics: 2019, 2021
IEEE World Haptics Conference: 2021
International Journal of Human-Computer Studies: 2021
ACM CHI: 2016-2022
ACM Creativity and Cognition: 2021
IEEE Haptics Symposium: 2020
ACM MobileHCI: 2017-2020
ACM TEI: 2017-2018
ACM UbiComp/ISWC: 2017
Augmented Human: 2016-2017
Special Recognitions for Outstanding Reviews:
3 x recognition for CHI 2021 Papers
1 x recognition for CHI 2020 Papers
I will soon join Meta Reality Labs as a research intern, working on AR/VR interaction design.
In 2020, I gave a lecture on Bayesian statistics and its applications in the User Research course at Aalto University (by Dr. Aurélien Nioche). I also gave a lecture on deep learning in the Computational User Interface Design course at Aalto University (by Prof. Antti Oulasvirta).
In 2019, I gave a lecture introducing probabilistic decoding in the Computational User Interface Design course at Aalto University (by Prof. Antti Oulasvirta). In another course, Engineering for Humans (by Prof. Antti Oulasvirta), I talked about the input sensing pipeline and data processing.
From 2014 to 2016, I was a teaching assistant for Introduction to HCI (lectured by Prof. Rong-Hao Liang and Prof. Bing-Yu Chen) and Computer Architecture (lectured by Prof. Bing-Yu Chen) at National Taiwan University.