I am an Assistant Professor in the Department of Computer Science at the University of Virginia. My research interests span various areas of human-computer interaction, with an emphasis on interaction techniques and technologies.
I was previously a postdoctoral researcher in the DGP lab at the University of Toronto, where I worked with Prof. Daniel Wigdor. I received my Ph.D. in Computer Science from KAIST in 2017, advised by Prof. Geehyuk Lee. I was also a research intern at industry research labs, including the Samsung Advanced Institute of Technology, Microsoft Research (with Dr. Ken Hinckley), and Autodesk Research (with Dr. Tovi Grossman).
seongkook@virginia.edu | Rice Hall 524 | CV | Google Scholar
My research can be grouped into three categories: 1) expanding interaction bandwidth between humans and computers, 2) enhancing human-human communication through computers, and 3) designing interaction techniques for wearable computers.
Our interaction with mobile computers is usually made by moving a finger on a flat surface, which delivers our intent as 2D position information, while the computer responds through a visual display. In the real world, our physical manipulation is rich and bidirectional: we skillfully make complex hand movements to express our intentions efficiently and effectively, and we feel the reaction through our sense of touch. My research develops new technologies and interaction techniques that enable computers to sense and generate previously unused properties of physical manipulation, both for input and output.
FDSense estimates the stiffness of end effectors touching a surface by modeling the relationship between changes in touch area and force over time, measured on a commodity force-sensing touchpad. We demonstrate uses of stiffness-augmented touch interaction in expressive applications. (UIST '18)
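As a rough illustration of the idea, the sketch below fits force against contact area over a single press and uses the slope as a stiffness proxy. The arrays, units, and the simple linear model are hypothetical stand-ins, not FDSense's actual model or sensor pipeline.

```python
import numpy as np

def estimate_stiffness(force, area):
    """Estimate contact stiffness as the slope of force vs. touch area.

    A soft fingertip spreads quickly (large area change per unit force),
    while a rigid stylus tip barely spreads, so the slope separates
    soft from stiff end effectors.
    """
    # Fit force ~ k * area + b by least squares; k acts as a
    # stiffness-like coefficient for the contact.
    A = np.vstack([area, np.ones_like(area)]).T
    k, b = np.linalg.lstsq(A, force, rcond=None)[0]
    return k

# Hypothetical samples from one fingertip press on a force-sensing touchpad.
force = np.array([0.1, 0.4, 0.9, 1.5, 2.0])          # normal force (N)
area = np.array([40.0, 90.0, 150.0, 200.0, 230.0])   # contact area (mm^2)
print(estimate_stiffness(force, area))  # small slope suggests a soft contact
```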
Our touch in fact begins before the finger makes contact. We explored a set of interaction techniques that utilize grip and hover information before, during, and after contact. (CHI '16)
While hotkeys make our interaction more efficient, learning to use them can be time-consuming and can degrade performance. MelodicTap uses a series of multi-finger tapping gestures to invoke hotkeys, helping people learn hotkeys without performance drops. (OzCHI '16)
We collected and analyzed touch gestures made on an iPad across various applications and found an input space that is currently unused: consecutive distant taps. We present two interaction techniques that use this complementary input space. (CHI '14)
While tangential force can add a complementary input vocabulary to a touch interface, sensing tangential forces at multiple touch points is challenging. We developed a method that estimates tangential forces from the subtle touch-point movements caused by fingertip deformation. (CHI '13)
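A toy sketch of the underlying intuition: when a planted fingertip pushes sideways, the pad deforms and the reported touch point drifts slightly in the push direction before the finger actually slides. The linear mapping and calibration gain below are hypothetical, not the paper's actual estimator.

```python
import numpy as np

def tangential_force_estimate(points, gain=0.5):
    """Infer a tangential force vector from sub-millimeter touch drift.

    points: (n, 2) array of touch positions (mm) sampled during one press
            in which the finger stays planted.
    gain:   hypothetical per-user calibration constant (N per mm of drift).
    """
    drift = points[-1] - points[0]  # net micro-movement of the touch point
    return gain * drift             # estimated tangential force vector (N)

# Hypothetical press in which the touch point creeps 0.8 mm to the right.
press = np.array([[50.0, 80.0], [50.3, 80.0], [50.6, 80.1], [50.8, 80.1]])
print(tangential_force_estimate(press))  # ~[0.4, 0.05] N, pushing right
```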
LongPad is a touchpad that uses arrays of infrared LEDs and photodiodes to measure the shape and distance of hand parts on or above the touch surface. With LongPad, we enable new hover-enriched touch gestures and prevent more than 99% of accidental touches. (CHI '13)
Force Gestures are touch-screen gestures augmented with the normal and tangential forces applied to the screen. We built a custom device to measure these multidimensional forces and designed a new gesture set. (UIST '11)
ForceDrag introduces a new method of using normal force as a modifier for a touch gesture. This work investigates modifiers with different numbers of levels and different modifier selection techniques. (OzCHI '12)
Some modern mobile devices can sense force through the touch screen; however, such sensing was not available when this work was conducted. We developed a sensor fusion technique that uses device acceleration to estimate the impact force of a tapping gesture. (MobileHCI '11)
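One plausible fusion scheme, sketched below, pairs each touch-down timestamp with the accelerometer spike around it and uses the peak magnitude as a force proxy. The window size and gravity handling are illustrative assumptions, not the published technique.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def tap_impact(accel, accel_t, touch_down_t, window=0.05):
    """Estimate tap impact from the accelerometer spike near touch-down.

    A harder tap shakes the device more, so the peak acceleration
    magnitude within a short window around the touch-down timestamp
    serves as a proxy for tap force.

    accel:        (n, 3) accelerometer samples (m/s^2).
    accel_t:      (n,) sample timestamps (s).
    touch_down_t: touch-down timestamp reported by the touch screen (s).
    window:       half-width of the fusion window (s); value is a guess.
    """
    near_tap = np.abs(accel_t - touch_down_t) < window
    if not near_tap.any():
        return 0.0
    magnitudes = np.linalg.norm(accel[near_tap], axis=1)
    # Subtract gravity's steady contribution to isolate the tap spike.
    return float(max(magnitudes.max() - GRAVITY, 0.0))
```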
Thor's Hammer is a handheld haptic device that uses propeller propulsion to generate ungrounded 3-DOF force feedback, without tethers to a physical ground or heavy air compressors. It enables strong and continuous 3-DOF force feedback in mobile AR/VR scenarios. (CHI '18)
We developed a haptic feedback method that generates an illusion of elastic movement on a rigid surface using a tangential force sensor and a vibrotactile actuator. The illusory movement can be programmatically controlled and can deliver information through touch. (IEEE Trans. Haptics, 2017)
These days, much of our communication with other people takes place through computers. We conducted research to understand how people communicate differently with others in different types of relationships and how people communicate and engage through livestreams, and we developed new methods to infer relationship types and to support knowledge-learning behaviors in livestreams.
Knowledge-sharing livestreams bring new learning experiences that are more engaging and interactive than traditional online learning methods. StreamWiki is a tool that allows viewers to collaboratively generate archival documentation of a stream. (CSCW '18)
Live streaming is making a huge impact in China, with more than 200 million viewers each night. This work shares the motivations and experiences of livestream viewers in China, as well as the insights we gained from the study. (CHI '18)
We developed a method that infers social relationship types among people in the same organization by analyzing their communication patterns. Our method collects co-location data and instant-messenger data and uses machine learning techniques to classify relationship types. (CSCW '13)
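The shape of such a pipeline might look like the toy sketch below; the features, labels, and random-forest model are hypothetical placeholders rather than the feature set or classifier used in the paper.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-pair features: [hours co-located per week,
# messages exchanged per day, fraction of messages sent after hours].
X = [
    [20.0, 15.0, 0.05],  # teammates: heavy co-location, daytime chat
    [1.5, 30.0, 0.60],   # friends: little co-location, evening chat
    [3.0, 2.0, 0.00],    # manager and report: sparse, formal contact
]
y = ["teammate", "friend", "manager"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[18.0, 12.0, 0.10]]))  # -> likely "teammate"
```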
Wearable computers, such as smart glasses and smartwatches, let people access information in a variety of scenarios that were not possible before. However, these computers suffer from usability issues stemming from their limited input area. We developed new interfaces and interaction techniques for wearable computers.
Smartwatches require both hands to operate, which makes them difficult to use when the user is actively performing a task with their hands. This work explores ways to interact with smartwatches other than finger touch. (GI '17)
Entering text on smart glasses is challenging because keeping an arm raised to touch the device is tiring and voice input is not always appropriate. We developed a cross-device method that lets users enter text on smart glasses using their smartwatches. (ISS '17)
SplitBoard is a smartwatch keyboard that splits the QWERTY layout in half and lets users quickly switch between the splits with a flick gesture. Our study showed that the technique outperforms other techniques in both stationary and walking scenarios. (CHI '15; IwC, 2016)
IrPen is a 6-DOF pen device tracked with the IrCube technology, using a set of infrared LEDs on the pen and a set of photodiodes installed on the frame of the tablet device. We present several interaction techniques that utilize the position and orientation of the pen above the surface. (IEEE CG&A, 2014)
IrCube is an optical 6-DOF tracking method that is low-cost, small, and accurate. Photodiodes placed in the environment measure the light intensity emitted by 13 LEDs installed on a pointer device in different orientations. From these measurements, our system estimates the position and orientation of the pointer by solving an inverse problem. (UIST '11; Electronics Letters, 2011)
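To illustrate what solving the inverse problem entails, the sketch below recovers a 6-DOF pose from simulated intensity readings using a plain inverse-square model and made-up geometry; the actual IrCube model also accounts for the angular responses of the LEDs and sensors.

```python
import numpy as np
from scipy.optimize import least_squares

def rot(rvec):
    """Rodrigues' formula: rotation vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Made-up geometry: 13 LEDs on the pointer, 4 photodiodes in the world.
leds = np.random.default_rng(0).normal(size=(13, 3)) * 0.02  # pointer frame (m)
sensors = np.array([[0.0, 0, 0], [0.3, 0, 0], [0, 0.3, 0], [0.3, 0.3, 0]])

def predict(pose):
    """Predicted intensity at each sensor from each LED (inverse-square law)."""
    R, t = rot(pose[:3]), pose[3:]
    world = leds @ R.T + t  # LED positions in the world frame
    d2 = ((world[:, None, :] - sensors[None]) ** 2).sum(axis=-1)
    return (1.0 / d2).ravel()

true_pose = np.array([0.1, -0.2, 0.05, 0.15, 0.1, 0.5])  # (rvec, t)
measured = predict(true_pose)  # stand-in for real photodiode readings

# Recover the 6-DOF pose from the intensity measurements.
fit = least_squares(lambda p: predict(p) - measured,
                    x0=np.array([0.0, 0, 0, 0.1, 0.1, 0.4]))
print(np.round(fit.x, 3))  # close to true_pose
```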