Hi. I’m a PhD candidate in HCI at KAIST, advised by Geehyuk Lee, focusing on developing new ways to interact with computers in different form factors. I have been fortunate to collaborate with various companies, such as Samsung Electronics, Hyundai Motors, Golfzon, and Hancom, and with researchers across departments, from electronic engineering to industrial design. Last summer, I was an intern at Microsoft Research with Ken Hinckley, where we developed interaction techniques for mobile devices using hover and grip information. Early this year, I did another internship at Autodesk Research in Toronto with Tovi Grossman.
I have always been a geek, and it is exciting to see technologies that once existed only in books and movies becoming real. Computers are now everywhere, and even the tiny ones on our wrists are powerful enough to run various apps and stay connected to the internet. However, I think computers still do not pay enough attention to us when we interact with them, such as what we want to do or what we are able to do, so we have to put in a lot of effort to make them work for us. I am enthusiastic about making computers more sensitive to us so that we can better benefit from their power. While much of my PhD research focuses on "touch", my research interests extend to mobile and wearable interfaces, cross-device interaction, tangible interfaces, and haptic interfaces.
What I love: thinking up and building new ideas, solving problems, and things that are beautiful, whether aesthetically, logically, or mechanically, or that carry a brilliant idea. I also love playing basketball, cycling, and photography.
Our natural touch is rich and nuanced, full of physical properties such as force and posture that imply our intentions. Touch interfaces, however, ignore this rich information and consider only whether our finger is in contact with a screen and where that contact is. Here I describe the approaches my colleagues and I took to enhance this impoverished touch interaction, for instance by using additional modalities like force or hover, or by exploiting underutilized touch gestures.
When we tap a physical object, the tap moves the object slightly. ForceTap detects this movement with the accelerometer embedded in a mobile phone and uses it to distinguish a strong tap from a gentle tap.

While touching a surface, we can also control the force we apply, in both the normal and tangential directions. This is what we are used to: we interact with physical objects with force, sometimes denting them and sometimes probing their physical properties. What if we could use these forces on a touch screen? That is how our Force Gestures came about. We built a device that can sense both normal and tangential force along with touch and explored the uses of this rich input. Force is a continuous property, so it is expressive even in a single dimension. In ForceDrag, using our force-sensitive touch prototype, we showed two ways of combining normal force with touch: applying force to select a touch mode before dragging without force, and controlling force and touch simultaneously to support continuous mode changes.

Tangential force carries richer information than normal force because it also has a direction. However, it is difficult to measure the tangential force of multiple touch contacts, since previous methods measured the tangential force transferred through a rigid surface. We therefore developed a new method that estimates tangential force from the slight movement of a touch, which results from the deformation of the finger under the combination of tangential force, friction, and the elasticity of the fingertip. Still, force input on a rigid surface can cause frustration and fatigue, because in the real world we expect an object to move or deform as we apply force. We developed a haptic technique that creates a compliance illusion on a rigid surface using vibrotactile feedback driven by changes in tangential force.
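The core idea of estimating tangential force from pre-slip finger movement can be sketched in a few lines. This is a hypothetical illustration, not the published method: it assumes the fingertip behaves like a linear spring (stiffness `k`), so the small displacement of the contact point is proportional to the applied tangential force until the finger slips. The constants are made up.

```python
# Hypothetical sketch: estimating tangential force from the small
# pre-slip movement of a touch contact. Assumes a linear spring model
# of the fingertip (stiffness k), so displacement ~ force until slip.

def estimate_tangential_force(touch_points, k=0.8, slip_threshold=6.0):
    """touch_points: list of (x, y) contact positions over time.
    Returns (fx, fy) force estimates in arbitrary units, or None
    once movement exceeds the slip threshold (finger is sliding)."""
    if len(touch_points) < 2:
        return (0.0, 0.0)
    x0, y0 = touch_points[0]          # initial contact position
    x, y = touch_points[-1]           # current contact position
    dx, dy = x - x0, y - y0
    if (dx * dx + dy * dy) ** 0.5 > slip_threshold:
        return None                   # finger slipped: a drag, not force
    return (k * dx, k * dy)           # pre-slip displacement ~ force
```

The slip threshold is what separates force input from ordinary dragging: friction can only hold the fingertip in place for small deformations.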
Hover plays an important role in the real world. It tells us where a finger came from, what its posture is like, and how fast it moves, all of which can completely change the meaning of a gesture. We wanted to augment touch input with this information. Since hover-sensitive touch devices are rare, we built our own hover-tracking touchpad, ThickPad, with infrared LEDs and phototransistors. The touchpad detects the 3D location and shape of fingers over the surface by measuring reflected light intensity, and detects finger contact using an ITO film placed over the LED array. After verifying its sensing capability, we built a larger hover-tracking optical touchpad, LongPad, that covers the whole palm rest area of a laptop. Its area- and shape-detection capability allowed us to reject more than 99% of accidental palm touches and enabled rich interactions such as bimanual touch. We further explored the possibilities of combining hover and force information across various scenarios. Pre-Touch explores the hover input space in a more sophisticated and holistic way: we designed interaction techniques that use hover and grip information to provide an anticipatory reaction before a touch, to retroactively interpret a touch, and to enable richer touch operations.
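The basic reduction from a grid of reflected-light readings to a finger estimate can be sketched as below. This is a simplified assumption of how such a sensor might be processed, not ThickPad's actual pipeline: (x, y) comes from an intensity-weighted centroid, and proximity from the peak reading, since a closer finger reflects more IR light. The noise floor value is illustrative.

```python
# Hypothetical sketch of optical hover tracking: a 2D grid of IR
# phototransistor readings is reduced to a finger estimate.
# (x, y) from the intensity-weighted centroid, proximity from the
# peak intensity (a closer finger reflects more light).

def track_hover(intensity, noise_floor=0.05):
    """intensity: 2D list of normalized sensor readings in [0, 1].
    Returns (x, y, proximity) or None if no finger is detected."""
    total = wx = wy = peak = 0.0
    for row_i, row in enumerate(intensity):
        for col_i, value in enumerate(row):
            if value <= noise_floor:
                continue              # ignore ambient-light noise
            total += value
            wx += value * col_i
            wy += value * row_i
            peak = max(peak, value)
    if total == 0.0:
        return None                   # nothing above the noise floor
    # proximity in (0, 1]: near 1 = close to the surface
    return (wx / total, wy / total, peak)
```

The same intensity image also gives the contact area and shape, which is what makes palm rejection and bimanual interaction possible on LongPad.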
Seongkook Heo, Jaehyun Han, Geehyuk Lee, Designing Rich Touch Interaction through Proximity and 2.5D Force Sensing Touchpad, OZCHI 2013 Paper
Seongkook Heo, Jaehyun Han, and Geehyuk Lee, Designing for Hover- and Force-Enriched Touch Interaction, Computer-Human Interaction. Cognitive Effects of Spatial Interaction, Learning, and Ability, Lecture Notes in Computer Science Volume 8433 (2015) Paper
Beyond the physical properties produced while touching a surface, we also have fine control over quick and accurate finger movements, so we explored what our fingers can do well that touch input does not yet support. Consecutive distant taps, for example, are fairly easy to perform with our fingers but had not been used for touch input; we designed ways to use them to enrich the input vocabulary of mobile touch interfaces. How about the precision of our fingers? Touch is known as a fairly inaccurate input, but in fact people can type even on a tiny smartwatch. We developed SplitBoard, which splits the QWERTY keyboard in half and uses a flick gesture, which is not well utilized for text entry (especially on small screens where drawing gestures on a keyboard are difficult), to switch between the two halves of the keyboard.
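The SplitBoard idea can be sketched as a tiny state machine: only one half of the keyboard is visible at a time, and a horizontal movement longer than a threshold is interpreted as a flick that toggles the visible half. The layout strings and threshold below are illustrative assumptions, not the published implementation.

```python
# Hypothetical sketch of the SplitBoard concept: half a QWERTY layout
# fits on a small watch screen, and a flick swaps in the other half.

LEFT_HALF = ["qwert", "asdfg", "zxcvb"]
RIGHT_HALF = ["yuiop", "hjkl", "nm"]

class SplitBoard:
    def __init__(self):
        self.showing_left = True

    def visible_keys(self):
        return LEFT_HALF if self.showing_left else RIGHT_HALF

    def on_gesture(self, dx, flick_threshold=30):
        """dx: horizontal movement in pixels. Movement longer than the
        threshold is a flick and toggles the visible half; anything
        shorter is treated as a tap on a key."""
        if abs(dx) > flick_threshold:
            self.showing_left = not self.showing_left
            return "switched"
        return "tap"
```

Because each half occupies the full screen width, every key is twice as wide as it would be on a full miniature QWERTY, which is the source of the accuracy gain.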
Jonggi Hong, Seongkook Heo, Poika Isokoski, and Geehyuk Lee, Comparison of Three QWERTY Keyboards for a Smartwatch, Interacting with Computers (2016)
6DOF sensing devices, which measure both 3D position and 3D orientation, have been used since the very early days of computing. With 6DOF sensing, we can tell where an object is located and how it is oriented. However, 6DOF sensing devices are expensive and require a large tracked object. Through a collaborative project with Samsung Electronics, we developed a new method to measure the 6DOF pose of a tracker using only cheap, off-the-shelf parts: infrared LEDs and photodiodes. IrCube and IrPen describe how this method works and how it can be used in various scenarios.
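The key property this method exploits is LED directivity: an LED's emitted intensity falls off with the angle from its optical axis, so the ratio of intensities received from differently aimed LEDs encodes direction independently of distance. Here is a hypothetical 1D illustration of that principle (a cosine falloff model and two LEDs aimed at +/-a radians), not the actual IrCube algorithm, which recovers the full 6DOF pose:

```python
import math

# Hypothetical 1D illustration of LED-directivity tracking: two LEDs
# aimed at +/-a radians let a photodiode recover its bearing b from
# the intensity ratio alone, regardless of distance. cos() falloff
# and all values are illustrative assumptions.

def led_intensity(bearing, axis):
    """Received intensity from an LED aimed at `axis` (cosine falloff)."""
    return max(0.0, math.cos(bearing - axis))

def estimate_bearing(i1, i2, a):
    """Invert the cosine model for LEDs at axes +a and -a:
    i1/i2 = cos(b - a) / cos(b + a)  =>  tan(b) = (r-1)/((r+1)*tan(a))"""
    r = i1 / i2
    return math.atan((r - 1.0) / ((r + 1.0) * math.tan(a)))

a = math.radians(20)          # LED axes at +/-20 degrees
true_b = math.radians(7)      # actual sensor bearing
i1 = led_intensity(true_b, a)
i2 = led_intensity(true_b, -a)
print(round(math.degrees(estimate_bearing(i1, i2, a)), 3))  # prints 7.0
```

With several LEDs aimed along different axes and several photodiodes, the same ratio principle yields enough constraints to solve for position and orientation together.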
Seongkook Heo, Jaehyun Han, Sangwon Choi, Seunghwan Lee, Geehyuk Lee, Hyong-Euk Lee, SangHyun Kim, Won-Chul Bang, DoKyoon Kim, and ChangYeong Kim, IrCube Tracker: An Optical 6-DOF Tracker based on LED Directivity, UIST '11 Paper | Video
Jaehyun Han, Seongkook Heo, Geehyuk Lee, Won-Chul Bang, DoKyoon Kim, and ChangYeong Kim, 6-DOF tracker using LED directivity, Electronics Letters, 47(3):177-178, 2011 Paper
Here are some projects that aim to make computers understand people and to help designers better understand in-situ challenges while designing interactions. I like the snapping feature of many modern applications, but at the same time I have always struggled to align shapes at the exact location I want. With my colleagues, I conducted an experiment to see where people align different shapes and how that differs from the way computers align them, and we then came up with a shape-dependent snapping algorithm. Is there a way to discover the types of relationships between members of an organization? We investigated the location and messaging histories of members and found that we could determine whether two members are friends or colleagues.
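The idea of shape-dependent snapping can be sketched as follows: instead of always offering bounding-box edges as snap targets, the guide lines depend on what the shape is. The rule set below (edges and center for rectangles, center only for circles) and the tolerance are illustrative assumptions, not the published algorithm.

```python
# Hypothetical sketch of shape-dependent snapping: the snap guides
# offered for a shape depend on its kind, reflecting where people
# actually perceive alignment. Rules here are illustrative only.

def snap_guides(shape):
    """shape: dict with 'kind', 'x', 'y', 'w', 'h' (bounding box).
    Returns the x-coordinates a neighboring shape may snap to."""
    left, right = shape["x"], shape["x"] + shape["w"]
    center = shape["x"] + shape["w"] / 2
    if shape["kind"] == "circle":
        return [center]               # circles read as aligned by center
    return [left, center, right]      # rectangles: edges and center

def snap_x(x, guides, tolerance=5):
    """Snap x to the nearest guide within tolerance, else leave it."""
    best = min(guides, key=lambda g: abs(g - x))
    return best if abs(best - x) <= tolerance else x
```

The same scheme extends to the y-axis and to mixed pairs of shapes, where the experiment's point was precisely that the preferred guide depends on both shapes involved.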
Seongkook Heo, Yong-ki Lee, Jiho Yeom, Geehyuk Lee, Design of a Shape Dependent Snapping Algorithm, CHI 2012 Works-in-Progress Paper
Jinhyuk Choi, Seongkook Heo, Jaehyun Han, Geehyuk Lee, and Junehwa Song, Mining Social Relationship Types in an Organization using Communication Patterns, CSCW 2013 Paper
Changmin Kim, Seongkook Heo, Kyeongah Jeong, Youn-kyoung Lim, Formula One: Rapid In-the-Wild Design and Evaluation of Interactive Prototypes, HCI Korea 2016 (Best paper)
Using a touch-sensitive mouse, we built a system that measures your mouse grip and identifies who you are. We won the 2nd-place People's Choice Award at the UIST '11 Student Innovation Contest. Video
We built a slingshot device controlled with a force-sensitive touchpad. We named it TteokPad, after 'tteok', the Korean word for rice cake. We received the 2nd-place People's Choice Award again at the UIST '12 Student Innovation Contest. Video
In this project, we built a system that controls the water level in a glass bottle and lets users blow wind by pressing an air bag. This project won 2nd place in the Most Creative category at the UIST '13 Student Innovation Contest. Video
I love bikes. Here are some pictures of me and my bikes.
My first bike, literally.
During my undergrad years, I rode a BMX and enjoyed doing some tricks
Then I got an MTB to do some other tricks
I travelled with this boy from Brisbane to Sydney, Australia
Actually, I only recently discovered how fun it is to ride a road bike.