
Introduction to Professor Chang-Su Kim and His Lab at Korea University

1. Could you briefly introduce yourself (and your University/Lab)?
I am a Professor in the School of Electrical Engineering at Korea University. I happen to work on computer vision and machine learning at this moment, but my main research topic is still image processing. About 20 years ago, I focused on robust video compression and 3D data compression. In fact, I was a postdoc in Prof. Jay Kuo’s lab at USC in 2000-2001, where I diversified my research topics. When I visited his lab again during my sabbatical year in 2012, I started to study computer vision. Now, as an image processing engineer, I focus on low-level vision tasks, such as segmentation, tracking, and image enhancement. Korea University is one of the most prestigious universities in Korea, and my lab now consists of 15 hard-working graduate students, who are very productive.

2. What have been your most significant research contributions up to now?
My lab aims to pursue novelty, rather than to follow trends. In terms of novelty (new problems, new solutions, new concepts), I think power-constrained contrast enhancement, the layered depth representation for 2D histogram equalization, multiple random walkers for clustering, backpropagation refinement for segmentation, and order learning for ranking problems represent meaningful contributions.

3. What problems in your research field deserve more attention (or what problems will you like to solve) in the next few years, and why?
Order learning is a new concept for ranking or ordinal regression; it performs relative assessment, which is easier than absolute assessment in most cases. Our lab proposed this concept at ICLR 2020 and ICLR 2021. It is intuitive and shows promising performance. I would like to investigate this concept further and provide stronger foundations for it. Another direction is to combine deep learning with more traditional analytical tools. Deep learning is good for ambiguous tasks in which solutions are best described by examples, not by mathematical equations. However, it is not easily explainable and not easily controllable. So, traditional tools have an edge in tasks that require precise control of errors or outputs. Over the last decade, deep learning has replaced handcrafted features because of its superior performance, and we tend to regard a technique as inferior if it is not trained end-to-end. But we will see more and more techniques combining deep learning with traditional analytical tools, so that we can take advantage of both approaches.
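The relative-assessment idea behind order learning can be sketched roughly as follows. In the actual method, the pairwise order relation is predicted by a trained neural comparator; here, `compare` is only a toy numeric stand-in with an arbitrary threshold `tau`, and `rank_by_comparisons` with its reference set is an illustrative assumption, not the published algorithm.

```python
def compare(a, b, tau=0.5):
    """Ternary order relation between two instances: 'greater', 'similar',
    or 'smaller'. In order learning this comparator would be a trained
    network; this numeric version is purely illustrative."""
    if a - b > tau:
        return "greater"
    if b - a > tau:
        return "smaller"
    return "similar"

def rank_by_comparisons(items, references):
    """Estimate each item's rank by comparing it against a set of
    reference instances and counting how many it exceeds -- relative
    judgments are aggregated instead of regressing an absolute score."""
    scores = []
    for x in items:
        wins = sum(1 for r in references if compare(x, r) == "greater")
        scores.append(wins)
    return scores

# Example: three items ranked against three references.
print(rank_by_comparisons([3.0, 1.0, 5.0], [0.0, 2.0, 4.0]))  # [2, 1, 3]
```

The point of the sketch is that each decision is a local pairwise comparison ("is A greater than B?"), which is typically easier to learn than directly predicting an absolute value for each instance.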

4. What advice would you like to give to the young generation of researchers/engineers?
I think students studying computer vision and machine learning are lucky that these research topics now draw a lot of attention. There is a lot of demand from both industry and academia, so they can choose a job from many good offers. Even more importantly, they can do research in very competitive areas. Because of the competition, they learn how to adapt and how to bring out the best in themselves. However, there is a risk. If the only goal of students is to publish as many CVPR papers as possible, they may not learn the fundamentals, such as linear algebra, information theory, and signal processing. Fundamentals are important. The trendy techniques of today will no longer be relevant in just a few years. So, young researchers should spend more time learning fundamentals and be more adaptive.