Seunghyeon Seo

I'm a Ph.D. student at the College of Engineering at Seoul National University, where I'm advised by Prof. Nojun Kwak in the Machine Intelligence and Pattern Analysis Lab (MIPAL). Previously, I received my bachelor's degree from the College of Agriculture and Life Sciences at SNU, where I majored in Agricultural Economics.

I'm interested in computer vision, machine learning, and neural rendering. Much of my interest is currently focused on efficient training frameworks for NeRF and 3D-GS.

Email  /  CV  /  Google Scholar  /  LinkedIn  /  Github

News
Research
Unleash the Potential of CLIP for Video Highlight Detection
Donghoon Han*, Seunghyeon Seo*, Eunhwan Park, SeongUk Nam, Nojun Kwak
CVPR 2024 Workshop on Efficient Large Vision Models
arXiv

We leverage the pre-trained multimodal model CLIP to achieve state-of-the-art performance in video highlight detection by fine-tuning the encoder and integrating a novel saliency pooling technique.

HourglassNeRF: Casting an Hourglass as a Bundle of Rays for Few-shot Neural Rendering
Seunghyeon Seo, Yeonjin Chang, Jayeon Yoo, Seungwoo Lee, Hojun Lee, Nojun Kwak
Under Review
arXiv

We cast an hourglass as an additional training resource that adaptively regularizes the high-frequency components of the samples, and enhance the integrity of the training framework by conceptualizing the hourglass as a bundle of flipped diffuse reflection rays, consistent with the Lambertian assumption.

Fast Sun-aligned Outdoor Scene Relighting based on TensoRF
Yeonjin Chang, Yearim Kim, Seunghyeon Seo, Jung Yi, Nojun Kwak
WACV 2024
arXiv

We simplify outdoor scene relighting for NeRF by aligning with the sun, eliminating the need for environment maps and speeding up the process using a novel cubemap concept within the framework of TensoRF.

ConcatPlexer: Additional Dim1 Batching for Faster ViTs
Donghoon Han, Seunghyeon Seo, DongHyeon Jeon, Jiho Jang, Chaerin Kong, Nojun Kwak
NeurIPS 2023 Workshop on Advancing Neural Network Training   (Oral)
arXiv

We expedite ViT inference by concatenating abstract visual tokens from multiple images along dim=1 and processing them collectively.

FlipNeRF: Flipped Reflection Rays for Few-shot Novel View Synthesis
Seunghyeon Seo, Yeonjin Chang, Nojun Kwak
ICCV 2023
project page / code / video / arXiv

We utilize flipped reflection rays as additional training resources for few-shot novel view synthesis, leading to more accurate surface normal estimation.

MDPose: Real-Time Multi-Person Pose Estimation via Mixture Density Model
Seunghyeon Seo, Jaeyoung Yoo, Jihye Hwang, Nojun Kwak
UAI 2023
arXiv

We model the high-dimensional joint distribution of human keypoints with a mixture density model via a Random Keypoint Grouping strategy, and achieve competitive performance in real-time by eliminating the additional instance identification process.

End-to-End Multi-Object Detection with a Regularized Mixture Model
Jaeyoung Yoo*, Hojun Lee*, Seunghyeon Seo, Inseop Chung, Nojun Kwak
ICML 2023
code / arXiv

We propose end-to-end multi-object Detection with a Regularized Mixture Model (D-RMM), which is trained by minimizing the NLL with a proposed regularization term, the Maximum Component Maximization (MCM) loss, that prevents duplicate predictions.

MixNeRF: Modeling a Ray with Mixture Density for Novel View Synthesis from Sparse Inputs
Seunghyeon Seo, Donghoon Han*, Yeonjin Chang*, Nojun Kwak
CVPR 2023   (Qualcomm Innovation Fellowship Korea 2023 Winner)
project page / code / video / arXiv

We model a ray with a mixture density model, leading to efficient learning of the density distribution from sparse inputs, and propose an effective auxiliary task of ray depth estimation for few-shot novel view synthesis.

MUM: Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection
JongMok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak
CVPR 2022
code / arXiv

We introduce a simple yet effective data augmentation method, Mix/UnMix (MUM), which unmixes the feature tiles corresponding to mixed image tiles within the SSOD framework.


Thanks for sharing the website template, Jon Barron. :)