Sungnyun Kim
2024
- [c10] Sungnyun Kim, Haofu Liao, Srikar Appalaraju, Peng Tang, Zhuowen Tu, Ravi Kumar Satzoda, R. Manmatha, Vijay Mahadevan, Stefano Soatto: DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models. EMNLP 2024: 3167-3193
- [c9] Kangwook Jang, Sungnyun Kim, Hoirin Kim: STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models. ICASSP 2024: 10721-10725
- [c8] Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun: DistiLLM: Towards Streamlined Distillation for Large Language Models. ICML 2024
- [i15] Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun: DistiLLM: Towards Streamlined Distillation for Large Language Models. CoRR abs/2402.03898 (2024)
- [i14] Seongyoon Kim, Minchan Jeong, Sungnyun Kim, Sungwoo Cho, Sumyeong Ahn, Se-Young Yun: FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning. CoRR abs/2406.02355 (2024)
- [i13] Sungnyun Kim, Kangwook Jang, Sangmin Bae, Hoirin Kim, Se-Young Yun: Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition. CoRR abs/2407.03563 (2024)
- [i12] Jihwan Oh, Sungnyun Kim, Gahee Kim, Sunghwan Kim, Se-Young Yun: Diffusion-based Episodes Augmentation for Offline Multi-Agent Reinforcement Learning. CoRR abs/2408.13092 (2024)
- [i11] Sungnyun Kim, Haofu Liao, Srikar Appalaraju, Peng Tang, Zhuowen Tu, Ravi Kumar Satzoda, R. Manmatha, Vijay Mahadevan, Stefano Soatto: DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models. CoRR abs/2410.03061 (2024)

2023
- [c7] Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun: Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-network. AAAI 2023: 197-205
- [c6] Sungnyun Kim, Sangmin Bae, Se-Young Yun: Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning. CVPR 2023: 7537-7547
- [c5] Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoirin Kim: Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation. INTERSPEECH 2023: 316-320
- [c4] Sangmin Bae, June-Woo Kim, Won-Yang Cho, Hyerim Baek, Soyoun Son, Byungjo Lee, Changwan Ha, Kyongpil Tae, Sungnyun Kim, Se-Young Yun: Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification. INTERSPEECH 2023: 5436-5440
- [i10] Sungnyun Kim, Sangmin Bae, Se-Young Yun: Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning. CoRR abs/2303.11101 (2023)
- [i9] Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoirin Kim: Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation. CoRR abs/2305.11685 (2023)
- [i8] Sangmin Bae, June-Woo Kim, Won-Yang Cho, Hyerim Baek, Soyoun Son, Byungjo Lee, Changwan Ha, Kyongpil Tae, Sungnyun Kim, Se-Young Yun: Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification. CoRR abs/2305.14032 (2023)
- [i7] Sungnyun Kim, Junsoo Lee, Kibeom Hong, Daesik Kim, Namhyuk Ahn: DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models. CoRR abs/2305.15194 (2023)
- [i6] Kangwook Jang, Sungnyun Kim, Hoirin Kim: STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models. CoRR abs/2312.09040 (2023)

2022
- [j1] Sungnyun Kim, Se-Young Yun: Calibration of Few-Shot Classification Tasks: Mitigating Misconfidence From Distribution Mismatch. IEEE Access 10: 53894-53908 (2022)
- [c3] Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun: ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning. CIKM 2022: 4359-4363
- [c2] Sungnyun Kim, Jaewoo Shin, Seongha Eom, Jihwan Oh, Se-Young Yun: Real-time and Explainable Detection of Epidemics with Global News Data. Healthcare AI and COVID-19 Workshop 2022: 73-90
- [c1] Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun: Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty. NeurIPS 2022
- [i5] Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun: Understanding Cross-Domain Few-Shot Learning: An Experimental Study. CoRR abs/2202.01339 (2022)
- [i4] Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun: ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning. CoRR abs/2205.05282 (2022)
- [i3] Yujin Kim, Jaehoon Oh, Sungnyun Kim, Se-Young Yun: Revisiting the Updates of a Pre-trained Model for Few-shot Learning. CoRR abs/2205.07874 (2022)

2021
- [i2] Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun: Self-Contrastive Learning. CoRR abs/2106.15499 (2021)

2020
- [i1] Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun: MixCo: Mix-up Contrastive Learning for Visual Representation. CoRR abs/2010.06300 (2020)
last updated on 2024-11-15 20:35 CET by the dblp team
all metadata released as open data under CC0 1.0 license