Guan-Ting Lin
2020 – today
- 2024
  - [c21] Guan-Ting Lin, Cheng-Han Chiang, Hung-yi Lee: Advancing Large Language Models to Capture Varied Speaking Styles and Respond Properly in Spoken Conversations. ACL (1) 2024: 6626-6642
  - [c20] Guan-Ting Lin, Hung-yi Lee: Can LLMs Understand the Implication of Emphasized Sentences in Dialogue? EMNLP (Findings) 2024: 13391-13401
  - [c19] Guan-Ting Lin, Wei Huang, Hung-yi Lee: Continual Test-time Adaptation for End-to-end Speech Recognition on Noisy Speech. EMNLP 2024: 20003-20015
  - [c18] Guan-Ting Lin, Prashanth Gurunath Shivakumar, Ankur Gandhe, Chao-Han Huck Yang, Yile Gu, Shalini Ghosh, Andreas Stolcke, Hung-Yi Lee, Ivan Bulyko: Paralinguistics-Enhanced Large Language Modeling of Spoken Dialogue. ICASSP 2024: 10316-10320
  - [c17] Chyi-Jiunn Lin, Guan-Ting Lin, Yung-Sung Chuang, Wei-Lun Wu, Shang-Wen Li, Abdelrahman Mohamed, Hung-Yi Lee, Lin-Shan Lee: SpeechDPR: End-To-End Spoken Passage Retrieval For Open-Domain Spoken Question Answering. ICASSP 2024: 12476-12480
  - [c16] Kevin Everson, Yile Gu, Chao-Han Huck Yang, Prashanth Gurunath Shivakumar, Guan-Ting Lin, Jari Kolehmainen, Ivan Bulyko, Ankur Gandhe, Shalini Ghosh, Wael Hamza, Hung-Yi Lee, Ariya Rastrow, Andreas Stolcke: Towards ASR Robust Spoken Language Understanding Through in-Context Learning with Word Confusion Networks. ICASSP 2024: 12856-12860
  - [i16] Kevin Everson, Yile Gu, Chao-Han Huck Yang, Prashanth Gurunath Shivakumar, Guan-Ting Lin, Jari Kolehmainen, Ivan Bulyko, Ankur Gandhe, Shalini Ghosh, Wael Hamza, Hung-yi Lee, Ariya Rastrow, Andreas Stolcke: Towards ASR Robust Spoken Language Understanding Through In-Context Learning With Word Confusion Networks. CoRR abs/2401.02921 (2024)
  - [i15] Chyi-Jiunn Lin, Guan-Ting Lin, Yung-Sung Chuang, Wei-Lun Wu, Shang-Wen Li, Abdelrahman Mohamed, Hung-yi Lee, Lin-Shan Lee: SpeechDPR: End-to-End Spoken Passage Retrieval for Open-Domain Spoken Question Answering. CoRR abs/2401.13463 (2024)
  - [i14] Guan-Ting Lin, David Cheng-Han Chiang, Hung-yi Lee: Advancing Large Language Models to Capture Varied Speaking Styles and Respond Properly in Spoken Conversations. CoRR abs/2402.12786 (2024)
  - [i13] Guan-Ting Lin, Wei-Ping Huang, Hung-yi Lee: Continual Test-time Adaptation for End-to-end Speech Recognition on Noisy Speech. CoRR abs/2406.11064 (2024)
  - [i12] Guan-Ting Lin, Hung-yi Lee: Can LLMs Understand the Implication of Emphasized Sentences in Dialogue? CoRR abs/2406.11065 (2024)
  - [i11] Tzu-Quan Lin, Guan-Ting Lin, Hung-yi Lee, Hao Tang: Property Neurons in Self-Supervised Speech Transformers. CoRR abs/2409.05910 (2024)
- 2023
  - [c15] Derek Xu, Shuyan Dong, Changhan Wang, Suyoun Kim, Zhaojiang Lin, Bing Liu, Akshat Shrivastava, Shang-Wen Li, Liang-Hsuan Tseng, Guan-Ting Lin, Alexei Baevski, Hung-yi Lee, Yizhou Sun, Wei Wang: Introducing Semantics into Speech Encoders. ACL (1) 2023: 11413-11429
  - [c14] Guan-Ting Lin, Qingming Tang, Chieh-Chi Kao, Viktor Rozgic, Chao Wang: Weight-Sharing Supernet for Searching Specialized Acoustic Event Classification Networks Across Device Constraints. ICASSP 2023: 1-5
  - [c13] Guan-Wei Wu, Guan-Ting Lin, Shang-Wen Li, Hung-yi Lee: Improving Textless Spoken Language Understanding with Discrete Units as Intermediate Target. INTERSPEECH 2023: 1503-1507
  - [i10] Guan-Ting Lin, Qingming Tang, Chieh-Chi Kao, Viktor Rozgic, Chao Wang: Weight-sharing Supernet for Searching Specialized Acoustic Event Classification Networks Across Device Constraints. CoRR abs/2303.10351 (2023)
  - [i9] Guan-Wei Wu, Guan-Ting Lin, Shang-Wen Li, Hung-yi Lee: Improving Textless Spoken Language Understanding with Discrete Units as Intermediate Target. CoRR abs/2305.18096 (2023)
  - [i8] Min-Han Shih, Ho-Lam Chung, Yu-Chi Pai, Ming-Hao Hsu, Guan-Ting Lin, Shang-Wen Li, Hung-Yi Lee: GSQA: An End-to-End Model for Generative Spoken Question Answering. CoRR abs/2312.09781 (2023)
  - [i7] Guan-Ting Lin, Prashanth Gurunath Shivakumar, Ankur Gandhe, Chao-Han Huck Yang, Yile Gu, Shalini Ghosh, Andreas Stolcke, Hung-yi Lee, Ivan Bulyko: Paralinguistics-Enhanced Large Language Modeling of Spoken Dialogue. CoRR abs/2312.15316 (2023)
- 2022
  - [j4] Juin-Ming Tsai, Shiu-Wan Hung, Guan-Ting Lin: Continued usage of smart wearable devices (SWDs): cross-level analysis of gamification and network externality. Electron. Mark. 32(3): 1661-1676 (2022)
  - [c12] Guan-Ting Lin, Chan-Jan Hsu, Da-Rong Liu, Hung-Yi Lee, Yu Tsao: Analyzing The Robustness of Unsupervised Speech Recognition. ICASSP 2022: 8202-8206
  - [c11] Guan-Ting Lin, Shang-Wen Li, Hung-yi Lee: Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition. INTERSPEECH 2022: 2198-2202
  - [c10] Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-Wen Yang, Hsuan-Jui Chen, Shuyan Annie Dong, Shang-Wen Li, Abdelrahman Mohamed, Hung-yi Lee, Lin-Shan Lee: DUAL: Discrete Spoken Unit Adaptive Learning for Textless Spoken Question Answering. INTERSPEECH 2022: 5165-5169
  - [c9] Yu-Jen Huang, Guan-Ting Lin, Pin-Hsuan Ting, Zheng-Wei Huang, Shao-Hung Yu, Yi-Jun Cai, Chia-Chien Wei, Sien Chi, Chun-Ting Lin: Mobile 14-GHz Bandwidth Fronthaul Link Supporting 128 RF-Chain Signals for 6G Ma-MIMO Beamforming. OFC 2022: 1-3
  - [c8] Guan-Ting Lin, Chi-Luen Feng, Wei-Ping Huang, Yuan Tseng, Tzu-Han Lin, Chen-An Li, Hung-yi Lee, Nigel G. Ward: On the Utility of Self-Supervised Models for Prosody-Related Tasks. SLT 2022: 1104-1111
  - [i6] Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-Wen Yang, Hsuan-Jui Chen, Shuyan Dong, Shang-Wen Li, Abdelrahman Mohamed, Hung-yi Lee, Lin-Shan Lee: DUAL: Discrete Spoken Unit Adaptive Learning for Textless Spoken Question Answering. CoRR abs/2203.04911 (2022)
  - [i5] Guan-Ting Lin, Shang-Wen Li, Hung-yi Lee: Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition. CoRR abs/2203.14222 (2022)
  - [i4] Guan-Ting Lin, Chi-Luen Feng, Wei-Ping Huang, Yuan Tseng, Tzu-Han Lin, Chen-An Li, Hung-yi Lee, Nigel G. Ward: On the Utility of Self-supervised Models for Prosody-related Tasks. CoRR abs/2210.07185 (2022)
  - [i3] Derek Xu, Shuyan Dong, Changhan Wang, Suyoun Kim, Zhaojiang Lin, Akshat Shrivastava, Shang-Wen Li, Liang-Hsuan Tseng, Alexei Baevski, Guan-Ting Lin, Hung-yi Lee, Yizhou Sun, Wei Wang: Introducing Semantics into Speech Encoders. CoRR abs/2211.08402 (2022)
- 2021
  - [j3] Song-Jeng Huang, Philip Nathaniel Immanuel, Yi-Kuang Yen, Ching-Lung Yen, Chi-En Tseng, Guan-Ting Lin, Che-Kuan Lin, Zhong-Xuan Huang: Tungsten Disulfide Nanotube-Modified Conductive Paper-Based Chemiresistive Sensor for the Application in Volatile Organic Compounds' Detection. Sensors 21(18): 6121 (2021)
  - [c7] Shu-Wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, Tzu-Hsien Huang, Wei-Cheng Tseng, Ko-tik Lee, Da-Rong Liu, Zili Huang, Shuyan Dong, Shang-Wen Li, Shinji Watanabe, Abdelrahman Mohamed, Hung-yi Lee: SUPERB: Speech Processing Universal PERformance Benchmark. Interspeech 2021: 1194-1198
  - [i2] Shu-Wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, Tzu-Hsien Huang, Wei-Cheng Tseng, Ko-tik Lee, Da-Rong Liu, Zili Huang, Shuyan Dong, Shang-Wen Li, Shinji Watanabe, Abdelrahman Mohamed, Hung-yi Lee: SUPERB: Speech processing Universal PERformance Benchmark. CoRR abs/2105.01051 (2021)
  - [i1] Guan-Ting Lin, Manuel Giambi: Context-gloss Augmentation for Improving Word Sense Disambiguation. CoRR abs/2110.07174 (2021)
- 2020
  - [j2] Shun-Ping Chung, Hui-Yu Pai, Chia-Hsin Lin, Guan-Ting Lin: Performance analysis of the pending interest table in the cellular network with reservation and retrials. IET Networks 9(4): 156-164 (2020)
  - [j1] Guan-Ting Lin, Vinay Malligere Shivanna, Jiun-In Guo: A Deep-Learning Model with Task-Specific Bounding Box Regressors and Conditional Back-Propagation for Moving Object Detection in ADAS Applications. Sensors 20(18): 5269 (2020)
2010 – 2019
- 2019
  - [c6] Kuan-Chou Chen, Guan-Ting Lin, Che-Tsung Lin, Jiun-In Guo: Recognizing Chinese Texts with 3D Convolutional Neural Network. ICIP 2019: 2120-2123
- 2018
  - [c5] Min-Chieh Chou, Jen-Yuan (James) Chang, Chieh Yang, Guan-Ting Lin, Tune-Hune Kao, Meng-Chi Huang: A Novel Design and Fabrication of Tactile Sensor for Humanoid Robot Finger. AIM 2018: 444-447
  - [c4] Guan-Ting Lin, Patrisia Sherryl Santoso, Che-Tsung Lin, Chia-Chi Tsai, Jiun-In Guo: One Stage Detection Network with an Auxiliary Classifier for Real-Time Road Marks Detection. APSIPA 2018: 1379-1382
- 2017
  - [c3] Guan-Ting Lin, Patrisia Sherryl Santoso, Che-Tsung Lin, Chia-Chi Tsai, Jiun-In Guo: Stop line detection and distance measurement for road intersection based on deep learning neural network. APSIPA 2017: 692-695
  - [c2] Wei-Yu Chiu, Guan-Ting Lin: A cross-layer design for power flow control in smart grids. ASCC 2017: 1584-1589
- 2012
  - [c1] Shih-Wen Chiu, Jen-Huo Wang, Guan-Ting Lin, Chia-Lin Chang, Hsin Chen, Kea-Tiong Tang: Towards a fully integrated electronic nose SoC. MWSCAS 2012: 166-169