Hyun Kwon
Person information
- affiliation: Korea Military Academy, Department of Artificial Intelligence and Data Science, Seoul, Republic of Korea
- affiliation (PhD 2020): KAIST, Yuseong-gu, Daejeon, Republic of Korea
2020 – today
- 2024
  - [j41] Hyun Kwon, Jun Lee: AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack. IEEE Access 12: 5345-5356 (2024)
  - [j40] Hyun Kwon: AudioGuard: Speech Recognition System Robust against Optimized Audio Adversarial Examples. Multim. Tools Appl. 83(20): 57943-57962 (2024)
- 2023
  - [j39] Hyun Kwon: Dual-Targeted Textfooler Attack on Text Classification Systems. IEEE Access 11: 15164-15173 (2023)
  - [j38] Yongrae Kim, Hyunmin Gwak, Jaehoon Oh, Minho Kang, Jinkyu Kim, Hyun Kwon, Sunghwan Kim: CloudNet: A LiDAR-Based Face Anti-Spoofing Model That Is Robust Against Light Variation. IEEE Access 11: 16984-16993 (2023)
  - [j37] Hyun Kwon, Sanghyun Lee: Detecting textual adversarial examples through text modification on text classification systems. Appl. Intell. 53(16): 19161-19185 (2023)
  - [j36] Hyun Kwon, Sanghyun Lee: Erratum to 'Ensemble transfer attack targeting text classification systems' [Computers & Security 117 (2022) 102695]. Comput. Secur. 124: 102944 (2023)
  - [j35] Hyun Kwon, Seung-Hun Nam: Audio adversarial detection through classification score on speech recognition systems. Comput. Secur. 126: 103061 (2023)
  - [j34] Kyoungmin Ko, Sunghwan Kim, Hyun Kwon: Multi-targeted audio adversarial example for use against speech recognition systems. Comput. Secur. 128: 103168 (2023)
  - [j33] Hyun Kwon: Toward Selective Adversarial Attack for Gait Recognition Systems Based on Deep Neural Network. IEICE Trans. Inf. Syst. 106-D(2): 262-266 (2023)
  - [j32] Hyun Kwon: Adversarial image perturbations with distortions weighted by color on deep neural networks. Multim. Tools Appl. 82(9): 13779-13795 (2023)
- 2022
  - [j31] Hyun Kwon, Kyoungmin Ko, Sunghwan Kim: Optimized Adversarial Example With Classification Score Pattern Vulnerability Removed. IEEE Access 10: 35804-35813 (2022)
  - [j30] Hyun Kwon, Sanghyun Lee: Ensemble transfer attack targeting text classification systems. Comput. Secur. 117: 102695 (2022)
  - [j29] Hyun Kwon: Multi-Model Selective Backdoor Attack with Different Trigger Positions. IEICE Trans. Inf. Syst. 105-D(1): 170-174 (2022)
  - [j28] Hyun Kwon, Changhyun Cho, Jun Lee: Priority Evasion Attack: An Adversarial Example That Considers the Priority of Attack on Each Classifier. IEICE Trans. Inf. Syst. 105-D(11): 1880-1889 (2022)
  - [j27] Hyun Kwon, Yongchul Kim: Toward Selective Membership Inference Attack against Deep Learning Model. IEICE Trans. Inf. Syst. 105-D(11): 1911-1915 (2022)
  - [j26] Hyun Kwon, Sunghwan Cho: Multi-Targeted Poisoning Attack in Deep Neural Networks. IEICE Trans. Inf. Syst. 105-D(11): 1916-1920 (2022)
  - [j25] Hyun Kwon, Sanghyun Lee: Friend-guard adversarial noise designed for electroencephalogram-based brain-computer interface spellers. Neurocomputing 506: 184-195 (2022)
  - [j24] Hyun Kwon, Jongwook Jeong: AdvU-Net: Generating Adversarial Example Based on Medical Image and Targeting U-Net Model. J. Sensors 2022: 1-13 (2022)
  - [j23] Hyun Kwon, Yongchul Kim: BlindNet backdoor: Attack on deep neural network using blind watermark. Multim. Tools Appl. 81(5): 6217-6234 (2022)
  - [j22] Minsoo Lee, Hyun Kwon, Hyunsoo Yoon: Compliance-Driven Cybersecurity Planning Based on Formalized Attack Patterns for Instrumentation and Control Systems of Nuclear Power Plants. Secur. Commun. Networks 2022: 4714899:1-4714899:13 (2022)
- 2021
  - [j21] Hyun Kwon, Hyunsoo Yoon, Daeseon Choi: Data Correction For Enhancing Classification Accuracy By Unknown Deep Neural Network Classifiers. KSII Trans. Internet Inf. Syst. 15(9): 3243-3257 (2021)
  - [j20] Kyoungmin Ko, Hyunmin Gwak, Nalinh Thoummala, Hyun Kwon, Sunghwan Kim: SqueezeFace: Integrative Face Recognition Methods with LiDAR Sensors. J. Sensors 2021: 1-8 (2021)
  - [j19] Hyun Kwon, Jang-Woon Baek: Adv-Plate Attack: Adversarially Perturbed Plate for License Plate Recognition System. J. Sensors 2021: 1-10 (2021)
  - [j18] Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi: Classification score approach for detecting adversarial example in deep neural network. Multim. Tools Appl. 80(7): 10339-10360 (2021)
  - [j17] Hyun Kwon: MedicalGuard: U-Net Model Robust against Adversarially Perturbed Images. Secur. Commun. Networks 2021: 5595026:1-5595026:8 (2021)
  - [j16] Hyun Kwon, Jun Lee: Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks. Symmetry 13(3): 428 (2021)
  - [c12] Ravikumar V. Chakaravarthy, Hyun Kwon, Hua Jiang: Vision Control Unit in Fully Self Driving Vehicles using Xilinx MPSoC and Opensource Stack. ASP-DAC 2021: 311-317
- 2020
  - [j15] Hyun Kwon: Detecting Backdoor Attacks via Class Difference in Deep Neural Networks. IEEE Access 8: 191049-191056 (2020)
  - [j14] Hyun Kwon, Hyunsoo Yoon, Ki-Woong Park: Robust CAPTCHA Image Generation Enhanced with Adversarial Example Methods. IEICE Trans. Inf. Syst. 103-D(4): 879-882 (2020)
  - [j13] Hyun Kwon, Hyunsoo Yoon, Ki-Woong Park: Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks. IEICE Trans. Inf. Syst. 103-D(4): 883-887 (2020)
  - [j12] Hyun Kwon, Hyunsoo Yoon, Ki-Woong Park: Acoustic-decoy: Detection of adversarial examples through audio modification on speech recognition system. Neurocomputing 417: 357-370 (2020)
  - [j11] Hyun Kwon, Hyunsoo Yoon, Ki-Woong Park: CAPTCHA Image Generation: Two-Step Style-Transfer Learning in Deep Neural Networks. Sensors 20(5): 1495 (2020)
  - [j10] Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi: Selective Audio Adversarial Example in Evasion Attack on Speech Recognition System. IEEE Trans. Inf. Forensics Secur. 15: 526-538 (2020)
  - [c11] Hyun Kwon, Jungmin Roh, Hyunsoo Yoon, Ki-Woong Park: TargetNet Backdoor: Attack on Deep Neural Network with Use of Different Triggers. ICIIT 2020: 140-145
  - [c10] Hyun Kwon, Hyunsoo Yoon, Ki-Woong Park: FriendNet Backdoor: Indentifying Backdoor Attack that is safe for Friendly Deep Neural Network. ICSIM 2020: 53-57
2010 – 2019
- 2019
  - [j9] Hyun Kwon, Hyunsoo Yoon, Daeseon Choi: Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example. IEEE Access 7: 60908-60919 (2019)
  - [j8] Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi: Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes. IEEE Access 7: 73493-73503 (2019)
  - [j7] Ohmin Kwon, Hyun Kwon, Hyunsoo Yoon: Rootkit inside GPU Kernel Execution. IEICE Trans. Inf. Syst. 102-D(11): 2261-2264 (2019)
  - [j6] Hyun Kwon, Hyunsoo Yoon, Ki-Woong Park: Selective Poisoning Attack on Deep Neural Networks. Symmetry 11(7): 892 (2019)
  - [c9] Hyun Kwon, Hyunsoo Yoon, Ki-Woong Park: Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error. AIKE 2019: 136-139
  - [c8] Hyun Kwon, Hyunsoo Yoon, Ki-Woong Park: POSTER: Detecting Audio Adversarial Example through Audio Modification. CCS 2019: 2521-2523
  - [c7] Hyun Kwon, Hyunsoo Yoon, Daeseon Choi: Priority Adversarial Example in Evasion Attack on Multiple Deep Neural Networks. ICAIIC 2019: 399-404
  - [c6] Hyun Kwon, Ohmin Kwon, Hyunsoo Yoon, Ki-Woong Park: Face Friend-Safe Adversarial Example on Face Recognition System. ICUFN 2019: 547-551
  - [c5] Hyun Kwon, Hyunsoo Yoon, Ki-Woong Park: CAPTCHA Image Generation Using Style Transfer Learning in Deep Neural Network. WISA 2019: 234-246
- 2018
  - [j5] Hyun Kwon, Yongchul Kim, Ki-Woong Park, Hyunsoo Yoon, Daeseon Choi: Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network. IEEE Access 6: 46084-46096 (2018)
  - [j4] Hyun Kwon, Yongchul Kim, Ki-Woong Park, Hyunsoo Yoon, Daeseon Choi: Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier. Comput. Secur. 78: 380-397 (2018)
  - [j3] Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi: CAPTCHA Image Generation Systems Using Generative Adversarial Networks. IEICE Trans. Inf. Syst. 101-D(2): 543-546 (2018)
  - [j2] Hyun Kwon, Yongchul Kim, Ki-Woong Park, Hyunsoo Yoon, Daeseon Choi: Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers. IEICE Trans. Inf. Syst. 101-D(10): 2485-2500 (2018)
  - [j1] Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi: Random Untargeted Adversarial Example on Deep Neural Network. Symmetry 10(12): 738 (2018)
  - [c4] Hyun Kwon, Hyunsoo Yoon, Daeseon Choi: POSTER: Zero-Day Evasion Attack Analysis on Race between Attack and Defense. AsiaCCS 2018: 805-807
  - [c3] Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi: Fooling a Neural Network in Military Environments: Random Untargeted Adversarial Example. MILCOM 2018: 456-461
  - [c2] Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi: One-Pixel Adversarial Example that Is Safe for Friendly Deep Neural Networks. WISA 2018: 42-54
- 2017
  - [c1] Hyun Kwon, Hyunsoo Yoon, Daeseon Choi: Friend-Safe Adversarial Examples in an Evasion Attack on a Deep Neural Network. ICISC 2017: 351-367
last updated on 2024-08-05 20:13 CEST by the dblp team
all metadata released as open data under CC0 1.0 license