Journal papers:
Shen-sian Syu, Juncheng Xie, Hung-yi Lee, “Improving Non-Autoregressive Translation Quality With Pretrained Language Model, Embedding Distillation and Upsampling Strategy for CTC,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 32, 4121-4133, Dec. 2024
Kai-Wei Chang, Haibin Wu, Yu-Kai Wang, Yuan-Kuei Wu, Hua Shen, Wei-Cheng Tseng, Iu-Thing Kang, Shang-Wen Li, Hung-Yi Lee, “SpeechPrompt: Prompting Speech Language Models for Speech Processing Tasks,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 32, 3730-3744, Aug. 2024
Shu-wen Yang, Heng-Jui Chang, Zili Huang, ..., Kushal Lakhotia, Shang-Wen Li, Abdelrahman Mohamed, Shinji Watanabe, Hung-yi Lee, “A Large-Scale Evaluation of Speech Foundation Models,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 32, 2884-2899, Apr. 2024
Yun-Yen Chuang, Hung-Min Hsu, Kevin Lin, Ray-I. Chang, Hung-Yi Lee, “MetaEx-GAN: Meta Exploration to Improve Natural Language Generation via Generative Adversarial Networks,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31, 3968-3980, Sept. 2023
Po-chun Hsu, Da-rong Liu, Andy T. Liu, Hung-yi Lee, “Parallel Synthesis for Autoregressive Speech Generation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31, 3095-3111, Aug. 2023
Da-rong Liu, Po-chun Hsu, Yi-chen Chen, Sung-feng Huang, Shun-po Chuang, Da-yi Wu, Hung-yi Lee, “Learning Phone Recognition From Unpaired Audio and Phone Sequences Based on Generative Adversarial Network,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30, 230-243, Dec. 2022
Haibin Wu, Xu Li, Andy T. Liu, Zhiyong Wu, Helen Meng, Hung-Yi Lee, “Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30, 202-217, Dec. 2022
Abdelrahman Mohamed, Hung-yi Lee, Lasse Borgholt, Jakob D. Havtorn, Joakim Edin, Christian Igel, “Self-Supervised Speech Representation Learning: A Review,” IEEE Journal of Selected Topics in Signal Processing, 16, 1179-1210, Oct. 2022
Hung-Yi Lee, Shinji Watanabe, Karen Livescu, Abdelrahman Mohamed, Tara Sainath, “Editorial of Special Issue on Self-Supervised Learning for Speech and Audio Processing,” IEEE Journal of Selected Topics in Signal Processing, 16, 1174-1178, Oct. 2022
Yi-Long Liou, Jui-Yang Hsu, Chen-Sheng Chen, Alexander H. Liu, Hung-Yi Lee, Tsung-Te Liu, “A Fully Integrated 1.7mW Attention-Based Automatic Speech Recognition Processor,” IEEE Transactions on Circuits and Systems II: Express Briefs, 69, 4178-4182, Oct. 2022
Sung-Feng Huang, Chyi-Jiunn Lin, Da-Rong Liu, Yi-Chen Chen, Hung-yi Lee, “Meta-TTS: Meta-Learning for Few-Shot Speaker Adaptive Text-to-Speech,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30, 1558-1571, Apr. 2022
Shun-Po Chuang, Alexander H. Liu, Tzu-Wei Sung, Hung-yi Lee, “Improving Automatic Speech Recognition and Speech Translation via Word Embedding Prediction,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, 93-105, Nov. 2021
Andy T. Liu, Shang-Wen Li, Hung-yi Lee, “TERA: Self-Supervised Learning of Transformer Encoder Representation for Speech,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, 2351-2366, Jul. 2021
Chia-Hsuan Lee, Hung-yi Lee, Szu-Lin Wu, Chi-Liang Liu, Wei Fang, Juei-Yang Hsu, “Machine Comprehension of Spoken Content: TOEFL Listening Test and Spoken SQuAD,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27, 1469-1480, Sept. 2019
Yi-Chen Chen, Sung-Feng Huang, Hung-yi Lee, Yu-Hsuan Wang, Chia-Hao She, “Audio Word2vec: Sequence-to-Sequence Autoencoding for Unsupervised Learning of Audio Segmentation and Representation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27, 1481-1493, Sept. 2019
Shun-Yao Shih, Fan-Keng Sun, Hung-yi Lee, “Temporal Pattern Attention for Multivariate Time Series Forecasting,” the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 108, 1421-1441, Jun. 2019
Yi-Lin Tuan, Hung-Yi Lee, “Improving Conditional Sequence Generative Adversarial Networks by Stepwise Evaluation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27, 788-798, Apr. 2019
Hung-Yi Lee, Pei-Hung Chung, Yen-Chen Wu, Tzu-Hsiang Lin, Tsung-Hsien Wen, “Interactive Spoken Content Retrieval by Deep Reinforcement Learning,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26, 2447-2459, Dec. 2018
Hung-Yi Lee, Bo-Hsiang Tseng, Tsung-Hsien Wen, Yu Tsao, “Personalizing Recurrent-Neural-Network-Based Language Model by Social Network,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25, 519-530, Mar. 2017
Lin-shan Lee, James Glass, Hung-yi Lee, Chun-an Chan, “Spoken Content Retrieval — Beyond Cascading Speech Recognition with Text Retrieval,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, Sept. 2015
Hung-yi Lee, Po-wei Chou, Lin-shan Lee, “Improved open-vocabulary spoken content retrieval with word and subword lattices using acoustic feature similarity,” Computer Speech & Language, Sept. 2014
Hung-yi Lee, Ching-feng Yeh, Yun-Nung Chen, Yu Huang, Sheng-Yi Kong and Lin-shan Lee, “Spoken Knowledge Organization by Semantic Structuring and a Prototype Course Lecture System for Personalized Learning,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, May 2014
Hung-yi Lee, Lin-shan Lee, “Improved Semantic Retrieval of Spoken Content by Document/Query Expansion with Random Walk over Acoustic Similarity Graphs,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, Jan. 2014
Hung-yi Lee, Lin-shan Lee, “Enhanced Spoken Term Detection Using Support Vector Machines and Weighted Pseudo Examples,” IEEE Transactions on Audio, Speech, and Language Processing, Jun. 2013
Hung-yi Lee, Chia-ping Chen, Lin-shan Lee, “Integrating Recognition and Retrieval with Relevance Feedback for Spoken Term Detection,” IEEE Transactions on Audio, Speech, and Language Processing, Sept. 2012
Yi-cheng Pan, Hung-yi Lee, Lin-shan Lee, “Interactive Spoken Document Retrieval With Suggested Key Terms Ranked by a Markov Decision Process,” IEEE Transactions on Audio, Speech, and Language Processing, Feb. 2012
Conference & proceeding papers:
Liang-Hsuan Tseng, En-Pei Hu, Cheng-Han Chiang, Yuan Tseng, Hung-yi Lee, Lin-shan Lee, Shao-Hua Sun, “REBORN: Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised ASR,” NeurIPS 2024, Vancouver, Canada, Dec. 2024
Yunyen Chuang, Hung-Min Hsu, Kevin Lin, Chen-Sheng Gu, Ling Zhen Li, Ray-I Chang, Hung-yi Lee, “Meta-DiffuB: A Contextualized Sequence-to-Sequence Text Diffusion Model with Meta-Exploration,” NeurIPS 2024, Vancouver, Canada, Dec. 2024
Cheng-Kuang Wu, Zhi Rui Tam, Chieh-Yen Lin, Yun-Nung Chen, Hung-yi Lee, “StreamBench: Towards Benchmarking Continuous Improvement of Language Agents,” NeurIPS 2024, Vancouver, Canada, Dec. 2024
Chun-Yi Kuan, Chih-Kai Yang, Wei-Ping Huang, Ke-Han Lu, Hung-yi Lee, “Speech-Copilot: Leveraging Large Language Models for Speech Processing via Task Decomposition, Modularization, and Program Generation,” SLT 2024, Macao, China, Dec. 2024
Andy T. Liu, Yi-Cheng Lin, Haibin Wu, Stefan Winkler, Hung-yi Lee, “Efficient Training of Self-Supervised Speech Foundation Models on a Compute Budget,” SLT 2024, Macao, China, Dec. 2024
Chih-Kai Yang, Kuan-Po Huang, Hung-yi Lee, “Do Prompts Really Prompt? Exploring the Prompt Understanding Capability of Whisper,” SLT 2024, Macao, China, Dec. 2024
Haibin Wu, Xuanjun Chen, Yi-Cheng Lin, Jiawei Du, Kai-Wei Chang, Ke-Han Lu, Alexander Liu, Ho Lam Chung, Yuan-Kuei Wu, Dongchao Yang, Songxiang Liu, Yi-Chiao Wu, Xu Tan, James Glass, Shinji Watanabe, Hung-yi Lee, “Codec-SUPERB @ SLT 2024: A Lightweight Benchmark for Neural Codec Models,” SLT 2024, Macao, China, Dec. 2024
Liang-Hsuan Tseng, Zih-Ching Chen, Weishun Chang, Cheng-Kuang Lee, Tsung-Ren Huang, Hung-yi Lee, “Leave No Knowledge Behind during Knowledge Distillation: Towards Practical and Effective Knowledge Distillation for Code-Switching ASR Using Realistic Data,” SLT 2024, Macao, China, Dec. 2024
Sung-Feng Huang, Heng-Cheng Kuo, Zhehuai Chen, Xuesong Yang, Chao-Han Huck Yang, Yu Tsao, Yu-Chiang Frank Wang, Hung-yi Lee, Szu-Wei Fu, “Detecting the Undetectable: Assessing the Efficacy of Current Spoof Detection Methods Against Seamless Speech Edits,” SLT 2024, Macao, China, Dec. 2024
Huang-Cheng Chou, Haibin Wu, Lucas Goncalves, Seong-Gyun Leem, Ali Salman, Carlos Busso, Hung-yi Lee, Chi-Chun Lee, “Embracing Ambiguity And Subjectivity Using The All-inclusive Aggregation Rule For Evaluating Multi-label Speech Emotion Recognition Systems,” SLT 2024, Macao, China, Dec. 2024
Huang-Cheng Chou, Haibin Wu, Hung-yi Lee, Chi-Chun Lee, “Stimulus Modality Matters: Impact of Perceptual Evaluations Elicited by Different Modalities on Performances of Speech Emotion Recognition Systems,” SLT 2024, Macao, China, Dec. 2024
Haibin Wu, Huang-Cheng Chou, Kai-Wei Chang, Lucas Goncalves, Jiawei Du, Jyh-Shing Roger Jang, Chi-Chun Lee, Hung-yi Lee, “Open-Emotion: A Reproducible Emo-Superb for Speech Emotion Recognition Systems,” SLT 2024, Macao, China, Dec. 2024
Haibin Wu, Huang-Cheng Chou, Kai-Wei Chang, Lucas Goncalves, Jiawei Du, Jyh-Shing Roger Jang, Chi-Chun Lee, Hung-yi Lee, “A Preliminary Study: Large Language Model-Based Data Automation for Multi-Label Speech Emotion Recognition with Human Subjective Typed Descriptions,” SLT 2024, Macao, China, Dec. 2024
Shih-Heng Wang, Jiatong Shi, Chien-yu Huang, Shinji Watanabe, Hung-yi Lee, “Fusion of Discrete Representations and Self-Augmented Representations for Multilingual Automatic Speech Recognition,” SLT 2024, Macao, China, Dec. 2024
Wenze Ren, Yi-Cheng Lin, Haibin Wu, Huang-Cheng Chou, Chi-Chun Lee, Yu Tsao, Hung-yi Lee, “EMO-Codec: An In-Depth Look at Emotion Preservation Capability of Legacy and Neural Codec Models With Subjective and Objective Evaluations,” SLT 2024, Macao, China, Dec. 2024
Yi-Cheng Lin, Tzu-Quan Lin, Chih-Kai Yang, Ke-Han Lu, Wei-Chih Chen, Chun-Yi Kuan, Hung-yi Lee, “Listen and Speak Fairly: A Study on Semantic Gender Bias in Speech Integrated Large Language Models,” SLT 2024, Macao, China, Dec. 2024
Yi-Cheng Lin, Wei-Chih Chen, Hung-yi Lee, “Spoken Stereoset: On Evaluating Social Bias Toward Speaker in Speech Large Language Models,” SLT 2024, Macao, China, Dec. 2024
Jiawei Du, I-Ming Lin, I-Hsiang Chiu, Xuanjun Chen, Haibin Wu, Wenze Ren, Yu Tsao, Hung-yi Lee, Roger Jang, “DFADD: The Diffusion and Flow-Matching Based Audio Deepfake Dataset,” SLT 2024, Macao, China, Dec. 2024
Cheng-Han Chiang, Hung-yi Lee, “Do Metadata and Appearance of the Retrieved Webpages Affect LLM’s Reasoning in Retrieval-Augmented Generation?,” Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, 389-406, Miami, Florida, US, Nov. 2024
Cheng-Han Chiang, Wei-Chih Chen, Chun-Yi Kuan, Chienchou Yang, Hung-yi Lee, “Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course,” Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2489-2513, Miami, Florida, USA, Nov. 2024
Hsuan Su, Hua Farn, Fan-Yun Sun, Shang-Tse Chen, Hung-yi Lee, “Task Arithmetic can Mitigate Synthetic-to-Real Gap in Automatic Speech Recognition,” Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 8905-8915, Miami, Florida, USA, Nov. 2024
Guan-Ting Lin, Wei Ping Huang, Hung-yi Lee, “Continual Test-time Adaptation for End-to-end Speech Recognition on Noisy Speech,” Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 20003-20015, Miami, Florida, USA, Nov. 2024
Tzu-Han Lin, Chen-An Li, Hung-yi Lee, Yun-Nung Chen, “DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging,” EMNLP 2024, Miami, Florida, USA, Nov. 2024
Cheng-Kuang Wu, Zhi Rui Tam, Chao-Chung Wu, Chieh-Yen Lin, Hung-yi Lee, Yun-Nung Chen, “I Need Help! Evaluating LLM’s Ability to Ask for Users’ Support: A Case Study on Text-to-SQL Generation,” EMNLP 2024, Miami, Florida, USA, Nov. 2024
Zhi Rui Tam, Cheng-Kuang Wu, Yi-Lin Tsai, Chieh-Yen Lin, Hung-yi Lee, Yun-Nung Chen, “Let Me Speak Freely? A Study On The Impact Of Format Restrictions On Large Language Model Performance,” EMNLP 2024, Miami, Florida, USA, Nov. 2024
Guan-Ting Lin, Hung-yi Lee, “Can LLMs Understand the Implication of Emphasized Sentences in Dialogue?,” Findings of the Association for Computational Linguistics: EMNLP 2024, 13391-13401, Miami, Florida, USA, Nov. 2024
Hung-Ting Su, Ya-Ching Hsu, Xudong Lin, Xiang-Qian Shi, Yulei Niu, Han-Yuan Hsu, Hung-yi Lee, Winston H. Hsu, “Unveiling Narrative Reasoning Limits of Large Language Models with Trope in Movie Synopses,” Findings of the Association for Computational Linguistics: EMNLP 2024, Miami, Florida, USA, Nov. 2024
Li-Chun Lu, Shou-Jen Chen, Tsung-Min Pai, Chan-Hung Yu, Hung-yi Lee, Shao-Hua Sun, “LLM Discussion: Enhancing the Creativity of Large Language Models via Discussion Framework and Role-Play,” COLM 2024, Philadelphia, USA, Oct. 2024
Ke-Han Lu, Zhehuai Chen, Szu-Wei Fu, He Huang, Boris Ginsburg, Yu-Chiang Frank Wang, Hung-yi Lee, “DeSTA: Enhancing Speech Language Models through Descriptive Speech-Text Alignment,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Jiatong Shi, Shih-Heng Wang, William Chen, Martijn Bartelds, Vanya Bannihatti Kumar, Jinchuan Tian, Xuankai Chang, Dan Jurafsky, Karen Livescu, Hung-yi Lee, Shinji Watanabe, “ML-SUPERB 2.0: Benchmarking Multilingual Speech Models Across Modeling Constraints, Languages, and Datasets,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Haibin Wu, Yuan Tseng, Hung-yi Lee, “CodecFake: Enhancing Anti-Spoofing Models Against Deepfake Audios from Codec-Based Speech Synthesis Systems,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Kai-Wei Chang, Ming-Hao Hsu, Shang-Wen Li, Hung-yi Lee, “Exploring In-Context Learning of Textless Speech Language Model for Speech Classification Tasks,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Min-Han Shih, Ho-Lam Chung, Yu-Chi Pai, Ming-Hao Hsu, Guan-Ting Lin, Shang-Wen Li, Hung-yi Lee, “GSQA: An End-to-End Model for Generative Spoken Question Answering,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Fabian Ritter-Gutierrez, Kuan-Po Huang, Jeremy H. M. Wong, Dianwen Ng, Hung-yi Lee, Nancy F. Chen, Eng Siong Chng, “Dataset-Distillation Generative Model for Speech Emotion Recognition,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Xuanjun Chen, Jiawei Du, Haibin Wu, Jyh-Shing Roger Jang, Hung-yi Lee, “Neural Codec-based Adversarial Sample Detection for Speaker Verification,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Xuanjun Chen, Haibin Wu, Roger Jang, Hung-yi Lee, “Singing Voice Graph Modeling for SingFake Detection,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Yi-Cheng Lin, Haibin Wu, Huang-Cheng Chou, Chi-Chun Lee, Hung-yi Lee, “Emo-bias: A Large Scale Evaluation of Social Bias on Speech Emotion Recognition,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Chun-Yi Kuan, Wei-Ping Huang, Hung-yi Lee, “Understanding Sounds, Missing the Questions: The Challenge of Object Hallucination in Large Audio-Language Models,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Tzu-Quan Lin, Hung-yi Lee, Hao Tang, “DAISY: Data Adaptive Self-Supervised Early Exit for Speech Representation Models,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Yi-Cheng Lin, Tzu-Quan Lin, Hsi-Che Lin, Andy T. Liu, Hung-yi Lee, “On the social bias of speech self-supervised models,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Zhe Li, Man-wai Mak, Hung-yi Lee, Helen Meng, “Parameter-efficient Fine-tuning of Speaker-Aware Dynamic Prompts for Speaker Verification,” Interspeech 2024, Kos Island, Greece, Sept. 2024
Shih-Cheng Huang, Pin-Zu Li, Yu-chi Hsu, Kuang-Ming Chen, Yu Tung Lin, Shih-Kai Hsiao, Richard Tsai, Hung-yi Lee, “Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages,” Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Bangkok, Thailand, Aug. 2024
Guan-Ting Lin, Cheng-Han Chiang, Hung-yi Lee, “Advancing Large Language Models to Capture Varied Speaking Styles and Respond Properly in Spoken Conversations,” Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Bangkok, Thailand, Aug. 2024
Cheng-Han Chiang, Hung-yi Lee, “Merging Facts, Crafting Fallacies: Evaluating the Contradictory Nature of Aggregated Factual Claims in Long-Form Generations,” Findings of the Association for Computational Linguistics ACL 2024, Bangkok, Thailand and virtual meeting, Aug. 2024
Haibin Wu, Ho-Lam Chung, Yi-Cheng Lin, Yuan-Kuei Wu, Xuanjun Chen, Yu-Chi Pai, Hsiu-Hsuan Wang, Kai-Wei Chang, Alexander Liu, Hung-yi Lee, “Codec-SUPERB: An In-Depth Analysis of Sound Codec Models,” Findings of the Association for Computational Linguistics ACL 2024, Bangkok, Thailand and virtual meeting, Aug. 2024
Siddhant Arora, Ankita Pasad, Chung-Ming Chien, Jionghao Han, Roshan Sharma, Jee-weon Jung, Hira Dhamyal, William Chen, Suwon Shon, Hung-yi Lee, Karen Livescu, Shinji Watanabe, “On the Evaluation of Speech Foundation Models for Spoken Language Understanding,” Findings of the Association for Computational Linguistics ACL 2024, Bangkok, Thailand and virtual meeting, Aug. 2024
Chien-yu Huang, Ke-Han Lu, Shih-Heng Wang, Chun-Yi Kuan, Chi-Yuan Hsiao, Haibin Wu, Siddhant Arora, Kai-Wei Chang, Jiatong Shi, Yifan Peng, Roshan Sharma, Shinji Watanabe, Bhiksha Ramakrishnan, Shady Shehata, Hung-yi Lee, “Dynamic-Superb: Towards a Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark For Speech,” ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, Apr. 2024
Kuan-Po Huang, Chih-Kai Yang, Yu-Kuan Fu, Ewan Dunbar, Hung-yi Lee, “Zero Resource Code-Switched Speech Benchmark Using Speech Utterance Pairs for Multiple Spoken Languages,” ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, Apr. 2024
Kevin Everson, Yile Gu, Huck Yang, Prashanth Gurunath Shivakumar, Guan-Ting Lin, Jari Kolehmainen, Ivan Bulyko, Ankur Gandhe, Shalini Ghosh, Wael Hamza, Hung-yi Lee, Ariya Rastrow, Andreas Stolcke, “Towards ASR Robust Spoken Language Understanding Through in-Context Learning with Word Confusion Networks,” ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, Apr. 2024
Guan-Ting Lin, Prashanth Gurunath Shivakumar, Ankur Gandhe, Chao-Han Huck Yang, Yile Gu, Shalini Ghosh, Andreas Stolcke, Hung-yi Lee, Ivan Bulyko, “Paralinguistics-Enhanced Large Language Modeling of Spoken Dialogue,” ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, Apr. 2024
Haibin Wu, Heng-Cheng Kuo, Yu Tsao, Hung-yi Lee, “Scalable Ensemble-Based Detection Method Against Adversarial Attacks For Speaker Verification,” ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, Apr. 2024
Yuan Tseng, Layne Berry, Yi-Ting Chen, I-Hsiang Chiu, Hsuan-Hao Lin, Max Liu, Puyuan Peng, Yi-Jen Shih, Hung-Yu Wang, ..., Shinji Watanabe, Abdelrahman Mohamed, Chi Luen Feng, Hung-yi Lee, “AV-SUPERB: A Multi-Task Evaluation Benchmark for Audio-Visual Representation Models,” ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, Apr. 2024
Xuanjun Chen, Haibin Wu, Chung-Che Wang, Hung-yi Lee, Jyh-Shing Roger Jang, “Multimodal Transformer Distillation for Audio-Visual Synchronization,” ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, Apr. 2024
Chyi-Jiunn Lin, Guan-Ting Lin, Yung-Sung Chuang, Wei-Lun Wu, Shang-Wen Li, Abdelrahman Mohamed, Hung-yi Lee, Lin-shan Lee, “SpeechDPR: End-To-End Spoken Passage Retrieval For Open-Domain Spoken Question Answering,” ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, Apr. 2024
Wei-Ping Huang, Sung-Feng Huang, Hung-yi Lee, “Maximizing Data Efficiency for Cross-Lingual TTS Adaptation by Self-Supervised Representation Mixing and Embedding Initialization,” 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Taipei, Taiwan, Dec. 2023
Kai-Wei Chang, Ming-Hsin Chen, Yun-Ping Lin, Jing Neng Hsu, Paul Kuo-Ming Huang, Chien-yu Huang, Shang-Wen Li, Hung-yi Lee, “Prompting and Adapter Tuning For Self-Supervised Encoder-Decoder Speech Model,” 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Taipei, Taiwan, Dec. 2023
Cheng-Han Chiang, Hung-yi Lee, “A Closer Look into Using Large Language Models for Automatic Evaluation,” Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, Dec. 2023
Zih-Ching Chen, Chao-Han Huck Yang, Bo Li, Yu Zhang, Nanxin Chen, Shuo-Yiin Chang, Rohit Prabhavalkar, Hung-yi Lee, Tara Sainath, “How to Estimate Model Transferability of Pre-Trained Speech Models?,” INTERSPEECH 2023, Aug. 2023
Guan-Wei Wu, Guan-Ting Lin, Shang-Wen Li, Hung-yi Lee, “Improving Textless Spoken Language Understanding with Discrete Units as Intermediate Target,” INTERSPEECH 2023, Dublin, Ireland, Aug. 2023
Cheng-Han Chiang, Wei-Ping Huang, Hung-yi Lee, “Why We Should Report the Details in Subjective Evaluation of TTS More Rigorously,” INTERSPEECH 2023, Dublin, Ireland, Aug. 2023
Guan-Ting Liu, En-Pei Hu, Pu-Jen Cheng, Hung-yi Lee, Shao-Hua Sun, “Hierarchical Programmatic Reinforcement Learning via Learning to Compose Programs,” ICML 2023, Hawaii, USA, Jul. 2023
Cheng-Han Chiang, Hung-yi Lee, “Are Synonym Substitution Attacks Really Synonym Substitution Attacks?,” Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, Jul. 2023
Suwon Shon, Siddhant Arora, Chyi-Jiunn Lin, Ankita Pasad, Felix Wu, Roshan Sharma, Wei-Lun Wu, Hung-yi Lee, Karen Livescu, Shinji Watanabe, “SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks,” Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada, Jul. 2023
Derek Xu, Shuyan Dong, Changhan Wang, Suyoun Kim, Zhaojiang Lin, Bing Liu, Akshat Shrivastava, Shang-Wen Li, Liang-Hsuan Tseng, Guan-Ting Lin, Alexei Baevski, Hung-yi Lee, Yizhou Sun, Wei Wang, “Introducing Semantics into Speech Encoders,” Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada, Jul. 2023
Cheng-Han Chiang, Hung-yi Lee, “Can Large Language Models Be an Alternative to Human Evaluations?,” Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada, Jul. 2023
Kuan-Po Huang, Tzu-hsun Feng, Yu-Kuan Fu, Tsu-Yuan Hsu, Po-Chieh Yen, Wei-Cheng Tseng, Kai-Wei Chang, Hung-yi Lee, “Ensemble Knowledge Distillation of Self-Supervised Speech Models,” ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, Jun. 2023
Sung-Feng Huang, Chia-ping Chen, Zhi-Sheng Chen, Yu-Pao Tsai, Hung-yi Lee, “Personalized Lightweight Text-to-Speech: Voice Cloning with Adaptive Structured Pruning,” ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, Jun. 2023
Dongji Gao, Jiatong Shi, Shun-Po Chuang, Paola Garcia, Hung-yi Lee, Shinji Watanabe, Sanjeev Khudanpur, “Euro: Espnet Unsupervised ASR Open-Source Toolkit,” ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, Jun. 2023
Layne Berry, Yi-Jen Shih, Hsuan-Fu Wang, Heng-Jui Chang, Hung-yi Lee, David Harwath, “M-SpeechCLIP: Leveraging Large-Scale, Pre-Trained Models for Multilingual Speech to Image Retrieval,” ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, Jun. 2023
Hsuan-Jui Chen, Yen Meng, Hung-yi Lee, “Once-for-All Sequence Compression for Self-Supervised Speech Models,” ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, Jun. 2023
Chan-Jan Hsu, Ho Lam Chung, Hung-yi Lee, Yu Tsao, “T5lephone: Bridging Speech and Text Self-Supervised Models for Spoken Language Understanding Via Phoneme Level T5,” ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, Jun. 2023
Jiatong Shi, Chan-Jan Hsu, Ho Lam Chung, Dongji Gao, Paola Garcia, Shinji Watanabe, Ann Lee, Hung-yi Lee, “Bridging Speech and Textual Pre-Trained Models With Unsupervised ASR,” ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, Jun. 2023
Yuan Tseng, Cheng-I Jeff Lai, Hung-Yi Lee, “Cascading and Direct Approaches to Unsupervised Constituency Parsing on Spoken Sentences,” ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, Jun. 2023
Cheng-Han Chiang, Hung-yi Lee, “Over-Reasoning and Redundant Calculation of Large Language Models,” EACL 2024, Malta, Mar. 2024
Xuanjun Chen, Haibin Wu, Helen Meng, Hung-yi Lee, Jyh-Shing Roger Jang, “Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual Active Speaker Detection,” 2022 IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar, Jan. 2023
Kuan-Po Huang, Yu-Kuan Fu, Yu Zhang, Hung-yi Lee, “Improving Distortion Robustness of Self-supervised Speech Processing Tasks with Domain Adaptation,” Interspeech 2022, Incheon, Korea, Sept. 2022
Kai-Wei Chang, Wei-Cheng Tseng, Shang-Wen Li, Hung-yi Lee, “An Exploration of Prompt Tuning on Generative Spoken Language Model for Speech Processing Tasks,” Interspeech 2022, Incheon, Korea, Sept. 2022
Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-yi Lee, “AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks,” Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, United States, Jul. 2022
Hung-yi Lee, Shang-Wen Li, Ngoc Thang Vu, “Meta Learning for Natural Language Processing: A Survey,” Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics, Seattle, United States, Jul. 2022
Haibin Wu, Heng-Cheng Kuo, Naijun Zheng, Kuo-Hsuan Hung, Hung-yi Lee, Yu Tsao, Hsin-Min Wang, Helen Meng, “Partially Fake Audio Detection by Self-Attention-Based Fake Span Discovery,” ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, May 2022
Haibin Wu, Po-chun Hsu, Ji Gao, Shanshan Zhang, Shen Huang, Jian Kang, Zhiyong Wu, Helen Meng, Hung-yi Lee, “Adversarial Sample Detection for Speaker Verification by Neural Vocoders,” ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, May 2022
Haibin Wu, Bo Zheng, Xu Li, Xixin Wu, Hung-yi Lee, Helen Meng, “Characterizing the Adversarial Vulnerability of Speech self-Supervised Learning,” ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, May 2022
Chien-yu Huang, Kai-Wei Chang, Hung-yi Lee, “Toward Degradation-Robust Voice Conversion,” ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, May 2022
Guan-Ting Lin, Chan-Jan Hsu, Da-Rong Liu, Hung-Yi Lee, Yu Tsao, “Analyzing The Robustness of Unsupervised Speech Recognition,” ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, May 2022
Wen-Chin Huang, Shu-wen Yang, Tomoki Hayashi, Hung-yi Lee, Shinji Watanabe, Tomoki Toda, “S3PRL-VC: Open-Source Voice Conversion Framework with Self-Supervised Speech Representations,” ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, May 2022
Yen Meng, Yi-Hui Chou, Andy T. Liu, Hung-yi Lee, “Don't Speak Too Fast: The Impact of Data Bias on Self-Supervised Speech Models,” ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, May 2022
Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee, “Distilhubert: Speech Representation Learning by Layer-Wise Distillation of Hidden-Unit Bert,” ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, May 2022
Chan-Jan Hsu, Hung-yi Lee, Yu Tsao, “XDBERT: Distilling Visual Information to BERT from Cross-Modal Systems to Improve Language Understanding,” Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Dublin, Ireland, May 2022
Hsiang-Sheng Tsai, Heng-Jui Chang, ..., Jiatong Shi, Xuankai Chang, Phil Hall, Hsuan-Jui Chen, Shang-Wen Li, Shinji Watanabe, Abdelrahman Mohamed, Hung-yi Lee, “SUPERB-SG: Enhanced Speech processing Universal PERformance Benchmark for Semantic and Generative Capabilities,” Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, May 2022
Cheng-Han Chiang, Hung-yi Lee, “On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets,” AAAI 2022, Vancouver, Canada, Feb. 2022
Wei-Tsung Kao, Hung-yi Lee, “Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models’ Transferability,” Findings of the Association for Computational Linguistics: EMNLP 2021, Punta Cana, Dominican Republic, Nov. 2021
Haibin Wu, Yang Zhang, Zhiyong Wu, Dong Wang, Hung-yi Lee, “Voting for the Right Answer: Adversarial Defense for Speaker Verification,” Interspeech 2021, Brno, Czechia, Sept. 2021
Shun-Po Chuang, Yung-Sung Chuang, Chih-Chiang Chang, Hung-yi Lee, “Investigating the Reordering Capability in CTC-based Non-Autoregressive End-to-End Speech Translation,” Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Online, Aug. 2021
Jingsong Wang, Yuxuan He, Chunyu Zhao, Qijie Shao, Wei-Wei Tu, Tom Ko, Hung-yi Lee, Lei Xie, “Auto-KWS 2021 Challenge: Task, Datasets, and Baselines,” Interspeech 2021, Brno, Czechia, Aug. 2021
Heng-Jui Chang, Hung-yi Lee, Lin-shan Lee, “Towards Lifelong Learning of End-to-End ASR,” Interspeech 2021, Brno, Czechia, Aug. 2021
Sung-Feng Huang, Shun-Po Chuang, Da-Rong Liu, Yi-Chen Chen, Gene-Ping Yang, Hung-yi Lee, “Stabilizing Label Assignment for Speech Separation by Self-Supervised Pre-Training,” Interspeech 2021, Brno, Czechia, Aug. 2021
Wei-Cheng Tseng, Chien-yu Huang, Wei-Tsung Kao, Yist Y. Lin, Hung-yi Lee, “Utilizing Self-Supervised Representations for MOS Prediction,” Interspeech 2021, Brno, Czechia, Aug. 2021
Jheng-hao Lin, Yist Y. Lin, Chung-Ming Chien, Hung-yi Lee, “S2VC: A Framework for Any-to-Any Voice Conversion with Self-Supervised Pretrained Representations,” Interspeech 2021, Brno, Czechia, Aug. 2021
Hsuan Su, Jiun-Hao Jhan, Fan-yun Sun, Saurav Sahay, Hung-yi Lee, “Put Chatbot into Its Interlocutor’s Shoes: New Framework to Learn Chatbot Responding with Intention,” NAACL 2021, Online, Jun. 2021
Cheng-I Lai, Yung-Sung Chuang, Hung-Yi Lee, Shang-Wen Li, James Glass, “Semi-Supervised Spoken Language Understanding via Self-Supervised Speech and Language Model Pretraining,” ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, May 2021
Haibin Wu, Xu Li, Andy T. Liu, Zhiyong Wu, Helen Meng, Hung-yi Lee, “Adversarial Defense for Automatic Speaker Verification by Cascaded Self-Supervised Learning Models,” ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, May 2021
Yen-Hao Chen, Da-Yi Wu, Tsung-Han Wu, Hung-yi Lee, “Again-VC: A One-Shot Voice Conversion Using Activation Guidance and Adaptive Instance Normalization,” ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, May 2021
Yist Y. Lin, Chung-Ming Chien, Jheng-Hao Lin, Hung-yi Lee, Lin-shan Lee, “Fragmentvc: Any-To-Any Voice Conversion by End-To-End Extracting and Fusing Fine-Grained Voice Fragments with Attention,” ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, May 2021
Yuan-Kuei Wu, Kuan-Po Huang, Yu Tsao, Hung-yi Lee, “One Shot Learning for Speech Separation,” ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, May 2021
Chung-Ming Chien, Jheng-Hao Lin, Chien-yu Huang, Po-chun Hsu, Hung-yi Lee, “Investigating on Incorporating Pretrained and Learnable Speaker Representations for Multi-Speaker Multi-Style Text-to-Speech,” ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, May 2021
Heng-Jui Chang, Alexander H. Liu, Hung-yi Lee, Lin-shan Lee, “End-to-End Whispered Speech Recognition with Frequency-Weighted Approaches and Pseudo Whisper Pre-training,” 2021 IEEE Spoken Language Technology Workshop (SLT), Shenzhen, China, Jan. 2021
Chien-yu Huang, Yist Y. Lin, Hung-yi Lee, Lin-shan Lee, “Defending Your Voice: Adversarial Attack on Voice Conversion,” 2021 IEEE Spoken Language Technology Workshop (SLT), Shenzhen, China, Jan. 2021
Tzu-hsien Huang, Jheng-hao Lin, Hung-yi Lee, “How Far Are We from Robust Voice Conversion: A Survey,” 2021 IEEE Spoken Language Technology Workshop (SLT), Shenzhen, China, Jan. 2021
Po-Han Chi, Pei-Hung Chung, Tsung-Han Wu, Chun-Cheng Hsieh, Yen-Hao Chen, Shang-Wen Li, Hung-yi Lee, “Audio ALBERT: A Lite BERT for Self-Supervised Learning of Audio Representation,” 2021 IEEE Spoken Language Technology Workshop (SLT), Shenzhen, China, Jan. 2021
Cheng-Han Chiang, Sung-Feng Huang, Hung-yi Lee, “Pretrained Language Model Embryology: The Birth of ALBERT,” Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, Nov. 2020
Yung-Sung Chuang, Chi-Liang Liu, Hung-yi Lee, Lin-shan Lee, “SpeechBERT: An Audio-and-Text Jointly Learned Language Model for End-to-End Spoken Question Answering,” Interspeech 2020, Shanghai, China, Oct. 2020
Tao Tu, Yuan-Jui Chen, Alexander H. Liu, Hung-yi Lee, “Semi-Supervised Learning for Multi-Speaker Text-to-Speech Synthesis Using Discrete Speech Representation,” Interspeech 2020, Shanghai, China, Oct. 2020
Haibin Wu, Andy T. Liu, Hung-yi Lee, “Defense for Black-Box Attacks on Anti-Spoofing Models by Self-Supervised Learning,” Interspeech 2020, Shanghai, China, Oct. 2020
Shu-wen Yang, Andy T. Liu, Hung-yi Lee, “Understanding Self-Attention of Self-Supervised Audio Transformers,” Interspeech 2020, Shanghai, China, Oct. 2020
Shun-Po Chuang, Tzu-Wei Sung, Alexander H. Liu, Hung-yi Lee, “Worse WER, but Better BLEU? Leveraging Word Embedding as Intermediate in Multitask End-to-End Speech Translation,” Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, Jul. 2020
Brian Chao; Pin-Lun Hsu; Hung-Yi Lee; Yu-Chiang Frank Wang, “Self-Supervised Deep Learning for Fisheye Image Rectification,” ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 2020
Haibin Wu; Songxiang Liu; Helen Meng; Hung-yi Lee, “Defense Against Adversarial Attacks on Spoofing Countermeasures of ASV,” ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 2020
Da-Yi Wu; Hung-yi Lee, “One-Shot Voice Conversion by Vector Quantization,” ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 2020
Alexander H. Liu; Tao Tu; Hung-yi Lee; Lin-shan Lee, “Towards Unsupervised Speech Recognition and Synthesis with Quantized Speech Representation Learning,” ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 2020
Shun-Po Chuang; Tzu-Wei Sung; Hung-yi Lee, “Training Code-Switching Language Model with Monolingual Data,” ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 2020
Alexander H. Liu; Tzu-Wei Sung; Shun-Po Chuang; Hung-yi Lee; Lin-shan Lee, “Sequence-to-Sequence Automatic Speech Recognition with Word Embedding Regularization and Fused Decoding,” ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 2020
Gene-Ping Yang; Szu-Lin Wu; Yao-Wen Mao; Hung-yi Lee; Lin-shan Lee, “Interrupted and Cascaded Permutation Invariant Training for Speech Separation,” ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 2020
Chung-Yi Li; Pei-Chieh Yuan; Hung-Yi Lee, “What Does a Network Layer Hear? Analyzing Hidden Representations of End-to-End ASR Through Speech Synthesis,” ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 2020
Andy T. Liu; Shu-wen Yang; Po-Han Chi; Po-chun Hsu; Hung-yi Lee, “Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders,” ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 2020
Fan-Keng Sun, Cheng-Hao Ho, Hung-Yi Lee, “LAMOL: LAnguage MOdeling for Lifelong Language Learning,” ICLR 2020, Virtual, Apr. 2020
Jui-Yang Hsu; Yuan-Jui Chen; Hung-yi Lee, “Meta Learning for End-To-End Low-Resource Speech Recognition,” ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, Apr. 2020
Che-Ping Tsai, Hung-Yi Lee, “Order-Free Learning Alleviating Exposure Bias in Multi-Label Classification,” AAAI 2020, New York, USA, Feb. 2020
Ching-Ting Chang, Shun-Po Chuang, Hung-Yi Lee, “Code-Switching Sentence Generation by Generative Adversarial Networks and its Application to Data Augmentation,” Interspeech 2019, Graz, Austria, Sept. 2019
Yuan-Jui Chen, Tao Tu, Cheng-chieh Yeh, Hung-Yi Lee, “End-to-End Text-to-Speech for Low-Resource Languages by Cross-Lingual Transfer Learning,” Interspeech 2019, Graz, Austria, Sept. 2019
Feng-Guang Su, Aliyah R. Hsu, Yi-Lin Tuan, Hung-Yi Lee, “Personalized Dialogue Response Generation Learned from Monologues,” Interspeech 2019, Graz, Austria, Sept. 2019
Andy T. Liu, Po-chun Hsu, Hung-Yi Lee, “Unsupervised End-to-End Learning of Discrete Linguistic Units for Voice Conversion,” Interspeech 2019, Graz, Austria, Sept. 2019
Ju-chieh Chou, Hung-Yi Lee, “One-Shot Voice Conversion by Separating Speaker and Content Representations with Instance Normalization,” Interspeech 2019, Graz, Austria, Sept. 2019
Richard Tzong-Han Tsai; Chia-Hao Chen; Chun-Kai Wu; Yu-Cheng Hsiao; Hung-yi Lee, “Using Deep-Q Network to Select Candidates from N-best Speech Recognition Hypotheses for Enhancing Dialogue State Tracking,” ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, May 2019
Alexander H. Liu; Hung-yi Lee; Lin-shan Lee, “Adversarial Training of End-to-end Speech Recognition Using a Criticizing Language Model,” ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, May 2019
Tzu-Wei Sung; Jun-You Liu; Hung-yi Lee; Lin-shan Lee, “Towards End-to-end Speech-to-text Translation with Two-pass Decoding,” ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, May 2019
Chia-Hung Wan; Shun-Po Chuang; Hung-Yi Lee, “Towards Audio to Scene Image Synthesis Using Generative Adversarial Network,” ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, May 2019
Che-Ping Tsai; Hung-Yi Lee, “Adversarial Learning of Label Dependency: A Novel Framework for Multi-class Classification,” ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, May 2019
Chia-Hsuan Lee, Yun-Nung Chen, Hung-Yi Lee, “Mitigating the Impact of Speech Recognition Errors on Spoken Question Answering by Adversarial Domain Adaptation,” ICASSP, 2019
Yi-Lin Tuan, Hung-Yi Lee, “Improving Conditional Sequence Generative Adversarial Networks by Stepwise Evaluation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2019
Yi-Chen Chen; Sung-Feng Huang; Chia-Hao Shen; Hung-yi Lee; Lin-shan Lee, “Phonetic-and-Semantic Embedding of Spoken words with Applications in Spoken Content Retrieval,” 2018 IEEE Spoken Language Technology Workshop (SLT), Athens, Greece, Dec. 2018
Cheng-chieh Yeh; Po-chun Hsu; Ju-chieh Chou; Hung-yi Lee; Lin-shan Lee, “Rhythm-Flexible Voice Conversion Without Parallel Data Using Cycle-GAN Over Phoneme Posteriorgram Sequences,” 2018 IEEE Spoken Language Technology Workshop (SLT), Athens, Greece, Dec. 2018
Yu-Hsuan Wang; Hung-Yi Lee; Lin-Shan Lee, “Segmental Audio Word2Vec: Representing Utterances as Sequences of Vectors with Applications in Spoken Term Detection,” ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, Apr. 2018
Chia-Wei Ao, Hung-yi Lee, “Query-by-Example Spoken Term Detection Using Attention-Based Multi-Hop Networks,” ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, Apr. 2018
Chia-Hao Shen; Janet Y. Sung; Hung-Yi Lee, “Language Transfer of Audio Word2Vec: Learning Audio Segment Representations Without Target Language Data,” ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, Apr. 2018
Hsien-Chin Lin; Chi-Yu Yang; Hung-Yi Lee; Lin-Shan Lee, “Domain Independent Key Term Extraction from Spoken Content Based on Context and Term Location Information in the Utterances,” ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, Mar. 2018
Tzu-Chien Liu, Yu-Hsueh Wu, Hung-yi Lee, “Query-based Attention CNN for Text Similarity Map,” ICCV 2018, Istanbul, Turkey, Jan. 2018
Pei-Hung Chung, Kuan Tung, Ching-Lun Tai, Hung-Yi Lee, “Joint Learning of Interactive Spoken Content Retrieval and Trainable User Simulator,” INTERSPEECH, 2018
Chia-Hsuan Lee, Szu-Lin Wu, Chi-Liang Liu, Hung-yi Lee, “Spoken SQuAD: A Study of Mitigating the Impact of Speech Recognition Errors on Listening Comprehension,” INTERSPEECH, 2018
Da-Rong Liu, Chi-Yu Yang, Szu-Lin Wu, Hung-Yi Lee, “Improving Unsupervised Style Transfer in End-to-End Speech Synthesis with End-to-End Speech Recognition,” SLT, 2018
Ju-chieh Chou, Cheng-chieh Yeh, Hung-yi Lee, Lin-shan Lee, “Multi-target Voice Conversion without Parallel Data by Adversarially Learning Disentangled Audio Representations,” INTERSPEECH, 2018
Hung-Yi Lee, Pei-Hung Chung, Yen-Chen Wu, Tzu-Hsiang Lin, Tsung-Hsien Wen, “Interactive Spoken Content Retrieval by Deep Reinforcement Learning,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2018
Zih-Wei Lin; Tzu-Wei Sung; Hung-Yi Lee; Lin-Shan Lee, “Personalized word representations carrying personalized semantics learned from social network posts,” 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Okinawa, Japan, Dec. 2017
Shun-Po Chuang; Chia-Hung Wan; Pang-Chi Huang; Chi-Yu Yang; Hung-Yi Lee, “Seeing and hearing too: Audio representation for video captioning,” 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Okinawa, Japan, Dec. 2017
Pin-Jung Chen; I-Hung Hsu; Yi-Yao Huang; Hung-Yi Lee, “Mitigating the impact of speech recognition errors on chatbot using sequence-to-sequence model,” 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Okinawa, Japan, Dec. 2017
Bo-Ru Lu, Frank Shyu, Yun-Nung Chen, Hung-Yi Lee, Lin-Shan Lee, “Order-Preserving Abstractive Summarization for Spoken Content Based on Connectionist Temporal Classification,” Interspeech 2017, Stockholm, Sweden, Aug. 2017
Cheng-Kuan Wei; Cheng-Tao Chung; Hung-Yi Lee; Lin-Shan Lee, “Personalized acoustic modeling by weakly supervised multi-task deep learning using acoustic tokens discovered from unlabeled data,” 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, Mar. 2017
Wei-Jen Ko; Bo-Hsiang Tseng; Hung-Yi Lee, “Recurrent Neural Network based language modeling with controllable external Memory,” 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, Mar. 2017
Tzu-Ray Su, Hung-Yi Lee, “Learning Chinese Word Representations From Glyphs Of Characters,” EMNLP, 2017
Yu-Hsuan Wang, Cheng-Tao Chung, Hung-yi Lee, “Gate Activation Signal Analysis for Gated Recurrent Neural Networks and Its Correlation with Phoneme Boundaries,” INTERSPEECH, 2017
Hung-yi Lee, Bo-Hsiang Tseng, Tsung-Hsien Wen, Yu Tsao, “Personalizing Recurrent Neural Network Based Language Model by Social Network,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2017
Wei Fang; Juei-Yang Hsu; Hung-yi Lee; Lin-Shan Lee, “Hierarchical attention model for improved machine comprehension of spoken content,” 2016 IEEE Spoken Language Technology Workshop (SLT), San Diego, CA, Dec. 2016
Lang-Chi Yu; Hung-yi Lee; Lin-shan Lee, “Abstractive headline generation for spoken content by attentive recurrent neural networks with ASR error modeling,” 2016 IEEE Spoken Language Technology Workshop (SLT), San Diego, CA, Dec. 2016
Sheng-syun Shen, Hung-yi Lee, “Neural Attention Models for Sequence Classification: Analysis and Application to Key Term Extraction and Dialogue Act Detection,” Interspeech 2016, San Francisco, USA, Sept. 2016
Bo-Hsiang Tseng, Sheng-syun Shen, Hung-Yi Lee, Lin-Shan Lee, “Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine,” INTERSPEECH, 2016
Cheng-Tao Chung; Cheng-Yu Tsai; Hsiang-Hung Lu; Chia-Hsiang Liu; Hung-yi Lee; Lin-shan Lee, “An iterative deep learning framework for unsupervised discovery of speech features and linguistic units with applications on spoken term detection,” 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Scottsdale, AZ, USA, Dec. 2015
Bo-Hsiang Tseng; Hung-yi Lee; Lin-Shan Lee, “Personalizing universal recurrent neural network language model with user characteristic features by social network crowdsourcing,” 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Scottsdale, AZ, USA, Dec. 2015
Yi-Hsiu Liao; Hung-yi Lee; Lin-shan Lee, “Towards structured deep neural network for automatic speech recognition,” 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Scottsdale, AZ, USA, Dec. 2015
Sheng-syun Shen, Hung-yi Lee, Shang-wen Li, Victor Zue and Lin-shan Lee, “Structuring Lectures in Massive Open Online Courses (MOOCs) for Efficient Learning by Linking Similar Sections and Predicting Prerequisites,” InterSpeech, Sept. 2015
Hung-tsung Lu, Yuan-ming Liou, Hung-yi Lee and Lin-shan Lee, “Semantic Retrieval of Personal Photos using a Deep Autoencoder Fusing Visual Features with Speech Annotations Represented as Word/Paragraph Vectors,” InterSpeech, Sept. 2015
Ching-Feng Yeh, Yuan-ming Liou, Hung-yi Lee and Lin-shan Lee, “Personalized Speech Recognizer with Keyword-based Personalized Lexicon and Language Model using Word Vector Representations,” InterSpeech, Sept. 2015
Hung-yi Lee, Yu Zhang, Ekapol Chuangsuwanich, James Glass, “Graph-based Re-ranking using Acoustic Feature Similarity between Search Results for Spoken Term Detection on Low-resource Languages,” InterSpeech, Sept. 2014
Han Lu, Sheng-syun Shen, Sz-Rung Shiang, Hung-yi Lee and Lin-shan Lee, “Alignment of Spoken Utterances with Slide Content for Easier Learning with Recorded Lectures using Structured Support Vector Machine (SVM),” InterSpeech, Sept. 2014
Sz-Rung Shiang, Hung-yi Lee and Lin-shan Lee, “Spoken Question Answering Using Tree-structured Conditional Random Fields and Two-layer Random Walk,” InterSpeech, Sept. 2014
Yuan-ming Liou, Yi-sheng Fu, Hung-yi Lee and Lin-shan Lee, “Semantic Retrieval of Personal Photos using Matrix Factorization and Two-layer Random Walk Fusing Sparse Speech Annotations with Visual Features,” InterSpeech, Sept. 2014
Yun-Chiao Li, Hung-yi Lee, Cheng-Tao Chung, Chun-an Chan, and Lin-shan Lee, “Towards Unsupervised Semantic Retrieval of Spoken Content with Query Expansion based on Automatically Discovered Acoustic Patterns,” ASRU, Dec. 2013
Hung-yi Lee, Ting-yao Hu, How Jing, Yun-Fan Chang, Yu Tsao, Yu-Cheng Kao, Tsang-Long Pao, “Ensemble of Machine Learning and Acoustic Segment Model Techniques for Speech Emotion and Autism Spectrum Disorders Recognition,” InterSpeech, Aug. 2013
Sz-Rung Shiang, Hung-yi Lee, Lin-shan Lee, “Supervised Spoken Document Summarization Based on Structured Support Vector Machine with Utterance Clusters as Hidden Variables,” InterSpeech, Aug. 2013
Tsung-Hsien Wen, Aaron Heidel, Hung-yi Lee, Yu Tsao, Lin-shan Lee, “Recurrent Neural Network Based Language Model Personalization by Social Network Crowdsourcing,” InterSpeech, Aug. 2013
Ching-Feng Yeh, Hung-yi Lee and Lin-shan Lee, “Speaking Rate Normalization with Lattice-based Context-dependent Phoneme Duration Modeling for Personalized Speech Recognizers on Mobile Devices,” InterSpeech, Aug. 2013
Hung-yi Lee, Yu-yu Chou, Yow-Bang Wang, Lin-shan Lee, “Unsupervised Domain Adaptation for Spoken Document Summarization with Structured Support Vector Machine,” ICASSP, May 2013
Hung-yi Lee, Yun-Chiao Li, Cheng-Tao Chung, Lin-shan Lee, “Enhancing Query Expansion for Semantic Retrieval of Spoken Content with Automatically Discovered Acoustic Patterns,” ICASSP, May 2013
Tsung-Hsien Wen, Hung-yi Lee, Pei-Hao Su, Lin-shan Lee, “Interactive Spoken Content Retrieval by Extended Query Model and Continuous State Space Markov Decision Process,” ICASSP, May 2013
Hung-yi Lee, Tsung-Hsien Wen, Lin-shan Lee, “Improved Semantic Retrieval of Spoken Content by Language models Enhanced with Acoustic Similarity Graph,” SLT, Dec. 2012
Tsung-Hsien Wen, Hung-yi Lee, Lin-shan Lee, “Personalized Language Modeling by Crowd Sourcing with Social Network Data for Voice Access of Cloud Applications,” SLT, Dec. 2012
Hung-yi Lee, Yu-yu Chou, Yow-Bang Wang, Lin-shan Lee, “Supervised Spoken Document Summarization Jointly Considering Utterance Importance and Redundancy by Structured Support Vector Machine,” InterSpeech, Sept. 2012
Tsung-Hsien Wen, Hung-yi Lee, Lin-shan Lee, “Interactive Spoken Content Retrieval with Different Types of Actions Optimized by a Markov Decision Process,” InterSpeech, Sept. 2012
Hung-yi Lee, Yun-nung Chen, Lin-shan Lee, “Utterance-level Latent Topic Transition Modeling for Spoken Documents and its Application in Automatic Summarization,” ICASSP, Mar. 2012
Tsung-wei Tu, Hung-yi Lee, Lin-shan Lee, “Semantic Query Expansion and Context-based Discriminative Term Modeling for Spoken Document Retrieval,” ICASSP, Mar. 2012
Yun-Nung Chen, Yu Huang, Hung-yi Lee, Lin-shan Lee, “Unsupervised Two-Stage Keyword Extraction from Spoken Documents by Topic Coherence and Support Vector Machine,” ICASSP, Mar. 2012
Ching-Feng Yeh, Aaron Heidel, Hung-yi Lee, Lin-shan Lee, “Recognition of Highly Imbalanced Code-mixed Bilingual Speech with Frame-level Language Detection based on Blurred Posteriorgram,” ICASSP, Mar. 2012
Tsung-wei Tu, Hung-yi Lee, Lin-shan Lee, “Improved Spoken Term Detection using Support Vector Machines with Acoustic and Context Features from Pseudo-relevance Feedback,” ASRU, Dec. 2011
Hung-yi Lee, Yun-nung Chen, Lin-shan Lee, “Improved Speech Summarization and Spoken Term Detection with Graphical Analysis of Utterance Similarities,” APSIPA, Oct. 2011
Hung-yi Lee, Tsung-wei Tu, Chia-ping Chen, Chao-yu Huang, Lin-shan Lee, “Improved Spoken Term Detection Using Support Vector Machines based on Lattice Context Consistency,” ICASSP, May 2011
Yun-nung Chen, Chia-ping Chen, Hung-yi Lee, Chun-an Chan, Lin-shan Lee, “Improved Spoken Term Detection with Graph-based Re-ranking in Feature Space,” ICASSP, May 2011
Hung-yi Lee, Chia-ping Chen, Ching-feng Yeh, Lin-shan Lee, “A Framework Integrating Different Relevance Feedback Scenarios and Approaches for Spoken Term Detection,” SLT, Dec. 2010
Hung-yi Lee, Chia-ping Chen, Ching-feng Yeh, Lin-shan Lee, “Improved Spoken Term Detection by Discriminative Training of Acoustic Models based on User Relevance Feedback,” InterSpeech, Sept. 2010
Chia-ping Chen, Hung-yi Lee, Ching-feng Yeh, Lin-shan Lee, “Improved Spoken Term Detection by Feature Space Pseudo-Relevance Feedback,” InterSpeech, Sept. 2010
Hung-yi Lee and Lin-shan Lee, “Integrating Recognition and Retrieval with User Feedback: A New Framework for Spoken Term Detection,” ICASSP, Mar. 2010
Yu-Hui Chen, Chia-Chen Chou, Hung-yi Lee, Lin-shan Lee, “An Initial Attempt to Improve Spoken Term Detection by Learning Optimal Weights for Different Indexing Features,” ICASSP, Mar. 2010
Hung-yi Lee, Yueh-Lien Tang, Hao Tang, Lin-shan Lee, “Spoken Term Detection from Bilingual Spontaneous Speech Using Code-switched Lattice-based Structures for Words and Subword Units,” ASRU, Dec. 2009
Chao-hong Meng, Hung-yi Lee, Lin-shan Lee, “Improved Lattice-based Spoken Document Retrieval by Directly Learning from the Evaluation Measures,” ICASSP, Apr. 2009