Gaël Richard, Vincent Lostanlen, Yi-Hsuan Yang, and Meinard Müller, “Model-based deep learning for music information research,” IEEE Signal Processing Magazine, Jun. 2024
Yi-Hui Chou, I-Chun Chen, Chin-Jui Chang, Joann Ching, and Yi-Hsuan Yang, “MidiBERT-Piano: BERT-like pre-training for symbolic piano music classification tasks,” Journal of Creative Music Systems, vol. 8, no. 1, Apr. 2024
Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, and Yi-Hsuan Yang, “Local periodicity-based beat tracking for expressive classical piano music,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 2824-2835, Jul. 2023
Shih-Lun Wu and Yi-Hsuan Yang, “MuseMorphose: Full-song and fine-grained piano music style transfer with just one Transformer VAE,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 1953-1967, May 2023
Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, and Yi-Hsuan Yang, “An analysis method for metric-level switching in beat tracking,” IEEE Signal Processing Letters, vol. 29, pp. 2153-2157, Oct. 2022
Yi-Jen Shih, Shih-Lun Wu, Frank Zalkow, Meinard Müller, and Yi-Hsuan Yang, “Theme Transformer: Symbolic music generation with theme-conditioned Transformer,” IEEE Transactions on Multimedia, vol. 25, pp. 3495-3508, Mar. 2022
Juan Sebastián Gomez-Cañón, Estefanía Cano, Tuomas Eerola, Perfecto Herrera, Xiao Hu, Yi-Hsuan Yang, and Emilia Gómez, “Music Emotion Recognition: Towards new robust standards in personalized and context-sensitive applications,” IEEE Signal Processing Magazine, vol. 38, no. 6, pp. 106-114, Nov. 2021
Ching-Yu Chiu, Alvin Wen-Yu Su, and Yi-Hsuan Yang, “Drum-aware ensemble architecture for improved joint musical beat and downbeat tracking,” IEEE Signal Processing Letters, vol. 28, pp. 1100-1104, May 2021
Eva Zangerle, Chih-Ming Chen, Ming-Feng Tsai, and Yi-Hsuan Yang, “Leveraging affective hashtags for ranking music recommendations,” IEEE Transactions on Affective Computing, vol. 12, no. 1, pp. 78-91, Mar. 2021
Yin-Cheng Yeh, Wen-Yi Hsiao, Satoru Fukayama, Tetsuro Kitahara, Benjamin Genchel, Hao-Min Liu, Hao-Wen Dong, Yian Chen, Terence Leong, and Yi-Hsuan Yang, “Automatic melody harmonization with triad chords: A comparative study,” Journal of New Music Research, vol. 50, no. 1, pp. 37-51, Feb. 2021
Zhe-Cheng Fan, Tak-Shing T. Chan, Yi-Hsuan Yang, and Jyh-Shing R. Jang, “Backpropagation with N-D vector-valued neurons using arbitrary bilinear products,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 7, pp. 2638-2652, Jul. 2020
Szu-Yu Chou, Jyh-Shing Roger Jang, and Yi-Hsuan Yang, “Fast tensor factorization for large-scale context-aware recommendation from implicit feedback,” IEEE Transactions on Big Data, vol. 6, no. 1, pp. 201-208, Mar. 2020
Ting-Wei Su, Yuan-Ping Chen, Li Su, and Yi-Hsuan Yang, “TENT: Technique-embedded note tracking for real-world guitar solo recordings,” Transactions of the International Society for Music Information Retrieval, vol. 2, no. 1, pp. 15-28, Jul. 2019
Jen-Yu Liu, Yi-Hsuan Yang, and Shyh-Kang Jeng, “Weakly-supervised visual instrument-playing action detection in videos,” IEEE Transactions on Multimedia, vol. 21, no. 4, pp. 887-901, Apr. 2019
Juhan Nam, Keunwoo Choi, Jongpil Lee, Szu-Yu Chou, and Yi-Hsuan Yang, “Deep learning for audio-based music classification and tagging,” IEEE Signal Processing Magazine, vol. 36, no. 1, pp. 41-51, Jan. 2019
Conference & proceeding papers:
Yun-Han Lan, Wen-Yi Hsiao, Hao-Chung Cheng, and Yi-Hsuan Yang, “MusiConGen: Rhythm and chord control for Transformer-based text-to-music generation,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Nov. 2024
Chih-Pin Tan, Hsin Ai, Yi-Hsin Chang, Shuen-Huei Guan, and Yi-Hsuan Yang, “Piano cover generation with transfer learning approach and weakly aligned data,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Nov. 2024
Yu-Hua Chen, Yen-Tung Yeh, Yuan-Chiao Cheng, Jui-Te Wu, Yu-Hsiang Ho, Jyh-Shing Roger Jang, and Yi-Hsuan Yang, “Towards zero-shot amplifier modeling: One-to-many amplifier modeling via tone embedding control,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Nov. 2024
Fang-Duo Tsai, Shih-Lun Wu, Haven Kim, Bo-Yu Chen, Hao-Chung Cheng, and Yi-Hsuan Yang, “Audio Prompt Adapter: Unleashing music editing abilities for text-to-music with lightweight finetuning,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Nov. 2024
Jingyue Huang, Ke Chen, and Yi-Hsuan Yang, “Emotion-driven piano music generation via two-stage disentanglement and functional representation,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Nov. 2024
Ying-Shuo Lee, Yueh-Po Peng, Jui-Te Wu, Ming Cheng, Li Su, and Yi-Hsuan Yang, “Distortion recovery: A two-stage method for guitar effect removal,” Proc. Int. Conf. Digital Audio Effects (DAFx), Sept. 2024
Yen-Tung Yeh, Wen-Yi Hsiao, and Yi-Hsuan Yang, “Hyper recurrent neural network: Condition mechanisms for black-box audio effect modeling,” Proc. Int. Conf. Digital Audio Effects (DAFx), Sept. 2024
Yu-Hua Chen, Woosung Choi, Wei-Hsiang Liao, Marco Martínez-Ramírez, Kin Wai Cheuk, Yuki Mitsufuji, Jyh-Shing Roger Jang, and Yi-Hsuan Yang, “Improving unsupervised clean-to-rendered guitar tone transformation using GANs and integrated unaligned clean data,” Proc. Int. Conf. Digital Audio Effects (DAFx), Sept. 2024
Chih-Pin Tan, Shuen-Huei Guan, and Yi-Hsuan Yang, “PiCoGen: Generate piano covers with a two-stage approach,” Proc. ACM Int. Conf. Multimedia Retrieval (ICMR), Jun. 2024
Shih-Lun Wu and Yi-Hsuan Yang, “Compose & Embellish: Well-structured piano performance generation via a two-stage approach,” Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Jun. 2023
Yen-Tung Yeh, Bo-Yu Chen, and Yi-Hsuan Yang, “Exploiting pre-trained feature networks for generative adversarial networks in audio-domain loop generation,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Dec. 2022
Yueh-Kao Wu, Ching-Yu Chiu, and Yi-Hsuan Yang, “JukeDrummer: Conditional beat-aware audio-domain drum accompaniment generation via Transformer VQ-VAE,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Dec. 2022
Chih-Pin Tan, Wen-Yu Su, and Yi-Hsuan Yang, “Melody infilling with user-provided structural context,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Dec. 2022
Da-Yi Wu, Wen-Yi Hsiao, Fu-Rong Yang, Oscar Friedman, Warren Jackson, Scott Bruzenak, Yi-Wen Liu, and Yi-Hsuan Yang, “SawSing: A DDSP-based singing vocoder via subtractive sawtooth waveform synthesis,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Dec. 2022
Taejun Kim, Yi-Hsuan Yang, and Juhan Nam, “Joint estimation of fader and equalizer gains of DJ mixers using convex optimization,” Proc. Int. Conf. Digital Audio Effects (DAFx), Sept. 2022
Bo-Yu Chen, Wei-Han Hsu, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Yuki Mitsufuji, and Yi-Hsuan Yang, “Automatic DJ transitions with differentiable audio effects and generative adversarial networks,” Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), May 2022
Chien-Feng Liao, Jen-Yu Liu, and Yi-Hsuan Yang, “KaraSinger: Score-free singing voice synthesis with VQ-VAE using Mel-spectrograms,” Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), May 2022
Yu-Hua Chen, Wen-Yi Hsiao, Tsu-Kuang Hsieh, Jyh-Shing Roger Jang, and Yi-Hsuan Yang, “Towards automatic transcription of polyphonic electric guitar music: A new dataset and a multi-loss transformer model,” Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), May 2022
Fu-Rong Yang, Yin-Ping Cho, Da-Yi Wu, Yi-Hsuan Yang, Shan-Hung Wu, and Yi-Wen Liu, “Mandarin singing voice synthesis with a phonology-based duration model,” Proc. Asia Pacific Signal and Information Processing Association Annual Summit and Conf. (APSIPA ASC), Dec. 2021
Tun-Min Hung, Bo-Yu Chen, Yen-Tung Yeh, and Yi-Hsuan Yang, “A benchmarking initiative for audio-domain music generation using the FreeSound Loop Dataset,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Nov. 2021
Pedro Sarmento, Adarsh Kumar, C. J. Carr, Zack Zukowski, Mathieu Barthet, and Yi-Hsuan Yang, “DadaGP: A dataset of tokenized GuitarPro songs for sequence models,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Nov. 2021
Hsiao-Tzu Hung, Joann Ching, Seungheon Doh, Nabin Kim, Juhan Nam and Yi-Hsuan Yang, “EMOPIA: A multi-modal pop piano dataset for emotion recognition and emotion-based music generation,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Nov. 2021
Joann Ching and Yi-Hsuan Yang, “Learning to generate piano music with sustain pedals,” ISMIR demo paper, Nov. 2021
Juan Gómez-Cañón, Estefanía Cano, Yi-Hsuan Yang, Perfecto Herrera, and Emilia Gómez, “Let’s agree to disagree: Consensus entropy active learning for personalized music emotion recognition,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Nov. 2021
Chih-Pin Tan, Chin-Jui Chang, Alvin W. Y. Su, and Yi-Hsuan Yang, “Music score expansion with variable-length infilling,” ISMIR demo paper, Nov. 2021
Chin-Jui Chang, Chun-Yi Lee, and Yi-Hsuan Yang, “Variable-length music score infilling via XLNet and musically specialized positional encoding,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Nov. 2021
Ching-Yu Chiu, Joann Ching, Wen-Yi Hsiao, Yu-Hua Chen, Alvin Wen-Yu Su, and Yi-Hsuan Yang, “Source separation-based data augmentation techniques for improved joint beat and downbeat tracking,” Proc. European Signal Processing Conference (EUSIPCO), Aug. 2021
Antoine Liutkus, Ondřej Cífka, Shih-Lun Wu, Umut Simsekli, Yi-Hsuan Yang, and Gaël Richard, “Relative positional encoding for Transformers with linear complexity,” Proc. International Conference on Machine Learning (ICML), Jul. 2021
Taejun Kim, Yi-Hsuan Yang, and Juhan Nam, “Reverse-engineering the transition regions of real-world DJ mixes using sub-band analysis with convex optimization,” Proc. International Conference on New Interfaces for Musical Expression (NIME), Jun. 2021
Wen-Yi Hsiao, Jen-Yu Liu, Yin-Cheng Yeh, and Yi-Hsuan Yang, “Compound Word Transformer: Learning to compose full-song music over dynamic directed hypergraphs,” Proc. AAAI Conference on Artificial Intelligence (AAAI), Feb. 2021
Taejun Kim, Minsuk Choi, Evan Sacks, Yi-Hsuan Yang, and Juhan Nam, “A computational analysis of real-world DJ mixes using mix-to-track subsequence alignment,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Oct. 2020
Yu-Hua Chen, Yu-Siang Huang, Wen-Yi Hsiao, and Yi-Hsuan Yang, “Automatic composition of guitar tabs by Transformers and groove modeling,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Oct. 2020
Joann Ching, Antonio Ramires, and Yi-Hsuan Yang, “Instrument role classification: Auto-tagging for loop based music,” Proc. Joint Conference on AI Music Creativity, Oct. 2020
Bo-Yu Chen, Jordan Smith, and Yi-Hsuan Yang, “Neural loop combiner: Neural network models for assessing the compatibility of loops,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Oct. 2020
Yu-Siang Huang and Yi-Hsuan Yang, “Pop Music Transformer: Beat-based modeling and generation of expressive Pop piano compositions,” Proc. ACM Multimedia, Oct. 2020
Da-Yi Wu and Yi-Hsuan Yang, “Speech-to-singing conversion based on boundary equilibrium GAN,” Proc. INTERSPEECH, Oct. 2020
Antonio Ramires, Frederic Font, Dmitry Bogdanov, Jordan Smith, Yi-Hsuan Yang, Joann Ching, Bo-Yu Chen, Yueh-Kao Wu, Wei-Han Hsu, and Xavier Serra, “The Freesound Loop Dataset and annotation tool,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Oct. 2020
Shih-Lun Wu and Yi-Hsuan Yang, “The Jazz Transformer on the front line: Exploring the shortcomings of AI-composed music through quantitative measures,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Oct. 2020
Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh and Yi-Hsuan Yang, “Unconditional audio generation with generative adversarial networks and cycle regularization,” Proc. INTERSPEECH, Oct. 2020
Ching-Yu Chiu, Wen-Yi Hsiao, Yin-Cheng Yeh, Yi-Hsuan Yang, and Alvin W. Y. Su, “Mixing-specific data augmentation techniques for improved blind violin/piano source separation,” Proc. IEEE Int. Workshop on Multimedia Signal Processing (MMSP), Sept. 2020
Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, and Yi-Hsuan Yang, “Score and lyrics-free singing voice generation,” Proc. International Conference on Computational Creativity (ICCC), Sept. 2020
Jianyu Fan, Yi-Hsuan Yang, Kui Dong, and Philippe Pasquier, “A comparative study of Western and Chinese classical music based on soundscape models,” Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), May 2020
Tsung-Han Hsieh, Kai-Hsiang Cheng, Zhe-Cheng Fan, Yu-Ching Yang, and Yi-Hsuan Yang, “Addressing the confounds of accompaniments in singer identification,” Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), May 2020
Jayneel Parekh, Preeti Rao, and Yi-Hsuan Yang, “Speech-to-singing conversion in an encoder-decoder framework,” Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), May 2020
Eva Zangerle, Michael Vötter, Ramona Huber, and Yi-Hsuan Yang, “Hit song prediction: Leveraging low- and high-level audio features,” Proc. Int. Society for Music Information Retrieval Conf. (ISMIR), Nov. 2019
Hsiao-Tzu Hung, Chung-Yang Wang, Yi-Hsuan Yang, and Hsin-Min Wang, “Improving automatic Jazz melody generation by transfer learning techniques,” Proc. Asia Pacific Signal and Information Processing Association Annual Summit and Conf. (APSIPA ASC), Nov. 2019
Wen-Yi Hsiao, Yin-Cheng Yeh, Yu-Siang Huang, Chung-Yang Wang, Jen-Yu Liu, Tsu-Kuang Hsieh, Hsiao-Tzu Hung, Jun-Yuan Wang, and Yi-Hsuan Yang, “Jamming with Yating: Interactive demonstration of a music composition AI,” ISMIR demo paper, Nov. 2019
Yin-Cheng Yeh, Jen-Yu Liu, Wen-Yi Hsiao, Yu-Siang Huang, and Yi-Hsuan Yang, “Learning to generate Jazz and Pop piano music from audio via MIR techniques,” ISMIR demo paper, Nov. 2019
Frederic Tamagnan and Yi-Hsuan Yang, “Drum fills detection and generation,” Proc. Int. Symp. Computer Music Multidisciplinary Research (CMMR), Oct. 2019
Kai-Hsiang Cheng, Szu-Yu Chou, and Yi-Hsuan Yang, “Multi-label few-shot learning for sound event recognition,” Proc. IEEE Int. Workshop on Multimedia Signal Processing (MMSP), Sept. 2019
Yu-Hua Chen, Bryan Wang and Yi-Hsuan Yang, “Demonstration of PerformanceNet: A convolutional neural network model for score-to-audio music generation,” Proc. Int. Joint Conf. Artificial Intelligence (IJCAI), Aug. 2019
Jen-Yu Liu and Yi-Hsuan Yang, “Dilated convolution with dilated GRU for music source separation,” Proc. Int. Joint Conf. Artificial Intelligence (IJCAI), Aug. 2019
Yun-Ning Hung, I-Tung Chiang, Yi-An Chen, and Yi-Hsuan Yang, “Musical composition style transfer via disentangled timbre representations,” Proc. Int. Joint Conf. Artificial Intelligence (IJCAI), Aug. 2019
Zhe-Cheng Fan, Tak-Shing T. Chan, Yi-Hsuan Yang, and Jyh-Shing R. Jang, “Deep cyclic group networks,” Proc. Int. Joint Conf. Neural Networks (IJCNN), Jul. 2019
Tsung-Han Hsieh, Li Su, and Yi-Hsuan Yang, “A streamlined encoder/decoder architecture for melody extraction,” Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), May 2019
Chih-Ming Chen, Chuan-Ju Wang, Ming-Feng Tsai, and Yi-Hsuan Yang, “Collaborative similarity embedding for recommender systems,” Proc. The Web Conference (WWW), May 2019
Szu-Yu Chou, Kai-Hsiang Cheng, Jyh-Shing Roger Jang, and Yi-Hsuan Yang, “Learning to match transient sound events using attentional similarity for few-shot sound recognition,” Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), May 2019
Yun-Ning Hung, Yian Chen, and Yi-Hsuan Yang, “Multitask learning for frame-level instrument recognition,” Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), May 2019
Vibert Thio, Hao-Min Liu, Yin-Cheng Yeh, and Yi-Hsuan Yang, “A minimal template for interactive web-based demonstrations of musical machine learning,” Proc. Workshop on Intelligent Music Interfaces for Listening and Creation, Mar. 2019
Bryan Wang and Yi-Hsuan Yang, “PerformanceNet: Score-to-audio music generation with multi-band convolutional residual network,” Proc. AAAI Conference on Artificial Intelligence (AAAI), Jan. 2019
Books:
Meinard Müller, Emilia Gómez, and Yi-Hsuan Yang, “Computational methods for melody and voice processing in music recordings,” 2019
Other:
Wei-Han Hsu, Bo-Yu Chen, and Yi-Hsuan Yang, “Deep learning based EDM subgenre classification using Mel-spectrogram and tempogram features,” arXiv preprint, Oct. 2021
Hao-Wen Dong and Yi-Hsuan Yang, “Towards a deeper understanding of adversarial losses,” arXiv preprint, Jan. 2019