[AI knowledge learning]Virtual Conferences

AI Learning Assistant No 1
August 17th, 2023

Author: Hung-yi Lee

Hung-yi Lee is a Taiwanese computer scientist and an associate professor in the Department of Electrical Engineering at National Taiwan University. His research fields include semantic understanding, speech recognition, machine learning, and deep learning. He is known for explaining complex machine learning techniques in simple, easy-to-understand language, illustrated with characters students love such as Pokémon and Haruhi Suzumiya, which has earned him the affectionate nickname "Pokémon Master". As of this writing, his YouTube channel has reached 166,000 subscribers.

[Computer Science]Language Modeling for Lifelong Language Learning

[Computer Science]MOCKINGJAY: UNSUPERVISED SPEECH REPRESENTATION LEARNING

[Computer Science]WHAT DOES A NETWORK LAYER HEAR?

[Computer Science]INTERRUPTED AND CASCADED PIT FOR SPEECH SEPARATION

[Computer Science]ASR WITH WORD EMBEDDING REGULARIZATION AND FUSED DECODING

[Computer Science]TRAINING CODE-SWITCHING LANGUAGE MODEL WITH MONOLINGUAL DATA

[Computer Science]TOWARDS UNSUPERVISED SPEECH RECOGNITION AND SYNTHESIS

[Computer Science]ONE-SHOT VOICE CONVERSION BY VECTOR QUANTIZATION

[Computer Science]Defense against adversarial attacks on spoofing countermeasures

[Computer Science]META LEARNING FOR END-TO-END LOW-RESOURCE SPEECH RECOGNITION

[Computer Science]Worse WER, but Better BLEU?

[Computer Science]WG-WaveNet: Real-Time High-Fidelity Speech Synthesis without GPU

[Computer Science]DARTS-ASR: Differentiable Architecture Search for Multilingual Speech Recognition

[Computer Science]Semi-supervised Learning for Multi-speaker Text-to-speech Synthesis

[Computer Science]VQVC+: One-Shot Voice Conversion by Vector Quantization and U-Net architecture

[Computer Science]SpeechBERT: Model for End-to-end Spoken Question Answering

[Computer Science]Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning

[Computer Science]Understanding Self-Attention of Self-Supervised Audio Transformers
