09:00-11:00 Tutorial 1 Speech Translation
14:00-16:00 Tutorial 2 Domain Adaptation for Neural Machine Translation
09:00-10:00 Keynote 1 Multimodal natural language processing: when text is not enough
16:00-17:20 Panel 1 Discussion on Data Augmentation Techniques for Machine Translation
09:00-10:00 Keynote 2 Neural Machine Translation with Monolingual Data
14:00-15:20 Panel 2 Discussion on Applications of Machine Translation Technology
15:40-16:50 Panel 3 PhD Student Training in Machine Translation
Keynote 1: Multimodal natural language processing: when text is not enough
Abstract: In this talk I will provide an overview of work on multimodal machine learning, where images are used to build richer context models for natural language tasks. Most of the talk will focus on approaches to machine translation that exploit both textual and visual information to deal with complex linguistic ambiguities as well as common linguistic biases. I will cover state-of-the-art approaches and their limitations, and describe studies on when and how images can be beneficial to the task.
Speaker bio: Lucia Specia is Professor of Natural Language Processing at Imperial College London and the University of Sheffield. Her research focuses on various aspects of data-driven approaches to language processing, with a particular interest in multimodal and multilingual context models and work at the intersection of language and vision. Her work can be applied to various tasks such as machine translation, image captioning, quality estimation and text adaptation. She is the recipient of the MultiMT ERC Starting Grant on Multimodal Machine Translation (2016-2021) and is currently involved in other funded research projects on machine translation, multilingual video captioning and text adaptation. In the past she worked as a Senior Lecturer at the University of Wolverhampton (2010-2011), and as a research engineer at the Xerox Research Centre, France (2008-2009, now Naver Labs). She received a PhD in Computer Science from the University of São Paulo, Brazil, in 2008.
Keynote 2: Neural Machine Translation with Monolingual Data
Abstract: Powered by deep learning, Neural Machine Translation (NMT) has made great progress in the past five years. In addition to bilingual data, monolingual data also plays an important role in NMT. In this talk, we will introduce several recent techniques that use monolingual data for NMT: (1) dual learning, which helped us win four top places in the recent machine translation challenge organized by the Fourth Conference on Machine Translation (WMT19), leverages the structural duality of forward translation and back translation to learn from monolingual data; (2) MASS, which helped us win two top places at WMT19, is a pre-training method for sequence-to-sequence generation; and (3) BERT-fuse, a fine-tuning method that leverages a pre-trained BERT model in a carefully designed way to boost NMT.
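The core idea behind dual learning can be sketched in a few lines: a forward model X→Y and a backward model Y→X are coupled so that the round-trip reconstruction of a monolingual sentence provides a training signal, with no parallel data needed. The toy sketch below uses word-level dictionary "models" as stand-ins for real NMT models (an illustrative assumption, not the actual WMT19 system):

```python
# Toy illustration of the dual-learning reconstruction signal.
# FORWARD/BACKWARD are hypothetical stand-ins for trained NMT models.
FORWARD = {"hello": "bonjour", "world": "monde"}   # toy En->Fr "model"
BACKWARD = {v: k for k, v in FORWARD.items()}      # toy Fr->En "model"

def translate(sentence, table):
    """Word-by-word toy translation; unknown words pass through."""
    return [table.get(w, w) for w in sentence]

def round_trip_reward(sentence):
    """Fraction of tokens recovered after the X -> Y -> X round trip.
    In dual learning, this reconstruction score (combined with a
    language-model score on the intermediate translation) serves as
    the reward used to update both models from monolingual data."""
    back = translate(translate(sentence, FORWARD), BACKWARD)
    matches = sum(a == b for a, b in zip(sentence, back))
    return matches / len(sentence)

print(round_trip_reward(["hello", "world"]))  # perfect round trip -> 1.0
```

In the real method the reward is fed into a policy-gradient update of both translation models; the sketch only shows where the monolingual training signal comes from.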
Speaker bio: Dr. Tao Qin (秦涛) is a Principal Researcher and Research Manager at Microsoft Research Asia, an adjunct professor and PhD advisor at the University of Science and Technology of China, and a senior member of IEEE and ACM. He received his bachelor's and PhD degrees from the Department of Electronic Engineering at Tsinghua University. His main research areas include machine learning and artificial intelligence (with a focus on algorithm design for deep learning and reinforcement learning and their applications to real-world problems), machine translation, web search and computational advertising, game theory, and multi-agent systems; he has published more than 100 papers at international conferences and in journals. He has served as an area chair for AAAI, SIGIR, AAMAS, and ACML, workshop chair of WWW 2020, and industrial forum chair of DAI 2019, has been a program committee member of many international conferences, and has co-chaired several international workshops. The team he leads won eight first places in the 2019 international machine translation competition.