Deep Learning on Graphs for Natural Language Processing WWW 2022

Lingfei Wu, JD.COM Silicon Valley Research Center
Yu Chen, Facebook AI
Heng Ji, University of Illinois Urbana-Champaign
Yunyao Li, IBM Research
Bang Liu, University of Montreal

Abstract

There is a rich variety of NLP problems that are best expressed with graph structures. Owing to their great power in modeling non-Euclidean data such as graphs, deep learning on graphs techniques (i.e., Graph Neural Networks (GNNs)) have opened a new door to solving challenging graph-related NLP problems, and have already achieved great success. Despite this success, deep learning on graphs for NLP (DLG4NLP) still faces many challenges (e.g., automatic graph construction, graph representation learning for complex graphs, and learning the mapping between complex data structures).

This tutorial will cover relevant and interesting topics on applying deep learning on graphs techniques to NLP, including automatic graph construction for NLP, graph representation learning for NLP, advanced GNN-based models (e.g., graph2seq and graph2tree) for NLP, and the applications of GNNs in various NLP tasks (e.g., machine translation, natural language generation, information extraction, and semantic parsing). In addition, hands-on demonstration sessions will be included to help the audience gain practical experience in applying GNNs to solve challenging NLP problems using our recently developed open-source library – Graph4NLP, the first library for researchers and practitioners for easy use of GNNs for various NLP tasks.
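To make the core GNN idea concrete, the following is a minimal, self-contained sketch of one graph convolution layer over a toy word graph, written with plain NumPy. The word graph, edges, and feature dimensions are all illustrative assumptions for exposition; this is not the Graph4NLP API.

```python
import numpy as np

# Toy word graph: nodes are words, edges are hypothetical word-word links
# (e.g., from a dependency parse). Purely illustrative.
words = ["graphs", "model", "language"]
edges = [(0, 1), (1, 2)]

n = len(words)
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0          # undirected adjacency matrix

A_hat = A + np.eye(n)                 # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization

rng = np.random.default_rng(0)
X = rng.normal(size=(n, 4))           # random 4-dim node features
W = rng.normal(size=(4, 4))           # learnable layer weight (random here)

# One GCN-style layer: neighborhood aggregation, projection, ReLU.
H = np.maximum(A_norm @ X @ W, 0.0)
print(H.shape)  # (3, 4)
```

Stacking such layers lets each word's representation absorb information from progressively larger graph neighborhoods, which is the mechanism the tutorial builds on.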

Outline

Introduction (30 mins)
  • Natural Language Processing: A Graph Perspective
  • Graph Based Algorithms for Natural Language Processing
  • Deep Learning on Graphs: Graph Neural Networks
    • Foundations
    • Methodologies
    • Applications in Natural Language Processing: An Overview
Methodologies (60 mins)
  • Automatic Graph Construction for NLP
    • Static Graph Construction
    • Dynamic Graph Construction
  • Graph Representation Learning for NLP
    • Homogeneous Graph Neural Networks for NLP
    • Multi-relational Graph Neural Networks for NLP
    • Heterogeneous Graph Neural Networks for NLP
  • GNN Based Encoder-Decoder Models
    • Graph-to-Sequence Models
    • Graph-to-Tree Models
Applications (70 mins)
  • Information Extraction
  • Machine Reading Comprehension
  • Natural Language Generation
  • Clustering and Mining
Hands-on Demonstration (10 mins)
Conclusion and Open Directions (10 mins)
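As a small taste of the "Static Graph Construction" topic above, the sketch below builds a sliding-window co-occurrence graph from a token sequence, one common alternative to parser-based construction. The function name, window size, and sample sentence are illustrative assumptions, not part of the tutorial's library.

```python
from collections import defaultdict

def build_window_graph(tokens, window=2):
    """Link each token to the tokens within `window` positions of it."""
    adj = defaultdict(set)
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            adj[tokens[i]].add(tokens[j])
            adj[tokens[j]].add(tokens[i])  # undirected edge
    return adj

g = build_window_graph("deep learning on graphs for nlp".split())
print(sorted(g["graphs"]))  # neighbors of "graphs" within the window
```

Dependency-parse or constituency-based construction follows the same pattern, with edges drawn from linguistic structure rather than positional proximity; dynamic construction instead learns the edges jointly with the downstream task.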

Organizers

Lingfei Wu, JD.COM Silicon Valley Research Center
Lingfei Wu is a Principal Scientist at JD.COM Silicon Valley Research Center. Previously, he was a research staff member and team leader at IBM Research. He has published more than 80 top-ranked conference and journal papers and is a co-inventor of more than 40 filed US patents. Because of the high commercial value of his patents, he has received several invention achievement awards and was named an IBM Master Inventor, class of 2020. He was the recipient of the Best Paper Award and Best Student Paper Award at several venues, including IEEE ICC’19, the AAAI workshop on DLGMA’20, and the KDD workshop on DLG’19. His research has been featured in numerous media outlets, including Nature News, Yahoo News, VentureBeat, and TechTalks. He has co-organized 10+ conferences (KDD, AAAI, IEEE BigData) and is the founding co-chair for the Workshops on Deep Learning on Graphs (with KDD’21, AAAI’21, AAAI’20, KDD’20, KDD’19, and IEEE BigData’19). He currently serves as an Associate Editor for IEEE Transactions on Neural Networks and Learning Systems, ACM Transactions on Knowledge Discovery from Data, and the International Journal of Intelligent Systems. Lingfei Wu has given many tutorial/keynote presentations on deep learning on graphs for natural language processing at multiple workshops at KDD’20, CVPR’20, AAAI’20, and Machine Learning & Artificial Intelligence’20.
Yu Chen, Facebook AI
Yu Chen is a Research Scientist at Facebook AI. He received his PhD degree in Computer Science from Rensselaer Polytechnic Institute. His research interests lie at the intersection of machine learning (deep learning) and natural language processing, with a particular emphasis on the fast-growing field of Graph Neural Networks and their applications in various domains. His work has been published in top-ranked conferences including, but not limited to, NeurIPS, ICML, ICLR, AAAI, IJCAI, NAACL, KDD, WSDM, ISWC, and AMIA. He has delivered tutorials at NAACL, IJCAI, KDD, and SIGIR. He was the recipient of the Best Student Paper Award of AAAI DLGMA’20. He has served as a PC member for many conferences (e.g., ACL, EMNLP, NAACL, EACL, AAAI, IJCAI, and KDD) and journals (e.g., TNNLS, IJIS, TKDE, TKDD, and DAMI).
Heng Ji, University of Illinois Urbana-Champaign
Heng Ji is a professor in the Computer Science Department of the University of Illinois at Urbana-Champaign. She has received many awards, including the "AI’s 10 to Watch" Award from IEEE Intelligent Systems in 2013, an NSF CAREER award in 2009, the PACLIC2012 Best Paper runner-up, a "Best of ICDM2013" paper award, and a "Best of SDM2013" paper award. She has served as Program Committee Co-Chair of NAACL-HLT2018, NLP-NABD2018, NLPCC2015, CSCKG2016, and CCL2019, and as a senior area chair for many conferences. She has given a large number of keynote/tutorial presentations on event extraction, natural language understanding, and knowledge base construction at many conferences including, but not limited to, ACL, EMNLP, NAACL, NeurIPS, AAAI, and KDD.
Yunyao Li, IBM Research
Yunyao Li is a Principal Research Staff Member and Senior Research Manager at IBM Research - Almaden, where she manages the Scalable Knowledge Intelligence Department. She is a Master Inventor and a member of the IBM Academy of Technology. Her expertise lies in the interdisciplinary areas of natural language processing, databases, human-computer interaction, and information retrieval. She is interested in designing, developing, and analyzing large-scale systems that are usable by a wide spectrum of users. Her current research focuses on scalable natural language processing. She regularly gives talks and tutorials at conferences and universities across the globe, including a MOOC on information extraction. She received her Ph.D. from the University of Michigan, Ann Arbor.
Bang Liu, University of Montreal
Bang Liu is an Assistant Professor in the Department of Computer Science and Operations Research (DIRO) at the University of Montreal, as well as a member of Mila – Quebec Artificial Intelligence Institute. His research interests primarily lie in the areas of natural language processing (NLP), data mining, applied machine learning, and deep learning. He received his B.Engr. degree from the University of Science and Technology of China (USTC) in 2013, and his M.S. and Ph.D. degrees from the University of Alberta in 2015 and 2020, respectively. He has served as a PC member for many conferences (e.g., KDD, AAAI, ICML, and WSDM).