Deep Learning on Graphs for Natural Language Processing SIGIR 2021

Lingfei Wu, JD.COM Silicon Valley Research Center
Yu Chen, Facebook AI
Heng Ji, University of Illinois Urbana-Champaign
Bang Liu, University of Montreal

Abstract

Due to its great power in modeling non-Euclidean data such as graphs and manifolds, deep learning on graphs (i.e., Graph Neural Networks (GNNs)) has opened a new door to solving challenging graph-related NLP problems. There has been a surge of interest in applying deep learning on graph techniques to NLP, and considerable success has been achieved in many NLP tasks, ranging from classification tasks such as sentence classification, semantic role labeling, and relation extraction, to generation tasks such as machine translation, question generation, and summarization. Despite these successes, deep learning on graphs for NLP still faces many challenges, including automatically transforming original text sequence data into highly graph-structured data, and effectively modeling complex data that involves mapping between graph-based inputs and other highly structured outputs such as sequences, trees, and graphs with multiple node and edge types.

This tutorial will cover relevant and interesting topics on applying deep learning on graph techniques to NLP, including automatic graph construction for NLP, graph representation learning for NLP, advanced GNN-based models (e.g., graph2seq, graph2tree, and graph2graph) for NLP, and the applications of GNNs in various NLP tasks (e.g., machine translation, natural language generation, information extraction, and semantic parsing). In addition, hands-on demonstration sessions will be included to help the audience gain practical experience in applying GNNs to solve challenging NLP problems using our recently developed open-source library, Graph4NLP, the first library that enables researchers and practitioners to easily use GNNs for various NLP tasks.
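To give a flavor of the graph representation learning ideas covered in the tutorial, the following is a minimal, self-contained sketch of a single GCN-style message-passing layer over a toy word graph. This is an illustrative example only, not Graph4NLP code: the adjacency matrix, embedding dimension, and weights are made up for demonstration.

```python
import numpy as np

# Toy sentence graph: 4 word nodes, edges from a hypothetical dependency parse.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

A_hat = A + np.eye(4)                          # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                    # initial node (word) embeddings
W = rng.normal(size=(8, 8))                    # learnable weight matrix

# One GCN layer: aggregate neighbor features, transform, apply ReLU.
H = np.maximum(A_norm @ X @ W, 0.0)
print(H.shape)                                 # (4, 8): updated node embeddings
```

Stacking several such layers lets each word's representation absorb information from increasingly distant neighbors in the constructed graph, which is the core intuition behind GNN-based text representation.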

Outline [Slides]

Introduction (20 mins)
  • Natural Language Processing: A Graph Perspective
  • Graph Based Algorithms for Natural Language Processing
  • Deep Learning on Graphs: Graph Neural Networks
    • Foundations
    • Methodologies
    • Applications in Natural Language Processing: An Overview
Methodologies (60 mins)
  • Automatic Graph Construction for NLP
    • Static Graph Construction
    • Dynamic Graph Construction
  • Graph Representation Learning for NLP
    • Graph Neural Networks for Improved Text Representation
    • Graph Neural Networks for Joint Text & Knowledge Representation
    • Graph Neural Networks for Various Graph Types
  • GNN Based Encoder-Decoder Models
    • Graph-to-Sequence Models
    • Graph-to-Tree Models
Applications (60 mins)
  • Machine Translation
    • Methodologies
    • Hands-on Demonstration
  • Natural Language Generation
    • Methodologies
    • Hands-on Demonstration
  • Machine Reading Comprehension
    • Methodologies
    • Hands-on Demonstration
  • Information Extraction
    • Methodologies
    • Hands-on Demonstration
  • Semantic Parsing
    • Methodologies
    • Hands-on Demonstration
Conclusion and Open Directions (10 mins)
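To make the "Automatic Graph Construction" topic in the outline above concrete, here is a minimal, hypothetical sketch of one common static graph construction strategy: building a weighted word co-occurrence graph with a sliding window. The function name, tokens, and window size are illustrative assumptions, not Graph4NLP API.

```python
from collections import defaultdict

def build_cooccurrence_graph(tokens, window=2):
    """Static graph construction: connect words that co-occur
    within a sliding window; edge weight = co-occurrence count."""
    edges = defaultdict(int)
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if tokens[i] != tokens[j]:          # skip self-loops
                pair = tuple(sorted((tokens[i], tokens[j])))
                edges[pair] += 1
    return dict(edges)

tokens = "graph neural networks model graph structured data".split()
graph = build_cooccurrence_graph(tokens, window=2)
print(graph[("graph", "neural")])   # edge weight between "graph" and "neural"
```

Other static strategies discussed in the tutorial (e.g., dependency or constituency graphs) replace the sliding-window rule with edges from a syntactic parser, while dynamic graph construction learns the graph structure jointly with the downstream task.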

Organizers

Lingfei Wu, JD.COM Silicon Valley Research Center
Lingfei Wu earned his Ph.D. degree in computer science from the College of William and Mary in 2016. He currently serves as a Principal Scientist at JD.COM Silicon Valley Research Center. He has published more than 80 papers in top-ranked conferences and journals and is a co-inventor of more than 40 filed US patents. Because of the high commercial value of his patents, he has received several invention achievement awards and was named an IBM Master Inventor, class of 2020. He was the recipient of the Best Paper Award or Best Student Paper Award at several venues, including IEEE ICC’19, the AAAI workshop on DLGMA’20, and the KDD workshop on DLG’19. His research has been featured in numerous media outlets, including Nature News, Yahoo News, VentureBeat, and TechTalks. He has co-organized 10+ conferences (KDD, AAAI, IEEE BigData) and is the founding co-chair of the Workshops on Deep Learning on Graphs (with AAAI’21, AAAI’20, KDD’20, KDD’19, and IEEE BigData’19). He currently serves as an Associate Editor for IEEE Transactions on Neural Networks and Learning Systems, ACM Transactions on Knowledge Discovery from Data, and International Journal of Intelligent Systems, and regularly serves as an SPC/PC member of major AI/ML/NLP conferences, including KDD, IJCAI, AAAI, NIPS, ICML, ICLR, and ACL.
Yu Chen, Facebook AI
Yu Chen is a Research Scientist at Facebook AI. He received his Ph.D. degree in Computer Science from Rensselaer Polytechnic Institute. His research interests lie at the intersection of Machine Learning (Deep Learning) and Natural Language Processing, with a particular emphasis on the fast-growing field of Graph Neural Networks and their applications in various domains. His work has been published in top-ranked conferences including, but not limited to, NeurIPS, ICML, ICLR, AAAI, IJCAI, NAACL, KDD, WSDM, ISWC, and AMIA. He was the recipient of the Best Student Paper Award of AAAI DLGMA’20. He has served as a PC member for many conferences (e.g., ACL, EMNLP, NAACL, EACL, AAAI, IJCAI, and KDD) and journals (e.g., TNNLS, IJIS, TKDE, TKDD, DAMI, and TASL).
Heng Ji, University of Illinois Urbana-Champaign
Heng Ji is a professor in the Computer Science Department of the University of Illinois at Urbana-Champaign. She is an Amazon Scholar. She was selected as a "Young Scientist" and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. She has received many awards, including the "AI’s 10 to Watch" Award by IEEE Intelligent Systems in 2013, an NSF CAREER award in 2009, the ACL 2020 Best Demo award, the PACLIC 2012 Best Paper runner-up, the "Best of ICDM 2013" paper award, and the "Best of SDM 2013" paper award. She has served as Program Committee Co-Chair for many conferences, including NAACL-HLT 2018. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2021. She has given a large number of keynote and tutorial presentations on event extraction, natural language understanding, and knowledge base construction at many conferences, including but not limited to ACL, COLING, EMNLP, NAACL, NeurIPS, SIGMOD, AAAI, and KDD.
Bang Liu, University of Montreal
Bang Liu is an Assistant Professor in the Department of Computer Science and Operations Research (DIRO) at the University of Montreal, as well as a member of Mila – Quebec Artificial Intelligence Institute. His research interests primarily lie in the areas of natural language processing (NLP), data mining, applied machine learning, and deep learning. He received his B.Engr. degree in 2013 from the University of Science and Technology of China (USTC), and his M.S. and Ph.D. degrees from the University of Alberta in 2015 and 2020, respectively. He has served as a PC member for many conferences (e.g., KDD, AAAI, ICML, and WSDM).