The Big Picture:
Crafting a Research Narrative

Workshop @ EMNLP 2023

Singapore, December 7th, 2023

Location: Virgo 1 & 2

Workshop Recording Link

Overview

All research exists within a larger context. Progress is made by standing on the shoulders of giants: building on the foundations laid by earlier researchers. In NLP it has been said that research is "not so much going round in circles as ascending a spiral" (Spärck Jones, 1994), while citation analysis of recent work suggests that research progress looks more like a series of intertwined staircases (Hearst, 2018). However, in light of rapid publication rates and concise paper formats, it has become increasingly difficult to recognize the larger story to which a paper is connected.

The Big Picture Workshop provides a dedicated venue for exploring and distilling broader NLP research narratives. We invite researchers to reflect on how their individual contributions fit within the overall research landscape and what stories they are telling with their bodies of research.

Program (December 7th, 2023)

All times are in local Singapore time (GMT+8)

Room: Virgo 1 & 2

1st session
- Opening remarks       9:00–9:15
- Invited talk #1: Raymond J. Mooney   9:15–10:05
- Best paper talk: Julian Michael 10:05–10:30
The Case for Scalable, Data-Driven Theory: A Paradigm for Scientific Progress in NLP

Break 10:30–11:00

2nd session
- Invited talk #2: Sarah Wiegreffe & Sarthak Jain 11:00–12:00

Lunch 12:00–13:30

3rd session
- Invited talk #3: Liwei Jiang & Zeerak Talat 13:30–14:30
- Posters 14:30–15:30

Break 15:30–16:00

4th session
- Invited talk #4: Sewon Min, Kang Min Yoo & Jun Yeob Kim 16:00–17:00
- Closing remarks 17:00–17:15

Speakers

The Vision Thing: Finding and Pursuing Your Research Passion

A key element of being a successful researcher in natural language processing, as in any area, is having a clear overarching vision of what your body of research is trying to accomplish. Using my own 40-year career as an example, I will attempt to provide general advice on formulating and pursuing a coherent research vision. In particular, I will focus on formulating a unique, personal objective that exploits your specific talents, knowledge, and passions, and that is distinct from the current popular trends in the field. I will also discuss formulating a vision that bridges existing fields of study to produce an overarching agenda that unifies previously disparate ideas.

the-vision-thing.pptx

Raymond J. Mooney, Professor at UT Austin

Is "Attention = Explanation"? Past, Present, and Future

Attention mechanisms have become a core component of neural models in Natural Language Processing over the past decade. These mechanisms not only deliver substantial performance improvements but are also claimed to offer insights into the models' inner workings. In this talk, we will highlight a series of contributions we have made that provided a critical perspective on the role of attention as a faithful explanation for model predictions and sparked a larger conversation on the overarching goals of interpretability methods in NLP. We'll contrast our methodological approaches and findings to highlight that there is no one-size-fits-all answer to the question "Is attention explanation?". Finally, we'll explore the role of attention as an explanation mechanism in today's NLP landscape.

Relevant papers: Jain & Wallace (2019), Wiegreffe & Pinter (2019)
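
For readers new to this debate, the sketch below illustrates the practice the talk scrutinizes: reading softmax weights off a dot-product attention layer and ranking input tokens by weight as though the weights were importance scores. It is illustrative only; the toy vectors and function names are ours, not the speakers'.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def rank_tokens_by_attention(query, keys, tokens):
    # Scaled dot-product attention scores for one query over all tokens.
    scores = keys @ query / np.sqrt(query.shape[-1])
    weights = softmax(scores)
    # The contested step: treating attention weights as explanations,
    # i.e. as per-token importance scores for the model's prediction.
    return sorted(zip(tokens, weights), key=lambda tw: -tw[1])

rng = np.random.default_rng(0)
tokens = ["the", "movie", "was", "brilliant"]
keys = rng.normal(size=(len(tokens), 8))   # stand-ins for learned key vectors
query = rng.normal(size=(8,))              # stand-in for a learned query vector
for tok, w in rank_tokens_by_attention(query, keys, tokens):
    print(f"{tok:10s} {w:.3f}")
```

Whether a ranking produced this way is a faithful account of the model's reasoning is precisely the question the relevant papers above debate.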

attention-explanation.pdf

Sarah Wiegreffe, Postdoc at AI2

Sarthak Jain, Applied Scientist at AWS

Delphi, and Whether Machines Can Learn Morality

Disagreements and conflict are vital for driving scholarly progress, social and scientific alike. In research, we often identify gaps in others' work and our own in order to present new ideas that remedy them. These disagreements are usually small in nature: we disagree on methods rather than on the research programme itself. In this talk, we discuss a disagreement of a different nature, one in which the substance of the disagreement is the existence of the task itself. We reflect on the experience of the conflict, how it was resolved, and what outcomes it has had.

In particular, Liwei will share her current interdisciplinary research journey on AI + humanity, sparked by the Delphi experience. She will introduce Value Kaleidoscope, a novel computational system that aims to model the potentially conflicting, pluralistic human values interwoven in human decision-making. Finally, she will talk about an exciting co-evolution opportunity unfolding between frontier AI technology and the humanities.

Zeerak will cover ongoing work that considers the foundations and limits of machine learning and NLP with regard to ethically appropriate research. Specifically, they will discuss the use of the distributional hypothesis, the particular visions of our societies it offers, and how machine learning seeks to construct our future in the vision of the past.

Relevant papers: Jiang et al. (2021), Talat et al. (2022)

machine-morality.pdf

Liwei Jiang, PhD Student at University of Washington

Zeerak Talat, Postdoc at DDI

The Role of Demonstrations: What In-Context Learning Actually Does

In-Context Learning (ICL) enables a language model (LM) to learn a new correlation between inputs and outputs at inference time, without explicit gradient updates. In this talk, we present a series of works centered on one research question: is the correctness of demonstrations needed for good ICL performance? Through a series of experiments and analyses, we delve into the nuances of this relationship across various experimental setups, models (plain LMs or instruction-tuned ones), and tasks (classification or generation). Our findings contribute to a broader understanding of how LMs engage in in-context learning, shedding light on what new correlations they can or cannot learn, and leading to a new line of research into discovering unexpected behaviors of LMs.

Relevant papers: Min et al. (2022), Yoo & Kim et al. (2022)
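
To make the experimental manipulation concrete, here is a minimal sketch of the kind of setup used to probe this question: build a few-shot prompt, optionally replacing the demonstrations' gold labels with random ones, then compare the model's accuracy under both conditions. The prompt format and helper names are hypothetical, not the speakers' code.

```python
import random

def build_icl_prompt(demos, test_input, shuffle_labels=False, seed=0):
    # demos: list of (input_text, gold_label) demonstration pairs.
    rng = random.Random(seed)
    labels = [y for _, y in demos]
    if shuffle_labels:
        # The key manipulation: reassign labels at random so the
        # demonstrations keep their format but lose their correctness.
        label_space = sorted(set(labels))
        labels = [rng.choice(label_space) for _ in demos]
    lines = [f"Review: {x}\nSentiment: {y}" for (x, _), y in zip(demos, labels)]
    lines.append(f"Review: {test_input}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("A stunning, heartfelt film.", "positive"),
    ("Dull and far too long.", "negative"),
]
# Feed both variants to the same LM and compare downstream accuracy.
print(build_icl_prompt(demos, "An instant classic.", shuffle_labels=False))
print(build_icl_prompt(demos, "An instant classic.", shuffle_labels=True))
```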

icl.pdf

Sewon Min, PhD Student at University of Washington

Jun Yeob Kim, PhD Student at Seoul National University

Kang Min Yoo, Research Scientist at NAVER AI Lab

 

Organizing Committee

 

Sponsors

See you at the workshop!