The Big Picture:
Crafting a Research Narrative
Workshop @ EMNLP 2023
Singapore, December 7th, 2023
November 3rd: Released program and list of accepted papers.
October 16th: If your paper was accepted as a Findings paper at EMNLP 2023 and you think there's a good fit, consider presenting your work at our workshop. Please send us an email with a link to the paper.
October 16th: Apply for a funding opportunity to participate in the workshop (deadline: October 29th AoE)! This opportunity is especially designed for participants who face challenges in securing financial support.
April 20th: Updated CfP with deadline dates
April 20th: Paper submission is now open through OpenReview
All research exists within a larger context. Progress is made by standing on the shoulders of giants, building on the foundations laid by earlier researchers. In NLP it has been said that research is "not so much going round in circles as ascending a spiral" (Spärck Jones, 1994), while citation analysis of recent work suggests that research progress looks more like a series of intertwined staircases (Hearst, 2018). However, with rapid publication rates and concise paper formats, it has become increasingly difficult to recognize the larger story to which a paper is connected.
The Big Picture Workshop provides a dedicated venue for exploring and distilling broader NLP research narratives. We invite researchers to reflect on how their individual contributions fit within the overall research landscape and what stories they are telling with their bodies of research. The goals of the workshop are:
Enhance communication and understanding between different lines of work;
Highlight how works connect and build on each other;
Generate insights that are difficult to glean without combining and reconciling different research narratives;
Encourage broader collaboration and awareness of prior work in the NLP community;
Facilitate understanding of the trajectories and insights within the field of NLP, particularly for newcomers and outsiders to the field, in ways that individual research papers typically do not.
Program (December 7th, 2023)
All times are in local Singapore time (GMT+8).
- Opening remarks 9:00–9:15
- Invited talk #1: Raymond J. Mooney 9:15–10:05
- Talk 10:05–10:30
- Invited talk #2: Sarah + Sarthak 11:00–12:00
- Invited talk #3: Liwei + Zeerak 13:30–14:30
- Posters 14:30–15:30
- Invited talk #4: Sewon + Kang Min + Jun Yeob 16:00–17:00
- Closing remarks 17:00–17:15
The Vision Thing:
Is "Attention = Explanation" and the Role of Interpretability in NLP
Attention mechanisms have become a core component of neural models in Natural Language Processing over the past decade. These mechanisms not only deliver substantial performance improvements but have also been claimed to offer insights into the models' inner workings. In this talk, we will highlight a series of contributions we have made that provided a critical perspective on the role of attention as a faithful explanation for model predictions and sparked a larger conversation about the overarching goals of interpretability methods in NLP. We'll contrast our methodological approaches and findings to highlight that there is no one-size-fits-all answer to the question "Is attention explanation?". Finally, we'll explore the role of attention as an explanation mechanism in today's NLP landscape.
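To make concrete what "attention as explanation" typically refers to in practice, here is a minimal Python/NumPy sketch that reads attention weights off a toy scaled dot-product attention computation and presents them as per-token importance scores. This is illustrative only, not code from the speakers' work; the toy setup, names, and example sentence are assumptions.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def toy_attention_weights(token_vectors, query_vector):
    # token_vectors: (n_tokens, d) key vectors; query_vector: (d,)
    d = token_vectors.shape[-1]
    scores = token_vectors @ query_vector / np.sqrt(d)  # scaled dot-product scores
    return softmax(scores)  # the distribution often read as "token importance"

rng = np.random.default_rng(0)
tokens = ["the", "movie", "was", "surprisingly", "good"]
weights = toy_attention_weights(rng.normal(size=(5, 8)), rng.normal(size=8))
for tok, w in zip(tokens, weights):
    print(f"{tok:>12s}  {w:.2f}")

Whether weights like these constitute a faithful explanation of a model's prediction is precisely the question the talk revisits.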
Delphi, and Whether Machines Can Learn Morality
Disagreement and conflict are vital for driving scholarly progress, social and scientific alike. In research, we often identify gaps in others' work and our own in order to present new ideas that remedy them. These disagreements are usually small in nature: we disagree on methods rather than on the research programme itself. In this talk, we discuss a disagreement of a different nature, one in which the substance of the disagreement is the existence of the task itself. We reflect on the experience of the conflict, how it was resolved, and what outcomes it has had.
In particular, Liwei will share her current interdisciplinary research journey on AI + humanity, sparked by the Delphi experience. She will introduce Value Kaleidoscope, a novel computational system that aims to model the potentially conflicting, pluralistic human values interwoven in human decision-making. Finally, she will talk about an exciting co-evolution opportunity unfolding between frontier AI technology and the humanities.
Zeerak will discuss ongoing work on the foundations and limits of machine learning and NLP with regard to ethically appropriate work. Specifically, they will discuss the use of the distributional hypothesis, the particular visions of our societies it offers, and how machine learning seeks to construct our future in the vision of the past.
The Role of Demonstrations: What In-Context Learning Actually Does
In-context learning (ICL) enables a language model (LM) to learn a new correlation between inputs and outputs at inference time, without explicit gradient updates. In this talk, we present a line of work centered on one research question: is the correctness of demonstrations needed for good ICL performance? Through a series of experiments and analyses, we delve into the nuances of this relationship across various experimental setups, models (plain LMs or instruction-tuned ones), and tasks (classification or generation). Our findings contribute to a broader understanding of how LMs engage in in-context learning, shedding light on which new correlations they can or cannot learn, and leading to a new line of research on discovering unexpected behaviors of LMs.
Relevant papers: Min et al. (2022), Yoo & Kim et al. (2022)
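For readers unfamiliar with the setup, the following minimal sketch shows how an ICL prompt is assembled from demonstrations, and how gold labels can be swapped for random ones, the kind of manipulation studied in the papers above. The prompt format, task, and label space here are assumptions for illustration, not the authors' exact setup.

import random

def build_icl_prompt(demos, test_input, shuffle_labels=False,
                     label_space=("positive", "negative")):
    # demos: list of (input_text, gold_label) pairs used as demonstrations.
    rng = random.Random(0)
    lines = []
    for text, gold in demos:
        # With shuffle_labels=True, each demonstration gets a random (possibly incorrect) label.
        label = rng.choice(label_space) if shuffle_labels else gold
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {test_input}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("A moving, beautifully shot film.", "positive"),
    ("Two hours I will never get back.", "negative"),
    ("The cast is superb and the script sharp.", "positive"),
]
prompt_gold = build_icl_prompt(demos, "Flat characters and a predictable plot.")
prompt_random = build_icl_prompt(demos, "Flat characters and a predictable plot.", shuffle_labels=True)

Comparing an LM's accuracy when conditioned on prompt_gold versus prompt_random, over many test inputs, probes whether demonstration correctness matters for ICL performance.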