LLM Reading Group (March 5, 19; April 2, 16, 30; May 14, 28; June 11)

Come meet the authors of seminal papers in LLM/NLP research and hear them talk about their work

By Human Feedback Foundation

Location

Online

Agenda

12:00 PM - 1:00 PM

March 5: The Linear Representation Hypothesis and the Geometry of LLMs

Kiho Park - Google Brain

12:00 PM - 1:00 PM

March 19: Who Are the Publics Engaging in AI?

Renee Sieber - McGill University

12:00 PM - 1:00 PM

April 2: Large Legal Fictions: Profiling Legal Hallucinations in LLMs

Matt Dahl, Varun Magesh, Mirac Suzgun - Stanford & Yale

12:00 PM - 1:00 PM

April 16: Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning

Marzieh Fadaee, Ahmet Ustun - Cohere For AI

12:00 PM - 1:00 PM

April 30: How Well Can LLMs Negotiate? Negotiation Arena Platform and Analysis

Federico Bianchi - Stanford University

12:00 PM - 1:00 PM

May 14: Detecting LLM Generated Text in Computing Education

Michael Liut - University of Toronto

12:00 PM - 1:00 PM

May 28: PERL: Parameter Efficient Reinforcement Learning from Human Feedback

Hakim Sidahmed - Google Research

12:00 PM - 1:00 PM

June 11: Is Model Collapse Inevitable? Breaking the Curse of Recursion

Matthias Gerstgrasser, Rylan Schaeffer - Stanford

About this event

This event grew out of a LinkedIn post by Andrew Ng asking, "If we were to do a course on the seminal papers in LLMs, what would we read?" We collated the answers, created a Discord group for exactly this purpose, and started a nonprofit, the Human Feedback Foundation, to make it easier to build human feedback and input into AI projects and models.

Alternate Tuesdays starting March 5, 12:00 - 1:00 PM on Zoom

Read the papers, join us for the talks listed below in the Agenda, and join us on Discord: https://discord.gg/urZj5vgV8h

March 5: The Linear Representation Hypothesis and the Geometry of Large Language Models

March 19: Who Are the Publics Engaging in AI?

April 2: Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models

April 16: Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning

April 30: How Well Can LLMs Negotiate? Negotiation Arena Platform and Analysis

May 14: Detecting LLM Generated Text in Computing Education: A Comparative Study for ChatGPT Cases

May 28: PERL: Parameter Efficient Reinforcement Learning from Human Feedback

June 11: Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data

Organized by

The Human Feedback Foundation provides human input to the open-source AI community. As AI has become a new paradigm for modern software, human feedback has emerged as an essential element of the AI tech stack. The Human Feedback Foundation builds public input into AI models used in critical domains like healthcare, governance, and democracy, and acts as an independent, third-party custodian of a global database of human feedback, giving AI builders everywhere an authoritative and democratic source of human feedback data.