The Center for Philosophy of Science at the University of Pittsburgh invites you to join us for our 65th Annual Lecture Series talks. All lectures will be held in room 1008 of the Cathedral of Learning (10th floor) at 3:30 pm EDT. If you can't join us in person, please visit our live stream on YouTube at https://www.youtube.com/channel/UCrRp47ZMXD7NXO3a9Gyh2sg.

The Annual Lecture Series, the Center’s oldest program, was established in 1960, the year Adolf Grünbaum founded the Center. Each year the series consists of six lectures, roughly three-quarters of which are given by philosophers, historians, and scientists from other universities.


Thomas Ryckman

Stanford University

Friday, September 27 @ 3:30 pm - 6:00 pm EDT

Title: Niels Bohr: Transcendental Physicist

Abstract:

While it would be unwarranted to label Bohr a “neo-Kantian”, or indeed an adherent of any philosophical school, his understanding of quantum theory crucially employs an intricate transcendental argument. Bohr deemed the quantum postulate, or “wholeness” of the interaction between the agency of measurement and the atomic system, to call into question a core epistemological distinction between subject and object familiar from the concept of ‘observation’ in everyday life and classical physics. Re-conceptualizing that distinction led to a redefinition of the term ‘phenomenon’, to a corresponding non-representationalist account of the wave function, and to situating the notion of objectivity within “conditions of the possibility of unambiguous communication”.


Colin Klein

Australian National University

Friday, October 11 @ 3:30 pm - 6:00 pm EDT

Title: Transformers, Representational Structure, and the Language of Thought

Abstract:

Transformers are an extraordinarily powerful computational architecture, applicable across a range of domains. They are, notably, the computational foundation of contemporary Large Language Models (LLMs). LLMs’ facility with language has led many to draw analogies between LLMs and human cognitive processing. Drawing out the consequences of what seems like an innocuous step—the need for positional encoding of the input to LLMs—I argue that transformers are broad precisely because they have so little built-in representational structure. This naturally raises questions about the need for structured representations and what (if any) advantage they might have over mere representation of structure. I develop this in particular in the context of the contemporary revival of the Language of Thought hypothesis.


Melanie Mitchell

Santa Fe Institute

Friday, November 22 @ 3:30 pm - 6:00 pm EDT

Title: AI’s Challenge of Understanding the World

Abstract:

I will survey a debate in the artificial intelligence (AI) research community on the extent to which current AI systems can be said to “understand” language and the physical and social situations language encodes. I will describe arguments that have been made for and against such understanding, hypothesize about what humanlike understanding entails, and discuss what methods can be used to fairly evaluate understanding in AI systems.


A reception with light refreshments will follow each talk in the 1008 lounge from 5:00 to 6:00 pm.

All lectures will be live-streamed on YouTube at https://www.youtube.com/channel/UCrRp47ZMXD7NXO3a9Gyh2sg.