Program
Friday, April 24
| Time | Session |
| --- | --- |
| 10:45 – 11:15 | Refreshments and Introductory Remarks |
| 11:15 – 12:45 | Jennifer Nagel (University of Toronto) “Metacognition and Backchannel in Conversation: Humans Versus Machines” |
| 12:45 – 2:30 | Lunch |
| 2:30 – 4:00 | Cameron Buckner (University of Florida) “Chains-of-Thought, Inner Speech, and Artificial Epistemic Agency” |
| 4:00 – 4:15 | Break |
| 4:15 – 5:45 | José Luis Bermúdez (Texas A&M University) “Large Language Models and the Computational Theory of Mind” |
Saturday, April 25
| Time | Session |
| --- | --- |
| 9:00 – 9:30 | Refreshments |
| 9:30 – 11:00 | Daniel Gregory (University of Valencia) “Inner Speech, Fragmentation, and Metacognition” |
| 11:00 – 11:15 | Break |
| 11:15 – 12:45 | Dorit Bar-On (University of Connecticut) “Expression and Inner Speech: Speaking One’s Mind in One’s Mind” |
| 12:45 – 2:30 | Lunch |
| 2:30 – 4:00 | Guy Dove (University of Louisville) “Thinking With Words and Talking to Ourselves” |
| 4:00 – 4:15 | Break |
| 4:15 – 5:45 | Edouard Machery (University of Pittsburgh) TBD |
| 6:15 – 9:00 | Conference Dinner; The Schorr Suite, The Center for Great Plains Studies (map) |
Abstracts
Jennifer Nagel (University of Toronto)
Metacognition and Backchannel in Conversation: Humans Versus Machines
Abstract: Face-to-face human conversation essentially involves the bidirectional flow of information, not just in the alternation of speakers, but in special patterns of responsive signaling (‘backchannel’) made by addressees, featuring tokens such as mhm, huh?, mm, yeah, and oh. I argue that these signals implement a form of metacognition at the level of the conversational dyad, enabling the dyad to build common knowledge together in a generator-evaluator structure. Meanwhile, large language models are compromised in their backchannel signaling, for reasons of both training and cognitive architecture. This defective backchannel reduces the epistemic value of the conversations in which LLMs participate, in ways that are subtly evident when LLMs are paired with human conversational partners, and much more strikingly evident in conversations between machines.
Cameron Buckner (University of Florida)
Chains-of-Thought, Inner Speech, and Artificial Epistemic Agency
Abstract: The frontier of AI is being pushed forward now by “Large Reasoning Models” (LRMs) that self-generate long textual “Chains-of-Thought” (CoTs) before answering user queries. The role of these CoTs in generating final answers was inspired by and generates obvious allusions to the roles played by inner speech in human reasoning and metacognition. In both cases, we might wonder whether access to causally relevant streams of linguistic representations might reveal the structure of the agent’s rational inferences or the way they construe their evidence as supporting their conclusions. I argue that there is a degree of negative epistemic parity in both cases: inner linguistic representations require interpretation in both cases, which limits the role such representations might play in rational explanations of inferences or luminous access to inferential grounds. However, in both cases inner linguistic representations might play a role in more forward-directed metacognitive control—though there are still important disanalogies in the epistemic architecture of humans and current artificial agents, especially involving epistemic feelings and the stable adjustment of inferential policies over time. These disanalogies limit the sense in which even frontier AI models possess the kind of individual perspective on the world through which such notions obtain their distinctive explanatory import, though this suggests less in-principle limitations than ambitious targets for near-term AI research.
José Luis Bermúdez (Texas A&M University)
Large Language Models and the Computational Theory of Mind
Abstract: One of the first lessons to draw from the success of LLMs is that the emphasis needs to be shifted in debates about the computational theory of mind (CTM). Both traditionally (in debates about the significance of neural networks) and more recently (in, e.g., the Quilty-Dunn, Porot, and Mandelbaum 2022 target paper in BBS), the principal issue has been taken to be one about the nature and structure of representation – role-filler independence, internal structure, and so forth. But LLMs raise a much deeper challenge, to do with the nature of computation itself. How should we understand operations over symbolically structured representations, such as, e.g., natural language sentences used in reasoning and decision-making? The possibility raised by LLMs is that these operations are ultimately to be understood in terms of mechanisms of pattern recognition learnt through forms of associative learning. Standard objections to associative models of cognition cite recursion and similar phenomena as insuperable obstacles. LLMs not only make these objections seem quaint, they also point towards ways in which complex reasoning and problem-solving can be understood in associationist terms. This paper articulates the precise nature of the challenges posed to CTM by large language networks – challenges that might be summarized informally in the question: Can we prove, in a non-question-begging way, that we are not (just?) next-word predictors?
Daniel Gregory (University of Valencia)
Inner Speech, Fragmentation, and Metacognition
Abstract: At least since Vygotsky, it has been said that inner speech is somehow condensed or fragmented, but there have been few clear suggestions as to exactly what this might mean. I suggest that, on most interpretations, it is either not true that inner speech is condensed/fragmented or true but not in a philosophically interesting way. I point towards some possible ways that inner speech might be condensed/fragmented in philosophically interesting ways, especially regarding metacognition.
Dorit Bar-On (University of Connecticut)
Expression and Inner Speech: Speaking One’s Mind in One’s Mind
Abstract: In earlier work, I have developed a neo-expressivist framework for addressing several standing philosophical puzzles connected with the notion of expression and expressive behavior. Two of these puzzles are directly relevant for my purposes here: a puzzle associated with so-called first-person authority and privileged self-knowledge and a seemingly unrelated puzzle concerning the origins of meaning*. As regards the first puzzle, I have argued that the neo-expressivist framework can allow us to accommodate two seemingly conflicting ideas: the Wittgensteinian idea that our so-called avowals (i.e., first-person mentalistic self-ascriptions) – like natural expressions – serve to express the self-ascribed states themselves, on the one hand, and the idea that our avowals can constitute instances of privileged factual-propositional knowledge, on the other hand. Regarding the second puzzle, I have argued that the neo-expressivist account allows us to accommodate important continuities, side-by-side with palpable discontinuities, between nonhuman and human expressive behaviors. And this helps address the question how meaningful linguistic communication could originate from animal communication.
After briefly reviewing the two puzzles and the neo-expressivist approach to solving them, I turn to my main topic in this talk: expression and inner speech. I lay out a conception of one type of inner speech that fits neatly with the neo-expressivist solutions to these puzzles. This type comprises episodes of inner speech in which an individual speaks their mind in their mind. In such episodes, individuals actively articulate current thoughts silently, instead of out loud. I will be arguing that this idea is – perhaps surprisingly – applicable to nonhuman animals. Contra Davidson and his followers, who famously argue that languageless animals can have no contentful thoughts, I will be arguing that nonhuman animals can have what I label “brute thoughts”, to which they give voice in analogues of inner speech episodes. Since, on my proposed view, the contents of brute thoughts are constrained by the crude meanings of brutes’ nonlinguistic means of expression, crediting languageless brutes with contentful thoughts does not fall afoul of Davidsonian worries concerning animal thoughts.
* Cf. inter alia Bar-On 2004, 2009, 2012, 2013a, 2013b, 2023, forthcoming, and Bar-On and Wright 2023.
Guy Dove (University of Louisville)
Thinking With Words and Talking to Ourselves
Abstract: A large body of evidence suggests that we think differently with words than we do without them. In other words, we have good reasons to think that the neurologically realized language system actively influences and shapes many cognitive processes. Language can be thought of as a kind of neuroenhancement or disruptive technology that extends our cognitive abilities. This talk explores how inner speech can be viewed as a form of thinking with words. In it, I develop an account of the functional features of inner speech that depends on the dynamic interaction of the language system with other cortical systems associated with interacting with and understanding the world. This account emphasizes the functional links between the language system and other systems associated with task-related attention and internally focused cognition.
Edouard Machery (University of Pittsburgh)
TBD