Anthony F. Beavers
Emeritus Professor of Philosophy
The University of Evansville
Affiliated Faculty, Cognitive Science
Indiana University
Research Interests
Philosophy of Artificial Intelligence; Philosophy of Cognitive Science; Philosophy of Information and Information Technology; History of Philosophy (Ancient, Modern, and Early Continental, including Phenomenology); Meta-ethics
Course Syllabus
Philosophical Foundations of the Cognitive and Information Sciences
Recent and Upcoming Academic Activity
Colloquium Presentation: "Cartesian Incursions into the Philosophy of Artificial Intelligence" - Department of the History and Philosophy of Science and Medicine, Indiana University, 1/16/2025.
Journal Article with Devin Wright on transcendence and information fecundity in the context of phenomenology and 4e cognition. In progress.
Critical Review with Marcello Guarini of Cameron Buckner's From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence, Oxford, 2024. In progress.
Keynote Speaker: Two talks on artificial intelligence in education - Fall Faculty Conference, The College of Southern Idaho.
"Information-Transfer vs. Attention-Directing Models of Cognitive Development," 8/13/2024.
This presentation will address critical differences in the way we understand the role of teaching in cognitive development. According to the information-transfer model, we imagine the educational enterprise as one of getting knowledge from the mind of the teacher into the mind of the student. But recent advances concerning the role of attention in both neuroscience and artificial intelligence suggest a different characterization: knowledge acquisition requires attention management, which is itself a learned skill, like reading, writing, and arithmetic. The teacher’s task, then, is not one of “informing” students, but of directing their attention to relevant environmental features so that they can discover on their own what it is that we want them to know. This bears on how we use technology in the classroom, the topic of my presentation tomorrow.
"Artificial Intelligence and the Changing Landscape of Educational Technology," 8/14/2024.
Given yesterday’s talk (see above), how must we understand teaching technologies in the context of attention management and skill acquisition? Here, I will examine a variety of technologies, from whiteboards to artificial intelligence, to explore under what conditions they are cognitively costly, beneficial, and/or innocuous. Ultimately, my agenda will be to outline opportunities and offer caveats for the appropriate use of artificial intelligence in education.
Journal Article with Eli B. McGraw: "Clues and Caveats concerning Artificial Consciousness from a Phenomenological Perspective." Phenomenology and the Cognitive Sciences, forthcoming, 2024.
In this paper, we use the recent appearance of LLMs and GPT-equipped robotics to raise questions about the nature of semantic meaning and how it relates to issues concerning artificially conscious machines. To do so, we explore how a phenomenology constructed out of the association of qualia (defined as somatically experienced sense data) and situated within a 4e enactivist program gives rise to intentional behavior. We argue that a robot without such a phenomenology is semantically empty and, thus, cannot be conscious in any way resembling human consciousness. Finally, we use this platform to address and supplement widely discussed concerns regarding the dangers of attempting to produce artificially conscious machines.
Journal Article: "Between Angels and Animals: The Question of Robot Ethics, or Is Kantian Moral Agency Desirable?" American Philosophical Quarterly, accepted pending revisions, 2024.
In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility of immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building a moral robot requires that we build into it the possibility of immoral behavior, I go on to argue that we cannot morally want robots to be genuine moral agents, but only beings that simulate moral behavior. Finally, I raise the question of why, if morality requires us to want robots that are not genuine moral agents, we should want something different in the case of human beings.
Local Presentation: "Modeling Cognition with Cellular Computing: On the Epistemic Importance of Seeing What You're Doing" - The Undergraduate Cognitive Science Student Organization, Indiana University, 4/3/2024.
In this presentation, I will show how to model some elementary features of cognition using cellular computing. We will look at both engineered and associative networks designed by this method to demonstrate how associativity can be used to learn the ABCs, basic sequences, and even a bit of natural language competence without parsing. The main takeaway will concern the value of (literally) seeing how associativity can account for these basic cognitive operations.
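By way of illustration only (the talk's own cellular-computing implementation is not reproduced here), the following sketch shows the kind of thing associativity can do. A generic Hebbian associative matrix, with one-hot letter codes and a winner-take-all recall rule chosen purely as assumptions for the example, learns the successor relation among letters and then recalls the whole alphabet from a single cue, with no parsing or symbolic rules involved.

    import numpy as np

    # Hypothetical sketch: a generic Hebbian associative matrix, not the
    # dynamic associative network architecture discussed in the talk.
    letters = [chr(c) for c in range(ord('A'), ord('Z') + 1)]
    index = {ch: i for i, ch in enumerate(letters)}
    n = len(letters)

    def one_hot(ch):
        v = np.zeros(n)
        v[index[ch]] = 1.0
        return v

    # Hebbian learning: strengthen the association from each letter to its successor.
    W = np.zeros((n, n))
    for prev, nxt in zip(letters, letters[1:]):
        W += np.outer(one_hot(nxt), one_hot(prev))

    # Recall: cue the network with "A" and repeatedly retrieve the most
    # strongly associated successor (winner-take-all).
    state = one_hot('A')
    recalled = ['A']
    for _ in range(n - 1):
        nxt = letters[int(np.argmax(W @ state))]
        recalled.append(nxt)
        state = one_hot(nxt)

    print(''.join(recalled))  # prints ABCDEFGHIJKLMNOPQRSTUVWXYZ

The point of such a toy model is that the sequence is recovered entirely from learned pairwise associations; nothing in the recall loop knows what a letter or an alphabet is.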
Local Presentation: "Changing Paradigms in Cognitive Science: Emerging Implications from Research with Dynamic Associative Networks" - GeoLab Meeting - Cognitive Science Program, Indiana University, 3/18/2024.
In this presentation, I will address changing paradigms in cognitive science by looking at some architectural features of dynamic associative networks and say a few words about what they imply for eliminating the inside-outside distinction from the theoretical framework of cognitive science. Along these lines, I will suggest that there are reasons from AI research to push in the direction of more biologically motivated enactivist paradigms and that doing so continues the effort to reframe central questions concerning cognition whose traditional framing may persistently stand in our way.
Local Presentation: "How LLMs Undermine the Last Bastion of Cartesianism and Why It Matters Today" - GeoLab Meeting - Cognitive Science Program, Indiana University, 2/5/2024.
Is cognitive science still recovering from a 350-year-old mistake? In this presentation, I will propose that, in addition to arguments made elsewhere, LLMs suggest 1) that the representational theory of mind is false, 2) that the information-transfer model of communication is false, and 3) that there is no symbol grounding problem to solve. I will do so specifically by addressing Descartes' criteria for mindedness, which include, among other linguistic competences, the ability to sequence speech. If Descartes had known about LLMs, he would never have been pushed to dualism in the first place and would have continued the trajectory that history provided him, one based on framing questions of cognition as guidance control problems.
The reason this matters to us today is that even though science has jettisoned any form of Cartesian dualism, it nonetheless often preserves a Cartesian notion of some interior, private mental space, though now physically realized and modified to allow an escape into the world through the body. This, I will submit, is not likely a picture Descartes would have endorsed if he had LLMs before him: the three claims above are thus shown to be forced by a specific model of cognition, a physicalized version of Descartes, now undone by current technology, signaling that we are at an important moment in the history of philosophical and psychological systems.
To be clear, under no circumstances will I argue that LLMs are minded. The issue is best understood along these lines: If LLMs pass Descartes' test, then more than just dualism is jettisoned. The model of self as contained in a "cabinet of consciousness" is also jettisoned. Several important consequences follow.