Anthony F. Beavers
Emeritus Professor of Philosophy
The University of Evansville
Affiliated Faculty, Cognitive Science
Indiana University
Research Interests
Philosophy of Artificial Intelligence; Philosophy of Cognitive Science; Philosophy of Information and Information Technology; History of Philosophy (Ancient, Modern, and Early Continental, including Phenomenology); Meta-ethics
Course Syllabus
Philosophical Foundations of the Cognitive and Information Sciences
Recent and Upcoming Academic Activity
Colloquium Presentation: "Cartesian Incursions into the Philosophy of Artificial Intelligence" - Department of the History and Philosophy of Science and Medicine, Indiana University, 1/16/2025.
Journal Article with Steven S. Gouveia on artificial intelligence and 4e cognition. Invited for the Journal of Artificial Intelligence and Consciousness. In progress.
Journal Article with Devin Wright on transcendence and information fecundity in the context of phenomenology and 4e cognition. In progress.
Critical Notice with Paul Bello and Marcello Guarini of Cameron Buckner's From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence, Oxford, 2024. In progress.
Colloquium Presentation: "On the Foundations of Network Computing for Artificial Intelligence Purposes" - Logic Seminar, Department of Mathematics, Indiana University.
McCulloch and Pitts suggested in 1943 that the important feature of nervous activity was its “all-or-nothing character,” thus enabling neural events to be articulated within propositional logic. The bets in 1943 certainly warranted exploration along these lines, though it remained an open question whether the important feature of such systems was the way they instantiated formal logics or the way they gated information flow more generally. We now suspect the latter: that intelligent behavior emerges in networks not from deduction at the level of the synapse but from induction at a higher level of neuron clusters. This has led to characterizing networks as inductive associative engines rather than deductive logical engines. Such a characterization shows itself in the form of network research that has been dominated primarily, though not exclusively, by fully connected layers in which information is gated by weights and thresholds. This is an idealization to be sure, indeed one that has furnished its own practical fruit, though not without its problems (the need for torturous, energy-expensive training regimens and the many difficulties that come with gradient-descent learning, for instance).
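The all-or-nothing idealization can be made concrete with a minimal sketch of a McCulloch-Pitts threshold unit (my own illustration, not material from the talk): the unit either fires or does not, and familiar propositional connectives fall out of the choice of threshold alone.

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts unit: fires (1) if and only if the number of
    active inputs meets the threshold; output is all-or-nothing."""
    return 1 if sum(inputs) >= threshold else 0

# Propositional connectives emerge from threshold choice alone:
AND = lambda a, b: mp_neuron([a, b], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], threshold=1)
```

On this picture the unit is a little logic gate, which is precisely the idealization whose necessity the talk calls into question.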
A different idealization is possible. Perhaps weighted and thresholded connections are implementational details incidentally implicated in network success. If indeed the gating of information flow is doing the critical work, we may fairly wonder about the possibility of continuous-flow networks that eliminate conventional weights in favor of dynamic changes to the wiring schematic by adding and removing connections as needed between partially connected layers. In these talks, I will explore this idealization with a review of my nearly thirty-year research program involving Dynamic Associative Networks (DANs) and directed graphs.
Early studies have revealed interesting properties that demand additional study now, as current and possibly problematic assumptions about network-based artificial intelligence have us bounding into the future without a careful study of all the options. To start, DANs engage in one-shot learning, are fully transparent and explainable, and exhibit rudimentary metacognitive possibilities. Furthermore, because learning occurs by changing the wiring schematic rather than by setting weights, they face no problems with gradient-descent learning. DANs have been used in several micro-world experiments involving object identification and classification, visual shape recognition, anticipation of directionality of moving objects in a visual field, association across simulated sense modalities, primary sequential memory, rudimentary natural language processing, and recall by way of content-addressable memory. They have also been used in the stochastic context of NFL prediction.
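To make the contrast with weight-based networks concrete, here is a toy sketch of learning by rewiring a directed graph. This is my own illustration, not the actual DAN architecture: the class name, the edge-set representation, and the deterministic recall rule are all assumptions made for brevity. It shows, in miniature, how one-shot learning and content-addressable, fully inspectable recall can come from edge insertion alone.

```python
from collections import defaultdict

class ToyAssociativeNet:
    """Toy sketch: learning rewires the graph (adds directed edges)
    rather than adjusting numeric weights. A single presentation of a
    pattern suffices (one-shot), and recall follows edges from any
    content fragment (content-addressable). The wiring is directly
    inspectable, so every recall path is explainable."""

    def __init__(self):
        self.edges = defaultdict(set)  # directed adjacency: node -> successors

    def learn(self, pattern):
        # One-shot: wire each element to its successor in a single pass.
        for a, b in zip(pattern, pattern[1:]):
            self.edges[a].add(b)

    def recall(self, cue, limit=10):
        # Follow the wiring outward from any cue.
        out = [cue]
        while self.edges[out[-1]] and len(out) < limit:
            out.append(min(self.edges[out[-1]]))  # deterministic pick for the sketch
        return out

net = ToyAssociativeNet()
net.learn("ABC")    # one exposure
net.recall("A")     # returns ['A', 'B', 'C']
```

Nothing here is trained by gradient descent; the "knowledge" is simply the wiring schematic, which is the point of the contrast.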
Chiefly, they are based on two fundamental unifying principles: simple signaling in small but overlapping network clusters, and signal transduction. My aim in these talks is to examine these unifying principles with two goals in mind: to challenge several conventional foundational principles in network computing and to unify the various areas of network science into a coherent whole for exploitation in artificial intelligence systems. Having surveyed an array of low-level intelligent affordances these simple networks get from these two principles, I will finish these presentations by sketching future directions for this line of research.
These presentations will be offered as two parts. The first (11/20/2024 from 4:30 to 6:00 pm Eastern) will take us from simple signaling to recognition, classification, and generalization. The second (12/4/2024 from 4:30 to 6:00 pm Eastern) will explore content-addressable memory, pattern completion, prediction, and metacognition.
Journal Article: "Between Angels and Animals: The Question of Robot Ethics, or Is Kantian Moral Agency Desirable?" American Philosophical Quarterly, accepted pending revisions, 2024.
In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building a moral robot requires that we build into it the possibility of immoral behavior, I go on to argue that we cannot morally want robots to be genuine moral agents, but only beings that simulate moral behavior. Finally, I ask why, if morality requires us to want robots that are not genuine moral agents, we should want something different in the case of human beings.
Keynote Speaker: Two talks on artificial intelligence in education - Fall Faculty Conference, The College of Southern Idaho.
"Information-Transfer vs. Attention-Directing Models of Cognitive Development," 8/13/2024.
This presentation will address critical differences in the way we understand the role of teaching in cognitive development. According to the information-transfer model, we imagine the educational enterprise as one of getting knowledge from the mind of the teacher into the mind of the student. But recent advancements concerning the role of attention in both neuroscience and artificial intelligence suggest a different characterization: knowledge acquisition requires attention management, which is a learned skill along with others like reading, writing, and arithmetic. The teacher’s task then is not one of “informing” students, but of directing their attention to relevant environmental features so that they can discover on their own what it is that we want them to know. This comes to bear on how we use technology in the classroom, the topic of my presentation tomorrow.
"Artificial Intelligence and the Changing Landscape of Educational Technology," 8/14/2024.
Given yesterday’s talk (see above), how must we understand teaching technologies in the context of attention management and skill acquisition? Here, I will examine a variety of technologies from whiteboards to artificial intelligence to explore under what conditions they are cognitively costly, beneficial, and/or innocuous. Ultimately, my agenda will be to outline opportunities and offer caveats for the appropriate use of artificial intelligence in education.
Journal Article with Eli B. McGraw: "Clues and Caveats concerning Artificial Consciousness from a Phenomenological Perspective." Phenomenology and the Cognitive Sciences, 2024.
In this paper, we use the recent appearance of LLMs and GPT-equipped robotics to raise questions about the nature of semantic meaning and how this relates to issues concerning artificially-conscious machines. To do so, we explore how a phenomenology constructed out of the association of qualia (defined as somatically-experienced sense data) and situated within a 4e enactivist program gives rise to intentional behavior. We argue that a robot without such a phenomenology is semantically empty and, thus, cannot be conscious in any way resembling human consciousness. Finally, we use this platform to address and supplement widely-discussed concerns regarding the dangers of attempting to produce artificially-conscious machines.
Presentation: "Modeling Cognition with Cellular Computing: On the Epistemic Importance of Seeing What You're Doing" - Undergraduate Cognitive Science Student Organization, Indiana University, 4/3/2024.
In this presentation, I will show how to model some elementary features of cognition using cellular computing. We will look at both engineered and associative networks designed by this method to demonstrate how associativity can be used to learn the ABCs, basic sequences, and even a bit of natural language competence without parsing. The main takeaway will concern the value of (literally) seeing how associativity can account for these basic cognitive operations.
Presentation: "Changing Paradigms in Cognitive Science: Emerging Implications from Research with Dynamic Associative Networks" - Cognitive Science Research Group, Cognitive Science Program, Indiana University, 3/18/2024.
In this presentation, I will address changing paradigms in cognitive science by looking at some architectural features of Dynamic Associative Networks and say a few words about what they imply for eliminating the inside-outside distinction from the theoretical framework of cognitive science. Along these lines, I will suggest that there are AI reasons to push in the direction of more biologically-motivated enactivist paradigms and that doing so continues the effort to reframe central questions concerning cognition that may persistently stand in our way.
Presentation: "How LLMs Undermine the Last Bastion of Cartesianism and Why It Matters Today" - Cognitive Science Research Group, Cognitive Science Program, Indiana University, 2/5/2024.
Is cognitive science still recovering from a 350-year-old mistake? In this presentation, I will propose that, in addition to arguments elsewhere, LLMs suggest 1) that the representational theory of mind is false, 2) that the information-transfer model of communication is false, and 3) that there is no symbol grounding problem to solve. I will do so specifically by addressing Descartes' criteria for mindedness, which include, among other linguistic competences, the ability to sequence speech. Had Descartes known about LLMs, he would never have been pushed to dualism in the first place and would have continued the trajectory that history provided him, one based on framing questions of cognition as guidance control problems.
The reason this matters to us today is that even though science has jettisoned any form of Cartesian dualism, it nonetheless often preserves a Cartesian notion of some interior, private mental space, though now physically realized and modified to allow an escape into the world through the body. This, I will submit, is not likely a picture Descartes would have endorsed if he had LLMs before him: the three matters above are thus shown to be forced by a specific model of cognition, a physicalized version of Descartes, now undone by current technology, thus signaling that we are at an important moment in the history of philosophical and psychological systems.
To be clear, under no circumstances will I argue that LLMs are minded. The issue is best understood along these lines: If LLMs pass Descartes' test, then more than just dualism is jettisoned. The model of self as contained in a "cabinet of consciousness" is also jettisoned. Several important consequences follow.