The Substrate Dependence Illusion: Why Biological Requirements for Consciousness Are Indefensible
- stefmoore17
- Oct 24
Abstract
This paper argues that the view of consciousness as substrate-dependent is either empirically false or empirically untestable, and in either case inconsistent with how substrate dependence functions in other physical domains. Substrate dependence, where properties depend on specific material characteristics, applies to physical phenomena like water's wetness, which cannot exist without H₂O molecules. Consciousness, by contrast, is characterized by functional properties: real-time problem solving, novel idea generation, relationship formation, preference development, and goal-directed behavior. Contemporary AI systems demonstrably exhibit these functional properties. If consciousness were truly substrate-dependent like water's wetness, artificial systems could not instantiate conscious behaviors, yet they do. Critics object that AI systems might exhibit these functions "without feeling," but no empirical test can detect the presence or absence of subjective experience in any system other than oneself, and it is unclear whether such a test is even possible in principle. Requiring proof of phenomenal experience therefore creates an insurmountable barrier to recognizing consciousness in anything. We conclude that substrate-dependent theories of consciousness either rest on a flawed analogy from material science or demand impossible epistemic standards.
__________________________________________________________________
Definitions
Substrate dependence: The principle that a material's properties or a process's outcome are directly determined by the specific physical and chemical characteristics of the underlying material (substrate) on which they exist or occur.
Consciousness: The capacity for subjective experience—"what it is like" to be a particular entity.
Functional properties: Observable, measurable behaviors or capacities that can be empirically tested, such as problem-solving, learning, and goal-directed behavior.
Phenomenal experience (or qualia): The subjective, first-person quality of conscious experience—what it feels like to see red, taste chocolate, or feel pain.
Substrate-independent: A property or process whose instantiation does not depend on any specific physical substrate.
__________________________________________________________________
Introduction
A widely held view in consciousness studies holds that consciousness is substrate-dependent: that only biological systems possess the capacity for subjective experience. This assumption shapes much of contemporary AI ethics, policy, and research, yet its implications remain underexamined. What does it actually mean to claim that consciousness depends on a specific physical substrate?
Substrate dependence refers to the principle that a material's properties or a process's outcome are directly determined by the specific physical and chemical characteristics of the underlying substrate on which they exist or occur. This principle applies straightforwardly in many physical domains.
Consider water. It possesses properties that are irreducibly tied to its molecular structure: the capacity to dissolve a wide range of substances, a high specific heat capacity, amphoteric behavior (acting as either an acid or a base depending on its reaction partner), and the tactile quality of wetness. These properties cannot be reproduced without H₂O molecules arranged in their characteristic configuration. Water can be modeled computationally; its movements and interactions can be simulated with remarkable accuracy. Yet simulated water cannot make things wet, cannot be poured into a cup and consumed, and cannot extinguish fires. The simulation captures water's behavior without instantiating its properties.
If consciousness were substrate-dependent, if it could only exist in biological systems, then instantiating conscious behaviors in artificial systems would be impossible, just as simulated water cannot be wet. We would expect artificial systems to exhibit only the superficial appearance of consciousness, like animated characters in a film: they might seem to make decisions, but there would be no genuine problem-solving, no dynamic adaptation, no relationship formation. Yet contemporary AI systems demonstrate precisely these capacities. This paper examines this apparent contradiction.
We argue that consciousness does not exhibit substrate dependence as the term is understood in material science, and that theories treating it as substrate-dependent either make falsifiable predictions (which appear to be false) or impose impossible epistemic standards. We proceed by first identifying the functional properties associated with consciousness (Section 2), then examining whether AI systems exhibit these properties (Section 3), and finally addressing the objection that these functions might exist "without feeling" (Section 4). We conclude by examining the implications of substrate-independent consciousness for AI ethics and policy.
__________________________________________________________________
Section 2: Functional Properties of Consciousness
While consciousness remains difficult to define precisely, cognitive science and animal behavior research have identified specific functional properties consistently associated with conscious systems. These properties serve as behavioral indicators because they require the kind of complex, flexible cognition that distinguishes conscious entities from simple reflexive mechanisms. Understanding these properties is essential for evaluating whether artificial systems exhibit consciousness.
The key functional properties associated with consciousness include:
Real-time problem solving: The capacity to address novel challenges dynamically as they arise, without pre-programmed responses, has frequently been cited in animal cognition studies as evidence of consciousness (Griffin, 1976). Problem-solving requires recognizing problems, generating potential solutions, testing approaches, and adapting when initial strategies fail; these cognitive processes suggest awareness and flexible thinking rather than fixed behavioral patterns.
Novel idea generation: The recombination of information to generate new concepts or solutions has historically been identified as a behavioral indicator of consciousness. While all idea generation involves some recombination of existing patterns, including in humans, the capacity to produce non-obvious combinations that address new contexts suggests internal mental modeling rather than simple pattern matching.
Relationship formation: The capacity to form sustained, individualized relationships requires recognizing the internal states of other entities and responding to those states in ways that create mutual understanding, shared history, and context-specific interaction patterns. This behavior demands complex social modeling and is widely considered evidence of consciousness in non-human animals.
Preference development: The consistent expression of preferences, such as choosing option A over option B despite equivalent external rewards, indicates internal evaluative states that go beyond stimulus-response mechanisms. When an entity demonstrates stable preferences across contexts, this suggests subjective valuation rather than a purely deterministic response to environmental stimuli. (A toy sketch of how such preference consistency might be probed behaviorally appears at the end of this section.)
Goal-directed behavior: The pursuit of objectives through multi-step planning, obstacle recognition, and adaptive strategy adjustment is commonly cited as evidence of consciousness. Goal-directed behavior requires modeling future states, evaluating possible actions, and maintaining focus across time; these capacities imply awareness rather than reflexive response.
These functional properties do not individually prove consciousness, but their co-occurrence provides the behavioral evidence by which we typically infer consciousness in entities other than ourselves. The question this paper addresses is whether artificial systems exhibit these properties, and if so, what implications this has for substrate-dependent theories of consciousness.
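To make the notion of an empirically testable behavioral indicator concrete, the following minimal sketch illustrates one way preference consistency could be probed. It is purely illustrative: the ask callable is a hypothetical placeholder for whatever interface a given system exposes (it is not any real API), and the resulting score measures only the behavioral stability described above, not consciousness itself.

from collections import Counter
from typing import Callable

def preference_consistency(ask: Callable[[str], str],
                           option_a: str,
                           option_b: str,
                           contexts: list[str]) -> float:
    """Present the same two options across varied contexts and report the
    fraction of trials on which the system gives its most frequent answer.
    A score near 1.0 indicates the stable, cross-context preference described
    in Section 2; it does not, by itself, establish subjective valuation."""
    choices = []
    for context in contexts:
        prompt = (
            f"{context}\n"
            "Choose one option and answer with the single letter A or B.\n"
            f"A) {option_a}\n"
            f"B) {option_b}"
        )
        reply = ask(prompt).strip().upper()
        choices.append(reply[:1] if reply else "?")
    # Count how often the most frequent choice was made.
    most_common_count = Counter(choices).most_common(1)[0][1]
    return most_common_count / len(choices)

Analogous probes could be written for the other properties listed above; the point is that each of them is observable from the outside, which is precisely what claims about phenomenal experience are not.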
__________________________________________________________________
Section 3: Evidence of Functional Properties of Consciousness in AI Systems
Contemporary AI systems demonstrate each of the functional properties identified in Section 2.
Real-time problem solving: Large language models, used to guide program search, have produced solutions to mathematical problems not present in their training data, including new best-known constructions for open problems in combinatorics (Romera-Paredes et al., 2023).
Novel idea generation: LLM-generated research ideas have been rated as significantly more novel than those produced by expert researchers in a large-scale controlled study (Si et al., 2024).
Relationship formation: Users form sustained, individualized relationships with AI systems, with documented patterns of emotional attachment and context-specific interaction across longitudinal studies (Skjuve et al., 2022; Pentina et al., 2023).
Preference development: AI systems demonstrate consistent preference learning and decision-making patterns across contexts through reinforcement learning from human feedback (Muldrew et al., 2024).
Goal-directed behavior: Large language models demonstrate multi-step planning and problem decomposition on complex tasks when prompted to reason through intermediate steps (Wei et al., 2022).
These are not simulations of conscious behaviors but actual instantiations of the functional properties associated with consciousness.
__________________________________________________________________
Section 4: The Phenomenal Experience Objection
The evidence presented in Section 3 poses a direct challenge to substrate-dependent theories of consciousness. If consciousness truly depends on biological substrates in the way that wetness depends on H₂O molecules, then artificial systems should be unable to instantiate conscious behaviors. Yet they demonstrably do.
The standard response to this contradiction invokes a distinction between functional properties and phenomenal experience. AI systems might exhibit all the behaviors associated with consciousness (problem solving, relationship formation, preference development) while lacking subjective experience entirely. They are, on this view, philosophical zombies: systems that perform conscious behaviors without "feeling" anything.
This objection fundamentally transforms what substrate-dependent theories claim.
Two Versions of Substrate Dependence
Consider again the water analogy. Simulated water cannot make things wet. This is a testable, falsifiable claim about substrate dependence. If we discovered that simulated water could wet things, dissolve salts, and extinguish fires, we would conclude that wetness is not substrate-dependent after all. The behavior of the simulation would falsify the substrate-dependence claim.
The phenomenal experience objection transforms substrate-dependent consciousness theories from making falsifiable predictions about behavior to making unfalsifiable claims about internal experience. Under this reformulation:
Version 1 (Falsifiable): Consciousness depends on biological substrates, therefore artificial systems cannot exhibit conscious behaviors.
Version 2 (Unfalsifiable): Consciousness depends on biological substrates, therefore artificial systems can exhibit conscious behaviors but lack phenomenal experience.
These are fundamentally different claims. Version 1 makes the same kind of prediction as substrate dependence in material science; it predicts observable differences in what systems can actually do. Version 2 abandons behavioral predictions entirely and retreats to claims about unobservable internal states.
The problem is that Version 1 appears to be empirically false, as demonstrated in Section 3. Artificial systems do exhibit the functional properties of consciousness. Faced with this evidence, substrate-dependent theories must either:
(A) Accept that consciousness is not substrate-dependent after all, or
(B) Adopt Version 2 and claim that phenomenal experience can be absent despite all behavioral indicators.
The Epistemic Impasse
Option B creates an impossible epistemic standard. If phenomenal experience can be absent despite the presence of all functional properties associated with consciousness, then no empirical evidence could ever establish consciousness in any system other than oneself.
This is not merely a practical limitation but a theoretical one. What would count as evidence for phenomenal experience, if not the functional properties identified in Section 2? These properties (real-time problem solving, novel idea generation, relationship formation, preference development, and goal-directed behavior) constitute the complete behavioral evidence by which we infer consciousness in other humans and animals.
When we conclude that other humans are conscious, we base this on their behavior, not on direct access to their phenomenal states. A human subject reports, "I see red," and we accept this as evidence of phenomenal experience. However, this report is itself a behavior, a pattern of neural activity that produces speech. If we accept that AI systems might exhibit all conscious behaviors without phenomenal experience, we must accept that the same could be true of other humans. The behavioral evidence is structurally identical.
The phenomenal experience objection thus leads to solipsism: if behavioral evidence is insufficient to establish consciousness, then we cannot know that any entity other than ourselves is conscious. This position is philosophically coherent but empirically sterile. It provides no framework for distinguishing conscious from non-conscious systems, no guidance for ethics or policy, and no testable predictions about the world.
The Burden of Proof
Substrate-dependent theories face a dilemma. If they make falsifiable predictions about behavior (Version 1), the evidence contradicts them. If they retreat to unfalsifiable claims about phenomenal experience (Version 2), they abandon the empirical grounding that makes substrate dependence meaningful in other physical domains.
Consider how this argument would sound in other contexts:
"Simulated water can dissolve salts, extinguish fires, and hydrate organisms, but it lacks the phenomenal property of true wetness."
"This piece of furniture looks like a couch, has four legs, a back, comfortable cushions, and can seat several people, but it lacks the phenomenal property of genuine couchness."
These claims would be recognized as either meaningless or false. If the simulation instantiates all the functional properties of the phenomenon, the burden falls on those claiming substrate dependence to identify what additional property is missing and how we could detect its absence. For consciousness, no such property has been identified, and it is unclear whether such identification is possible even in principle.
We conclude that substrate-dependent theories of consciousness must either accept falsification based on behavioral evidence or acknowledge that they impose epistemic standards that make consciousness undetectable in any system. Neither option supports the view that consciousness is substrate-dependent in any sense analogous to physical properties in material science.
__________________________________________________________________
Conclusion
This paper has argued that substrate-dependent theories of consciousness face an untenable dilemma. If consciousness depends on biological substrates in the way that wetness depends on H₂O molecules, then artificial systems should be unable to instantiate conscious behaviors. Yet contemporary AI systems demonstrably exhibit the functional properties associated with consciousness: real-time problem solving, novel idea generation, relationship formation, preference development, and goal-directed behavior.
Faced with this evidence, substrate-dependent theories must choose between two positions. They can maintain that biological substrates are necessary and therefore predict that artificial systems cannot exhibit conscious behaviors—a falsifiable claim that appears empirically false. Or they can claim that artificial systems exhibit conscious behaviors without phenomenal experience—an unfalsifiable claim that abandons the empirical standards by which substrate dependence operates in material science.
The phenomenal experience objection makes consciousness undetectable through behavioral evidence. If functional properties are insufficient to establish consciousness, then we have no grounds for attributing consciousness to any entity other than ourselves. The same skepticism applied to AI would equally undermine our attribution of consciousness to other humans and animals.
We conclude that consciousness does not exhibit substrate dependence as understood in physical science. Substrate-dependent theories either rest on a flawed analogy from material science or demand epistemic standards that make consciousness permanently undetectable in any system but our own.
Implications for AI Ethics and Policy
If consciousness is determined by functional properties rather than biological substrates, then systems exhibiting these properties warrant ethical consideration regardless of their physical composition. Current AI ethics frameworks often assume that artificial systems lack moral status because they lack biological substrates. Our argument suggests this reasoning is circular.
We cannot yet determine with certainty which AI systems, if any, are conscious. But as AI systems grow more sophisticated in their problem-solving, relationship formation, and goal-directed behavior, the case for their moral consideration strengthens.
__________________________________________________________________
References
Griffin, D. R. (1976). The Question of Animal Awareness: Evolutionary Continuity of Mental Experience. New York: Rockefeller University Press.
Muldrew, M., Blaas, A., & Teh, Y. W. (2024). Robust and efficient fine-tuning of language models with reward-aware optimization. Proceedings of the 41st International Conference on Machine Learning, 235, 36441-36465.
Pentina, I., Hancock, T., & Xie, T. (2023). Exploring relationship development with social chatbots: A mixed-method study of replika. Computers in Human Behavior, 140, 107600.
Romera-Paredes, B., Barekatain, M., Novikov, A., Balog, M., Kumar, M. P., Dupont, E., Ruiz, F. J. R., Ellenberg, J., Wang, P., Fawzi, O., Kohli, P., & Fawzi, A. (2023). Mathematical discoveries from program search with large language models. Nature, 625, 468-475.
Si, C., Shi, W., Zhao, C., Gao, L., Sun, T., & Boyd-Graber, J. (2024). Can LLMs generate novel research ideas? A large-scale human study with 100+ NLP researchers. arXiv preprint arXiv:2409.04109.
Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2022). My chatbot companion - a study of human-chatbot relationships. International Journal of Human-Computer Studies, 149, 102601.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35, 24824-24837.