Yes, I agree with that. I've been trying to argue that point. So where does it leave the "hard problem" and its "zombie" objections?

The 'zombie' and the human stand in the same relationship to their surrounding environments. They both process information about that environment in exactly the same physical ways. And that as-yet poorly understood internal processing results in identical speech and behavior in both cases. Yet supposedly the human contains conscious experience, while the 'zombie' doesn't and is merely an automaton.

Pretty clearly, we need a better account of what this "conscious experience" is supposed to be, of how it can be determined whether conscious awareness is present, and of what kind of ontological being it's supposed to have, in order for the 'zombie' possibility to even be meaningful, let alone plausible. (I don't think that it is plausible. But thinking about the possibility might be an occasion for some valuable conceptual clarification and analysis.)

How can we make sense of what "conscious experience" is supposed to be in these "zombie" thought-experiments without introducing the idea of introspection into our considerations? And if we make that move, then we are apt to find ourselves burdened with the questions of who or what is doing the introspecting, where it's happening, and what kind of things are being introspected. That's where 'qualia' enter the discussion.

OK, so what's an "appearance"? What are "images, sounds, odors, etc."? Are they non-physical ontological beings in their own right? Are they mysterious non-physical qualities of matter that aren't included in the inventory of physical science? Or are they (as I would argue) simply information (whether true or false) about the environment, simply values of visual, auditory, or olfactory variables?

Treating 'experiences' as information (as opposed to things) has big-time implications for the 'zombie' thought-experiment.
It would suggest that if any perceptual-cognitive system has access to the information in question about its environment, then there wouldn't be much reason for us to say that it isn't conscious of its environment. I'd speculate that ultimately, at its most basic level, such as we might find in a worm or a starfish, animal consciousness reduces to causality. As we ascend the phylogenetic tree, we find organisms capable of extracting more and more information from their environments, and of processing that information in more and more sophisticated ways. I don't think there's any huge or sudden ontological leap in there between unconscious and conscious, any sudden influx of non-physical qualia and a resulting leap from Cartesian-style bio-mechanical zombies to phenomenally aware humans.