Though I am very interested in AI, I think the focus on consciousness is probably misguided.
I used to believe the direct opposite of this, by the way, but I'm not sure anymore that consciousness is important, much less necessary, for creating even a human-level AI.
Cognition without consciousness. Intelligence without same. Seems outlandish, but only because we get to define "consciousness."
But consciousness itself just might be an adaptive trick to boost limited mental capacity to something actually useful in the world. More useful than not possessing it, that is. Not necessary for anything less limited. Or it might be something else altogether. But I'm no longer convinced it's necessary. In fact, it might even be harmful.
Of course, one can argue that if you've simulated consciousness, then that is no different than consciousness.
I don't have a strong rebuttal to that contention.
But.
It's only contingently the case that the withdrawal reflex is adaptive. It's completely unconscious. I don't see any reason why other, more complex actions that in humans are "conscious" could not be arranged just as the withdrawal reflex is.
I know it sounds ridiculous to talk this way, since our entire mental architecture is against it, but I can imagine an "empathy reflex" or a "navigation reflex" with no gestalt, with no "thinker" needed or present.
It seems to me that what we think of as higher-level thought is just an accident of adaptation, and that in a different environment it could exist just as well without the architecture of consciousness.
Meaning without a "meaner."
These ideas could be completely wrong. I don't know. But I'd rather have big questions out there than spend my life in boring certitude.
You might really enjoy reading Peter Watts' "Blindsight", then. It's the sort of sf book that comes with a few dozen pages of bibliography, and the adaptiveness of consciousness is one of its major themes.