Philosopher Warns on AI: World without Consciousness

I watched this video last week at the Beneficial AI 2017 Conference as we were establishing the Asia AI Hub in South Korea. It's quite a long discussion with Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, Jaan Tallinn and moderator Max Tegmark about ‘what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.’

David Chalmers, a world expert in consciousness, has been cited across the media for these warnings:

“For me, that raises the possibility of a massive failure mode in the future, the possibility that we create human or superhuman level AGI and we’ve got a whole world populated by superhuman level AGIs, none of whom is conscious. And that would be a world, could potentially be a world of great intelligence, no consciousness, no subjective experience at all. Now, I think many, many people, with a wide variety of views, take the view that basically subjective experience or consciousness is required in order to have any meaning or value in your life at all. So therefore, a world without consciousness could not possibly be a positive outcome. Maybe it wouldn’t be a terribly negative outcome, it would just be a zero outcome, and among the worst possible outcomes.”

His solution to this is to create robots with human-like consciousness.

“I mean, one thing we ought to at least consider doing there is making, given that we don’t understand consciousness, we don’t have a complete theory of consciousness, maybe we can be most confident about consciousness when it’s similar to the case that we know about the best, namely human consciousness… So, therefore maybe there is an imperative to create human-like AGI in order that we can be maximally confident that there is going to be consciousness,” Chalmers states at 23:51.

This might all sound horrifying – or like lunacy – to some. Personally, I am not convinced that this can be done. I don’t think scientists yet understand consciousness. But what this does do is throw up more discussion about what it is to be human. And I think that this will be one of the unintended consequences of AI and the Fourth Industrial Revolution. This is a great dialogue to have. Despite the doom and gloom about robots out there, there seems to be a global move emphasising creativity. In my view, this is natural with the end of the industrial age.

The full video brings up some interesting points.

2 Comments

  1. This is a very interesting discussion, Benjamin. Thanks for posting it.

    What is most striking is what is missing from this dialogue. We basically have a “talking head” panel – a group of middle-aged, middle-class white men who have been processed through the same kinds of educational systems which focus intelligence in very precise ways, while diverting it away from other “cognitive” processes. So the discourse is very narrow.

    The fact that their own survey indicated that they consider consciousness to be the least important variable in AI Futures is particularly telling.

    There is zero discussion about internal human experience, “spiritual” awareness or what I call integrated intelligence. All participants either know nothing about these domains, or the discussion of them is an effective “taboo” – which is what the facilitator mentioned in regard to “consciousness.”

    I know Sam Harris has some experience and interest in these areas, but he retains the misunderstandings and blind spots of mainstream scientific discourse, in my opinion. He has very strong biases, and a rather rigid mindset.

    Harris said he’d like a superintelligent future AI to be able to nurture and guide humans to create better and wiser futures. Yet we already have this built into the human psyche as part of our conscious evolutionary functioning. These people just don’t seem to know about it. We as individuals do have access to conscious guidance, as does the human race. Of course, AI could potentially enhance this wisdom function.

    To gain a greater understanding of the potential benefits and problems of AI, we need broader participation across more communities. We need to shift away from this talking head syndrome, to include input from more embodied and self-conscious individuals, including women and non-western people. David Chalmers is onto something when he talks about the danger of a non-sentient but super-intelligent future “civilisation.” Yet this also reflects the problem that our modern education and cognitive social construction focusses upon abstract conceptualisation and verbal-linguistic modes of expression. This conditioning process is already removing or obfuscating a significant range of “sentient” experience from modern humans. That imbalance needs to be corrected, so that this pathology does not become transferred into our visions of AI futures, thus unconsciously perpetuating the problem and expanding it into dystopian futures. In order to avoid such scenarios, the problem of our current psycho-spiritual imbalance must first be acknowledged. I see no evidence from this video discussion that any participant is even aware of the issue.

    This reminds us of the fish-in-water analogy: the fish is unaware of the water it lives in. Without outside perspectives, without a distancing from the problem, we cannot see ourselves. And that is why these conferences must be multi-disciplinary and multi-perspective. To use a Stephen Covey analogy, the danger is that these AI experts will make great progress hacking through the jungle of AI futures, while not realising that they have been in the wrong forest all along.

    Finally, in order for these gaps to be bridged, we all need to educate ourselves across domains of knowledge and experience that we may be unfamiliar with, and that we may find uncomfortable. For the panelists here, that might mean taking up dancing, yoga, meditation, or learning a musical instrument. For others it might mean educating oneself on the essentials of programming, math, rational modes of cognition and so on. In this way we can begin to understand those domains of enquiry, and those who frequent those domains as their preferred ways of knowing and being. The question is, who is prepared to do that?

    Marcus
