February 3rd, 2019

Guest post by Peter Van

In this post, we’ll take a more philosophical look at AI and ethics. Should we enhance our own brains with AI technology to keep up with the exponential progress of technology?

Let’s zoom into some of the arguments from Susan Schneider in this recent Edge.org post. Susan holds the Distinguished Scholar chair at the Library of Congress and is the director of the AI, Mind and Society (“AIMS”) Group at the University of Connecticut.

Her reflections circle around the potential for intelligent machines to develop consciousness. She initially pushes back against a fully skeptical line by highlighting some potential advantages of conscious machines, if they ever were to exist.

“If machines turn out to be conscious, we won’t just be learning about machine minds, we may be learning about our own minds. Humans would no longer be special in the sense of being capable of intellectual thought. That could be a very humbling experience for humans. One thing that worries me about all this is that I don’t think AI companies should be settling issues involving the shape of the mind. The future of the mind should be a cultural decision and an individual decision.”

“AI ethics boards at the larger companies are important, but in a sense, it’s the fox guarding the henhouse.”

Susan is concerned about unintended consequences, the obsession with technology, and missing the opportunity to enhance and promote human flourishing. She calls for a public dialogue with all stakeholders involved, and is quite articulate about who should sit on such a panel of wise men and women:

“All stakeholders need to be involved, ranging from people who are researching these technologies to people who are policymakers to ordinary people, especially young people, so that as they make brain enhancement decisions, they will be able to do so with more scrutiny. Here, the classic philosophical issues about the nature of the self and the nature of consciousness come into play.”

Related video by Gerd
