April 1st, 2019

It looks like universities around the world are engaged in a fierce competition to claim pole position on digital ethics, especially when it comes to Artificial Intelligence.

This week, Stanford University announced its Stanford Human-Centered AI Institute (HAI).

Say hi to the H.A.I.!

Stanford University wants to position itself as the natural hub for discussion on AI policy, on best practices in AI, and on where AI is going. HAI will be based on three principles:

  • First, it’s a bet that the future of AI will be inspired by human intelligence.
  • Second, the technology has to be guided by our understanding of how it impacts humans and society.
  • Third, AI applications should be designed to enhance and augment what humans can do.

What do you think?


Guest post by The Futures Agency content curator Petervan

Other Resources:

Recent MMC Investor report on Artificial Intelligence

Digital ethics and the future of humans in a connected world, Gerd’s TEDxBrussels talk from December 2016

Author: Gerd Leonhard

In the words of American poet John Berryman, “the possibility that has been overlooked is the future”. Most of us are far too busy coping with present challenges to explore the future in any depth – and when we do, our own cravings and fears often run away with us, resulting in utopias or dystopias that are not very helpful for planning and decision-making. Today’s professionals, leaders and their organisations need a dedicated, passionate, long-term understanding of the future if they are to successfully navigate the exponential waves of change. For countless individuals and organisations, that intelligence is called Gerd Leonhard.
