If you ask the world of technology to slow down a bit, you’re instantly branded as a technophobe. But perhaps a bit of reflection is needed in the artificial intelligence arena.
Google’s DeepMind is one of those starting to think a bit more deeply about the big, burgeoning world of computer intelligence. The team has announced the formation of DeepMind Ethics & Society to ‘complement’ the work of its engineers and scientists, and make sure we don’t get a little ahead of ourselves.
It is usually a conversation relegated to comment boards and conspiracy websites, but the industry does need to take a good look at whether the development of the technology is continuing to work for us. This will be the primary objective of the DeepMind Ethics & Society team: making sure the ethical and social impact of the technology is beneficial to society as a whole.
“The development of AI creates important and complex questions. Its impact on society – and on all our lives – is not something that should be left to chance,” the team said in a blog announcing the initiative.
“Beneficial outcomes and protections against harms must be actively fought for and built-in from the beginning. But in a field as complex as AI, this is easier said than done.”
No-one wants to limit the innovation and imagination of the artificial intelligence pioneers, but without rules and structures to adhere to, the dangers could be massive. Human error at Facebook has already demonstrated this: due to a slight oversight in the rules created to manage an AI application, the programme invented its own language. If it can do this, what else can this immensely powerful technology do?
In this example, it was an innocent mistake which took place in a controlled lab environment. There was no damage, but it shows what can happen if the structures for creating applications are not kept front of mind. Playing around with new technology can be fun and immensely rewarding, but the overall goal has to be kept in mind and managed: this is a technology which has to benefit society.
“At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes. Understanding what this means in practice requires rigorous scientific inquiry into the most sensitive challenges we face.”
It all sounds very doom and gloom, but quite rightly so. We’re talking about some very serious implications should human morals and principles not be taken into account when developing these programmes. And to keep the team in check, Professor Nick Bostrom, Director of the Future of Humanity Institute and the Strategic Artificial Intelligence Research Centre at Oxford University, has been drafted in as a Fellow.
Bostrom has a slightly bleak view on the development of AI, and has written a number of books which outline the potential dangers should the technology not be correctly implemented. We had a chance to see his keynote at IP Expo last year, where he outlined the difficulties of controlling a technology which has the potential to exceed our own intelligence in a very short period of time. You can see why some people become paranoid around some of these topics.
So this is where DeepMind Ethics & Society will fit in. It will produce research and templates to guide the development of artificial intelligence, looking ahead to how the implementation of certain applications will impact society, and what can be done to prepare us for the change in the tide.
It’s a big ask, but we’ve seen what can happen when difficult questions are swept aside. Security has long been overlooked, and look at how many data breaches are now occurring week on week. The same cannot happen with artificial intelligence.
“If AI technologies are to serve society, they must be shaped by society’s priorities and concerns. This isn’t a quest for closed solutions but rather an attempt to scrutinise and help design collective responses to the future impacts of AI technologies,” the team said.
“With the creation of DeepMind Ethics & Society, we hope to challenge assumptions – including our own – and pave the way for truly beneficial and responsible AI.”