Royal Society warns of the dangers of unchecked AI and machine learning
Machine learning and AI will require a new framework of rights, including regulation, claims Royal Society
The Royal Society has warned that the development of machine learning and artificial intelligence (AI) will need a new framework of rights and responsibilities, including regulation, in order to prevent human obsolescence.
The institution has produced a report entitled "Machine Learning: the power and promise of computers that learn by example", which it hopes will guide the community on what the future might hold for machine learning and AI, and help dispel some of the fearmongering about human obsolescence.
The Royal Society interviewed 200 experts from across different fields, as well as members of the public, about the perceived effects of the technology.
The report found that most people interact with machine learning every day but don't actually know what the term means. There was positive feedback on the prospect of accelerating the pace of medicine, but concern about the safety of autonomous vehicles.
Some expressed concern that machine learning would dehumanise pastimes they genuinely enjoy, and that it would further entrench the bubbles and silos developing in modern life.
The report emphasises that machine learning cannot go unchecked and must be regulated, giving the example that a biased programmer could easily skew an algorithm to suit their own agenda.
Professor Peter Donnelly, who chaired the working group that created the report, said: "It is clear to us that the world we're in at the moment, with the data being collected about us from our phones and other devices, this world has outgrown the existing framework.
"There needs to be a new framework for governance of data.
"Some sectors will require more regulation, like healthcare, but others, like a company reorganising its warehouse, will need less.
"We as a society need to decide how algorithms work. Should they be fair and unbiased, should they be transparent?"
Many science soothsayers have warned of the potential consequences of uncontrolled machine learning and artificial intelligence. Tesla founder Elon Musk, for example, has warned that unless we 'become one' with the machines, they will render us obsolete.
He has suggested an early interface between the human mind and AI could be just four years away.
Elon Musk is also behind OpenAI, an initiative intended to make sure that the kind of positive aims mentioned in the Royal Society report are realised.