Taking control of artificial intelligence
Responsibility for managing the power of AI can't continue to sit with the tech industry alone, argues Suman Nambiar
Dystopias are all the rage right now, in films, in politics and in technology. It's no surprise then that we're beginning to worry about robots not just becoming sentient, but shutting us out of the loop altogether.
Many were alarmed at the news that Facebook found two software robots (bots) communicating with each other in a language they had invented, incomprehensible to humans. However, what few realise is that this isn't the first time this has happened: similar examples date back as far as 2011, practically an eternity by the technology industry's standards.
But with society as a whole becoming increasingly aware of both the potential and the dangers of AI, this has struck a nerve in our collective consciousness. We've transitioned from sci-fi speculation to contemporary reality. The notion that the machines we've invented may soon be capable of squeezing us out and functioning independently from human control is now a very real possibility.
The stakes are so high that this responsibility can't continue to sit with the tech industry alone. Some of the most prominent voices in both technology and science, such as Stephen Hawking, Bill Gates and Elon Musk, have been shouting this message from the rooftops.
Concerns centre on who controls these technologies, and on what these advances could mean for the vast swathes of the population who may be left permanently out of work.
The technology could also enable repressive regimes to spy on us and pre-empt human thoughts, as well as vastly amplify stresses and divisions within society.
But AI also has the power to cure cancer, revolutionise healthcare, and provide us with self-driving cars and lifelike digital assistants that enable humans to be more productive and feel less alone.
The public-at-large, policy-makers, business leaders and academics all need to understand these technologies and the issues involved, working together towards the common goal of ensuring they collectively benefit society.
Openness and collaboration across the tech industry
Ultimately, we can't control what we can't see. And it's for this reason that there are voices within the technology industry that are pushing for openness and collaboration across different aspects of AI technology.
This spans everything from the algorithms and the data sets required to train AI models to the hardware and software tools that support them. Several examples illustrate the visible shift in thinking in this area.
Google's acquisition of DeepMind and Microsoft's acquisition of Maluuba both brought leading lights in deep learning in-house. But curiously, neither acquisition has resulted in these companies ceasing to share their research or publish their papers in public.
In fact, even Apple, not known for a culture of openness, has started publishing its research on AI.
There is now a growing awareness that the complexity of these technologies is such that if we do not understand the algorithms and the data that drives them, then we will never understand how they function, or what kind of outcomes they will drive.
Initiatives such as OpenAI, a non-profit AI research company (funded in part by Elon Musk), are designed to ensure wider collaboration and discussion on the technology. We should expect - and hope - to see many more such initiatives in the months and years to come.
What does all this mean for business?
Businesses across the board must wake up and recognise AI as a force - not just a technology - that can bring deep transformation to consumers' lives, their industries, their employees, and the global economy.
From helping customers feel like valued individuals, to predicting when an aircraft engine will next need maintenance or which sales promotion will yield the best numbers, to supporting client stock-purchasing decisions - AI can truly be deployed across enterprises.
Here too, openness and transparency will be critical. If a business plans to use AI to speak to customers, it needs to tell them. If it is going to store and use customers' data, it should ensure they know why, and be aware of the reputational damage its brand can suffer if it gets this wrong. If critical decisions are being taken by AIs, this should be communicated to stakeholders, and the business must be able to explain the reasons behind each decision.
Above all, humans need to be in positions of power to explain and override this technology in any given circumstance. Giving AIs authority without understanding the implications would be a grave loss of control.
Enterprises must start experimenting with and implementing these technologies now, so that they can understand the impact on their stakeholders, work through the strategic and ethical issues involved, and be ready for the technology to become mainstream.
It goes without saying, however, that without the right data no AI will be successful.
The role of policy-makers
Algorithms are already omnipresent in our lives, and as they drive more and more systems that impact our day-to-day existence, it becomes all the more important that we understand, debate, and, where appropriate, regulate these systems accordingly.
Research has uncovered several instances of AI systems enshrining, and even deepening, societal biases. Because this technology is born of and evolves through data, it can unconsciously preserve the biases present in the real world.
For example, a study by Carnegie Mellon University found that the world's most dominant search engine displayed fewer high-level executive job adverts to women than to men.
Policy-makers across the world are now scrambling to understand the implications of these technologies, and the data powering them.
How then do governments and institutions introduce regulation in a way that does not stifle innovation? How can they ensure that we continue to use this technology to our advantage? How can they ensure that it is harnessed in a way that benefits society as a whole? How do they prevent the nightmares surrounding lethal autonomous weapons from becoming a reality? How do they help the workforce ride out these changes without dramatic dislocations within society?
All these questions - and many more - remain largely unanswered.
Why we should embrace the future of AI
We should be incredibly excited about what AI can help us, as a society, to achieve. What seemed impossible no more than a decade ago is now simply a matter of finding the most suitable means of implementation.
If we pursue the path of openness and transparency, and are aware of the risks, we can use this technology to transform our lives and societies for the better.
We owe it to ourselves, and the generations to come, to assess the impacts of these technologies and work together on how best to harness them.
Suman Nambiar is head of AI practice at Mindtree