Algorithms in Society: Protecting people v protecting IP
James Kitching, Solicitor - Corporate, Coffin Mew, discusses the growing controversy around the use of algorithms and AI affecting privacy
In December 2018, the AI Now Institute at New York University, an interdisciplinary research institute dedicated to understanding the social implications of AI technologies, released its annual report. Included in that report were the following two recommendations:
"1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain."
"4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector."
With the growing controversy around the use of algorithms in facial recognition, the ongoing scandals linked to Big Data, and the rise of self-driving cars, it is not surprising that there is growing fear about how AI is used and how it could end up impacting our lives. But are the recommendations above a step too far, and will they stifle innovation and invention?
All-powerful algorithms
In law and in medicine, the outcomes algorithms produce can quite literally be the difference between freedom and imprisonment, or between life and death. Durham Constabulary is using the HART algorithm to inform custody decisions, and in the NHS an algorithm called MALIMAR is being used to help interpret MRI scans, in the hope of detecting cancer earlier.
At present, algorithms such as these are being used alongside trained officers and doctors, but there is a very real possibility that, in future, the human element will be removed altogether, with AI able to make decisions on its own.
There has been a lot in the news recently about the controversial use of facial recognition software by police forces around the country. As has been reported, the algorithms behind these systems have repeatedly misidentified individuals, incorrectly reporting that a criminal has been spotted when the person is in fact someone else entirely.
As algorithms are seen to encroach ever further on our everyday lives, and, importantly, on the privacy of those lives, there is a growing clamour to make them open and accessible to scrutiny.
You get what you pay for
In a previous article, I wrote about algorithms in the justice system and the bias that can flow into AI decision-making from the data we feed into it. While the data fed into an algorithm is hugely important in determining the outcomes it produces ("garbage in, garbage out"), the way in which that data is read, analysed, and reported on is just as vital.
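To make "garbage in, garbage out" concrete, here is a minimal, hypothetical sketch in Python. The data, the function name, and the scores are invented for illustration and bear no relation to HART or any real system; the point is simply that a naive risk model fitted to skewed historical records reproduces that skew in its predictions.

```python
# Hypothetical records: (postcode_area, was_rearrested). These reflect
# past policing patterns, not ground truth - the "garbage" going in.
historical_records = [
    ("area_a", True), ("area_a", True), ("area_a", True), ("area_a", False),
    ("area_b", True), ("area_b", False), ("area_b", False), ("area_b", False),
]

def fit_naive_risk_model(records):
    """Estimate P(rearrest) per group as the historical base rate.
    Any bias in the records flows straight through to the scores."""
    counts, positives = {}, {}
    for group, outcome in records:
        counts[group] = counts.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(outcome)
    return {group: positives[group] / counts[group] for group in counts}

model = fit_naive_risk_model(historical_records)
for group, risk in model.items():
    print(f"{group}: predicted risk = {risk:.0%}")
# area_a: predicted risk = 75%
# area_b: predicted risk = 25%
```

Two otherwise identical individuals receive very different scores purely because of where they live: the bias in the inputs has become bias in the outputs, which is why scrutiny of how a model reads and weighs its data matters as much as scrutiny of the data itself.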
Creating AI is not an easy task. It requires vast amounts of technical expertise and a great deal of time spent refining and improving. Unfortunately, for the most part, public bodies do not have the in-house expertise to produce the kinds of algorithms that complex AI requires. As public scrutiny of AI in society increases, public bodies will need to ensure that the algorithms they deploy are of the highest quality and robust enough to produce outcomes that the outside world can accept.
Without that expertise available internally, public bodies will have to turn to the private sector to create the code behind the algorithms they need, and this will come at a cost. Given the gravity of the decisions AI can influence in the public sector, life and death, freedom and imprisonment, the code that is created will need to be exceptionally well designed, and it will in turn be exceptionally valuable.
No value in what is publicly known
The algorithm that governs Google's search engine is its ‘special sauce' - it is the reason Google has become a tech leader and one of the most valuable companies in the world.
The same is true for Facebook and the algorithms that help advertisers target their products and services at particular users, for Uber in connecting drivers to passengers, and for Amazon in creating a digital marketplace. Were these algorithms available to the public at large for scrutiny and assessment, anyone could copy them, and their value would diminish.
So important is the secrecy of these algorithms that the companies behind them often won't even protect them through the usual legal means available, such as applying for a patent, which would require public disclosure. Just as KFC's blend of 11 herbs and spices and the process for making Coca-Cola remain unpatented, guarded instead by layers of confidentiality and security measures, so Google's famous search algorithm remains inaccessible and unavailable to all but a few.
With governments around the world turning to AI to help tackle the everyday issues affecting society, there are many opportunities for companies such as Google to create and sell solutions. But should there be a requirement to share their code and make it available to whomever governments deem appropriate, there would arguably be less incentive to develop it in the first place. What they sell to one country or government, they may well want to sell on to another.
Where's the middle ground?
The problem we face, then, is the need for the best and the brightest to be willing to create AI that can fulfil the demanding requirements of the public sector and the society it serves, and to do so in a way that allows others to scrutinise and question it.
Returning to the AI Now Institute, do its recommendations provide for this?
"1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain."
Having specialist agencies focused on particular sectors, such as justice and healthcare, could be a way of providing scrutiny of AI without disclosing the secrets behind it to the world at large. However, for this to work, these agencies will need the best and brightest of their own in order to keep track of, and understand, what is going on. These individuals will need an understanding not only of tech but also of the sectors in which they operate. Are there enough people out there with such skills, and are they likely to want to sell those skills to oversight agencies?
"4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector."
This is a bold recommendation and one that can only work in tandem with the first. As discussed earlier in this article, developing AI is not easily done, and it will take a lot to convince the businesses that create it to hand over their secrets, for fear of losing their value. That said, if trust can be built between regulators and developers, it will hopefully trickle down into society itself and allow us to feel more confident about how AI is affecting our lives.
Final thoughts
Striking the balance between accountability and secrecy is not easy, and it is not something likely to be resolved any time soon. The ability to create complex algorithms is still very much a sought-after skill, and until it becomes more commonly available it will command a high price.
Governments will need to find new and creative ways to encourage businesses to develop open AI systems that benefit society and that we can trust. This will take time to come about, but we must remember that we are still very much at the beginning of the AI revolution, with much more yet to come.
James Kitching, Solicitor - Corporate, Coffin Mew