Geoffrey Hinton, often called the godfather of Artificial Intelligence, worries about AI as an existential risk for humanity. He is concerned about the potential societal disruption of mass unemployment, political division driven by social-media message targeting, the increased "ease" of going to war when your side only loses "battle-bots" and not people, not to mention the further erosion of trust in messaging and institutions. But his principal concern is that unconstrained AI will see no need for human direction and control, and perhaps no need for humans at all. That is the existential threat he and many others worry about.
The threat from AI has been compared to the one we faced sixty years ago from nuclear weapons. But there are two critical differences. First, nuclear weapons were designed and built by governments, while AI is being developed in the private sector. Governments therefore have only indirect control over its development, and their ability to influence it is somewhat at arm's length.
The second critical difference is incentives. Both nuclear weapons and AI have a clear downside - for the bomb, particularly the H-bomb, the total destruction of life on earth. But the upsides are quite different. For the bomb, the upside is the deterrent effect that is thought to have prevented World War III. Importantly, the diminishing marginal returns from additional weapons allowed governments to negotiate a halt to the arms race.
For AI, however, the upside is not only quite different but much more complex. First, AI offers clear benefits to society: better medical diagnoses, safer roads, more targeted teaching and tutoring, and faster access to information. And since AI is being developed in the private sector, the potential profits are a strong incentive that wasn't directly part of the decision-making behind the bomb. That incentive is exacerbated by the potential network effects common to software and other non-rival goods and services. Companies that fall behind in developing AI are destined to lose; that creates a strong incentive to push forward as fast as possible. And since the software industry has historically been more concerned with speed than safety - the "fail-fast" mentality, in which flaws are discovered after the product is launched rather than before - guard-rails are likely to be an afterthought at best and a band-aid for a severed artery at worst.
All of which means that companies are unlikely to self-regulate effectively, leaving that job to governments. But governments have two problems. First, legislators have in the past had difficulty grasping some of the basics of new technology, let alone its implications and how they might be mitigated. Second, there is not only inter-firm competition but inter-country competition. The US government, for example, may be loath to impose too much regulation on AI for fear that China and Chinese firms will gain a competitive advantage, with consequences for US competitiveness and economic welfare.
Finally, there is the issue of oversight and enforcement. With nuclear weapons, a physical entity, inspection and intelligence gathering allowed both sides in the Cold War to know enough about what the other was doing. That is much harder when development is not carried out by singular government entities but dispersed throughout the private sector. And when there are no physical tell-tales to monitor, oversight and enforcement of any international agreement limiting the development of AI is going to be much harder.
None of these issues applied to the development of nuclear weapons. Were one looking to the history of the Cold War for a road map to control the development of AI, the historical analogy would be quite misleading. One can only hope that around the world there are enough people with sufficient foresight to design some kind of effective global agreement. But given the political dysfunction in the US, the lack of cohesion in Europe arising from the tension between a European identity and its composite sovereign states, and the lack of democratic accountability in China and Russia, it's hard to be sanguine about the future.