Saturday, 6 April 2019

Solomon’s Code by Olaf Groth and Mark Nitzberg – Book Summary

This is a book about Artificial Intelligence that deliberately poses more questions than it answers. Groth and Nitzberg’s aim is to outline some of the most important multi-disciplinary debates that need to take place if AI is ultimately to benefit humanity.
The authors take a fundamentally optimistic (but not utopian) view of AI and how it can benefit society, but this is grounded in the realpolitik of twenty-first-century multinational and international relations. This optimism is seen in their espousal of a model of human-AI symbiosis (Chapter 3) which enhances humanity: 
a “symbiotic relationship between artificial, human and other types of natural intelligence can unlock incredible ways to enhance the capacity of humanity and environment around us.” (p.69) 
This is worked out through discussion of a number of important social debates: justice and fairness, privacy, security, surveillance and changing patterns of work. In the first half of the book, Groth and Nitzberg discuss a range of important philosophical questions thrown up by AI about what it is to be human: self-consciousness (pp.96ff), human personhood, autonomy and free will; the reshaping of the sense of self (p.76); and the ability of humans to change their values and beliefs over time (p.88). 
The second half of the book is a call to arms to put in place a regulatory framework (“guardrails”) for the use of AI which maximises its benefits whilst mitigating potential harm. They argue that this will include drafting a “Digital Magna Carta” which defines human freedoms in the age of AI (p.232). In so doing, the authors recognise just how difficult this is likely to be. Indeed, about a third of the book is devoted to outlining the complexities of the emerging geopolitical context for these discussions. 
There is an excellent discussion of “the forces that shape the world’s divergent AI journeys” (Chapter 4), which outlines the different attitudes to AI and technology around the world: “the Digital Barons” (Google, Facebook, Amazon, Alibaba and Baidu); “the Cambrian Countries” (US and China); “the Castle Countries” (Russia and Western Europe); “the Knights of the Cognitive Era” (military/defence-based AI – US, China and Israel); “the Improv Artists” (other countries developing aspects of AI – Nigeria, Indonesia, India and Barbados); “Astro Boy” (Japan); and “the CERN of AI” (Canada – the open-source concept of an international network of data generators). 
What comes through this discussion is the range of ways that power, trust and values are being played out across societies, often driven by different regional philosophical traditions. For example, the influence of Taoist, Confucian and Communist thought on China, and the social challenges of an ageing population in Japan, mean that these countries have fundamentally different attitudes to the West on issues such as privacy and the relationship between humans and machines. The authors rightly point out that this philosophical diversity poses significant challenges for anyone seeking to formulate a universal approach to regulating the use of AI. 
In their analysis of ‘the race for global AI influence’ (Chapter 5), the authors present the battle for control of data and AI as tantamount to a new arms race with the potential to reshape the political world order (Putin: the country that leads on AI “will become the ruler of the world”, p.151), and they discuss each of the main protagonists in turn: the US, China, Russia and the EU. 
“Philosophies of regulation, influence and social and economic participation will conflict – as they should. Those clashes and their outcomes will coalesce around issues of values, trust and power” (pp.163-4). 
The authors close (Chapter 8) by discussing possible ways in which the community of nations might establish “a global governance institution with a mutually accepted verification and enforcement capacity” (p.233). In so doing they discuss the lessons learned from other recent multinational treaties and governance models, such as the Montreal Protocol to reduce chlorofluorocarbons, the Paris Agreement on climate change, the Organisation for the Prohibition of Chemical Weapons (OPCW), and the UN Global Compact. In light of these, they argue instead for a “new governance” model which draws its legitimacy from “its inclusion and the robustness of the norms and standards it disseminates”, but which is aligned “with existing pillars of global governance such as the United Nations or the World Trade Organization” (p.249). 
The authors conclude that “the Machine can make us better humans” (p.253): 
“Combining the unique contributions of these sensing, feeling and thinking beings we call human with the sheer cognitive power of the artificially intelligent machine will create a symbio-intelligent partnership with the potential to lift us and the world to new heights.” (p.257) 
Surprisingly for a work of this quality and nature, the book has no index and only limited referencing.
