Kissinger et al.’s Metamorphosis

 

Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher, “The Metamorphosis”, The Atlantic, August 2019.

AI will bring many wonders. It may also destabilize everything from nuclear détente to human friendships. We need to think much harder about how to adapt.

[We] should accept that AI is bound to become increasingly sophisticated and ubiquitous, and ask ourselves: How will its evolution affect human perception, cognition, and interaction? What will be its impact on our culture and, in the end, our history?

Such questions brought together the three authors of this article: a historian and sometime policy maker; a former chief executive of a major technology company; and the dean of a principal technology-oriented academic institution…. Each of us is convinced of our inability, within the confines of our respective fields of expertise, to fully analyze a future in which machines help guide their own evolution, improving themselves to better solve the problems for which they were designed. So as a starting point—and, we hope, a springboard for wider discussion—we are engaged in framing a more detailed set of questions about the significance of AI’s development for human civilization.

The AlphaZero Paradox

The founder of the company that created AlphaZero called its performance “chess from another dimension” and proof that sophisticated AI “is no longer constrained by the limits of human knowledge.”

How can we explain AlphaZero’s capacity to invent a new approach to chess on the basis of a very brief learning period?

We can expect comparable discoveries by AI in other fields. Some will upend conventional wisdom and standard practices; others will merely tweak them. Nearly all will leave us struggling to understand.

My Comments

This raises some key issues. But maybe, if we knew what was ‘under the hood’, the observed ‘smart’ behaviours could be explained by superior computational power, copying, and other relatively understandable mechanisms?
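To make my speculation concrete, here is a minimal Python sketch – emphatically not AlphaZero itself, whose workings are far richer – of self-play learning for the trivial game of Nim. The game, the move set, the learning rate and the episode count are all illustrative assumptions of mine. The point is that apparently ‘smart’ play can emerge from nothing deeper than repetition, search and a simple value update:

```python
import random
from collections import defaultdict

MOVES = (1, 2, 3)   # each turn a player removes 1-3 counters
START = 10          # counters at the start; whoever takes the last one wins

# value[n] ~ estimated chance that the player to move wins with n counters left
value = defaultdict(lambda: 0.5)
value[0] = 0.0      # facing an empty pile: the previous player just won

def best_move(n, explore=0.0):
    """Pick the move that leaves the opponent the worst position."""
    legal = [m for m in MOVES if m <= n]
    if random.random() < explore:
        return random.choice(legal)
    return min(legal, key=lambda m: value[n - m])

def self_play_episode(alpha=0.1, explore=0.2):
    """Play one game against itself and back up win/loss signals."""
    n, visited = START, []
    while n > 0:
        visited.append(n)
        n -= best_move(n, explore)
    outcome = 1.0   # the player who moved last took the final counter and won
    for s in reversed(visited):
        value[s] += alpha * (outcome - value[s])
        outcome = 1.0 - outcome   # alternate winner/loser going backwards

for _ in range(20000):
    self_play_episode()

# After training, the learned policy should approximate the classical Nim
# strategy (leave the opponent a multiple of 4) -- no insight required.
print({n: best_move(n) for n in range(1, START + 1)})
```

Nothing here is mysterious once written down; the open question is whether the same is true, at vastly greater scale, of AlphaZero.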

The Nature of the Revolution

AI draws lessons from its own experience … . The growing transfer of judgment from human beings to machines denotes the revolutionary aspect of AI, as described last year in these pages (“How the Enlightenment Ends,” June 2018).

What AI can do is to perform well-specified tasks to help discover associations between data and actions, providing solutions for quandaries people find difficult and perhaps impossible. This process creates new forms of automation and in time might yield entirely new ways of thinking.

Yet AI systems today, and perhaps inherently, struggle to teach or to explain how they arrive at their solutions or why those solutions are superior. It is up to human beings to decipher the significance of what AI systems are doing and to develop interpretations.

If AI improves constantly … the changes it will impose on human life will be transformative. Here are but two illustrations: a macro-example from the field of global and national security, and a micro-example dealing with the potential role of AI in human relationships.

My Comments

Unfortunately, ‘drawing lessons from experience’ is often not ‘a well-specified task’, and often relies on implicit – perhaps unrealised – assumptions or ideologies. This limits the ability of humans to explain themselves to people of other cultures (or with other agendas), and the use of AI does nothing to diminish this problem, and could exacerbate it.

Is it possible for an AI to be truly ‘objective’, transcending any failings its developers, operators or interpreters may have in making sense of the world?

AI, Grand Strategy, and Security

This seems a very insightful section. I aspire to make some comments which I can explain!

Human Contact

Societies will adopt [AI-enabled devices such as Alexa] in ways most compatible with their cultures, in some cases accentuating cultural differences.

Given these developments, it is possible that … the primary sources of interaction and knowledge will be … digital companions, whose constantly available interaction will yield both a learning bonanza and a privacy challenge.

AI algorithms will help open new frontiers of knowledge, while at the same time narrowing information choices and enhancing the capacity to suppress new or challenging ideas. AI is able to remove obstacles of language and many inhibitions of culture. But the same technology also creates an unprecedented ability to constrain or shape the diffusion of information.

The technological capacity of governments to monitor the behavior and movements of tens or hundreds of millions is likewise unprecedented. Even in the West, this quest can, in the name of harmony, become a slippery slope. Balancing the risks of aberrant behavior against limits on personal freedom—or even defining aberrant—will be a crucial challenge of the AI era.

My Comments

I’ve been more focussed on AI versus humanity. But if AI exacerbates cultural clashes then life could get a whole lot worse than it already is. Any particular AI must (I assume) embed some non-trivial ‘ideology’ or sense-making norms (such as pragmatism) which would constrain learning. My view is that while learning is important we also need ‘education’ in the sense of ‘to draw out’. I speculate on a kind of uncertainty principle: whenever one learns from someone else (or from AI), one closes off an opportunity for self-education. While learning is crucial for most people most of the time, we also need some self-education. AI tends to ‘close off’ issues. (Or maybe it is our use of AI that does this?)

Do ‘we’ understand these issues enough to be able to avoid the worst? I sense that I agree with the authors’ concerns, but wish to go further.

The Future

AI will make fundamental positive contributions in vital areas such as health, safety, and longevity.

Still, there remain areas of worrisome impact: in diminished inquisitiveness as humans entrust AI with an increasing share of the quest for knowledge; in diminished trust via inauthentic news and videos; in the new possibilities it opens for terrorism; in weakened democratic systems due to AI manipulation; and perhaps in a reduction of opportunities for human work due to automation.

As AI becomes ubiquitous, how will it be regulated? Monitored?

The challenge of absorbing this new technology into the values and practices of the existing culture has no precedent.

[The] phenomenon of a machine that assists—or possibly surpasses—humans in mental labor and helps to both predict and shape outcomes is unique in human history. The Enlightenment philosopher Immanuel Kant ascribed truth to the impact of the structure of the human mind on observed reality. AI’s truth is more contingent and ambiguous; it modifies itself as it acquires and analyzes data.

How should we respond to the inevitable evolution it will impose on our understanding of truth and reality?

The three of us have discussed many ideas: programming digital assistants to refuse to answer philosophical questions, especially about the bounds of reality; requiring human involvement in high-stakes pattern recognition … ; developing simulations in which AI can practice defining for itself ambiguous human values … in various situations; “auditing” AI and correcting it when it inaccurately emulates our values; establishing a new field, an “AI ethics,” to facilitate thinking about the responsible administration of AI … . Importantly, all such efforts must be undertaken according to three time horizons: what we already know, what we are sure to discover in the near future, and what we are likely to discover when AI becomes widespread. The three of us differ in the extent to which we are optimists about AI. But we agree that it is changing human knowledge, perception, and reality—and, in so doing, changing the course of human history. We seek to understand it and its consequences, and encourage others across disciplines to do the same.

The key point here is that the merits and demerits of AI are uncertain. How should we handle that uncertainty? The use of statistics in medicine still has its controversies: if one ‘side’ continues to implement its ideas using AI then I would expect significant advances to continue. This might be taken as evidence that its ideas were ‘correct’. But would it necessarily be? Might (mis)use of AI lead to decreased trust in science and experts generally? If AI is ubiquitous, then who will be in a position to critique it, let alone regulate it?

More generally, it would be good if AI knew the bounds of its own competence. This can be arranged and tested in cases that ‘we’ already understand. But can AI transcend our own unawareness of ‘our’ limitations, let alone our understanding of the limits of our ability to regulate in an AI-impacted future? How can we answer questions about the safety and resilience of technologies without some implicit ‘philosophy’? And whose ‘values’ should AI conform to?
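As a toy illustration of ‘arranging and testing’ such a bound in a case we already understand, consider this hedged Python sketch: a classifier that abstains when its confidence is low or the input lies outside the range of its training data. The data, the 0.8 threshold and the crude range check are all assumptions of the sketch, and note that the check itself embodies our framing of what counts as ‘familiar’ – which is precisely the limitation I worry about:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy training data: two well-separated 1-D clusters.
X = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)]).reshape(-1, 1)
y = np.array([0] * 100 + [1] * 100)
model = LogisticRegression().fit(X, y)

def predict_or_abstain(x, threshold=0.8):
    """Answer only when confident *and* the input resembles past experience."""
    lo, hi = X.min(), X.max()
    if not (lo <= x <= hi):               # crude out-of-distribution check
        return "abstain: outside experienced range"
    p = model.predict_proba([[x]])[0].max()
    if p < threshold:                     # too close to the decision boundary
        return "abstain: low confidence"
    return int(model.predict([[x]])[0])

for x in (-2.5, 0.0, 2.5, 50.0):
    print(x, "->", predict_or_abstain(x))
```

The abstentions are only as good as the designer’s notions of ‘familiar’ and ‘confident’; the sketch cannot, by construction, transcend our unawareness of our own limitations.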

My Overall Comments

This was drawn to my attention by an interlocutor (do you want to be named?) who said:

AI does not know what to do when there is radical uncertainty. Which we have in this century.

Hence the link to my blog. But how are we to describe what AI can do? Here is my initial, tentative, speculative attempt.

AI certainly seems smart, and in some senses smarter than we are, so why not let it get on with it? Psychologists talk about ‘fluid intelligence’ as ‘the ability to solve novel problems that depend relatively little on stored knowledge or the ability to learn.’ The tests that they use are number puzzles that have definite answers. So I imagine that some AI would demonstrate fluid intelligence, and be quicker at it than we are. But I have some concerns about this in terms of human ability, which I cover here.

Briefly, it seems to me that tests of fluid intelligence inevitably compare performance on some kind of test, and people (and AI) could more easily ‘demonstrate’ fluid intelligence if they had had previous experience of similar puzzles. But what about:

The ability to solve novel types of problems?

While AI can do things that we can’t, can we trust it to solve problems that none of us really understands? What about those things that we think we understand, but actually misunderstand: if AI relies on experience, how can we be sure that that experience will be appropriate?
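Here is a toy Python sketch of that worry, using rule ‘templates’ of my own choosing: a puzzle solver that passes standard fluid-intelligence items only when the hidden rule happens to match its prior ‘experience’, and admits defeat otherwise:

```python
def solve_sequence(seq):
    """Guess the next term of a number sequence (3+ terms), or admit defeat."""
    templates = {
        # constant difference, e.g. 2, 4, 6, 8, ...
        "arithmetic": lambda s: s[-1] + (s[1] - s[0])
            if all(b - a == s[1] - s[0] for a, b in zip(s, s[1:])) else None,
        # constant integer ratio, e.g. 3, 6, 12, 24, ...
        "geometric": lambda s: s[-1] * (s[1] // s[0])
            if s[0] != 0 and s[1] % s[0] == 0
            and all(b == a * (s[1] // s[0]) for a, b in zip(s, s[1:])) else None,
        # each term the sum of the previous two, e.g. 1, 1, 2, 3, 5, ...
        "fibonacci-like": lambda s: s[-1] + s[-2]
            if all(c == a + b for a, b, c in zip(s, s[1:], s[2:])) else None,
    }
    for name, rule in templates.items():
        guess = rule(seq)
        if guess is not None:
            return name, guess
    return "unknown", None   # a novel *type* of problem defeats it

print(solve_sequence([2, 4, 6, 8]))      # ('arithmetic', 10)
print(solve_sequence([1, 1, 2, 3, 5]))   # ('fibonacci-like', 8)
print(solve_sequence([1, 4, 9, 16]))     # ('unknown', None): squares fit no template
```

Real systems are far more flexible than this, but the structural point stands: performance is bounded by the repertoire of patterns built in or learned from experience, and a genuinely novel type of problem falls outside it.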

My own view is that we have come a long way by being ‘pragmatic’, but many of our more significant and intractable problems may be due to our own misconceptions. So we at least need to be ‘humble’ about our understandings ‘going forward’. So perhaps, for the ‘bigger’ problems, we need humble AI, not smart-arse AI. Or, if we do use AI ‘as is’, we should use it appropriately, and not trust it too far.

Any comments would be welcome here.

Dave Marsay

 

 
