Kissinger’s How the Enlightenment Ends

H. Kissinger, ‘How the Enlightenment Ends’, The Atlantic, June 2018.

Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.

An interesting contribution to an important debate, published under a URL containing ‘…henry-kissinger-ai-could-mean-the-end-of-human-history/…’. One might be forgiven for reading it as simply ‘Henry and the elites don’t like the prospect of being usurped by AI’. As a Republican, Kissinger presumably favours free markets over big government. (In my view) the best argument in favour of this is that you can’t trust even democratically elected governments, so free markets – even with their shortcomings – are the lesser evil. But it seems to me that free markets might be ‘captured’, or at least corrupted, by any group (‘elite’) able to do so. Hence the choice depends on which you see as the lesser risk, and that – to me – depends on the actual circumstances, not dogma. Thus the US and the UK might differ as to which is least bad.

Similarly, it is not obvious to me that replacing current elites with AI is necessarily good or bad: it depends on who the elites are and who shapes the AI. Even without AI there can be an issue. The old technologies of shops, media and libraries served communities, but they also served their owners, and they shaped the nature of communities, sometimes to the owners’ benefit. AI, and the advanced AI of the kind that Kissinger critiques, simply exacerbates this effect. But the key questions are: who are the owners, what are their interests, and how well are these aligned with those of communities? If, as with ‘free’ markets, we are assured that ownership does not affect communities, we should be cynical.

Taking Kissinger’s text, I shall replace ‘AI’ with a more general term, ‘agent’, and use ‘substantive agent’ to denote one that may have a substantial impact on communities.

What would be the impact on history of self-learning [agents]—[agents] that acquired knowledge by processes particular to themselves, and applied that knowledge to ends [that may be obscure to the populace]? Would these [agents] learn to communicate with one another? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?

[Substantive agents] will in time bring extraordinary benefits to … . But precisely because [substantive agents] make judgements regarding an evolving, as-yet-undetermined future, uncertainty and ambiguity are inherent in [their] results. There are three areas of special concern:

First, that [agents] may achieve unintended results. …

Second, that in achieving intended goals, [agents] may change human thought processes and human values. …

Third, that [agents] may reach intended goals, but be unable to explain the rationale for [their] conclusions. In certain fields—[law, medicine, technology]—[expert agents’] capacities already may exceed those of humans. …

Who is responsible for the actions of [experts]? How should liability be determined for their mistakes? Can a legal system designed by [politicians] keep pace with activities produced by [business people] capable of outthinking and potentially outmaneuvering them?

The introduction of limited liability companies, ‘scientific management’, computers and the Internet simply exacerbates issues that even quite primitive cultures have. The current wisdom seems to be that we need to keep the agents’ self-interest enlightened and in line with our needs. Kissinger concludes:

AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.

Before AI, we would have said that agents, and those who motivate them, need to be adequately enlightened. I shall next quote Kissinger to the effect that AI as such could never be an adequately enlightened agent, and hence should be constrained to be ‘just’ a tool of adequately enlightened people. His conclusions follow from this. But what would it mean to be ‘adequately enlightened’?

AI … deals with ends; it establishes its own objectives. To the extent that its achievements are in part shaped by itself, AI is inherently unstable. AI systems, through their very operations, are in constant flux as they acquire and instantly analyze new data, then seek to improve themselves on the basis of that analysis. [AI] makes strategic judgments … .

[The] world … will also become increasingly mysterious. What will distinguish that new world from the one we have known? How will we live in it? How will we manage AI, improve it, or at the very least prevent it from doing harm, culminating in the most ominous concern: that AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself … .

Philosophers and others in the field of the humanities who helped shape previous concepts of world order tend to be disadvantaged, lacking knowledge of AI’s mechanisms or being overawed by its capacities. In contrast, the scientific world is impelled to explore the technical possibilities of its achievements, and the technological world is preoccupied with commercial vistas of fabulous scale. The incentive of both these worlds is to push the limits of discoveries rather than to comprehend them. And governance, insofar as it deals with the subject, is more likely to investigate AI’s applications for security and intelligence than to explore the transformation of the human condition that it has begun to produce.

It seems to me that we need to comprehend all our technologies if we are not to risk unintended consequences, and that the more powerful and wide-reaching the technologies, the more important this is. The problem is not science and technology as such, but the way that scientists and technologists are incentivised. The reference to ‘security and intelligence’ is also interesting: would not enlightened security and intelligence be concerned with any potential ‘transformation of the human condition’?

All of the above points to the need to take an ‘enlightened’ view of AI. Some specific issues and interesting views are also raised:

Information threatens to overwhelm wisdom.

Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity.

The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.

The digital world’s emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection. For all its achievements, it runs the risk of turning on itself as its impositions overwhelm its conveniences. …

Science fiction has imagined scenarios of AI turning on its creators. More likely is the danger that AI will misinterpret human instructions due to its inherent lack of context. … Can we, at an early stage, detect and correct an AI program that is acting outside our framework of expectation? Or will AI, left to its own devices, inevitably develop slight deviations that could, over time, cascade into catastrophic departures?

… Do we want children to learn values through discourse with untethered algorithms? Should we protect privacy by restricting AI’s learning about its questioners? If so, how do we accomplish these goals?

These concerns make me think of a student from a small town going to a lively university: it is a struggle to make sense of the clash of cultures. It seems to me that some universities have at times produced graduates whose miseducation resembles that of the many who are now getting their ‘education’ from social media. So again, not new. But maybe more pressing.

Dave Marsay

