
Opinion: Non-Human Intelligence at the Threshold

In the turmoil of world news this week, it isn’t hard to find occasions to worry. Yet, two novel subjects have also become prominent, each raising alarm from the high-tech laboratories of Silicon Valley to the halls of Congress. They relate to the potential of AI to make humans obsolete, and to the global threat implied by the mystery of UFOs, even when reframed as the less-intimidating “UAP,” as Pentagon purists prefer.

The real problem is that the two issues are more closely related than anyone had foreseen, and their combined power to disrupt social, business, and perhaps even spiritual realities threatens to become uncontrollable, even if the two constituencies have little in common.

The AI conundrum is surprisingly simple to describe. Under cover of anonymity, late last year, senior staffers of OpenAI, a California non-profit startup (with a for-profit subsidiary), warned that their company’s approach to “artificial general intelligence” (AGI) was about to unleash “systems surpassing humans in most economically valuable tasks.” The warning centered on a mysterious project called Q*; still, the whistleblowers did not reveal themselves, and no details were given ahead of CEO Sam Altman’s return last month.

While these developments were stirring things up for the AI company, its Microsoft investors, and its competitors, a similar drama was taking place in Washington, DC: A proposed amendment to the massive National Defense Authorization Act, eagerly awaited by the public and a vocal portion of the scientific world, was being shot down, or at least deeply wounded, as the Senate buried the concept of UFO disclosure for a few more years. Powerful forces in the Republican party had intervened late in the game to amend, minimize, or eliminate the language introduced by Senator Schumer.

Among other controversial provisions, it would have demanded the confiscation of alleged alien materials or craft, of which almost a dozen had reportedly been captured by special units of the Pentagon. In recent years, such craft had played hide-and-seek with our best fighter aircraft from the Pacific fleet. However, there was a much longer history—largely classified—of scientific work to elucidate their origin and nature. Here, too, most of the whistleblowers remained safely hidden.

As with Q*, full acknowledgment of the reality and potential of exotic technology is thought to threaten humanity. This suggests the need for a historic transition to prepare ourselves to co-exist in a complex future where we humans might become redundant and unable to manage the planet or even our own survival. Like artificial intelligence, the UAP issue has emerged into our world without any easily comparable historical precedent.

The two issues of concern—the imminence of AI and the evidence for UAP—interest me separately and together. I earned one of the very early doctorates in AI at Northwestern in 1967 for a program that answered English-language questions about a large astronomical catalog. It produced calculation results in minutes, eliminating the drudgery of coding and saving an overnight computer run. Second-generation programs were developed by industry in the ensuing years, bringing sophisticated controls to everything from our cars to railroad yards and aviation, and boosting productivity along the way. That phase was invisible, however: hardly aware of the ongoing revolution, most of us enjoyed these developments as the expected rewards of progress.

In 1985, I published demonstrations of an AI assistant that guided a human analyst through dozens of hypotheses when faced with a report of a complex UFO event, facilitating its explanation or documenting its selection for in-person follow-up (see Vallée, J. F., “Towards the Use of Artificial Intelligence Techniques in the Screening of Reports of Anomalous Phenomena,” American Institute of Aeronautics and Astronautics (AIAA), Los Angeles, 19 April 1986).
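For readers curious what such a screening assistant amounts to in practice, here is a minimal modern sketch in Python. It is emphatically not the 1986 program: the report fields, candidate hypotheses, and scoring rules below are invented for illustration, and only the general shape of the reasoning—ranking mundane explanations and flagging the residue for in-person follow-up—is drawn from the description above.

    # Minimal sketch of a rule-based screening assistant. All fields,
    # hypotheses, and thresholds are hypothetical, chosen for illustration.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Report:
        duration_s: float        # how long the object was observed
        moved_against_wind: bool
        near_airport: bool

    # Each candidate hypothesis pairs a label with a crude plausibility test.
    HYPOTHESES: list[tuple[str, Callable[[Report], float]]] = [
        ("weather balloon", lambda r: 0.0 if r.moved_against_wind else 0.7),
        ("conventional aircraft", lambda r: 0.8 if r.near_airport else 0.2),
        ("astronomical object", lambda r: 0.6 if r.duration_s > 600 else 0.1),
    ]

    def screen(report: Report, threshold: float = 0.5) -> list[str]:
        """Rank hypotheses; if none is plausible, flag for follow-up."""
        scored = sorted(((test(report), name) for name, test in HYPOTHESES),
                        reverse=True)
        survivors = [name for score, name in scored if score >= threshold]
        return survivors or ["unexplained: select for in-person follow-up"]

    # Example: a brief object moving against the wind, far from any airport.
    print(screen(Report(120, moved_against_wind=True, near_airport=False)))

On this example, every mundane hypothesis scores below the threshold and the report is flagged for follow-up, which is exactly the triage role such an assistant plays for the human analyst.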

What we see today is a huge further step, a natural extension of AI science that is eloquent, visible, intrusive, and all-encompassing; occasionally crazy or funny too, but always revelatory. Most relevant, the new form is no longer just a servant; it is an intimidating companion, able to digest Saint Augustine or Kierkegaard within the same heuristic, and it discourages most users from challenging its verdicts. Herein lies the danger, of course: absurdity slips into routine as reasoning becomes layered, its logic anchored in the apparent chaining of impeccable predicates. It yields to critical analysis only when one returns to the source of its data, piercing the veil of deductive fabric… but who has time for that?

The implications for research and industry are profound. These new systems plug directly into the analysis of problems too complex for limited human projects; the wisdom of the software is no longer bound to a top-down deductive flow. One could take a massive warehouse of UFO data, such as the one (which remains classified) that I designed for the Advanced Aerospace Weapons Systems Application Program (AAWSAP), and subject its 260,000-odd unexplained incidents to a barrage of tests, probing not only for internal consistency in search of some elusive alien logic, but also for its predictive attributes. And if you can do that, you can ask the AI to challenge the data, investigate its structure, or force it to reveal itself. Is that why Congress has not lifted the classification of the UAP warehouse Americans have paid for?
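What would such a barrage of tests look like in code? Here is a hedged sketch in Python, using synthetic stand-in records, since the real warehouse and its schema remain classified; the field names and the single predictive probe shown are invented for illustration.

    # Sketch of batch-testing an incident catalog for predictive structure.
    # The schema is invented; synthetic records stand in for classified data.
    import random
    from collections import Counter, defaultdict

    random.seed(0)
    incidents = [
        {"region": random.choice(["pacific", "atlantic", "inland"]),
         "shape": random.choice(["sphere", "triangle", "cylinder"]),
         "sensor": random.choice(["radar", "visual", "both"])}
        for _ in range(10_000)
    ]

    def predictive_score(data: list[dict], given: str, target: str) -> float:
        """Learn the modal value of `target` for each value of `given` on half
        the data, then measure how often that guess holds on the other half."""
        half = len(data) // 2
        train, test = data[:half], data[half:]
        table: defaultdict = defaultdict(Counter)
        for r in train:
            table[r[given]][r[target]] += 1
        guess = {k: c.most_common(1)[0][0] for k, c in table.items()}
        hits = sum(r[target] == guess.get(r[given]) for r in test)
        return hits / len(test)

    # Barrage: probe every ordered pair of fields. Scores well above the
    # baseline rate of the most common value hint at internal structure.
    fields = ["region", "shape", "sensor"]
    for g in fields:
        for t in fields:
            if g != t:
                print(f"{g} -> {t}: {predictive_score(incidents, g, t):.3f}")

On random data these scores hover near the baseline; consistent departures from baseline across many such probes are what one would ask a modern AI to hunt for, and then to challenge.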

Two exquisitely challenging domains of scientific intelligence: the unlimited potential of programs like Q*, and the intimidating depth of the repositories of unexplained contact. Viewed separately, both imply potential breakthroughs and unknown dangers. Viewed together, they paint a vast design of the future where science can open new forms of exploration: more anchored in the reality of data, and more rewarding in the richness of discovery. Both deal with non-human intelligence, augmenting our own yet challenging it at the same time.

The similarities that emerge are significant. In both cases, those who sound the alarm are so intimidated that they feel it necessary to remain anonymous; in both cases, survival is potentially at risk. And there is a cross-factor between the two developments: each implies the other in practical, logical, and sociologically important ways, which brings us back to disclosure.

Three opportunities for progress have been missed:

  • If the truth about the unexplained UFO data had been told by US authorities as early as the mid-fifties—as it could have been—the problem would have fallen to the world’s best scientists, well-equipped to verify the data and deal with it. That wasn’t done.
  • If the truth (newly buttressed by thousands of well-documented encounters) had been told in the late sixties or seventies, there would have been a political upset, bypassing the scientists left to fend for themselves. The issue would have transcended common affairs, with an impact felt around the world, but it would still have been manageable. Yet nothing was done: forceful presentations before the UN Political Committee in 1978 were negated by UK and US opposition.

What about the third failure to tell the truth, given the lack of decisive action in Washington last month?

At this late date, any attempt at disclosure could upset religious sensitivities, with a greater risk to social stability than the scientific or political dangers of earlier decades, given the conflicts that divide the world. The young generation of AI scientists eager to release new forms of intelligence, and the survivors of the Pentagon arguments around the UAP “data warehouse,” may be wise to remain anonymous: beyond the threshold, any wisdom we may seek from our primitive algorithms is very brittle indeed.

Whatever decision is made, the implications are powerful, and they touch sensitive areas, from science policy (how much research should remain classified?) to threat assessment in defense, and to relations with nations that are not friendly but may hold essential data.

The danger, then, may reside in the consequences of initial decisions that preclude or overwhelm our ability to control the complexity of future actions. And this is not a task any current AI is ready to tackle.

Jacques Vallée is a principal at Documatica Financial and a diversified investor in technology startups in space development and information management. He is the author of several textbooks on computer networking and has maintained a decades-long interest in the scientific study of unidentified aerial phenomena. He divides his time between San Francisco and Paris, and can be found online at his website.