
U.S. General’s Claim of Divine Ethics in American Defense Technology Misfires

With robots getting smarter and the lines between Star Wars and actual warfare blurring, the ethical conundrum surrounding artificial intelligence blew up recently when a U.S. Air Force General tossed a thought grenade into the battlefield of opinions, arguing that American AI is “more ethical” than that of its adversaries, thanks to its stellar Judeo-Christian upbringing.

While the image of an AI system attending Sunday school is an amusing one, the General’s statement oversimplifies the AI ethics debate.

The General’s claim assumes a universal application of Judeo-Christian ethical standards throughout the U.S. defense ecosystem, which is itself a contentious point. The military and its supporting technological and industrial complex are, like any government institution, pluralistic, involving people of various cultural, religious, and ethical backgrounds.

Thus, attributing its ethical standpoint to one particular religious influence oversimplifies the diverse moral fabric underlying the system. Moreover, the interpretation and application of religious ethics in the realm of AI, a decidedly secular field, remain contentious, opening avenues for potential misapplications and inconsistencies.

The assertion that U.S. AI is more ethical than that of its adversaries inherently creates an ‘us versus them’ dichotomy based on subjective moral superiority. It does not take into account the universal principles of human rights, transparency, and accountability that should guide any AI system’s ethical parameters. To be fair, when picturing AI development in countries like Russia and China, one could be tempted to envision some dark authoritarian motivations being baked into the AI’s code. However, it’s not quite that simple. Even if an AI’s ‘education’ were shaped by the ethical environment around it, that influence is not exclusive to nations with shady human rights records. The United States has had its own privacy and surveillance slip-ups.

Additionally, multiple U.S.-based AI systems have been unleashed upon the internet, and they went full racist pretty quickly. The assumption that AI’s ethical outlook is as black and white as the “good guys vs. bad guys” narrative is deeply flawed. Instead, the better question is whether we can establish a global set of standards to which all AI systems must adhere. This points to the need for an international AI ethics framework rather than relative ethical benchmarking based on religious or cultural roots.

While Judeo-Christian ethics may influence the U.S. approach to AI, it’s worth noting that ethics in AI isn’t merely about the intent of the creators but also about the outcomes. Despite good intentions, there have been instances where U.S. defense technologies have inadvertently caused civilian casualties. This highlights the fact that the ethical nature of AI depends more on its design, deployment, oversight, and control measures than on the moral convictions of the society that creates it.

In addition, the U.S. and many other nations lack comprehensive legislation to regulate AI, leading to concerns about privacy, surveillance, and potential misuse. The claim of superior ethicality seems incomplete without a legislative framework that ensures ethical adherence in concrete terms.

Judging the ethicality of an AI system based on its creators’ cultural or religious background might propagate stereotypes, further escalating geopolitical tensions. It’s crucial to avoid such ethnocentric views and foster a more collaborative approach, encouraging shared learning and global dialogue on AI ethics.

The ethicality of an AI system, including the Pentagon’s, is a complex issue that can’t be conclusively linked to a single factor, such as the religious undertones of the society that built it. It involves multiple aspects like design, transparency, accountability, and universal human rights. While the U.S. might strive to instill ethical guidelines in its military AI development, it’s essential to ensure these principles manifest in AI’s real-world applications and are supported by a comprehensive legislative framework.

MJ Banias is a journalist and podcaster who covers security and technology. He is the host of The Debrief Weekly Report and Cloak & Dagger | An OSINT Podcast. Follow him on Twitter @mjbanias.