Meet the Artist Pushing the Canadian Government to Adopt an AI “Bill of Rights”

(Credit: Tim Boucher)

In a small town outside Quebec City, a science fiction author and artist named Tim Boucher sits in a shed with his laptop. 

“It’s actually a shed extension I built for $200 using scrap connected to where my chicken coop is,” Boucher explained in an interview.  

As chickens roam his backyard in the Quebec countryside, Boucher’s workspace, filled with computers and other equipment, is anything but rustic. Boucher, who has a professional background in online Trust & Safety at a major social media company, has spent the better part of a decade working with platforms, blockchains, and nonprofits to combat extremist rhetoric, fake news, and related problems. While his day job remains in the technology field, he is also an author and artist who uses AI to tell stories and to fight propaganda and disinformation through his art, and he is a vocal critic of Big Tech.

Sporting an old green baseball cap, a rough beard, and the typical Canadian plaid jacket, he’s the heretical John the Baptist of artificial intelligence, hunched over a computer.

Now, he’s challenging the Canadian government to rethink its new AI laws, and much like John emerging from the desert, or in this case, the Canadian countryside, he’s taking his message to politicians.

Tim Boucher in his “office.” (Image: Tim Boucher)

The Canadian Artificial Intelligence and Data Act (AIDA) has emerged as a significant piece of legislation for the future of Canadian technology policy. However, it has recently come under scrutiny from experts, including Boucher, who has proposed his own alternative: an “AI Bill of Rights.”

AIDA is a comprehensive piece of legislation that seeks to regulate the use of AI and data in Canada. It outlines the responsibilities of AI developers and users and sets forth guidelines for data privacy, transparency, and accountability. However, Boucher calls into question the Act’s effectiveness and democratic nature.

“My mission is to present a more compelling and comprehensive alternative to AIDA,” Boucher told The Debrief in an interview. He explained that AIDA offers few details on how AI development should actually be managed and is generally vague in its approach. Moreover, Boucher noted that most experts do not expect AIDA to come into force until 2025, and any follow-up regulations would take several more years to become law.

“This is dramatically too long of a time period to wait, especially in the super fast-moving world of AI developments, where six months means tremendous new advancements,” Boucher explained.

Boucher is not alone. Other AI experts have been critical of AIDA’s vague approach, suggesting that relying on future regulation often comes far too late after an issue arises and that the watchdogs come from the same government office tasked with building up Canada’s tech sector. 

“Policymakers know AI is important, but it seems clear that they don’t really understand the technologies or what users actually need and want from AI providers in order to protect their fundamental rights and freedoms,” Boucher said. “And, as a result, they have no clue how to effectively regulate it.”

However, the push for AIDA has been strong. Deep-learning pioneer Yoshua Bengio and many other industry leaders have signed an open letter calling on Canadian MPs to pass AIDA as quickly as possible. According to an article in The Globe and Mail, Bengio said that future developments in artificial intelligence will quickly change the way Canada, and the world, function.

“Having a law that leaves some responsibility to the government to react to problems as they go protects us and is going to be better for our businesses,” Bengio told the Globe. Anything too rigid, he explained, will only place burdens upon the industry.

For Boucher, who has long been a vocal critic of the tech industry’s handling of AI, this is one of the key issues.

“Chaining the creation of laws to people who are popularity contest winners by trade is a terrible mix,” he says. This attitude is highlighted by his artistic works, which often explore themes of AI ethics, data privacy, and corporate control. Boucher argues that government regulation of AI falls short in several key areas, particularly in its approach to disinformation and public control.

“The Act,” Boucher says via PRUnderground, “while well-intentioned, fails to adequately address the issue of disinformation. AI algorithms, often controlled by tech giants, have the power to shape public opinion and even influence elections. Yet, the Act does little to curb this power.” Boucher’s critique highlights a growing concern among AI experts: the potential for AI to be used as a tool for spreading disinformation.

Boucher also takes issue with the Act’s approach to public control of AI. He argues that the Act, in its current form, allows too much power to remain in the hands of big corporations. “AI should be a public good, not a corporate asset,” Boucher explains. “The Act needs to do more to ensure that the benefits of AI are shared equitably, and that the public has a say in how AI is used and regulated.”

Boucher’s proposed “AI Bill of Rights” offers an alternative vision for AI regulation. It calls for greater public control of artificial intelligence, stricter regulations on disinformation, and a more democratic approach to AI policy. “AI is too important to be left to the whims of corporations,” Boucher explains. “We need a democratic AI policy that puts the public interest first.”

Boucher’s Bill of Rights can be found on his website, but some key takeaways include: 

  1. System Specifications: Users want AI systems to clearly identify themselves as AI systems and provide information about the AI models or technology being used. They also expect transparency about the system’s limitations and how it works.
  2. Public AI Options: Users want the ability to reject invasive or unnecessary AI systems and have the option to opt out of specific AI functionalities based on their preferences. They also want to be assured that their ethical principles will not be contravened by government or corporate mandates.
  3. Sustainability: Users want AI systems to be environmentally sustainable and not contribute excessively to carbon emissions. They also expect AI technologies to be developed and used in a manner that respects and protects the natural environment without exacerbating climate change.
  4. Impartiality & Neutrality: Users want AI systems to be unbiased and impartial in their responses, ensuring that their views and decisions are not unfairly swayed. They also expect AI systems to promote fairness, avoid reinforcing harmful biases, and provide clear explanations of their decision processes.
  5. Accountability & Transparency: Users want AI systems to use reliable data sources and correct inaccuracies swiftly when identified. They also expect transparency about the verification processes and involvement of third-party fact-checkers. Users want to be able to report instances of discrimination or bias and have regular independent audits to ensure fairness.

The document also mentions the need for AI providers to engage in governance activities, participate in citizens’ assemblies, and contribute to the setting of agendas for addressing concerns about machine intelligence. It highlights the importance of diverse and representative assemblies and the benefit of transparency in follow-up actions by developers and regulators. The document also suggests that Canada could draw inspiration from the EU’s AI Act as a notable example of comprehensive legislation.

“My objective here is not to propose industry-friendly solutions that will be easy for AI companies to adopt. Quite the contrary. I want to push them to offer the highest level of protections possible to human autonomy and creativity,” Boucher said. “I believe that the best protection of human rights will allow their expression to flourish, and if we’re brave enough and imaginative enough, it just might lead to a new renaissance. If we’re not, well, dystopia is the likely outcome.”

(Image: Tim Boucher)

Boucher has sent copies of his proposal to multiple political groups in Canada, including the Prime Minister’s office, all the major political parties, the office of the Minister of Innovation, Science and Economic Development, Policy Horizons Canada, and multiple artificial intelligence ethics labs, academic institutions, and nonprofits.

“Canada is my home, but it’s just a microcosm,” Boucher explained. “The same issues are playing out globally, and what I’m proposing could be debated and potentially adopted in any context. If we don’t take prompt aggressive action, we will be surrendering a great deal of power over our future to private for-profit companies who have no accountability or oversight, as we transition into one of the greatest changes humanity has ever faced.”

Boucher’s critique of the Canadian AI and Data Act raises important questions about the nature of AI policy. Is it enough to regulate artificial intelligence, or do we need to democratize it? And how can we ensure that AI is used for the public good rather than for corporate gain?

These questions are particularly relevant in the context of disinformation. As machine intelligence becomes more sophisticated, so too does its potential for spreading disinformation. This raises serious concerns about the impact of AI on democracy. If AI algorithms, controlled by tech giants, can shape public opinion, what does this mean for the democratic process?

The Canadian AI and Data Act, while a significant step forward, may not go far enough in addressing these concerns. As Boucher argues, we need a more democratic approach to AI policy that prioritizes public control and curbs the power of big corporations.

There is little doubt that AIDA represents a crucial effort to regulate AI and data use. However, as Boucher’s critique suggests, there is still much work to be done. We need to ensure that our AI policies are not only effective but also democratic. We need to tackle the issue of disinformation head-on, and we need to ensure that machine learning is a tool for the public good, not a corporate asset. As we navigate the complex landscape of AI, these are the challenges we must face.

“Ideally, I’d like to see a national – and international – conversation develop around these much more specific and, in some cases, much more extreme proposals that I am putting forward,” Boucher says. “I’d like to expose that the present ‘official’ line of thinking on these issues is simply not enough and won’t get us where we deserve to be as Canadians, and as simply humans.”

MJ Banias is a journalist and podcaster who covers security and technology. He is the host of The Debrief Weekly Report and Cloak & Dagger | An OSINT Podcast. Follow him on Twitter @mjbanias.