What We Squander on this Trajectory
The more complexity and sorting and categories and references that get added to Canada’s approach to AI regulation, and the more frenzy there is to keep pace with the EU, the more the undertaking takes on the shape of something that appears legitimate, if not downright sensible and necessary.
As with most things technical, and as with theft, sometimes doing it blatantly and out in the open is the surest way through. The theft in question with AIDA is soft, latent, and unformed. It’s the theft of adequate time to consider if and how we might automate a significant range of decision-making functions in our lives, in our workplaces, in our society.
It’s the theft of time needed to assert that harm, like alignment, like violence, like rights, means many things to many people. AIDA is a theft of time that removes our ability to trace what this government deems “safe” as AI back to its cultural consequences: how it will impact our cultures, our livelihoods, the shapes of our days. The theft of time to make connections between the assertion of safe automation and the nature of our relationships with each other.
This theft is happening alongside so many other troubling things. Good intentions. I’ve seen enough of the starry-eyed innovation and the well-meaning government representatives and staff to know this theft lands squarely in the place where the current government thrives: a paternalistic and arrogant place where awe and wonder about science and progress, in commercial form, are treated as laudable in isolation from a range of ongoing, unaddressed consequences.
The Ruins of Public Authority
The methods by which any country or regulator could meaningfully intervene in the AI economy in the publics’ interest go against their market instincts. Canada has not intervened, and will not, on AI. No matter how many times Canada asserts its leadership in AI, the claim is illegitimate for its lack of public involvement. It’s homo oeconomicus or bust for our culture, as Wendy Brown has written. There is nothing to argue for if your argument cannot be heard above the primacy of economics. Balance is the way with issues of democracy and innovation, the government says, but money is always heavier.
The good news: it must be remembered (as I chant this to myself) that intervening in AIDA invites one to get caught up in law-brain. Law is not how a lot of life happens, for better or for worse, nor the most opportune place to impact how culture shifts. It’s important, to be sure, but it’s also but one small piece of the governance puzzle.
We the publics: our authority and power are being ceded here to the AI industry, by the government’s design. We know this, it’s not new, and thus we also work and intervene and organize and educate elsewhere. But this does not mean a government should feel confident to participate in public affairs in the manner it has done with AIDA.
The government tabled a shell of a bill. Then it stepped back so those with harm reduction skills could scramble and try to do the government’s homework, in a captured frame, on an indefensible timeline. The government then returned with a response and named it public consultation. It was not, and is not.
A Review of the Advisory Council on AI
During the December 7th, 2023 INDU Standing Committee meeting, in response to questions about whether the Committee had received a coherent backstory about the origins of AIDA, it was shared that recommendations from the Minister’s AI Committee (understood to mean the Advisory Council on AI) were a significant piece of the history. A Facebook whistle-blower was also referenced. These are definitively not the only two pieces of context, but it was helpful to hear.
While the Advisory Council on AI has a clear mandate that aligns with providing advice on regulation (more below), and has a highly-skilled set of members, neither its mandate nor its intent is to be representative of the broader publics in Canada on the subject of AI writ large, particularly on the matter of its adoption. It does not speak for the Canadian publics on this matter, and if I had to guess, would not profess to.
It was also not surprising that this Council was identified as a significant part of the origin story of AIDA. The Co-chair of this Council is Yoshua Bengio. In May of 2023, in the Globe and Mail: “AI pioneer Yoshua Bengio says regulation in Canada is too slow, warns of ‘existential’ threats”. More on the connections between this Council, the government, and its investment strategy in this post.
The Advisory Council on AI, stood up in 2019:
“…provides strategic advice to the Minister of Innovation, Science and Industry and to the Government of Canada to ensure Canada’s global leadership in AI policy, governance and adoption (emphasis mine) while supporting the growth of a robust AI ecosystem, based on Canadian values.”
Terms of Reference
The Council has one listed objective (though it’s multi-part):
“To create more jobs for Canadians; to further Canada’s position as a global leader in artificial intelligence (AI) development and research; to better support entrepreneurs and scale ups; to ensure Canadians have the education and skills they need to succeed in a changing economy.”
The Council has a two-part mandate:
“The mandate of the Government of Canada Advisory Council on Artificial Intelligence (the Advisory Council) will be to build on Canada’s strengths in AI, to identify new opportunities in the AI sector and to make recommendations to the Minister of Innovation, Science and Economic Development and the Government of Canada more broadly including but not limited to:
- How to ensure Canadians benefit from the growth of the AI sector.
- How to harness AI to create more jobs for Canadians, to attract and retain world-leading AI talent, to ensure more Canadians have the skills and training they need for jobs in the AI sector; and to use Canada’s leadership in AI research and development to create economic growth that benefits all Canadians.”
The Council has a lengthy and wide-ranging program of work that definitely includes guidance on policy and governance. That is to say, again, its input on AIDA is in line with what it was asked to do. Also as stated: “Acknowledging that AI is evolving, the [Council’s] Program of Work will be reviewed annually and updated in light of new challenges and opportunities.”
The Council does not have an annual report for 2021–2022, which seems a funny absence for what would have been a critical year of its work. There are meeting summaries during that window. There is more to review and look at. I don’t know if and how its program of work did or didn’t shift over time.
The consequence of the origins and process of AIDA is a bill mostly (though not entirely) shaped by interests that can define as “safe” those things unlikely to hurt or irritate them. Or perhaps things that they like, or that they can avoid using. Or or or. The trick here is not only to focus on what is called harmful or high-risk in AIDA. It’s the inverse that also needs attention. What becomes safe — normal, if you will — as automation under AIDA?
The government had the time and money to think about this topic properly with the broader publics before tabling AIDA. The thing lacking was interest.
There is another whole thing to be said about how the different lenses people are bringing to this bill (human rights, consumer protection, etc.) are each necessary but also plausibly out of step with where our public conversations on this matter have gone so far. There is a sequencing problem. This is why slowing down would be so helpful.
There’s no good end to this. But it was necessary to remind myself, and any of you reading this, firstly, not to fall too far into law-brain, and secondly, of Ursula Franklin’s work on the impact of technology on our culture, ourselves, and our relationship to democracy. This law will not resolve much of what we have yet to decide, and the ongoing conversations we can and must have about what to do in future will be many.