ISED’s Bill C-27 + AIDA. Part 9: My Remarks to INDU Committee re: AIDA — December 7, 2023

[Video of the remarks and the Q&A session can be found here]

Thank you for the opportunity to speak with you today about AIDA. As far as amendments go, my suggestion would be to strike AIDA wholesale from Bill C-27.

Let’s not minimize either the feasibility of this amendment or the strong case for it. I am here to hold this committee accountable for the false sense that something is better than nothing on this file. It’s not, and you’re the ones standing between the Canadian public and the further legitimization of this undertaking, which is making a mockery of democracy and legislative procedure.

AIDA is a complexity ratchet. A nonsensical construct detached from reality — building increasingly intricate castles of legislation in the sky. It’s thinking on AI that is detached from operations, from deployment, from context. ISED’s work on AIDA highlights how open to hijacking our democratic norms are when you wave around the shiny orb of innovation and technology.

As Dr. Lucy Suchman writes: “AI works through a strategic vagueness that serves the interest of its promoters, as those that are uncertain about it (popular media commentators, policy makers, publics) are left to assume that others know what it is.”

I hope you might refuse to continue a charade that has had spectacular carriage through the House of Commons on the back of this social and psychological phenomenon of “assuming someone else knows what is going on.” Will this Committee continue to support a Minister who is basically legislating on the fly? How are we writing laws like this? Where is quality control at the Department of Justice? Or do we simply do it on the fly when it’s tech, as though this were some kind of thoughtful adaptive approach to law?

No. The process of AIDA reflects the very meaning of law becoming nothing more than a political prop.

The case to pause on AIDA and re-route it into a new and separate process begins at its beginning. If we want to regulate artificial intelligence — if — we have to have a coherent why.

We never received a coherent why for AIDA from this government.

Have you, as members of this committee, received an adequate back-story, procedurally, on AIDA? Who created the urgency? How was it drafted? From what perspective? What work was done inside government to think about this issue across existing government mandates?

If we were to take this bill out to the general public for thoughtful discussion — a process ISED actively avoided doing — it would fall apart under the scrutiny.

Use of AI in a medical setting versus use on a manufacturing production floor versus use in an educational setting versus use in a restaurant versus use to plan bus routes versus use to identify water pollution versus use in a daycare. I could do this all day. Each of these calls for a real conversation about potential harms and benefits.

Instead of having those conversations, we’re carrying some kind of delusion that we can control and categorize how something as generic as advanced computational statistics will be used in reality, in operations, in deployed cases, in context. The people who can help us with those conversations are not, and never have been, in these rooms.

AIDA was created by a highly insular, extremely small, circle of people. Tiny. When there is no high-order friction in a policy conversation, we’re talking to ourselves.

Taking public engagement on AI seriously would force rigor. By getting away with this emergency and urgency narrative, ISED is diverting all of us from the grounded, contextual thinking that has also been missing from privacy and data protection thought. That omission, repeated in AIDA, continues to deepen and solidify power asymmetries. We’re making the same mistake for a third time.

This is a “keep things exactly the same, only faster” bill. If this bill were law tomorrow, nothing substantial would happen. Which is exactly the point. It is an abstract piece of theatre — disconnected both from Canada’s geopolitical and economic location and from the irrational exuberance of the venture capital and investment community. This law is riding on the back of investor enthusiasm for an industry that has not even proven out its business model. On top of that, it’s an industry that is highly dependent on the private infrastructures of a handful of US companies.

There is nothing inherently wrong with supporting the industry, of course. But it is stunningly disingenuous to use fear, safety, harm reduction, human rights protection, and more to move this piece of abstraction — which won’t do much to help with those things — through our legislature. If the AI ecosystem we want is one that spawns new cottage industries of auditors, quality assurance firms, legal practices, and more, then sure. But that’s not the promise of AI I keep hearing enthusiasm about.

Democracy is in serious trouble around the world. The freedom to think, to disagree, and to work together toward the best outcome is something we’ve got. Can we use it to a more productive end, to think about putting more public power and control into our approach to technology governance? Absolutely.

Let’s make the space, time, and calm to do this properly. With inclusivity and humility.

Thank you.