“Canada's Federal AI Register, launched in November 2025, functions as an ontological design instrument that shapes what counts as accountable AI use in government. An analysis of its 409 listed systems reveals that such registers are not neutral documentation but active tools that configure the boundaries of accountability through what they include and what they deliberately omit.”
Key Takeaways
- AI registers are active instruments of ontological design, not neutral records of government activity.
- An analysis of the 409 systems in Canada's Federal AI Register reveals strategically drawn boundaries of accountability.
- Register design choices determine what gets documented, reported, and held accountable in government AI use.
Canada's AI Register reveals how transparency tools can obscure accountability through strategic omissions.
Why It Matters
As governments worldwide adopt AI registers in the name of transparency, understanding how these tools shape accountability is critical. The findings show that procedural transparency mechanisms can obscure rather than illuminate government AI practices. This has implications for policymakers, civil society, and AI practitioners who rely on such registers to understand and govern AI deployment in the public sector.


