Annette Demers, Law Reference Librarian at UWindsor. (JOEL GUERIN/University of Windsor)
By Sara Meikle
Artificial intelligence is becoming part of everyday life—in our workplaces, our classrooms and even our pockets.
But as these tools evolve at a rapid pace, they raise critical questions. How do we know what’s accurate? And who is accountable when the technology gets it wrong?
For Annette Demers, a veteran law librarian and University of Windsor instructor, those questions were the starting point for something bigger.
“We’re at a tipping point in the profession,” she said. “AI will change the legal profession, so we need the right checks and balances to ensure it changes for the better.”
That mindset helped shape a national initiative: an open-access guide to help legal professionals make informed, ethical decisions when using AI in legal research and writing.
Led by Demers and developed through the Canadian Association of Law Libraries (CALL), the project was approved in 2023 when the association’s executive board struck a dedicated AI Standards Working Group. Over two years, the group brainstormed, refined and tested ideas to produce a practical tool that sets clear standards for evaluating AI systems.
The result is a distinctly Canadian guide designed to promote responsible AI use across the legal profession.
“These tools are being built to replace much of the research a legal professional would do,” Demers said. “But we can’t lose sight of the skills and judgment that come with human expertise.”
The guide also aligns with ethical and competency standards set by Canadian law societies. Verification is essential, Demers said, noting recent national and international cases where lawyers faced sanctions for submitting hallucinated, or fabricated, case citations to the courts. In one instance, a court even relied on such a fabricated case before a court of appeal caught the error.
“These examples are cautionary tales,” she said. “They show how high the stakes can be.”
The guide emphasizes that AI-generated outputs must always be verified and that not all tools draw on the same sources.
“You don’t know what you don’t know,” said Demers. “If we’re not careful, we could end up building a hallucinated legal system.”
Each section allows users to evaluate tools based on key criteria, including data transparency, privacy, bias and output reliability. A scoring table and companion spreadsheet help track and compare results.
The goal, Demers said, is to “create savvy users who understand both the benefits and boundaries of AI. Setting standards for responsible AI development is critical.”
Believed to be the first Canadian assessment guide of its kind, the resource is openly available under a Creative Commons license. It’s intended not only for legal practitioners, but also for educators, policymakers and anyone helping shape the future of law and technology.
For Demers, the project is as much about responsibility as it is about innovation.
“This isn’t about resisting technology,” she said. “It’s about making sure AI advances our work rather than undermining it.”