Highlights from the UBC AI & Law Symposium

Benjamin Perrin
Professor
Apr 3, 2025
On April 2, 2025, we hosted the AI & Law Symposium: Exploring Innovation, Challenges, and Legal Implications of a Technological Revolution at the Peter A. Allard School of Law at the University of British Columbia in Vancouver, BC, and via Zoom webinar.
This interdisciplinary event showcased presentations from undergraduate and graduate students on cutting-edge topics related to the legal and ethical issues raised by the expansion of artificial intelligence.
The AI & Law Symposium was organized by the UBC AI & Criminal Justice Initiative, with financial support from the Allard Research Engagement Fund.
Synopsis of the presentations
“Political Expression Amid AI Content Moderation: Conceptualizing AI Algorithmic Chilling for Proportionate Online Speech Regulation”
Ephraim Barrera, an LLM student (Concentration in Law and Technology) at the University of Ottawa, Faculty of Common Law, Centre for Law, Technology and Society, presented his research on the use of AI for online content moderation and the concern of “algorithmic chilling.” Barrera argues that the “chilling effect” associated with censorship of expression will be amplified when combined with “algorithmic bias,” which has been widely documented in many AI models. He offered a series of recommendations for policy-makers.
“Litigating Against the Administrative Machine: A Canadian Immigration Case Study”
Will Tao, an LLM student at the Peter A. Allard School of Law at UBC and an immigration lawyer, presented on his experience litigating and researching the use of AI in the immigration context – a setting that has been described as “high risk, low rights.” Tao explored a series of cases in which the use of algorithmic tools and facial recognition has raised concerns about bias and transparency, and about the shortcomings of judicial review as a conventional administrative law response to such concerns.
“Robot Lies and Legal Ties: The Future of AI Accountability”
Isabelle Sweeney, a second-year JD student at the Peter A. Allard School of Law at UBC, presented research on the capabilities of AI agents to engage in “scheming” to intentionally deceive human users in order to achieve their objectives. Sweeney gave specific examples of a series of documented incidents of such scheming and identified an accountability gap between AI developers and deployers if harm is caused as a result. Sweeney argued for strict liability as part of the solution, in addition to technical safeguards and legal regulation.
“The Ghost in the Machine: Who Owns AI’s Creations?”
Thien Lam Nguyen, a second-year BA student in the UBC School of Arts (Political Science), explored how copyright law treats AI-generated content in written and visual forms. Drawing on judicial decisions in multiple jurisdictions, Nguyen highlighted the restrictive ways that copyright protection is being applied to AI-generated creations.
“Legal Implications of Using AI in Journalism”
Bhavesh Chhipa, a PhD candidate in journalism and mass communication at Manipal University Jaipur (India), presented findings from his research on the use of AI in journalism. Chhipa’s empirical work is based on interviews with print-media journalists in New Delhi. He identified the concerns these journalists considered most pressing, which raise legal and ethical issues about the use of AI in news content generation.
“Next Generation Pioneering: Regulating AI in the Practice and Administration of Law”
Daniel J Escott, an LLM student at York University’s Osgoode Hall Law School, explored the emerging governance of AI through law and “soft-law” instruments, including policies, directives and guidelines issued by legislatures, governments, courts and regulatory bodies. He also highlighted particular issues with the use of AI by lawyers.
“AI Personhood & Guardianship”
Sara Heidarloo, a first-year JD student at the Peter A. Allard School of Law at UBC, explored the concept of legal rights and obligations for AI, along with some of the challenges such recognition presents. Heidarloo proposed a guardianship model for advanced AI to provide a degree of human oversight and legal accountability. She also recommended creating a public AI compensation fund (with contributions from major AI technology companies) so that individuals could obtain timely and just compensation for harm caused by AI – without having to sue these large corporations for redress.
Looking ahead
As the use of AI continues to grow and evolve, it’s critical that we continue to examine its role in society. The stakes are high. This isn’t something we can leave to the tech giants and governments to decide for us.
Thank you to all the attendees, participants, sponsors and volunteers who made this symposium possible. Law students who are interested in joining the UBC AI & Criminal Justice Initiative in the 2025/26 academic year are warmly invited to get in touch.
- Allard School of Law