As technology and global systems continue to advance at a rapid rate, so must the legislative and regulatory bodies that govern them. This has been a challenge for the legal profession, and one that second-year law student Sancho McCann is interested in tackling.
Sancho, who graduated from UBC with a PhD in Computer Science, recently initiated a reading-group series that brings together students and practitioners from many fields, including law, computer science, business, and engineering.
We recently chatted with Sancho about the reading group series and what he hopes will come out of it.
Tell us about your most recent session and what was discussed.
Our first session covered examples of algorithmic decision-making currently in use or proposed by government, non-arbitrariness and rule-of-law constraints on government decision-makers, how machine learning can encode bias from previous decisions, and the Treasury Board of Canada’s directive on automated decision-making. Our next session will focus on a paper about proxy discrimination. I’m hoping that participants will pitch topics or articles they’re interested in to keep the series going.
Why should we be concerned about the government’s use of algorithmic decision-making?
The areas where the government is most actively seeking to deploy algorithmic decision-making involve high-volume, high-stakes decisions. One example is immigration; another is the VPD’s use of predictive policing. But there are people in both tech and law who recognize the risks of displacing human decision-making in these areas. Google provides a What-If Tool that lets you explore your algorithm’s behaviour in order to identify unintended or inappropriate distinctions. And the Treasury Board of Canada’s directive on automated decision-making constrains how algorithms can be used for decision-making in the Canadian government.
What are you looking to do with the information discussed and knowledge gained at these roundtables?
One goal is to better inform lawyers and legal researchers about the risks, limitations, and emerging solutions available from the technology side. For example, if algorithms can encode discrimination, they can also reveal discrimination. Equally, I would like these sessions to better inform the technology side about the design goals researchers should keep in mind: I’d like explainability and transparency to become priorities in algorithm design. I would love it if this roundtable resulted in research collaborations. The government needs help setting good policy, and a group like this could be the seed of a helpful voice in this area.
Why do law students need to know about this emerging area of the law?
It is almost inevitable that legal practice and policy development will require lawyers to have a greater understanding of technology and automated decision-making in a variety of contexts. This reading group is a way to get exposure to this intersection of law and technology. It’s also a great chance to learn to communicate legal concepts to non-lawyers and to learn from experts in other fields. You might even come away with an idea for a seminar paper or directed research paper.
If you’re interested in learning more about the reading-group series or would like to be a part of it, join the UBC-hosted mailing list by emailing firstname.lastname@example.org with the words “subscribe UBC-ALGORITHMS-RULE-OF-LAW” in the body.