I study Artificial Intelligence and Law at Northwestern University, where I am a student in the Qualitative Reasoning Group and a member of Northwestern's joint JD/PhD program. I studied social and cognitive psychology as an undergraduate at the University of Chicago, and switched to AI because I wanted to spend more time building minds than figuring out how ours work (although I still love learning about and - where I can - helping design psychology research). I got interested in law once I started thinking about AI out in the real world: AI systems may bring benefits, but they also carry risks, some of which will emerge at a society-wide level and which the law should (and will have to) address. I've completed my legal studies, and now I'm dissertating.
I'm broadly interested in any AI research that produces something that looks like reasoning, cognition, or experience. My own work uses symbolic reasoning, analogy, and qualitative representations, and my thesis research aims to develop an AI model of common-law precedential reasoning. My Cognitive Science interests generally concern moral and ethical reasoning, and before I joined the joint program, I was working on creating AI systems that not only reason morally and ethically, but that humans recognize as moral and ethical. My legal scholarship focuses on the impact of AI on the law and society, how AI is currently regulated, and how it can, should, and will be regulated in the future. I also love working on and interacting with virtual characters, although I haven't gotten a chance to do much in that area recently.
I believe that true moral reasoning may prove to be an AI-complete process, but that ethical reasoning undergirded by the codes and laws of society is achievable with current technology. I also want to help educate AI researchers about how the law actually works, and legal scholars about how AI actually works, since I think there are frequently misconceptions on both sides. I do not know what my scholarly balance of legal and AI research will be a few years down the line, but I hope to be able to tie these interests together: to develop ethical, legally-bound AI systems, while working to adapt the law to the unique problems that AI systems will present.
Oof, all of that is so serious. So let me say that my loved ones would agree that my greatest passion outside of my work and my family is eating high quality cheese, and that my cats are Herschel and Mika, and each is perfect in their own way. I also enjoy art - particularly narratively-driven art - that makes me think and reflect in new and interesting ways (which is a fancy way of saying "Ask me about my favorite books, movies, TV shows, and video games").
My AI research is focused on getting computers to think like humans and in ways that humans can recognize, understand, and accept. I am particularly interested in aspects of social reasoning and behavior, specifically reasoning and behavior in accordance with codes of conduct, such as moral codes and the law. My research examines how we can teach computers to reason about the law or a moral constraint, ensure that they make decisions morally and within the law, and enable them to explain those decisions to us. I have also worked on commonsense reasoning techniques.
I am a member of Northwestern's joint JD/PhD program, and have found law school and legal scholarship to be tremendously fun and interesting. My legal research has focused on AI's impact on the law and society, and how legal doctrines designed to regulate human (or sometimes corporate) behavior will handle AI behavior. I've written about the consequences for civil rights law of using machine learning systems that are largely uninspectable to make decisions in areas of law that traditionally rely on showings of intent or explanations for behavior, and on the implications for the justice system of automating parts of the judicial process. I'm working now on a piece about the legal risks presented by AI systems that are personalizable by their end users.
In general, I am interested in:
- Systems that exhibit what we recognize, in our infinite fallibility, as intelligence
- How to legally define and regulate AI responsibilities, permissions, obligations, and restrictions
- How to adapt legal schemes developed for humans and corporations to AI systems that act like neither
- Computational models of human cognition
- Knowledge representation and reasoning, especially qualitative reasoning and analogical reasoning, and
- How to make AIs that everyday people can teach and whose decisions they will trust
My thesis has changed based on my participation in Northwestern's JD/PhD program. Before I joined the program, my original plan was to focus on using analogical reasoning to learn about, understand, and make decisions within complex social situations, specifically situations where agents must consider the moral ramifications of their actions. Instead, I am doing something similar involving legal reasoning about case law: using analogical generalization and reasoning to synthesize, understand, and apply legal principles encoded in series of legal precedents. My advisor is Ken Forbus.
I was an organizer and co-chair of the Computational Analogy Workshop at ICCBR-16, and an organizer of the workshop the following year.
I was a winner of the ACM SIGAI Student Essay Contest on the Responsible Use of AI Technology.
Blass, J.A. (2019). Algorithmic Advertising Discrimination. Northwestern University Law Review, 114(2).
Blass, J.A. (Forthcoming 2022). Observing the Effects of Automating the Judicial System with Behavioral Equivalence. South Carolina Law Review, 74(4).
Blass, J.A., Forbus, K.D. (2017). Analogical Chaining with Natural Language Instruction for Commonsense Reasoning. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence.
Blass, J.A., Forbus, K.D. (2016). Modeling Commonsense Reasoning via Analogical Chaining: A Preliminary Report. Proceedings of the Thirty-Eighth Annual Meeting of the Cognitive Science Society.
Blass, J.A., Forbus, K.D. (2015). Moral Decision-Making by Analogy: Generalizations vs. Exemplars. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence.
Ma, D.S., Blass, J.A., Tipping, M., Correll, J., & Wittenbrink, B. (2009). Racial Bias in Shot Lethality: Moving Beyond Reaction Time and Accuracy. American Psychological Association, Toronto, Canada.
Blass, J.A., Rabkina, I., Forbus, K.D. (2017). Towards a Domain-independent Method for Evaluating and Scoring Analogical Inferences. Computational Analogy Workshop at the 25th International Conference on Case-Based Reasoning.
Blass, J.A., Forbus, K.D. (2016). Natural Language Instruction for Analogical Reasoning: An Initial Report. Computational Analogy Workshop at the 24th International Conference on Case-Based Reasoning.
Blass, J.A., Horswill, I.D. (2015). Implementing Injunctive Social Norms Using Defeasible Reasoning. Workshop on Intelligent Narrative Technologies and Social Believability in Games at the 11th Conference on Artificial Intelligence and Interactive Digital Entertainment.
Blass, J.A. (2018). Legal, Ethical, Customizable Artificial Intelligence. Student Program, Artificial Intelligence, Ethics, and Society Conference, New Orleans, Louisiana, USA.
Spelke, E., Blass, J.A. (2017). Intelligent Machines and Human Minds. Behavioral and Brain Sciences, 40, E277.
Blass, J.A., Fitzgerald, T. (2017). The Computational Analogy Workshop at ICCBR-16. AI Magazine, Winter 2017: 91.
Blass, J.A. (2016). Interactive Learning and Analogical Chaining for Moral and Commonsense Reasoning. Doctoral Consortium, Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, Arizona, USA.
Blass, J.A. (2015). Interactively Learning Moral Norms By Analogy. Doctoral Consortium, Twenty-Third International Conference on Case-Based Reasoning, Frankfurt am Main, Germany.
Blass, J.A. (2015). Interactively Learning Moral Norms By Analogy. Students of Cognitive Science Workshop at the Third Conference on Advances in Cognitive Systems, Atlanta, Georgia, USA.