
I study Artificial Intelligence and Law at Northwestern University and I am a member of the Qualitative Reasoning Group. I studied social and cognitive psychology for my undergraduate degree at the University of Chicago. I switched to AI from Psychology because I wanted to spend more time building minds, rather than figuring out how ours work (although I still think that's super interesting). I'm broadly interested in any AI research that produces something that looks like reasoning, cognition, or experience (my own work uses symbolic reasoning, analogy, and qualitative representations). I am also interested in how AI can be integrated into our everyday lives, and how people will interact with and respond to those systems. My main interest in human psychology is in moral reasoning, and I want to work on AI systems that not only reason morally and ethically, but that humans recognize as moral and ethical. I also love working on and interacting with virtual characters, although I haven't gotten a chance to do much research in that area recently.
Several years ago I became concerned by the fact that while AI is increasingly making important decisions for and about humans, AI is regulated (if at all) largely by laws that were developed for humans and corporations, not autonomous computer systems. I had the privilege of joining Northwestern's joint JD/PhD program, and recently finished my full-time law studies. My legal scholarship focuses on the impact of AI on the law and society, how AI is currently regulated, and how it can, should, and will be regulated in the future. I'm interested in using that knowledge to develop ethical, legally-bound AI systems. My thesis research is specifically built around developing an AI model of common-law precedential reasoning. I believe that true moral reasoning may prove to be an AI-complete process, but that ethical reasoning undergirded by the codes and laws of society is achievable with current technology. I also want to help educate AI researchers about how the law actually works, and legal scholars about how AI actually works, since I think there are frequently misconceptions on both sides. I do not know what my scholarly balance of legal and AI research will be a few years down the line, but am looking forward to finding out.
Oof, all of that is so serious. So let me say that my loved ones would agree that my greatest passion outside of my work and my family is eating high quality cheese, and that my cats are Herschel and Mika, and each is perfect in their own way. I also enjoy art - particularly narratively-driven art - that makes me think and reflect in new and interesting ways.
My AI research is focused on getting computers to think like humans and in ways that humans can recognize, understand, and accept. I am particularly interested in aspects of social reasoning and behavior, especially moral and legal reasoning: how we can teach computers to reason morally or about the law, make sure they make decisions morally and within the law, and make them able to explain those decisions to us. I have also worked on commonsense reasoning techniques.
I am also a member of Northwestern's joint JD/PhD program, and have found law school and legal scholarship to be tremendously fun and interesting. My legal research has focused on AI's impact on the law and society, specifically the consequences for civil rights law of using machine learning systems that are largely uninspectable to make decisions in areas of law that traditionally rely on showings of intent, or at least have required decision-makers to be able to explain themselves.
In general, I am interested in:
- Systems that exhibit what we recognize, in our infinite fallibility, as intelligence
- How to legally define and regulate AI responsibilities, permissions, obligations, and restrictions
- How to adapt legal schemes developed for humans and corporations to AI systems that act like neither
- Computational models of human cognition
- Knowledge representation and reasoning, especially qualitative reasoning and analogical reasoning, and
- How to make AIs that everyday people can teach and whose decisions they will trust
My thesis has changed based on my participation in Northwestern's JD/PhD program. Before I joined the program, my original plan was to focus on using analogical reasoning to learn about, understand, and make decisions within complex social situations, specifically situations where agents must consider the moral ramifications of their actions. Instead, I will do something similar involving legal reasoning about case law: using analogical generalization and reasoning to synthesize, understand, and apply legal principles encoded in series of legal precedents.
My advisor is Ken Forbus and my research is supported by the Office of Naval Research.
I was an organizer and co-chair of the Computational Analogy Workshop at ICCBR-16, and an organizer of the workshop the following year.
I was a winner of the ACM SIGAI Student Essay Contest on the Responsible Use of AI Technology. You can read the essay here.
Blass, J.A., (2019). Algorithmic Advertising Discrimination. Northwestern University Law Review, 114(2).
Blass, J.A., Forbus, K.D. (2017). Analogical Chaining with Natural Language Instruction for Commonsense Reasoning. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence.
Blass, J.A., Forbus, K.D. (2016). Modeling Commonsense Reasoning via Analogical Chaining: A Preliminary Report. Proceedings of the Thirty-Eighth Annual Meeting of the Cognitive Science Society.
Blass, J.A., Forbus, K.D. (2015). Moral Decision-Making by Analogy: Generalizations vs. Exemplars. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence.
Ma, D.S., Blass, J.A., Tipping, M., Correll, J., & Wittenbrink, B. (2009). Racial Bias in Shot Lethality: Moving Beyond Reaction Time and Accuracy. American Psychological Association, Toronto, Canada.
Blass, J.A., Rabkina, I., and Forbus, K. D. (2017). Towards a Domain-independent Method for Evaluating and Scoring Analogical Inferences. Computational Analogy Workshop at the 25th International Conference on Case-Based Reasoning.
Blass, J.A., and Forbus, K. D. (2016). Natural Language Instruction for Analogical Reasoning: An Initial Report. Computational Analogy Workshop at the 24th International Conference on Case-Based Reasoning.
Blass, J.A., and Horswill, I.D. (2015). Implementing Injunctive Social Norms Using Defeasible Reasoning. Workshop on Intelligent Narrative Technologies and Social Believability in Games at the 11th Conference on Artificial Intelligence and Interactive Digital Entertainment.
Blass, J.A. (2018). Legal, Ethical, Customizable Artificial Intelligence. Student Program, Artificial Intelligence, Ethics, and Society Conference, New Orleans, Louisiana, USA.
Spelke, E. and Blass, J.A. (2017). Intelligent Machines and Human Minds. Behavioral and Brain Sciences, 40, E277.
Blass, J.A. and Fitzgerald, T. (2017). The Computational Analogy Workshop at ICCBR-16. AI Magazine, Winter (2017): 91.
Blass, J.A. (2016). Interactive Learning and Analogical Chaining for Moral and Commonsense Reasoning. Doctoral Consortium, Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, Arizona, USA.
Blass, J.A. (2015). Interactively Learning Moral Norms By Analogy. Doctoral Consortium, Twenty-Third International Conference on Case-Based Reasoning, Frankfurt am Main, Germany.
Blass, J.A. (2015). Interactively Learning Moral Norms By Analogy. Students of Cognitive Science Workshop at the Third Conference on Advances in Cognitive Systems, Atlanta, Georgia, USA.
joeblass <at> u <dot> northwestern <dot> edu
I am also just barely on LinkedIn, which is pretty much the only online social network I participate in anymore, but if you're trying to get in touch, use email.