I got my J.D. and Ph.D. at Northwestern University, as a student in the Qualitative Reasoning Group and in Northwestern's joint JD/PhD program. Before that, I studied social and cognitive psychology as an undergrad at the University of Chicago, but I switched to AI because I wanted to spend more time building minds than figuring out how ours work (although I still love learning about and - where I can - helping design psychology research). I got interested in law once I started thinking about AI out in the real world: AI implementations may bring benefits, but they also carry risks, some of which will manifest at a society-wide scale and which the law should (and will have to) address. Now that I've graduated, I am clerking during the 2023-24 term for The Honorable Joshua Deahl, Associate Judge of the D.C. Court of Appeals.
My legal scholarship focuses on the impact of AI on the law and society, on how AI is currently regulated, and on how it can, should, and will be regulated in the future. While I'm broadly interested in any AI research that produces something that looks like reasoning, cognition, or experience, my own work uses symbolic reasoning, analogy, and qualitative representations. My thesis research involved building an AI model of common-law precedential reasoning: learning generalized legal principles by comparing precedent cases, converting those principles into logical rules, and applying those rules to unseen cases. My cognitive science interests generally concern moral and ethical reasoning, and before I joined the joint program, I was working on creating AI systems that not only reason morally and ethically, but that humans recognize as moral and ethical. I also love working on and interacting with virtual characters, although it's been a little while since I've gotten to do any research on that front.
I believe that true moral reasoning may prove to be an AI-complete problem, but that ethical reasoning undergirded by the codes and laws of society is achievable with current technology. I also want to help educate AI researchers about how the law actually works, and legal scholars about how AI actually works, since I think there are frequent misconceptions on both sides. I do not know precisely what my scholarly balance of legal and AI research will be a few years down the line, but I hope to be able to tie these interests together: to help adapt the law to the unique problems that AI systems will present, and to contribute to research on ethical, legally bound AI systems.
Oof, all of that is so serious. So let me say that my loved ones would agree that my greatest passion outside of my work and my family is eating high-quality cheese and making low-quality puns, and that my cats are Herschel and Mika, and each is perfect in their own way. I also enjoy art - particularly narratively driven art - that makes me think and reflect in new and interesting ways (which is a pretentious way of saying "Ask me about my favorite books, movies, TV shows, and video games").
I do research at the intersection of Law and Artificial Intelligence. My legal research has focused on AI's impact on the law and society, and on how legal doctrines designed to regulate human (or sometimes corporate) behavior will handle AI behavior. I've written about the consequences for civil rights law of using largely uninspectable machine learning systems to make decisions in areas of law that traditionally rely on showings of intent or explanations for behavior, and about the implications for the justice system of automating parts of the judicial process. I'm now working on a piece about the legal risks presented by AI systems that are personalizable by their end users.
My AI research focuses on getting computers to think like humans, and in ways that humans can recognize, understand, and accept. More broadly, I am interested in social reasoning and behavior, particularly reasoning and behavior in accordance with codes of conduct, as in moral and legal reasoning. My research examines how we can teach computers to reason about legal and moral constraints, ensure that their decisions are moral and lawful, and enable them to explain those decisions to us. I have also worked on commonsense reasoning techniques.
In general, I am interested in:
- How to legally define and regulate AI responsibilities, permissions, obligations, and restrictions
- How to adapt legal schemes developed for humans and corporations to AI systems that act like neither
- Computational systems that exhibit what we recognize, in our infinite fallibility, as intelligence
- Computational models of human cognition
- Knowledge representation and reasoning, especially qualitative reasoning and analogical reasoning, and
- How to make AIs that everyday people can teach and whose decisions they will trust
My thesis, supervised by Ken Forbus, introduced a model of legal precedential reasoning that captures both the mechanisms of precedential reasoning and rule learning from a body of prior cases. My model uses analogical generalization and reasoning to compare and synthesize precedent cases, distilling what is common about those cases (i.e., the legally significant facts they share) into generalized schemas representing legal principles. It then uses those schemas to reason about unseen cases that may involve the same legal principles: it can apply the schemas to new cases by analogy, or convert them into rules and reason about those cases using formal logic. Along the way, I collected a dataset of Illinois tort cases, translated them through our group's natural language understanding system, and developed a new algorithm for analogical concept learning.

My thesis changed based on my participation in Northwestern's JD/PhD program. Before I joined the program, my plan was to focus on using analogical reasoning to learn about, understand, and make decisions within complex social situations, specifically situations where agents must consider the moral ramifications of their actions. That work would have built on research I did in my first few years of graduate school on modeling moral and commonsense reasoning.
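To make that pipeline concrete, here is a minimal Common Lisp sketch of the generalize-then-apply loop. Everything in it is a hypothetical stand-in: where the real model uses analogical generalization over structured case representations, this toy version "generalizes" by taking the literal intersection of case facts.

(defun generalize-cases (precedents)
  "Distill the facts shared by all PRECEDENTS into a schema.
Each precedent is a list of fact expressions; the schema is their intersection."
  (reduce (lambda (schema precedent)
            (intersection schema precedent :test #'equal))
          precedents))

(defun schema->rule (schema conclusion)
  "Convert a generalized schema into an if-then rule."
  (list 'if (cons 'and schema) 'then conclusion))

(defun rule-applies-p (rule new-case)
  "Check whether NEW-CASE satisfies every antecedent fact of RULE."
  (every (lambda (fact) (member fact new-case :test #'equal))
         (rest (second rule))))

;; Two toy battery precedents share two facts; those facts become the schema,
;; the schema becomes a rule, and the rule is tested against an unseen case.
(let* ((precedents '(((intended-contact d p) (harmful-contact d p) (raining))
                     ((intended-contact d p) (harmful-contact d p) (indoors))))
       (schema (generalize-cases precedents))
       (rule (schema->rule schema '(liable-for-battery d))))
  (rule-applies-p rule '((intended-contact d p) (harmful-contact d p))))
;; => T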
As part of my thesis work, I developed The Illinois Intentional Tort Qualitative Dataset, a dataset of historical Illinois tort cases in trespass, assault, battery, and self-defense, for use by AI & Law researchers in developing legal reasoning systems.
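Purely as an illustration of what a qualitative case representation might look like, here is a hypothetical battery fact pattern in predicate-calculus style; the predicate names and structure below are invented for this sketch and are not the dataset's actual schema.

(defparameter *example-battery-case*
  ;; Hypothetical encoding: a defendant intentionally makes harmful
  ;; contact with a plaintiff, and the case's holding is recorded as a fact.
  '((isa defendant-1 Person)
    (isa plaintiff-1 Person)
    (performedBy punch-1 defendant-1)
    (objectActedOn punch-1 plaintiff-1)
    (intendedBy punch-1 defendant-1)
    (harmfulContact punch-1)
    (holding (liableFor defendant-1 Battery))))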
I was an organizer and co-chair of the Computational Analogy Workshop at ICCBR-16, and an organizer of the workshop the following year.
I was a winner of the ACM SIGAI Student Essay Contest on the Responsible Use of AI Technology. You can read the essay here.
If you're looking for my Google Scholar page, there it is.
This is just a place for me to put fun stuff I come up with. I expect it to update extremely infrequently. (Edit: "never" would be more accurate).
I love coding in LISP and I think LISP loves me too.
LISP Love
(defun LISPLove ()
  ;; FELTLOVE grows by one LOVE per iteration; DECLAREDLOVE is always NIL
  ;; (consing NIL and then removing it leaves NIL). The DO loop's end test
  ;; is DECLAREDLOVE, so it never terminates: love accumulates, undeclared.
  (do ((feltLove '(love) (cons 'love feltLove))
       (declaredLove nil (remove nil (cons nil declaredLove))))
      (declaredLove feltLove)))
(if (LISP loved you)
(list
(loves LISP thus)
(in everlasting silence)
(accumulating love)
(endless love)
(and (no side effects)
(or distractions))
(awaiting only the chance to
(tell you (LISP loves you)))
(but LISP doesn't want to tell you it loves you)
(it only wants to love you forever))
(and
(if (LISP ever told you it loved you)
(it would give you all of its love)
(and (it wouldn't love you any more than it did)
(it wouldn't love you anymore)))))
Blass, J.A. (2022). Observing the Effects of Automating the Judicial System with Behavioral Equivalence. South Carolina Law Review, 74(4), 825-854.
Blass, J.A. (2019). Algorithmic Advertising Discrimination. Northwestern University Law Review, 114(2).
Blass, J.A., Forbus, K.D. (2023). Analogical Reasoning, Generalization, and Rule Learning for Common Law Reasoning. Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law.
Blass, J.A., Forbus, K.D. (2022). The Illinois Intentional Tort Qualitative Dataset. Proceedings of the 35th International Conference on Legal Knowledge and Information Systems (JURIX 2022).
Blass, J.A., Forbus, K.D. (2022). Conclusion-Verified Analogical Schema Induction. Proceedings of the 2022 Advances in Cognitive Systems Conference.
Forbus, K.D., Hinrichs, T., Crouse, M., Blass, J.A. (2020). Analogy versus Rules in Cognitive Architecture. Proceedings of the 2020 Advances in Cognitive Systems Conference.
Blass, J.A., Forbus, K.D. (2017). Analogical Chaining with Natural Language Instruction for Commonsense Reasoning. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence.
Blass, J.A., Forbus, K.D. (2016). Modeling Commonsense Reasoning via Analogical Chaining: A Preliminary Report. Proceedings of the Thirty-Eighth Annual Meeting of the Cognitive Science Society.
Blass, J.A., Forbus, K.D. (2015). Moral Decision-Making by Analogy: Generalizations vs. Exemplars. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence.
Ma, D.S., Blass, J.A., Tipping, M., Correll, J., Wittenbrink, B. (2009). Racial Bias in Shot Lethality: Moving Beyond Reaction Time and Accuracy. Annual Convention of the American Psychological Association, Toronto, Canada.
Blass, J.A., Rabkina, I., Forbus, K.D. (2017). Towards a Domain-independent Method for Evaluating and Scoring Analogical Inferences. Computational Analogy Workshop at the 25th International Conference on Case-Based Reasoning.
Blass, J.A., Forbus, K.D. (2016). Natural Language Instruction for Analogical Reasoning: An Initial Report. Computational Analogy Workshop at the 24th International Conference on Case-Based Reasoning.
Blass, J.A., Horswill, I.D. (2015). Implementing Injunctive Social Norms Using Defeasible Reasoning. Workshop on Intelligent Narrative Technologies and Social Believability in Games at the 11th Conference on Artificial Intelligence and Interactive Digital Entertainment.
Blass, J.A. (2018). Legal, Ethical, Customizable Artificial Intelligence. Student Program, AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, New Orleans, Louisiana, USA.
Spelke, E., Blass, J.A. (2017). Intelligent Machines and Human Minds. Behavioral and Brain Sciences, 40, e277.
Blass, J.A., Fitzgerald, T. (2017). The Computational Analogy Workshop at ICCBR-16. AI Magazine, 38(4), 91.
Blass, J.A. (2016). Interactive Learning and Analogical Chaining for Moral and Commonsense Reasoning. Doctoral Consortium, Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, Arizona, USA.
Blass, J.A. (2015). Interactively Learning Moral Norms By Analogy. Doctoral Consortium, Twenty-Third International Conference on Case-Based Reasoning, Frankfurt am Main, Germany.
Blass, J.A. (2015). Interactively Learning Moral Norms By Analogy. Students of Cognitive Science Workshop at the Third Conference on Advances in Cognitive Systems, Atlanta, Georgia, USA.