Logic, Law, and Cryptography

Oct 20, 2023 at 6:00 PM

Abstract

Finding a strict set of rules that can automatically prove the validity of a given logical formula has been a goal of logicians for centuries. This dream is now close to reality: over the last two decades, we have witnessed fascinating progress in the area of automated reasoning. Modern automated reasoning tools are applied daily to tasks in program analysis, software engineering, hardware verification, and various other domains, and their efficiency makes it possible to apply them even to large-scale industrial codebases.

In this talk, we first describe a novel application of automated reasoning: legal accountability. Every day, modern life and society become ever more influenced by the decisions of machine learning algorithms and other products of artificial intelligence research. While a common goal of these AI algorithms is supposedly to assist and improve human lives, the harsh reality is that the opposite frequently occurs. Ideally, computer scientists would use automated reasoning to verify these algorithms and thereby assess their correctness. Unfortunately, the formal verification of these algorithms is not only undecidable in theory; it also treads on contentious questions within law and philosophy that have no simple answers, and it requires a degree of computational power that makes the task practically infeasible. While we may not always be able to formally verify the correctness of decision algorithms, our research focuses on how to deploy formal methods to aid in the assessment of accountability when those algorithms inevitably do harm. We use symbolic execution over an autonomous agent's decision logic, constrained by the log of the agent's behavior. Using our framework, an expert user can ask tailored counterfactual questions to better understand the functional intention of the decision algorithm and, for example, distinguish accidents from honest design failures and from malicious design.
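
To make the idea concrete, here is a minimal sketch of log-constrained counterfactual reasoning using the Z3 SMT solver in Python. The decision rule, the logged fact, and all variable names below are hypothetical illustrations chosen for this sketch, not the actual framework presented in the talk.

```python
# A minimal sketch, assuming the Z3 SMT solver (pip install z3-solver).
# The decision rule and log constraint are hypothetical illustrations.
from z3 import Real, Bool, Solver, And, Not

speed = Real("speed")        # obstacle speed (m/s), symbolic input
distance = Real("distance")  # distance to obstacle (m), symbolic input
braked = Bool("braked")      # the agent's recorded decision

s = Solver()

# Hypothetical decision logic: brake iff the obstacle is closer than 10 m
# and approaching faster than 2 m/s.
s.add(braked == And(distance < 10, speed > 2))

# Constraint recovered from the behavior log: the agent did not brake.
s.add(Not(braked))

# Counterfactual question: consistent with the log, could the obstacle
# have been within 10 m while approaching faster than 2 m/s?
s.push()
s.add(And(distance < 10, speed > 2))
print(s.check())  # unsat: the logic never ignores a close, fast obstacle
s.pop()

# A weaker counterfactual: could the obstacle have been within 10 m at all?
s.push()
s.add(distance < 10)
print(s.check())  # sat: possible, but only if speed <= 2 m/s
s.pop()
```

An unsat answer here certifies that no scenario consistent with the log matches the counterfactual, which is the kind of evidence that helps separate a design failure from a mere accident.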

In the second part of the talk, we will present our work on adding a privacy-aware perspective to automated reasoning. When using automated reasoning tools, the implicit requirement is that the formula to be proved is public. For instance, if two parties want to prove compliance of their requirements, they need to convert the requirements into first-order logic formulas and use solvers to check entailment between them. In this case, each party can see the other's formula and thus infer potentially private properties. We propose the concept of privacy-preserving formal reasoning. In our recent work on a zero-knowledge protocol for proving the unsatisfiability of Boolean formulas in propositional logic, we developed a highly efficient protocol for proving knowledge of a resolution proof of unsatisfiability. We encoded verification of the resolution proof as polynomial equivalence checking, which enabled us to use fast zero-knowledge protocols for polynomial satisfiability.
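
As background for what the prover must know, here is a minimal sketch in Python of a resolution refutation and a direct checker for it. The clause set, the proof steps, and the function names are illustrative only; the zero-knowledge protocol described in the talk verifies such a proof via polynomial equivalence checking, without ever revealing it to the verifier.

```python
# A minimal sketch of a resolution refutation checker. Literals are nonzero
# ints: v and -v denote a variable and its negation. The example formula,
# (x1) and (-x1 or x2) and (-x2), is unsatisfiable.
cnf = [frozenset({1}), frozenset({-1, 2}), frozenset({-2})]

def resolve(c1, c2, pivot):
    """Resolve two clauses on `pivot`, which must occur positively in c1
    and negatively in c2; returns the resolvent clause."""
    assert pivot in c1 and -pivot in c2
    return (c1 - {pivot}) | (c2 - {-pivot})

def check_refutation(cnf, steps):
    """Each step (i, j, pivot) resolves previously derived clauses i and j.
    The refutation is valid if the final resolvent is the empty clause."""
    clauses = list(cnf)
    for i, j, pivot in steps:
        clauses.append(resolve(clauses[i], clauses[j], pivot))
    return clauses[-1] == frozenset()  # empty clause => UNSAT

# Resolve clauses 0 and 1 on x1 (yielding clause 3), then 3 and 2 on x2.
proof = [(0, 1, 1), (3, 2, 2)]
print(check_refutation(cnf, proof))  # True: the formula is unsatisfiable
```

This checker inspects the clauses directly; in the zero-knowledge setting, each resolution step is instead verified through a polynomial identity, so the verifier learns that a valid refutation exists without learning the formula or the proof itself.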

The talk will conclude with an outline of how to combine these two perspectives and use automated reasoning for regulatory compliance while maintaining the privacy of the involved parties.

Bio

Ruzica Piskac is an associate professor of computer science at Yale University. Her research interests span the areas of programming languages, software verification, automated reasoning, and code synthesis. A common thread in Ruzica's research is improving software reliability and trustworthiness using formal techniques. Ruzica joined Yale in 2013 as an assistant professor; prior to that, she was an independent research group leader at the Max Planck Institute for Software Systems in Germany. In July 2019, she was named the Donna L. Dubinsky Associate Professor of Computer Science, one of the highest recognitions that an untenured faculty member at Yale can receive. Ruzica has received various recognitions for research and teaching, including the Patrick Denantes Prize for her PhD thesis, a CACM Research Highlight paper, an NSF CAREER award, the Facebook Communications and Networking award, the Microsoft Research Award for the Software Engineering Innovation Foundation (SEIF), the Amazon Research Award, and the 2019 Ackerman Award for Teaching and Mentoring.