BREAKOUT SESSION NOTES

Law Practice 2030: Breakout Discussion Summaries


Reporters: Maura Grossman, Marc Lauritsen, James Sherer, Dan Rubins, William Hamilton

1. How AI is Changing the Legal Profession

What do rapidly developing technological capabilities mean for the delivery of legal services? What types of legal services will be most affected? How will the increased use of AI/IA change the way in which lawyers deliver services through collaboration with legal tech, innovation, engineering and marketing professionals? How will law firms respond to these new developments? What will be the challenges or obstacles the profession will encounter along the way? What research may be needed to overcome those obstacles?

  • Lawyers and judges do not presently have the level of understanding they need to use AI technologies; a lot of hope is being placed on the next generation of lawyers to pick up the mantle. However, if the slow pace of adoption of technology assisted review (TAR) methods for electronic discovery is any indication, widespread adoption of AI methods by the legal industry may not be all that significant by 2030 (TAR was introduced in 2008 and, as of 2021, is still not widely used).

  • It was originally thought that AI tools would level the playing field between small and large firms, so the former could compete with the latter. That did not happen because small firms cannot afford the cost of AI tools, which are only available to the well-heeled. Will AI signal the slow death of the small law firm? Will AI tools be realistically available for the “typical” case (as opposed to only large commercial cases)? These are open questions.

  • The degree of regulation of AI will impact its adoption in the legal field. The UK has had less regulation than other jurisdictions and innovation has flourished. In Brazil and the US, there has been fierce protectionism shown by lawyers (e.g., suits for unauthorized practice of law) with respect to alternative types of legal services, thereby stifling innovation.

  • Will services like LegalZoom and Do Not Pay be able to scale sufficiently? Will they survive or be blocked and suffocated? Will they overtake the law-firm market? There is a lot of uncertainty.

  • A potential split in the market: Law → Consumer (directly) vs. Business (e.g., AI company, Big Four) → Law Firm.

  • Large law firms seem to be building out large tech capacity to compete with alternative service providers and the Big Four.

  • There is too much hype. Most AI systems do not work as advertised. It is the wild west out there. Better vetting and validation processes are needed so that lawyers and judges do not "get snookered" (i.e., are deceived).

  • The need exists for a different law school training model for lawyers to include more tech competency, in order that the next generation can save the world (see above). Some law schools do seem to be responding, but not the upper echelon.

  • Overall, most participants in the group were optimistic, but some of us remain skeptical because of our experience with TAR. Why would new AI products be any different?

  • It is important to look at the incentive structures to determine adoption. Lawyers will not adopt new tech unless they see it as a win for themselves.

2. AI & Access to Justice

Will emerging AI/IA technologies increase universal access to legal systems through new ways to deliver legal services? Will these technologies serve to widen present disparities? Are online courts implementing AI/IA tools and techniques feasible? What other ways might AI/IA be able to increase access to justice, especially for historically disenfranchised populations?

  • Access to justice has become a huge issue, but the general sense is that "true" AI is not yet playing much of a role -- even "good old fashioned" symbolic AI.

  • Rather, there has been a decent amount of conventional programming applied to the problem. Examples in the commercial context include the FlightRight service and related ones aimed at the broad middle class. In nonprofit contexts, examples include Do Not Pay (advertising itself as "the world's first robot lawyer"), and Law Help Interactive Services, both of which are part of an emerging trend in providing alternative legal services. All of these make creative use of "old" tools like document assembly and decision trees. There remain many low-hanging fruit opportunities for such services.

  • The more significant obstacles that remain to true AI adoption are UX ones on the supply side, especially as they relate to the increasing use of mobile devices, along with literacy/language issues on the user side. The underlying processes (usually court-based ones) need significant renovation, independent of any technological angle. And legal education would be improved by more courses in the Apps4.Justice spirit (where students create apps as part of their course work, ideally ones that can be deployed outside of classrooms to expand access to legal help).

  • Potential priorities for AI leverage in improving access include better tools for explaining the questions and answers currently served by online legal apps, and tools to facilitate/accelerate the development and maintenance of such applications, such as code verification for better quality control.

  • COVID has introduced and facilitated new ways to serve justice remotely. These practices include holding online judicial proceedings in which hundreds of individuals participate remotely as observers.

  • New forms of access issues do not solely affect the poor. The rights given to EU citizens through the General Data Protection Regulation (GDPR) include the right to inquire into how solely automated methods reached their decisions. Satisfying the right to an explanation of AI processes is a difficult task, especially in the case of deep learning and neural nets.
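The "old" tools credited above (decision trees and document assembly) can be sketched in a few lines of Python. Everything here is a hypothetical illustration -- the intake rules, the letter template, and the field names are invented, not drawn from any actual service:

```python
# Minimal sketch of the "old" tools behind many access-to-justice apps:
# a hand-built decision tree for intake, plus template-based document
# assembly. All rules and text are made-up examples.

# Each node is either a question with yes/no branches or a leaf outcome.
INTAKE_TREE = {
    "question": "Is the dispute about a security deposit?",
    "yes": {
        "question": "Was the deposit withheld more than 30 days?",
        "yes": {"outcome": "demand_letter"},
        "no": {"outcome": "wait_and_document"},
    },
    "no": {"outcome": "refer_to_legal_aid"},
}

DEMAND_LETTER = (
    "Dear {landlord},\n"
    "I request the return of my ${amount} security deposit "
    "for the unit at {address} within 14 days.\n"
    "Sincerely, {tenant}"
)

def walk_tree(node, answers):
    """Follow a list of yes/no answers down the tree to an outcome leaf."""
    while "outcome" not in node:
        node = node["yes"] if answers.pop(0) else node["no"]
    return node["outcome"]

def assemble(template, **fields):
    """Document assembly: fill a fixed template with intake fields."""
    return template.format(**fields)

outcome = walk_tree(INTAKE_TREE, [True, True])
letter = assemble(DEMAND_LETTER, landlord="A. Smith", amount=1200,
                  address="1 Main St", tenant="B. Jones")
```

The point of the sketch is that neither piece involves machine learning: the "low-hanging fruit" is conventional branching logic plus templating, which is why such services scaled without "true" AI.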

3. The Human Role in AI

In an environment increasingly dominated by large scale language models such as transformers, as well as other AI advances, what role does human interactivity play in solving legal challenges? Or is the human role limited to providing training labels? Can human interactivity and decision-making enhance the results above and beyond that which is capable by purely machine-driven approaches? What new roles may be developed for human interaction with machine learning systems in order to increase their contribution to the overall well-being of society?

  • Our discussion began with an "axe to grind": is the idea of a "human in the loop" only correcting the machine, rather than directing the task in a meaningful way? Are we just following what the machine is doing? Sometimes it seems that only lip service is being paid to human review and human judgment -- where both terms are used without deeper insight into what makes up an iterative process (see also last bullet).

  • In particular, it isn't clear what EU lawmakers are expecting in terms of involving humans in the loop in connection with the GDPR and the more recent draft AI regulations.

  • Individuals will continue to play a crucial role in the "explanation" of AI processes.

  • In the world of translation and user-assisted translation tools, one has to preselect the domain one is working in. The same holds for subtitlers using voice recognition systems, who have to train the machine with different profiles. It isn't clear this will still be the case for end users in 2030.

  • In a lot of AI systems, removal of friction (i.e., the human element) is the biggest professed goal: if AI ever took over certain industries, it might rob them of elements of human judgment -- because friction can be a way of getting feedback and improving what is going on.

  • The core philosophical foundation of AI is model induction. The notion that there is some sort of abstract, generalized form derived from the individual instances on which the model was trained might be called a "Platonic Ideal." This process is akin to a cookie cutter: the cookies it stamps out are the model predictions. If one takes an AI-only approach, the only thing one will get is "Platonic cookies," i.e., cookies stamped out of the generalized, Platonic cookie cutter. Interjecting human interaction into the process is akin to a human mechanic coming in to make sure that the machine gets oiled, that the cookie forms are not bent too far out of shape, and that things are fixed when they break down. But at the end of the day, even with that "human in the loop," the machine is producing Platonic cookies. The machine, focusing on Platonic "Cookie-ness," may never be able to discover croissants, pineapple upside-down cakes, or chiroti. A machine would not see them, because those things do not fit the Platonic Ideal form for a cookie. Humans can see things and go beyond what a cookie-tuned machine can do.
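The cookie-cutter point can be made concrete with a toy classifier. A model induced from examples of known categories assigns every new input to one of those categories; there is no "none of the above." The nearest-centroid model and the two-feature "recipes" below are invented for illustration only:

```python
# Toy nearest-centroid classifier illustrating the "Platonic cookies"
# argument: the model can only ever emit one of its trained categories.
# The (sweetness, crunchiness) centroids are made-up numbers.

COOKIE_TYPES = {
    "sugar_cookie": (0.9, 0.6),
    "oatmeal":      (0.5, 0.4),
    "gingersnap":   (0.4, 0.9),
}

def classify(features):
    """Return the nearest known category -- never 'not a cookie'."""
    def dist(name):
        centroid = COOKIE_TYPES[name]
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(COOKIE_TYPES, key=dist)

# A croissant (low sweetness, low crunch) is nothing like the training
# data, yet the model still stamps a "Platonic cookie" label onto it.
croissant = (0.2, 0.1)
label = classify(croissant)
```

However out-of-distribution the input, `classify` returns some cookie type; only a human looking at the pastry itself can say "this is not a cookie at all."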

4. The Evolving Role of Deep Learning in the Legal Space

Given the increasing power of deep learning and the evolving capabilities of transformers that result in a variety of both general and domain-specific deployments, including the legal domain – through applications of text classification, summarization, translation, question answering, as well as increasingly accurate prediction – what should the road map look like with respect to the legal domain’s adoption of these capabilities? Are there any constraints that should exist in the deployment of transformers, from the perspective of technology, legal governance, ethics or other considerations that might limit the scope of their possible applications?

  • There are lots of examples of deep learning applications in the legal space, identifying legal precedent being a good example. However, deep learning offers a completely different paradigm, and how to deal with it is a question that remains to be answered.

  • Explainability is an important goal to be worked on for deep learning models. Ideally, products would be sold only where the origins of the results or suggestions can be understood by customers, through use of visualizations, etc. This may be easier to accomplish with smaller or mid-sized models, but as models become more complex -- with huge numbers of parameters -- this becomes less sustainable.

  • State of the art transformers are getting larger, making them not only harder to explain, but also more costly to run on commercial machines, and more prone to latency problems. Ultimately, collections of smaller models may need to be harnessed to overcome these challenges.

  • There are privacy concerns that will need to be addressed.

  • Serious issues remain with respect to transformer models making predictions. In the future, one may be able to run predictions locally, at the location of the user.

  • There is growing recognition of the importance of clean data (which can mean different things to different people; to some, "clean" means fewer security risks).

  • Legal language changes over time, which doesn’t help deep neural networks. To avoid such evolving landscapes, one could train a model with (more stable) historical data. This may end up better suiting consumers of legal services. However, doing so may end up raising other issues, including about the questions/topics not covered in the historical data.

  • Attempting to assemble a taxonomy for law will be a challenge, especially if aspects of law are subject to frequent changes. So would providing the notion of "jurisdiction" to the deep learning model: it would take effort and plenty of learning. Today’s machines don’t have the structure to do this. If we evolve to exploit graph-based neural networks, we may be able to better approach the problem.

  • Regulators need to be educated about what is relevant for suitable monitoring, and provided with tools to permit them to do their jobs, for example, via graphs, charts, plots, etc.

  • At the end of the day, the biggest challenge is that these models need to be dynamic over time. We’re not there yet.
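The explainability point above (easier for small or mid-sized models, less sustainable as parameter counts grow) can be illustrated with the simplest possible case: a linear model, whose score decomposes exactly into per-feature contributions that could be shown to a customer. The terms and weights below are made-up placeholders, not a real legal model:

```python
# Sketch of why explainability favors small models: a linear scorer's
# output is an exact sum of per-feature contributions, each of which
# can be reported to the customer. All weights here are hypothetical.

WEIGHTS = {"breach": 1.4, "damages": 0.9, "weather": -0.2}
BIAS = -1.0

def score_and_explain(term_counts):
    """Return (score, per-term contributions) for a bag of term counts."""
    contributions = {term: WEIGHTS.get(term, 0.0) * count
                     for term, count in term_counts.items()}
    return BIAS + sum(contributions.values()), contributions

score, why = score_and_explain({"breach": 2, "damages": 1, "weather": 1})
# `why` answers "why this score?" term by term. A transformer with
# billions of parameters admits no such exact, human-readable readout.
```

This is the kind of "origins of the results" display a vendor could surface via visualizations; the bullet's caveat is that no analogous exact decomposition exists once the model is a large transformer.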

5. The Future of E-Discovery

What improvements in e-Discovery search techniques are envisionable (e.g., what would TAR 7.0 look like?). How would deep learning techniques assist in facilitating e-Discovery searches? In what other ways will new developments in e-Discovery technologies and applications transform the legal and especially litigation landscapes?

  • One problem for the future of AI is how to differentiate the more positive results of TAR from other search modalities in actual practice. Leaving comparisons to mere "case studies" by vendors is much less persuasive. In general, attorneys are slow to adopt new methods because of a lack of comparisons in their own experience.

  • AI adoption in e-discovery will continue gradually, similar to other legal technologies, and will need to be encouraged from the outside by professional rules, clients, and the courts. Education within the profession should focus on AI helping attorneys "win" cases and successfully represent their clients, i.e., enhancing attorney competitiveness.

  • There will be a shift to the use of AI earlier in the litigation process and not merely to respond to requests for production (RFPs).

    • Transparency and sampling are critical tools. The difficulty now is that metrics are negotiated per case, which can be difficult, time-consuming, and unproductive. We may see the use of publicly-adopted metrics being accelerated.

  • AI must be perceived as only one tool to be used in conjunction with other tools in the litigation process. It is difficult to get attorneys to stop thinking of AI only for responding to requests for production and, indeed, large requests for production.
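The sampling mentioned in the transparency bullet typically means estimating how much relevant material a TAR review missed. A minimal sketch, with entirely hypothetical counts: draw a simple random sample from the discard pile, scale the sample's relevance rate up to the whole pile, and compute recall. A real protocol would also negotiate sample sizes and report a confidence interval:

```python
# Sketch of sampling-based TAR validation: estimate recall from a
# simple random sample of the unreviewed/discarded documents.
# All counts below are hypothetical.

def estimate_recall(produced_relevant, sample_size, sample_relevant,
                    discard_pile_size):
    """Estimate recall = found / (found + estimated missed)."""
    # Scale the sample's relevance rate up to the whole discard pile.
    est_missed = (sample_relevant / sample_size) * discard_pile_size
    return produced_relevant / (produced_relevant + est_missed)

# 9,000 relevant docs produced; a 500-doc sample of the 100,000-doc
# discard pile turns up 5 relevant ones -> ~1,000 estimated missed,
# for an estimated recall of 0.9.
recall = estimate_recall(9000, 500, 5, 100_000)
```

Publicly adopted metrics of this kind would replace the per-case negotiation the notes describe as difficult, time-consuming, and unproductive.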