PANEL SUMMARY & FUTURE DIRECTIONS

Legal AIIA Workshop, ICAIL 2019, University of Montreal

Notes from Final Panel* & Future Directions Discussion

17 June 2019


* The topical notes immediately below represent the responses of each breakout group and its facilitated lunch-time discussion.


Effectiveness

Any evaluation measure is better than doing nothing

Need exists for articulating the rationale behind algorithmic decision-making

Measuring whether the system met user expectations

Measuring whether the system provided useful knowledge

Who is evaluating effectiveness?

Happiness as a measure of effectiveness (human well-being)


AI effectiveness measures should not just be about binary classification (as in e-discovery relevance)

--topic completeness; one-incident measures (see the sketch below)
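
A minimal sketch of the contrast, in Python, using invented relevance data: the overall recall of a retrieval run (the binary-classification view) can look acceptable while a per-topic completeness measure shows that one topic was missed entirely. All document IDs, topics, and labels below are hypothetical.

    from collections import defaultdict

    # Each document: (doc_id, topic, truly_relevant, retrieved_by_system)
    docs = [
        ("d1", "contracts",  True,  True),
        ("d2", "contracts",  True,  True),
        ("d3", "privilege",  True,  False),
        ("d4", "privilege",  False, True),
        ("d5", "employment", True,  True),
    ]

    # Binary-classification view: one recall number over all relevant documents.
    relevant = [d for d in docs if d[2]]
    found = [d for d in relevant if d[3]]
    print(f"Overall recall: {len(found) / len(relevant):.2f}")  # 0.75 here

    # Topic-completeness view: recall per topic, exposing topics the system
    # misses entirely even when the overall number looks acceptable.
    by_topic = defaultdict(lambda: [0, 0])  # topic -> [found, total relevant]
    for _, topic, is_relevant, retrieved in docs:
        if is_relevant:
            by_topic[topic][1] += 1
            by_topic[topic][0] += int(retrieved)
    for topic, (got, total) in by_topic.items():
        print(f"  {topic}: {got}/{total} relevant documents retrieved")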

Need exists to provide good training data

Today, best annotations come from people

Importance of human oversight, vetting of technology

Important to have ability to appeal to authorities if you believe AI was wrong

Other forms of metrics to consider, e.g., business metrics such as time to completion

Need to report on unpredictability of a system (see the sketch after these points)

          • When is a finding less certain?

          • Does this represent a ‘corner case’ or a target?
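
One hedged sketch of what such reporting could look like, assuming a system that emits a confidence score per finding and knows the rough bounds of the inputs it was trained on. The threshold, length bounds, and documents are all hypothetical.

    CONFIDENCE_THRESHOLD = 0.70          # assumed cutoff for a "certain" finding
    TRAINING_LENGTH_RANGE = (50, 5000)   # assumed input bounds seen in training

    findings = [
        {"doc": "memo-12",  "label": "relevant",     "score": 0.96, "length": 820},
        {"doc": "email-07", "label": "not_relevant", "score": 0.55, "length": 310},
        {"doc": "scan-44",  "label": "relevant",     "score": 0.81, "length": 12000},
    ]

    for f in findings:
        flags = []
        # Flag findings the system itself is less certain about.
        if f["score"] < CONFIDENCE_THRESHOLD:
            flags.append("LOW CONFIDENCE: finding is less certain")
        # Flag inputs that look unlike the training data (possible corner case).
        lo, hi = TRAINING_LENGTH_RANGE
        if not (lo <= f["length"] <= hi):
            flags.append("POSSIBLE CORNER CASE: input unlike the training data")
        status = "; ".join(flags) or "within normal operating range"
        print(f"{f['doc']}: {f['label']} (score={f['score']:.2f}) -> {status}")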

Competence

Understanding weaknesses

Need for validation of results

Need for explanation of process

Accountability

New kinds of insurance policies may arise

Is AI responsible for its own actions?

Who selected the feature set?

Need for auditing: where and when did the process go wrong? (e.g., Boeing)

Idea of pre-registration of methodology before doing an experiment (publishing the method before publishing the results)

--designers/developers declare the strategy they are using

--might not work in an agile development scheme but would work in a waterfall model

Transparency

Who is the target audience?

What is important to that audience?

What level of detail is important for the audience to understand?

Important to be able to express answer within the context of the target audience

--how one got the answer

Is it OK for validation method to be skewed towards that audience?

Do you need to have a ‘qualified expert’?

Notion of “persuasive transparency”

There is no such thing as systemically neutral transparency (there is always a bias)

Accounting for a priori bias – what people believe to be true

Is there a need for a transparency police?

High-stakes algorithms (e.g., State v. Loomis) vs. others (e.g., pizza delivery coupons)

Tradeoff we are willing to make between transparency and performance

(e.g., lawyers like keywords because they are explainable, but less efficient than black-box algorithms; see the sketch below)
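
A minimal sketch of that tradeoff, with invented documents and terms: the keyword rule can always explain its decision by pointing to the matched terms, while the stand-in "black box" returns only a score with no rationale. The keyword set and scoring function are hypothetical toys, not any real system.

    KEYWORDS = {"indemnify", "breach", "termination"}  # assumed review terms

    def keyword_rule(text):
        """Transparent: the decision is explained by the matched terms."""
        hits = KEYWORDS & set(text.lower().split())
        reason = f"matched terms: {sorted(hits)}" if hits else "no terms matched"
        return bool(hits), reason

    def black_box_score(text):
        """Opaque stand-in: returns a score with no rationale attached."""
        # A real system would use a trained classifier; this toy hash-based
        # score exists purely to make the contrast concrete.
        return (hash(text) % 1000) / 1000

    doc = "The supplier shall indemnify the buyer upon breach of contract."
    decision, why = keyword_rule(doc)
    print(f"Keyword rule: relevant={decision} ({why})")
    print(f"Black box:    score={black_box_score(doc):.3f} (no explanation available)")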

How to account for randomization features in algorithmic systems?

How ‘deterministic’ does an AI process have to be to be accepted?
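
One common practical answer is to pin random seeds, so that a process with randomization features becomes repeatable and auditable. A minimal sketch, assuming the randomness is confined to a seedable generator; train_like_step is a hypothetical stand-in for any stochastic step (sampling, shuffling, initialization):

    import random

    def train_like_step(seed):
        """Stand-in for any stochastic step in an AI pipeline."""
        rng = random.Random(seed)  # local generator, not shared global state
        return [round(rng.random(), 4) for _ in range(3)]

    run_a = train_like_step(seed=42)
    run_b = train_like_step(seed=42)
    assert run_a == run_b, "same seed must reproduce the same outputs"
    print("Reproducible outputs:", run_a)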

Thoughts on the Second Legal AIIA Workshop at ICAIL 2021

Suggested Topics

1. Privacy and security in AI

a. Users wishing for guarantees as to what algorithms are doing with their data

b. Leaks, vulnerabilities in AI

c. Assurance that nobody has tampered with the system

2. What does it mean for something to be ‘explained’?

a. Use cases

b. Bring in philosophy, psychology, current debates as to what an explanation is

3. Developments in the regulatory world (e.g., GDPR’s influence on AI)

      • Professional standards for using AI systems

4. AI as a legal entity

5. AI moral agency

6. What kinds of legal problems are being solved by AI?

      • Developing a taxonomy of where stakes are higher or lower

7. All about the evidence and storytelling: how does the use of AI help or hinder?

8. International variations: open-source vs. proprietary code (Asia and the US)

a. Different lens: human rights

b. Countries with regulatory schemes and those without

9. What knowledge/skill set is needed to understand AI?

      • Moral imperative to share what we know

10. What should be our interaction with other workshops at ICAIL?

11. What should be the scope of Legal AIIA workshop?