Discussing UK law - swarb.co.uk

AI beats lawyers at predictions.


Postby Hairyloon » Sat Oct 28, 2017 9:15 pm

UK-based legal tech start-up CaseCrunch challenged lawyers to see which of them could more accurately predict the outcome of a number of financial product mis-selling claims. The showdown has now been completed, with the results announced at the offices of insurance law firm Kennedys last night (27 Oct).

As explained in the statement below, CaseCrunch's predictive algorithms and modelling of legal issues came out on top, scoring almost 87% accuracy in predicting the success or failure of a claim. The lawyers they beat achieved an overall accuracy of around 62%.


https://www.artificiallawyer.com/2017/1 ... -showdown/
Take me to your lizard...
User avatar
Hairyloon
 
Posts: 10203
Joined: Thu Nov 01, 2012 3:12 pm
Location: From there to here and here to there... Funny things are everywhere.

Re: AI beats lawyers at predictions.

Postby atticus » Sat Oct 28, 2017 10:48 pm

The article says more, to put the bold headline in context.
atticus - a very stable genius
User avatar
atticus
 
Posts: 20108
Joined: Sun Nov 11, 2012 2:27 pm
Location: E&W

Re: AI beats lawyers at predictions.

Postby Hairyloon » Sat Oct 28, 2017 11:50 pm

This is why I included the link. I don't think it appropriate to post the whole article.

Re: AI beats lawyers at predictions.

Postby atticus » Tue Oct 31, 2017 1:38 pm

Something else for those with an interest to read. It seems that the self-serving press release may indeed have been self-serving.

Re: AI beats lawyers at predictions.

Postby atticus » Wed Nov 01, 2017 12:41 pm

Richard Moorhead, Professor of Law and Ethics at UCL, tweeted the following thread this morning.

@RichardMoorhead wrote:

1. A few thoughts on the @Case_Crunch robots vs lawyers test this week....

2. One, the level of successful prediction is high, but we don't know what to judge that against.

3. If 85% of the claims would be accepted and we guessed 'accept' all the time we'd get a very high success rate with only one decision rule

4. What has been done here has been done before. Ruger et al. predicted Supreme Court cases using a six-variable algorithm, published in 2004

5. Their algo also 'beat' lawyers, although if you look closely the 'real' experts in Ruger's test did well.

6. It seems quite clear that the lawyers who took the test were not experts practising in the area

7. So their success rate tells us little

8. The team seem to identify that non-legal factors influenced the decisions and the algo was better at picking those up.

9. This is an interesting claim. What were these factors? What makes them non-legal? Are they not relevant to the legal tests for claims?

10. The algorithm might be revealing problems in the decision-making process of the Ombudsman, or better at tapping into discretion.

11. It also fits with what we know about legal representation, which suggests that knowing the decision-maker's approach is key to success.

12. Hence the importance of the expertise of the lawyers. Formal legal training is not a substitute for understanding how the FO decides

13. The predictive power of the test can also be questioned

14. It's not clear what data was supplied as 'the facts', but it might have come from decisions

15. If it was decisions, the language of those decisions might be signalling the result. When we accept a case we use words xyz.

16. Those words might be quite innocuous and not be the kind of thing that humans would pick up.

17. If that were right, then the machine is picking up language in accepted and rejected cases after the event.

18. Again, if right, it would not really be predicting the results.

19. So we'd need to hear more about how the data was put together, how the system trained, etc.

20. The recent research on ECHR cases where a similar approach was taken is another case in point

21. So it's an interesting test, with what looks like an interestingly high success rate, but we need to hear more detail.

22. They have promised a paper, so we should get to learn more. We need a lot more of these kinds of experiments.

23. And greater thought about how predictions would actually be used in reality. Triage, decision support, actual decisions? /ends
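
The baseline argument in point 3 is easy to check with a few lines of code. The sketch below uses the tweet's hypothetical 85/15 split, not any actual CaseCrunch data, and the function name is mine:

```python
# Illustrating tweet 3: with imbalanced outcomes, the trivial rule
# "always predict the majority class" already scores highly, so a raw
# accuracy figure means little unless compared against this baseline.
# The 85/15 split below is the tweet's hypothetical, not CaseCrunch data.

from collections import Counter

def majority_baseline_accuracy(outcomes):
    """Accuracy achieved by always guessing the most common outcome."""
    counts = Counter(outcomes)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(outcomes)

# 85 accepted claims, 15 rejected, as in the tweet's example.
outcomes = ["accept"] * 85 + ["reject"] * 15
print(majority_baseline_accuracy(outcomes))  # prints 0.85
```

On those numbers, a one-rule "always accept" guesser scores 85% — close to the reported 87% — which is why the comparison baseline matters so much.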


What none of the above covers is that we rarely know all the evidence in a case until some way down the line in the particular court or tribunal process. Donald Rumsfeld's 'known unknowns' and 'unknown unknowns' apply. I touched on this in my post on civil litigation basics, pinned to the top of the litigation forum. It is rare that you can give an accurate and unqualified prediction of success at the beginning of a case.

Re: AI beats lawyers at predictions.

Postby dls » Sat Nov 04, 2017 6:19 pm

Practising law is about creating a better solution - refusing to take the given descriptions of the law and of the facts on their own terms.
David Swarbrick (Admin) dswarb@gmail.com - 0795 457 9992
User avatar
dls
Site Admin
 
Posts: 12339
Joined: Thu Nov 01, 2012 1:35 pm
Location: Brighouse, West Yorkshire


Return to The Robing Room
