AI and robots in law practice
From Brian Inkster:
AI continued to be a de rigueur slot at legal technology conferences during 2017. But delegates inevitably left these conferences none the wiser as to what they were actually supposed to do with AI in their own legal practices, or how much it might cost them.
Despite this, law firms have been boasting that they have adopted AI, but the reality appears to be that their actual adoption is no better than what they have done to date with document automation. Indeed, there are stories circulating of law firms having paid licence fees to AI suppliers in 2017 just to be able to say that they are on the AI curve, even though they are not actually using what they have paid for in any shape, size or form.
The hype surrounding AI in law was exemplified when CaseCrunch, a legal AI startup, claimed to have made history in the legal profession. They held what they stated to be the world’s first competition to directly pit lawyers against AI in a “Man v Machine” battle. AI won, scoring 86.6 per cent accuracy against the lawyers’ 62.3 per cent. This was “fake news”: in reality, none of the lawyers used in the “battle” knew anything about PPI claims, the subject matter of the competition, whereas the computer had a full database to pull from. When pushed, the organisers admitted that “the system does not have the experience of a lawyer who has worked in the field of PPI”. Presumably that is why they didn’t pit the system against such lawyers.
Chatbots in law were hitting the news in 2017. They had apparently been taught to make coffee and order pizza. I am still not quite sure what value they actually add to your website over and beyond what a good search function and a contact form would provide. Perhaps it’s early days and their time will come.
Blockchain was also mentioned in passing at legal technology conferences in 2017 but again clarity on what your average lawyer can do with it was scant.
Twitter v LinkedIn
Brian Inkster continues:
For a good few years I have thought that Twitter was the most effective social media channel for lawyers to spend their time on. I used to refer to LinkedIn as “deadly boring”. It once was.
However, in 2017 I grew to like LinkedIn a lot more. I felt it had evolved and come into its own. It is being used far more effectively as a networking/interaction tool than used to be the case. I notice that posts I put out on LinkedIn invariably get more traction and interaction than the same post on Twitter. The spam that used to come via Groups on LinkedIn is a thing of the past, although LinkedIn have recently announced a focus on “re-integrating Groups back into the core LinkedIn experience”. Connections and referrals are being made on LinkedIn in a way that used to happen on Twitter but no longer seems to happen there in the same way.
My own view is that Twitter’s purpose, other than to make a lot of money for its founders and investors, is very different from LinkedIn’s. It is very much geared towards reporting current developments and reacting to and analysing their importance. Of course, it does depend on which bubble you inhabit as to how deep or trivial are the issues under discussion and how useful or annoying are the replies. For professionals, and lawyers in particular, it offers rich seams of discussion and expert analysis of the sort we used to associate only with meatier articles and blog posts. Two recent developments on the platform have helped.
The maximum tweet length has been increased from 140 to 280 characters. The original restriction encouraged brevity and creativity, but it was so restrictive that it also encouraged less beneficial practices. The longer limit, whilst initially bemoaned by the old school, appears to have been well received.
Twitter “threads” have been officially adopted. Like a number of Twitter features, threads were an innovation by users rather than by Twitter itself. Linking together a sequence of tweets turns out to be a very effective way of developing an argument, telling a story and so on. Threads have very quickly established themselves as a literary form well deployed by lawyers.
By the end of the year we’d learned that much of what we term AI, and certainly much of the AI that is actually being implemented in legal practice, is principally based on machine learning. Give a machine a lot of data and it will learn from it and then apply that knowledge going forward in a virtuous cycle. For example, in the legal sphere we have machines taking over from overworked junior lawyers in conducting document review. So machines are doing the drudge work in an important but fairly narrow field. Is this really intelligence? They are also being used to predict the likely outcome of cases based on precedent. And in the US, AI is risk assessing offenders and even sentencing criminals. What could possibly go wrong?
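The machine-learning loop described above — learn from human-labelled examples, then apply that knowledge to new material — can be sketched with a toy document-review scorer. Everything here (the data, the function names, the scoring rule) is invented for illustration; real review tools are far more sophisticated, but the principle is the same.

```python
from collections import Counter

# Toy "document review" learner: count which words appear in documents
# a human reviewer marked relevant vs not, then score new documents.
# All training data below is invented for illustration.

def train(labelled_docs):
    """labelled_docs: list of (text, is_relevant) pairs."""
    relevant, irrelevant = Counter(), Counter()
    for text, is_relevant in labelled_docs:
        (relevant if is_relevant else irrelevant).update(text.lower().split())
    return relevant, irrelevant

def score(model, text):
    """Positive score -> words seen more often in relevant documents."""
    relevant, irrelevant = model
    return sum(relevant[w] - irrelevant[w] for w in text.lower().split())

model = train([
    ("breach of contract damages claim", True),
    ("witness statement re contract breach", True),
    ("office christmas party menu", False),
])

print(score(model, "draft damages claim"))   # positive: looks relevant
print(score(model, "party menu options"))    # negative: looks irrelevant
```

The machine never understands the documents; it simply reflects the patterns in whatever it was shown — which is exactly why the quality (and bias) of the training data matters so much.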
We learned a fair bit about algorithms in the last year. “Algorithm” is really just a geeky word for “set of rules”. We were previously probably most familiar with the term in relation to Google; its PageRank algorithm was much talked about. In fact Google deploys thousands of algorithms in determining how to rank pages in its results.
Facebook and Twitter use algorithms to decide what to put in your news feed and what ads to show you. Uber uses algorithms to decide which driver to match to your ride and how much to charge when demand exceeds supply. These are all decisions made by powerful companies affecting many aspects of our lives and little is disclosed about how they are made.
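To make the point that an algorithm is just a set of rules, here is a hypothetical surge-pricing rule of the kind Uber is said to use. The multipliers and the cap are invented for illustration; the real pricing rules are not disclosed, which is rather the point.

```python
# A hypothetical surge-pricing "algorithm": when demand exceeds supply,
# raise the fare multiplier, up to a cap. All numbers are invented.

def surge_multiplier(riders_waiting, drivers_available, cap=3.0):
    """Return a fare multiplier based on the demand/supply ratio."""
    if drivers_available == 0:
        return cap
    ratio = riders_waiting / drivers_available
    return min(cap, max(1.0, ratio))

print(surge_multiplier(10, 20))  # 1.0: supply exceeds demand, no surge
print(surge_multiplier(30, 10))  # 3.0: demand triple supply, capped
```

Simple as it is, a rule like this decides what millions of people pay — and unless the company publishes it, riders can only guess at how the decision is made.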
Even where we know the rules, we may not appreciate their implications. Leave a decision to an AI machine trained with biased data (which is more than likely) and it will exhibit bias.
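A minimal sketch shows how bias in historical data becomes bias in output. The “model” below simply learns the historical reoffending rate per group — so if the historical labels were skewed (say, because one group was policed and re-arrested more heavily), its predictions are skewed too. All data is invented for illustration.

```python
from collections import defaultdict

# Learn a per-group "risk" rate from historical records. If the history
# is biased, the learned rates faithfully reproduce that bias.

def train(records):
    """records: list of (group, reoffended) pairs from biased history."""
    totals = defaultdict(lambda: [0, 0])  # group -> [reoffended, seen]
    for group, reoffended in records:
        totals[group][0] += int(reoffended)
        totals[group][1] += 1
    return {g: hits / seen for g, (hits, seen) in totals.items()}

# Invented, biased history: group B was re-arrested more heavily for
# equivalent behaviour.
history = [("A", False)] * 8 + [("A", True)] * 2 + \
          [("B", False)] * 5 + [("B", True)] * 5

risk = train(history)
print(risk)  # {'A': 0.2, 'B': 0.5} -- the bias is now a "prediction"
```

Nothing in the rules is malicious; the machine has merely turned a biased past into an authoritative-looking number.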
So we started worrying about algorithms. From Algorithms and the law on Legal Futures:
“Algorithms are rapidly emerging as artificial persons: a legal entity that is not a human being but for certain purposes is legally considered to be a natural person. Intelligent algorithms will increasingly require formal training, testing, verification, certification, regulation, insurance, and status in law.”
Robots taking jobs
There has been an awful lot of discussion about robots taking jobs. Which jobs, how many, by when? Nobody seems to be able to agree.
In Big Law, AI is doing the drudge work that formerly occupied junior lawyers. The firms believe those lawyers can be redeployed to more valuable work that generates more profit. That raises the question of what will happen to lawyers and paralegals further down the food chain.
Professor Richard Susskind addresses this question in the new edition of Tomorrow’s Lawyers, saying “it is hard to avoid the conclusion that there will be much less need for conventional lawyers.” (For a review of the AI chapter, with extracts, by Ian Lopez, see Corporate Counsel.)
Our 2017 review continues …