In the film Minority Report, Tom Cruise, as the head of a “pre-crime” unit, apprehends criminals based on foreknowledge provided by psychics. This results in low crime levels, but also in a world where knowing the future limits choice and access to justice.
The reality of case prediction practice is very different from this dystopian vision.
Today, the goal of predictive analytics for civil litigation is to combine rigorous data and statistical discipline with the expertise of the litigation practitioner to improve the quality of prediction, advice and outcomes for the litigator and their client. It is not to replace the role of the litigator.
This is key. Litigation lawyers already predict case outcomes, almost always at the demand of their client, and not just on their chances of success but also on how long the case will take, how much it will cost and what impact a judge, a barrister or a witness will have on the outcome. The objective of case prediction analytics is to enhance this, not replace it.
Using rigorous statistical models and accurate, extensive and relevant data, it is possible to produce accurate baseline predictions which practitioners can tailor and refine before turning into client advice. There is extensive research to demonstrate that working this way delivers dramatically improved forecasts of outcome and, consequently, better decisions. In Superforecasting: The Art and Science of Prediction, Philip Tetlock and Dan Gardner show how the use of robust baseline data and statistics coupled with self-critical human analysis achieved predictions 60 per cent more accurate than control groups.
How does it work?
Technically, predictive analytics has two aspects: a statistical/data side and a human prediction side. On the statistical side, data scientists take multiple data points (broadly speaking in the thousands), across as many variables as possible. They then run statistical models that seek patterns in those variables that correlate with an outcome. In litigation these are often relatively obvious. For example, the nature of the claim (contract or tort), reliance on factual evidence or points of law, remedy sought, arguments deployed and the barrister and the judge involved can all have an influence on the outcome.
Taking these forward, it is the degree of influence of, and the relationships between, multiple variables that determine the outcome. Is the judge more influential than the reliance on expert evidence, for instance? It may depend on one of many other variables. From this a model is developed that can be tested against different cases to evaluate its accuracy across multiple examples.
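The modelling step described above can be sketched in a few lines. Everything here is illustrative: the features, weights and synthetic "cases" are invented stand-ins for the thousands of real data points and variables a provider would actually use.

```python
# A minimal sketch of the statistical side: fit a classifier on encoded
# case variables and check it against held-out cases. The data below is
# synthetic and the variable names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical encoded variables: claim type, reliance on expert
# evidence, judge, remedy sought, etc.
X = rng.random((n, 4))
# Synthetic outcomes loosely correlated with those variables.
y = (X @ np.array([1.5, -0.8, 0.6, 0.2]) + rng.normal(0, 0.5, n) > 0.75).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The model's output is a probability of success per unseen case --
# the statistical baseline a practitioner then refines.
probs = model.predict_proba(X_test)[:, 1]
print(f"hold-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the hold-out split is the testing step the article describes: the model is evaluated on cases it has not seen, so its accuracy reflects genuine predictive power rather than pattern-matching on the training set.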
If you imagine that all cases fall on a spectrum of predictive success starting with 0 per cent (no chance) to 100 per cent (certain), cases that sit at the top and the bottom ends tend to be easier to predict. As you get closer to the middle it becomes harder to predict with a high level of confidence. For example, it is easy to predict a poor claim is very unlikely to succeed, but once a claim has some merit, success becomes more difficult to predict. A claim with a 40 per cent chance may well succeed, even though it will, on a probabilistic basis, fail 6 out of 10 times. Determining that outcome is very difficult, but knowing that it is 40 per cent as opposed to 60 per cent is meaningful.
At Solomonic we divided all cases into 10 bands of likely outcome between 1 and 100 (so in bands of 10 percentage points) and then developed the model to achieve a forecast for each band that is as accurate as possible. That allows a user to begin their baseline with the right contextual starting point.
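The banding described above amounts to a simple bucketing of each case's predicted probability. A minimal sketch (the probabilities fed in are illustrative, not real model output):

```python
# Bucket a predicted probability of success into one of ten
# 10-percentage-point bands, so the user starts from the right
# contextual baseline.
def outcome_band(probability: float) -> str:
    """Return the 10-per-cent band a predicted probability falls into."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    # Band 0 covers 0-10%; band 9 covers 90-100% (a certainty of 1.0
    # is folded into the top band).
    band = min(int(probability * 10), 9)
    return f"{band * 10}-{band * 10 + 10}%"

print(outcome_band(0.42))  # 40-50%
print(outcome_band(0.97))  # 90-100%
```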
How does this help practitioners?
For the practitioner, recognising that the claim in front of them has a 40 per cent chance of success gives them a starting point for their own prediction. Does the claim require the judge to reject an oft-employed precedent? If so, by how many percentage points does that reduce it? Does the reliability of the factual witnesses move it closer to 50 per cent? Is a commercial interpretation of the contract key to success, taking it down a few points?
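That refinement step is, in effect, a running tally of percentage-point adjustments against the baseline. A sketch, with invented factor names and adjustment sizes purely for illustration:

```python
# Apply a practitioner's case-specific adjustments (in percentage
# points) to a statistical baseline probability of success.
def refine_baseline(baseline: float, adjustments: dict[str, float]) -> float:
    """Sum the adjustments onto the baseline, clamped to 0-100%."""
    refined = baseline + sum(adjustments.values())
    return max(0.0, min(100.0, refined))

# Hypothetical adjustments for the 40% claim discussed above.
estimate = refine_baseline(40.0, {
    "judge must reject an oft-employed precedent": -5.0,
    "reliable factual witnesses": +8.0,
    "success turns on commercial contract interpretation": -3.0,
})
print(f"refined estimate: {estimate:.0f}%")  # 40 - 5 + 8 - 3 = 40%
```

Writing the factors down like this, rather than adjusting by gut feel, also makes each assumption explicit and open to the self-critical challenge the next paragraph describes.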
The more carefully the practitioner undertakes this exercise, the more accurate their forecast becomes, particularly if they acknowledge and self-critically challenge their own assumptions and cognitive bias.
At the end of this process the litigation lawyer has the basis for sound advice to the client. (Of course the next stage is to gauge the ability to pay, appetite for risk and motivation of the client to stay the course.)
Used in this way, and supported by rigorous data and analytics, a lawyer's ability to guide and advise a client effectively is meaningfully increased, particularly where the client is itself data-led in its attitude. Knowing the probabilities, not just of the outcome but of the duration, helps to persuade a CFO who may be reluctant to pursue expensive litigation, provided that information is presented in a form they can recognise and understand.
The value of predictive data and analytics does not end there. For many cases the process of determining strategy and argument relies on a combination of litigator experience and case law research. Litigation analytics can help the user find the best arguments, tactics and insight, by focusing on outcome and on the factors that impact outcome.
Summary judgment application in front of Walker? The data tells you that just over 69 per cent of applications fail in front of Walker J, as opposed to just 31.5 per cent on average for that court as a whole. This data point is key; it may well make you alter your decision. But just as valuable is the ability to readily access the relevant judgments beneath that figure, to determine why those applications failed.
The analytics can also give an edge or anchor point for settlement negotiation. If 66 per cent of negligence claims fail at trial, a defending party introducing that data point puts considerable pressure on the claimant to explain why they would nonetheless win in court. The analytics can also potentially form the basis for agreeing the settlement value of a claim.
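One way a probability baseline anchors a settlement figure is through a simple expected-value calculation. The numbers below are purely illustrative, not drawn from any real data set:

```python
# Probability-weighted value of pursuing a claim to trial: the success
# probability times the amount at stake, less the expected costs.
def expected_claim_value(p_success: float, damages: float, costs: float) -> float:
    """Expected value to the claimant of taking a claim to trial."""
    return p_success * damages - costs

# If 66% of comparable negligence claims fail (so p_success = 0.34),
# a hypothetical 1,000,000 claim carrying 150,000 of costs might be
# valued at trial as:
value = expected_claim_value(p_success=0.34, damages=1_000_000, costs=150_000)
print(f"expected value at trial: {value:,.0f}")  # 190,000
```

Both sides running the same arithmetic from the same data point gives them a shared starting figure from which to negotiate, which is precisely the anchoring effect the paragraph above describes.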
As a decision support offering, litigation analytics is already widely used in the USA. A recent FT article suggested around 75 per cent of the top 100 US firms already make use of litigation analytics services. There are broadly two main provider groups: the large legal research providers, including Bloomberg, Thomson Reuters and LexisNexis (who own Lex Machina), who have provided analytics add-ons to their core offerings; and a range of agile fledgling litigation analytics businesses offering new solutions for US jurisdictions, with companies like Premonition, Gavelytics and Ross Intelligence offering algorithm-powered litigation support.
A key driver of this growth is the increasingly data-led decision making that large corporates and their legal functions use in order to drive more aligned business plans and decisions. This has created a demand for data to support business and risk management strategies where the executive leadership can balance the risks (financial, regulatory and reputational) and resource investments they make.
Unsurprisingly this same driver for more data oriented legal decision-making in commercial organisations is driving increased interest in the UK.
Accessibility of data
While the demand is growing, and we have seen considerable interest ourselves, the availability, consistency and usability of UK case data is substantially lower than for US courts. The UK courts system is not set up to enable fast and straightforward production, publication and distribution of court information including of judgments and other court documents. Provision varies from court to court. As a result, it has only been through significant investment in careful collection that businesses such as ours have been able to build robust, barrister-validated data and judgment analysis.
This variability in the availability and accessibility of data makes the use of advanced machine learning technologies more difficult. The lack of consistent templates, styles and content structure makes it harder for these tools to work and learn as quickly.
The incorporation of litigation analytics data into key litigation decision-making and research processes will become widespread as clients seek support for their data-led decision making and litigation lawyers look for every area of advantage and a competitive edge. As it has elsewhere, this will help to raise the quality of work and to put key legal decisions into the same realm as other measurable risk decisions. The ability to find relevant data more quickly will also help to even out some of the resourcing imbalances that can arise when smaller or less well-resourced legal teams go up against the big firms.
Balancing risk and reward
As in other areas, while data brings value and insight, relying on it is not without risk. Broadly, there are two types. The first is the misinterpretation of data and analysis, and its inaccurate or incorrect use. Firms and lawyers will need to master key areas of data and statistical analysis in order not to fall into this trap. The smarter litigation analytics firms are both supporting their clients with training and providing helpful gradings, scores and guides to assist with this. Interestingly, this investment in new skills and competencies will have the benefit of improving the forecasting accuracy of those that engage with it.
The second risk lies in the tendency to shortcut by assuming the data provides the full answer. Once the data suggest a claim has a 60 per cent probability of success, that chance can become a fait accompli. Without careful management, we risk short-changing claimants and defendants and removing their access to justice. Potential cases on which the law may well need to be challenged may never be brought to court. It will require the thoughtful application of focused insight to ensure this doesn’t happen.
However, these risks are not only manageable but, effectively identified in advance, can add to the quality of the decision making and advice that the litigator provides their clients. As such, litigation analytics, as a way of enhancing the skills and abilities of the litigator and the value of the service they provide to clients, has a very promising future.
Edward Bird is the Chief Revenue Officer at Solomonic (www.solomonic.co.uk), a London-based litigation analytics start-up which launched its first product in early 2019. Email firstname.lastname@example.org. Twitter @EdwardBird.