
In 2018, LegalTech has become an integral part of legal services. LegalTech is a collective term for technology that automates simple legal tasks. It offers legal professionals enormous opportunities to improve their services, for example by helping them work smarter and by making specialist knowledge more accessible.
Ivar Timmer is a researcher in Legal Management and a senior lecturer in the master's degree programme in Legal Management at the University of Applied Sciences in Amsterdam. He is also chairman of the Legal Tech Alliance, a collaboration between 11 study programmes in the field of LegalTech. In that capacity, we have asked Ivar to share his vision and findings on LegalTech, based on his scientific work.
From Tuesday 23 October until Friday 26 October, we will publish a guest blog by Ivar every day. The following topics will be discussed during the Berkeley Bridge 4Day LegalTech Blogathon:
- Blog 1: Increasing the productivity of legal workers
- Blog 2: Artificial common sense
- Blog 3: Automation of legal advice: the human factor
- Blog 4: Robot judges?

Guest blogger Ivar Timmer
Blog 1: Increasing the productivity of legal workers
Legal professionals are knowledge workers. Every day they analyze and process (large amounts of) information and convert it into legal 'products', such as contracts, legal decisions or legal opinions. Because of information's central role in legal work, it is obvious that information technology can potentially increase the productivity of legal professionals, even though other aspects of their work require social and communicative skills and may be difficult or impossible to automate. In 1999, the famous management author Peter Drucker wrote an article on the productivity of knowledge workers, arguing that the principles for increasing productivity developed by Frederick Winslow Taylor (1856-1915) were still valid and applicable to knowledge work.
Taylor was one of the first management scientists to try to increase the productivity of manual work in a systematic way. His method seems simple. First, he determined which task a worker performed. He then analyzed the individual steps that made up this task and determined how much time, how many resources and which movements each step required. The ultimate goal of this analysis was to find the quickest, cheapest and most effective way to do the job, while imposing the least mental and physical strain on the worker. His method included a redesign of the tools or instruments a worker used for the task. On this, Drucker remarks:
“Whenever we have looked at any job—no matter for how many thousands of years it has been performed—we have found that the traditional tools are wrong for the task. This was the case, for instance, with the shovel used to carry sand in a foundry (the first task Taylor studied). It was the wrong shape, the wrong size, and had the wrong handle. We found this to be equally true of the surgeon’s traditional tools. Taylor’s principles sound obvious—effective methods always do. However, it took Taylor twenty years of experimentation to work them out.”
In the light of this observation, it is fair to ask whether legal professionals always use the right tools for the right task. Over the last three decades, legal practice has been digitized in many ways, but many ways of working still originate in the paper age. Word processing software has become the dominant tool. The first question is whether legal professionals have really mastered this important tool. Legal tech entrepreneur and speaker Casey Flaherty has become renowned for convincingly demonstrating that the 'Microsoft Office skills' of legal professionals are generally far below par¹.
An even more interesting question is whether word processing software is the right tool for some of the tasks that legal professionals, legal departments and legal service providers perform on a daily basis. Many organizations produce huge volumes of highly similar contracts, powers of attorney or other legal documents. If we were to analyze these processes the way Taylor did, a common conclusion would be that word processing software is not fit for the task and that decision support and document assembly software are more suitable. These tools are not only likely to be quicker and easier to use, but can often do a better job of safeguarding the quality of the drafting process, for example because they provide a 'single point of truth' as a starting point, rather than the different templates circulating in the organization.
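To make the 'single point of truth' idea a little more concrete, below is a minimal sketch of how document assembly software typically works: one canonical template maintained by the legal team, filled in from questionnaire answers. The template text, clause library and field names are hypothetical illustrations, not taken from any specific product.

```python
# Minimal document assembly sketch (illustrative only): one canonical template
# is the 'single point of truth'; questionnaire answers fill in the variables.
from string import Template

# The canonical template, maintained in one place by the legal department.
POWER_OF_ATTORNEY = Template(
    "POWER OF ATTORNEY\n\n"
    "$principal hereby authorises $agent to act on its behalf "
    "in relation to: $scope.\n"
    "$revocation_clause"
)

# A small clause library; the questionnaire decides which clause is used.
REVOCATION_CLAUSES = {
    "revocable": "This power of attorney may be revoked in writing at any time.",
    "irrevocable": "This power of attorney is irrevocable until $expiry.",
}

def assemble(answers: dict) -> str:
    """Turn questionnaire answers into a finished document."""
    clause = Template(REVOCATION_CLAUSES[answers["revocation"]]).safe_substitute(
        expiry=answers.get("expiry", "[expiry date]")
    )
    return POWER_OF_ATTORNEY.substitute(
        principal=answers["principal"],
        agent=answers["agent"],
        scope=answers["scope"],
        revocation_clause=clause,
    )

if __name__ == "__main__":
    print(assemble({
        "principal": "Acme B.V.",
        "agent": "J. Jansen",
        "scope": "signing the lease for the Amsterdam office",
        "revocation": "revocable",
    }))
```

Real document assembly tools add user interfaces, approval workflows and version control on top of this, but the underlying idea is the same: the approved wording lives in one place instead of in dozens of Word templates circulating through the organization.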
“Continuing innovation has to be part of the work, the task and the responsibility of knowledge workers.”
“Productivity of the knowledge worker is not—at least not primarily— a matter of the quantity of output. Quality is at least as important.”
Unfortunately, Taylor-style analyses of legal tasks are still rare in many organizations. As a result, legal professionals often work in ineffective ways, wasting talent, time and money. All knowledge workers who take their work seriously would do well, with Taylor and Drucker in mind, to regularly analyze their own work in search of improvements and innovation.
Blog 2: Artificial common sense
Artificial intelligence is a concept that is hard to pin down. A common joke amongst experts is that things are called artificial intelligence (AI) until the software actually starts working. In a broad definition, however, all systems that mimic human intelligence could be called AI. AI is undeniably a fascinating subject that is leading to spectacular results in various fields. An example is AlphaGo Zero by Google DeepMind, which taught itself to play Go starting with only limited instructions and reached 'superhuman' levels in a very short period of time. It is partly stories like this that make people speculate about the future implications for legal practice and the arrival of 'robot lawyers'. In this debate, the media do not shy away from making wild statements. In 2016, based on an interview with a partner in a British law firm who specialized in tech law, a British newspaper ran the headline that 'artificial intelligence could put lawyers and doctors out of a job in five years' time'².

The reference to the medical profession is no coincidence. In this sector, too, speculation is rife about the consequences of artificial intelligence. Realistically speaking, however, we are still far from a world in which robot doctors or robot lawyers autonomously 'treat' clients and patients. Handling a legal case from A to Z, from gathering real-world information to deciding on an appropriate strategy, is of a completely different order than the game of Go, no matter how complex that game may be. Still, technology could play an important role in current legal practice. In daily practice, there is a lot of low-hanging fruit ready for picking with readily available technology, without having to wait for robot lawyers.
Niamh McKenna from Accenture wrote an interesting blog about the possibilities of artificial intelligence in the medical sector. She points to practical problems that can be solved using relatively simple technology. Technology could, for example, help reduce the number of no-shows for medical appointments (annual cost in the UK: around £1 billion, the equivalent of 250,000 hip operations). Other examples are tools that ensure that the right people, fluids and medicines are available in operating rooms. Her blog was aptly titled: 'Crawl before you can walk, before you can run'.

It is interesting to apply this line of thought to legal practice. Relatively simple technology sometimes has a big impact. In the field of document assembly and contract automation, for example, the first successful application within organizations is often the automation of the non-disclosure agreement (NDA, or confidentiality agreement). Despite being one of the least complex agreements within an organization, it can still consume quite a lot of a legal department's time. Automation could enable non-legal professionals to easily draft their own sound NDAs. The legal department could then confine itself to process control and spend the time saved on more important, strategic work. Other examples include systems that, by means of a short online questionnaire, ensure that procurement professionals use the right contract terms in the right situation. Broadly speaking, these types of applications play the role of an expert (contract) lawyer and could therefore be called expert systems, a branch of artificial intelligence. Although these systems may be low in complexity, they can often realize real savings for legal departments. Picking the low-hanging fruit first, before moving on to more complex issues: isn't that just a matter of 'artificial common sense'?
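As a hypothetical illustration of such a questionnaire-driven expert system, the rule-based sketch below maps a few answers to the contract terms a procurement professional should use. The questions, thresholds and clause names are assumptions made up for this example, not the rules of any real system.

```python
# Illustrative rule-based 'expert system' sketch: questionnaire answers in,
# recommended contract terms out. All rules and thresholds are made up.

def recommend_terms(answers: dict) -> list[str]:
    """Map questionnaire answers to the contract terms to include."""
    terms = ["standard procurement terms"]
    if answers["personal_data"] == "yes":
        terms.append("data processing agreement")
    if answers["contract_value_eur"] > 100_000:
        terms.append("extended liability and audit clause")
    if answers["supplier_creates_ip"] == "yes":
        terms.append("IP assignment clause")
    return terms

if __name__ == "__main__":
    # Example answers as they might come from a short online questionnaire.
    example = {
        "personal_data": "yes",
        "contract_value_eur": 250_000,
        "supplier_creates_ip": "no",
    }
    print("Recommended terms:", ", ".join(recommend_terms(example)))
```

The legal department's expertise sits in the rules themselves; once encoded, non-lawyers can apply that expertise consistently, and the lawyers only review changes to the rules or the genuinely unusual cases.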
Blog 3: Automation of legal advice: the human factor
Legal professionals know that many of the legal issues faced by consumers and organizations share common characteristics. It would be interesting, therefore, to explore to what extent legal advice in such similar cases could be standardized and automated. In doing so, human factors must be taken into account. Especially in consumer practice, many people believe that their particular legal problem is unique. Understandably so, because they are, in the words of legal sociologist Galanter, one-shotters. One-shotters are persons who encounter a certain legal problem just once in their lives, unlike repeat players: professionals who frequently have conflicts or disputes, usually as a result of their business activities.
Repeat players are usually better at approaching a dispute or conflict rationally. Especially when the financial impact is relatively low, they want to receive legal assistance as effectively as possible, at the lowest possible cost. For one-shotters, costs are usually even more important, but for them everything is new. Their legal problem will often invoke strong emotions, especially if it is work- or family-related. Compared to repeat players, the information asymmetry between the one-shotter and a legal adviser is commonly greater. Repeat players more or less know how conflicts proceed, what the legal frameworks are and what to expect from their advisers. One-shotters, especially at the start of a dispute, often feel as if they are groping in the dark.
In various standard cases, it may be technically feasible to provide consumers with sufficiently personalized advice by using digital decision support software. Consumers could use the result to defend their rights. In practice, a human legal adviser, who provides a listening ear and reassures them that the legal analysis of their case is sound (the human factor), will often be indispensable.

Of course, professional legal assistance has other advantages as well. A letter from a professional lawyer is more likely to have an impact than a letter from a consumer. Nor does the above mean that digital decision support cannot help reduce costs for one-shotters. A well-designed digital intake and diagnosis, for example with clear instructions and help videos, can inform the client, clarify questions and contribute to realistic expectations. As a result, the information asymmetry between the consumer and the legal adviser can be significantly reduced before the first meeting with the adviser. Digital tools can effectively support the advisory process, reduce the time spent by the adviser and thus the associated costs.
The full title of Galanter's essay on one-shotters and repeat players was: "Why the 'Haves' Come Out Ahead: Speculations on the Limits of Legal Change". Technology can help reduce the cost of high-quality legal advice, improving the chances that one-shotters with a fair case will come out ahead.
Blog 4: Robot judges?
The question of when a computer will be able to replace a judge features regularly in discussions about legal tech. In these discussions, it sometimes seems as if participants see technology taking over legal practice in the near future, decimating the number of legal professionals. Few appear to realize that over the past decades (search) technology has greatly improved the efficiency of legal professionals, while the overall number of professionals working in legal practice has only increased. At least for this period, it appears that the juridification of society has outrun technology's ability to increase the efficiency of legal professionals.
There is no doubt that in relatively simple situations technology can replace humans in legal decision-making. For years, fines for running a red light have been imposed automatically, where humans were needed in the past. The legal principle is clear here ("not stopping at a red light is not allowed") and the facts can be determined relatively easily by red-light cameras. But even here there can be complex cases where a human decision is needed, for example in the case of cloned license plates. In simple cases, however, the system can replace judicial decisions.
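A hypothetical sketch of this division of labour is given below: the clear-cut rule is applied automatically, while anything that casts doubt on the facts (such as a possibly cloned plate) is routed to a human official. Field names and thresholds are invented for illustration and do not describe any real enforcement system.

```python
# Illustrative sketch: automate the simple red-light cases, refer the rest to a human.
from dataclasses import dataclass

@dataclass
class RedLightObservation:
    plate: str
    light_was_red: bool
    plate_confidence: float      # camera's confidence in the plate reading, 0.0-1.0
    plate_reported_cloned: bool  # plate appears in a register of suspected cloned plates

def decide(obs: RedLightObservation) -> str:
    """Apply the clear rule automatically; route doubtful cases to a human."""
    if obs.plate_reported_cloned or obs.plate_confidence < 0.95:
        return "refer to human official"                     # complex or doubtful case
    if obs.light_was_red:
        return f"issue fine to holder of plate {obs.plate}"  # simple, clear-cut case
    return "no violation"

if __name__ == "__main__":
    print(decide(RedLightObservation("AB-123-C", True, 0.99, False)))  # fine issued
    print(decide(RedLightObservation("XY-987-Z", True, 0.99, True)))   # human review
```

The point of the sketch is the escape hatch: the system only decides where both the norm and the facts are beyond reasonable doubt.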
Few will object to technology relieving courts and the judicial system of these kinds of simple situations. However, the majority of cases that end up in court are not simple but complex. In these situations, parties dispute the facts and the applicable legal standards have a complex and open character. In civil law countries, many legal standards are in fact deliberately formulated in an open way to ensure that the legal system can respond adequately to new situations. Therefore, courts deciding a case on the basis of open standards always create new law, even if only on a micro scale.
Without going into technical details, it is fair to say that experts agree we are still far from a situation in which a digital system could, in complex cases, take over all the tasks and functions of a judge. Should we ever reach that technical level, a whole set of new and important questions enters the arena. Do we want technology to create new law? Who or what creates new law in those cases? Will we be replacing the judge with a programmer, or will it, more likely, be a self-learning system that creates new standards by itself? What do we do if the system makes decisions that many people find unacceptable? Do we want technology to be able to impose severe sentences on people, or should this always be a human decision?
Apart from the technological obstacles, these questions are so complex that we can safely assume that, for the foreseeable future, human judicial review will remain necessary. Until then, the only real question is: to what extent do we want technology to support judges in making decisions? As long as the question is posed in this way, humans will remain the masters of technology. This, in my view, is the only correct starting point.
