Our recent Research & Development (R&D) on using Large Language Models (LLMs) to automate complex data extraction from capital markets trading agreements illustrates both the exuberance and the disappointment experienced in enterprise adoption of this technology.
AI is Magic
The exuberance comes from the almost magical way that, with a carefully crafted prompt, any of the main LLMs can produce excellent results for a discrete section of text. This generally leads to attempts to utilise this powerful new capability to solve a real-world problem.
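As a rough sketch of that first ‘magic’ test (the clause text, prompt wording and call_llm helper below are all hypothetical, standing in for whichever LLM provider is actually used), the whole exercise fits in a few lines of Python:

```python
# Hypothetical sketch of the 'magic' first test: a hand-picked clause, a
# hand-written prompt, and a single LLM call. call_llm stands in for
# whichever provider's API is actually used.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned answer for illustration."""
    return '{"party": "Party A", "threshold_amount": "3% of shareholders\' equity"}'

# A discrete section of text, selected by a person who already knows where to look.
clause = (
    "The Threshold Amount with respect to Party A shall be an amount equal to "
    "three per cent (3%) of the shareholders' equity of Party A."
)

# A carefully crafted prompt, written by a person who knows what a good answer looks like.
prompt = (
    "You are extracting terms from a capital markets trading agreement.\n"
    "From the clause below, return JSON with the keys 'party' and 'threshold_amount'.\n\n"
    f"Clause:\n{clause}"
)

print(call_llm(prompt))
```

On a clause like this the result looks flawless, which is exactly what fuels the exuberance.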
AI Does Not Work
This second step is where the disappointment first creeps in.
In the initial tests, the LLM is being controlled by a person, using their own understanding of the domain and of where the discrete problem sits in the wider setting. The person identifies the section of text to feed to the LLM from a much larger document, using human intelligence, and writes the prompt, also using human intelligence.
Human In The Loop
In the initial (exuberant) step, they have unconsciously simplified the problem space significantly from the real-world challenge by applying domain knowledge, rules and structure. In the enterprise setting, where you are seeking to maximise automation and minimise human involvement, this part is the gap: the gap between the excellent ability of the LLM to extract the data and the requirement to have a human in the loop to manage the LLM. The human is also involved in tuning the prompt when the initial answer is not quite as expected, and again when attempting to solve the same problem with slightly different sample data. Once more, human intelligence is being used to cater for the variations and alter the prompt.
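To make the shape of that gap concrete, here is a minimal sketch of the kind of codified knowledge that has to replace the human’s ‘find the right clause’ step; the field names and patterns are assumptions for illustration, not a real agreement taxonomy:

```python
import re

# Hypothetical sketch: codified rules that locate the relevant clause in a full
# agreement, replacing the person who previously skimmed the document for it.
CLAUSE_PATTERNS = {
    "threshold_amount": re.compile(r"Threshold Amount.{0,40}?(?:means|shall be)", re.I | re.S),
    "governing_law": re.compile(r"governed by and construed in accordance with", re.I),
}

def locate_clause(document: str, field: str, window: int = 400) -> str | None:
    """Return the text surrounding the first match for the field's pattern, if any."""
    match = CLAUSE_PATTERNS[field].search(document)
    if match is None:
        return None  # the rules could not find the clause; a person must step in
    start = max(0, match.start() - 50)
    return document[start : match.end() + window]
```

Every entry in that dictionary is a small piece of domain knowledge that the person was supplying for free in the initial tests.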
Further domain knowledge often needs to be codified to validate the output.
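A sketch of what that codified validation might look like, again with illustrative field names and rules rather than anything definitive:

```python
import json
import re

# Hypothetical sketch: codified domain knowledge used to check the LLM's answer
# before it is accepted. The expected fields and amount format are illustrative.
EXPECTED_FIELDS = {"party", "threshold_amount"}
AMOUNT_PATTERN = re.compile(r"\d+(\.\d+)?\s*%|USD\s*[\d,]+", re.I)

def validate_extraction(raw_llm_output: str) -> dict:
    """Parse the LLM's JSON and reject anything that breaks the codified rules."""
    data = json.loads(raw_llm_output)  # raises ValueError on a non-JSON answer
    missing = EXPECTED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not AMOUNT_PATTERN.search(data["threshold_amount"]):
        raise ValueError("threshold_amount is not a recognisable amount or percentage")
    return data
```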
Symbolic AI – The Unfashionable Sibling
Most of the media hype, and the general population’s recent experience of AI (ChatGPT, Gemini, Copilot), is based around LLMs, which fall into the category of neural networks. But there are two paradigms, the other being Symbolic AI (think classical logic, rules and explicitly coded knowledge).
There are some very informative articles on the topic of Neuro-Symbolic AI, the idea that you need both paradigms to realise real-world results (see Gary Marcus, for example), which describe far more eloquently the tension between the two schools of artificial intelligence.
Filling the AI Gap
In summary, niche expert systems can unlock the benefits of LLMs where work has been performed to codify the domain-specific tasks that a human carried out in the initial ‘honeymoon’ period of testing LLM capability, thus enabling LLMs to be used successfully to solve a real-world problem. Just don’t be caught out by the gap. It’s there, it requires filling, and filling it takes significant effort.
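To illustrate how those codified pieces wrap around the LLM, here is a rough sketch of the general pattern (not our actual system; the locate, ask and validate helpers are the hypothetical ones sketched above, passed in as plain functions):

```python
from collections.abc import Callable

# Hypothetical sketch of the overall loop: symbolic code decides where to look and
# whether to trust the answer; the LLM only performs the extraction in the middle.
def extract_field(
    document: str,
    field: str,
    locate: Callable[[str, str], str | None],  # codified rules (symbolic)
    ask_llm: Callable[[str], str],             # LLM call (neural)
    validate: Callable[[str], dict],           # codified checks (symbolic)
    max_attempts: int = 2,
) -> dict | None:
    clause = locate(document, field)
    if clause is None:
        return None  # the rules say the clause is absent or unrecognised
    prompt = f"Extract '{field}' from the clause below and return JSON.\n\n{clause}"
    for _ in range(max_attempts):
        try:
            return validate(ask_llm(prompt))
        except ValueError:
            # The codified checks rejected the answer: tighten the prompt and retry,
            # standing in for the human who previously re-tuned it by hand.
            prompt += "\nReturn only valid JSON containing the expected fields."
    return None  # still failing: escalate to a human reviewer
```

Nothing in this sketch is sophisticated on its own; the effort lies in accumulating enough patterns, checks and fallbacks to cover the documents you actually encounter.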
For capital markets documentation, we’ve filled this gap with years of knowledge built into an expert system that is now rapidly unlocking the power offered by LLMs.
