Excerpt from Ray Garcia:
This is an extensive version of a business rules conversion prompt. It can take any human-language statement of a business rule and convert it to the precise equivalent JavaScript code. I included many examples of human language that expresses the same rule in different ways, and then the JavaScript code that executes that rule.
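As a hypothetical illustration of the kind of pairing the prompt produces (the rule and function name here are invented, not taken from the actual prompt), several phrasings of one rule collapse into a single predicate:

```javascript
// Three phrasings of the same business rule:
//   "Orders over $100 ship free."
//   "If the order total exceeds 100 dollars, waive the shipping fee."
//   "Shipping is not charged when a customer spends more than $100."
// All three reduce to the same JavaScript predicate:
function qualifiesForFreeShipping(order) {
  return order.total > 100;
}

console.log(qualifiesForFreeShipping({ total: 150 })); // true
console.log(qualifiesForFreeShipping({ total: 80 }));  // false
```

The point of including many phrasings in the prompt is exactly this many-to-one mapping: varied natural language, one canonical piece of code.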
I then extended it further to include many of the more complex logical constructs that lawyers use in legal contracts.
This is a fairly complete “extra-smart” contract that matches real-world situations found in full legal contracts.
I tested it with a small model, Mistral 7B Instruct, and it works with that model.
I used a large model, Anthropic's Claude Opus, the most advanced model they have, to generate many of these examples.
To test it afterwards, I generated a set of rules for the insurance industry, which is very complex, and converted them to JavaScript code. Since I worked only from the rules, there was no schema, so I did the reverse: I generated a JSON schema to support the rules, and from that schema I generated code to implement it on the RocksDB key-value database.
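A minimal sketch of that reverse pipeline, with all names invented for illustration: a rule implies a schema, and the schema maps onto a key-value layout. A plain Map stands in for RocksDB here; a real deployment would use a RocksDB binding with the same get/put shape.

```javascript
// Schema implied by a rule like "a policy lapses if the premium
// is 30 or more days overdue" (hypothetical example):
const policySchema = {
  type: "object",
  required: ["policyId", "premiumDueDate", "status"],
  properties: {
    policyId: { type: "string" },
    premiumDueDate: { type: "string" }, // ISO date
    status: { enum: ["active", "lapsed"] },
  },
};

// Key-value layout: one record per policy under a "policy:" prefix,
// value stored as a JSON string. A Map stands in for RocksDB.
const db = new Map();

function putPolicy(policy) {
  db.set(`policy:${policy.policyId}`, JSON.stringify(policy));
}

function getPolicy(policyId) {
  const raw = db.get(`policy:${policyId}`);
  return raw === undefined ? null : JSON.parse(raw);
}

putPolicy({ policyId: "P-100", premiumDueDate: "2024-01-01", status: "active" });
console.log(getPolicy("P-100").status); // "active"
```

Keyed prefixes like `policy:` are a common convention on key-value stores because they keep related records adjacent in the sorted keyspace, which makes prefix scans cheap.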
This is interesting in that it reverses the entire process: instead of trying to figure out the data structure first, it started with human language describing some rules and from that generated the structure to support the data. This is closer to how people think.
The leap here is that it might be possible to take a legal document, parse it with an LLM into its constituent rules, and then implement it as verifiable software that maintains the rules of the contract.
I have spent a lot of time creating specialized scripting languages to control an LLM to do this, but this is the first time it gets beyond my own experiments and into something that could be implemented.
Why this is such a breakthrough: LLMs are non-deterministic, so even though I have several scripting languages I use for prompting them, they remain unreliable. Converting the output to code makes it more reliable.
What was missing is a stack that is lean and would scale. Doing this with Python and Postgres on fat hosting isn't practical: it becomes a mess, is very costly on big-tech hosting, and then won't scale due to many factors like code bloat.
By using the LLM as the mediator that bridges human language and code, it can create lean solutions fast if it has a good stack to run on.