Graph Complexity
Nowadays, most software development teams automatically measure one or more “code complexity” metrics and reject commits that exceed the configured thresholds. These can be simple ones, like lines of code in a function or in a file, but also more involved ones like NPATH complexity.
At work, we want to get out of the way of business experts and let them manage their business flows as much as possible. This means moving things from code to tools like rule engines and decision engines. I am all for it, but some alarm bells did go off in my head.
The first was testing. The tools we reviewed do have some way for you to test the workflows/graphs in the same UI, even before you publish them. This is quite straightforward for tools that do no I/O (i.e. all the data comes from the input you send them). I’ll leave testing for another post.
The second thing was complexity. How can we help our team manage that complexity? In the software world we are used to creating reusable things, and we also have these tools screaming at us when we reach certain levels of complexity metrics.
Composition was part of Verdetto from the start, but I only just added complexity measurements. Moving business rules into visual tools like this does have a clear advantage: it helps expose complexity. If the business rules are mostly in hard-to-find source code files and hard-to-read tests, the team owning that complexity might not even be aware of it. Seeing a complex graph might be a good way for the whole team to become more aware of the complexity, and to better judge whether it is actually warranted.
The main metric I wanted to have is “n-path”. I like to describe it as a proxy for “in how many ways can we traverse this graph?”. Some nodes (say, an NxM matrix node) look like a single path but hide many variants inside. This means that a simple-looking graph can still have a high degree of complexity.
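To make the idea concrete, here is a minimal sketch of that kind of path counting over a DAG. This is not Verdetto’s actual implementation; the graph shape, the `variants` table (where a node like a 3x4 matrix contributes 12 internal variants), and all names are illustrative assumptions.

```python
from functools import lru_cache

def count_paths(graph, variants, start, end):
    """Count the distinct start-to-end traversals of a DAG.

    graph:    node -> list of successor nodes
    variants: node -> number of internal variants (1 for plain nodes;
              e.g. an NxM matrix node would contribute N * M)
    """
    @lru_cache(maxsize=None)
    def paths_from(node):
        if node == end:
            return variants.get(node, 1)
        # A node's paths: its own variants times the paths of each branch.
        return variants.get(node, 1) * sum(
            paths_from(nxt) for nxt in graph.get(node, [])
        )

    return paths_from(start)

# A graph that *looks* like two simple branches...
graph = {
    "start": ["check", "matrix"],
    "check": ["end"],
    "matrix": ["end"],  # ...but this is a 3x4 matrix node: 12 variants inside
}
variants = {"matrix": 12}

print(count_paths(graph, variants, "start", "end"))  # 1 + 12 = 13
```

The point the example makes: the drawing shows two branches, but the metric reports 13 ways through the graph, because the matrix node hides its variants.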
Handling complexity is of course part of what we do, and composition is one way to handle it, akin to abstraction. You can build a child orchestration that handles a complex workflow to look up the latest data, wait for updates from external tools, etc., and then use it in several simple parent orchestrations. I argue that this is a good thing, so for Verdetto I chose to expose both measures: the local complexity and the total complexity of the graph. The local one is the one that matters when trying to grasp what a graph actually tries to do.
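The local/total split can be sketched with the same path-counting idea: locally, a referenced child orchestration counts as a single plain node; for the total, its own path count is substituted in. Again, this is an illustrative sketch, not Verdetto’s API; the child’s path count of 48 is an assumed example value.

```python
def count_paths(graph, variants, start, end):
    """Count start-to-end traversals of a DAG. `variants` gives a node's
    internal path count (defaulting to 1 for plain nodes)."""
    if start == end:
        return variants.get(start, 1)
    return variants.get(start, 1) * sum(
        count_paths(graph, variants, nxt, end) for nxt in graph[start]
    )

# A simple parent orchestration that just calls a complex child:
parent = {"start": ["lookup"], "lookup": ["end"]}
child_paths = 48  # assumed: computed from the child orchestration's own graph

# Local view: "lookup" is one node, so the parent reads as trivial.
local = count_paths(parent, {}, "start", "end")
# Total view: substitute the child's real path count for the "lookup" node.
total = count_paths(parent, {"lookup": child_paths}, "start", "end")

print(local, total)  # 1 48
```

This is why exposing both numbers is useful: the parent is genuinely simple to read (local complexity 1), even though executing it can take 48 different routes end to end (total complexity).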
In Verdetto, I chose to add a badge at the top that summarizes the complexity (Simple, in this case):

It will tell you the details in the HTML title:

The “approximate” label means this graph references a policy by its “latest” version, so we can’t be sure what the complexity is at runtime, because the embedded policy might have changed. You can hit the small refresh icon to update the values with the current latest policy.
I asked Claude to research what other tools do. I was a bit surprised that this is not something most tools worry about. The only example it found was in SAP. They do calculate a Complexity Score - you can read about it in this blog post: Understand the Changes to the Customer Journey Complexity Score. Other tools either don’t have it or, at best, the community offers some plugins or external tools to help.
So, why bother measuring complexity anyway, if so few tools do it? I still believe that measuring complexity and making it a first-class concern of the tool will help teams:
- Model their solutions in a better way (i.e. handle the complexity better, now that it is more obvious).
- Question whether the tool is the best solution for the problem they are trying to solve.
Not all problems must be solved in the same tool, after all.