I've been invited to address the Pacific Northwest Software Quality Conference in November and have agreed to trial it as a lightning talk in January. This is a work in progress; see the first draft, Indexing the Invisible.
motivation
It's hard to know where to look when large systems built by many teams start misbehaving.
In an event retrospective we ask: what were we thinking then, given the information we had available then?
I once asked an expert: what do you do that you are surprised others don't do? The answer: find seams between code.
systems
A property graph pre-reifies relations between nodes, with key-value properties on each. Save it as JSON.
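For illustration, here is one plausible shape for that JSON, written as a TypeScript sketch. The field names (type, props, from, to) are assumptions for this example, not a fixed schema.

```typescript
// Illustrative only: one plausible shape for the graph JSON, not a fixed schema.
// Nodes and relations each carry a type and free-form key-value props;
// relations are reified as their own objects and point at nodes by index.
interface Node { type: string; props: Record<string, string> }
interface Rel  { type: string; from: number; to: number; props: Record<string, string> }
interface Graph { nodes: Node[]; rels: Rel[] }

const checkout: Graph = {
  nodes: [
    { type: "Service", props: { name: "checkout" } },
    { type: "Queue",   props: { name: "orders" } }
  ],
  rels: [
    { type: "Publishes", from: 0, to: 1, props: { format: "json" } }
  ]
};

console.log(JSON.stringify(checkout, null, 2));
```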
Every aspect of every micro-service deserves its own graph, each a corner of the block diagram that exposes the unknown in a seam.
Every unknown uncovered in a retrospective should be in a seam or we must put it there before continuing.
Quality experts own the learning from incidents. They can "pull request" updates to the graph JSON.
process
I built a graph in Neo4j with a half million relations from forty sources. Fifty canned queries could draw thousands of useful diagrams. This work proved hard to duplicate.
First innovation: separate the graph representation from the block diagrams. Generate diagrams on demand. Merge the graphs surrounding the seams.
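A minimal sketch of merging at a seam, assuming the illustrative shape above and assuming a node's identity is its type plus a name property: nodes that both graphs mention become the shared seam, everything else is carried across with its relation indices remapped.

```typescript
// A minimal sketch, not the shipped implementation: merge two graphs by
// treating nodes with the same type and name as the same node (the seam).
interface Node { type: string; props: Record<string, string> }
interface Rel  { type: string; from: number; to: number; props: Record<string, string> }
interface Graph { nodes: Node[]; rels: Rel[] }

const key = (n: Node) => `${n.type}/${n.props.name ?? ""}`;   // assumed identity rule

function merge(a: Graph, b: Graph): Graph {
  const nodes: Node[] = [];
  const index = new Map<string, number>();          // identity -> merged position

  const place = (n: Node): number => {
    const k = key(n);
    if (!index.has(k)) { index.set(k, nodes.length); nodes.push(n); }
    return index.get(k)!;
  };

  const rels: Rel[] = [];
  for (const g of [a, b]) {
    const remap = g.nodes.map(place);               // old index -> merged index
    for (const r of g.rels) rels.push({ ...r, from: remap[r.from], to: remap[r.to] });
  }
  return { nodes, rels };
}
```

When both graphs carry props for the same node, this sketch keeps the first copy; a real merge needs a policy for conflicting properties.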
Second innovation: separate each aspect of a service into its own graph. Graphs will proliferate, so build a "recommender engine" to suggest merges.
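One way such a recommender could work, again only a sketch under the same assumed shape: score every pair of aspect graphs by how many node identities they share and surface the strongest overlaps as candidate merges.

```typescript
// A sketch of one possible recommender: rank graph pairs by shared node identities.
// The "name" field labelling each aspect graph is a convenience added for this example.
interface Node { type: string; props: Record<string, string> }
interface Aspect { name: string; nodes: Node[] }

const key = (n: Node) => `${n.type}/${n.props.name ?? ""}`;

function suggestMerges(aspects: Aspect[], top = 5) {
  const suggestions: { a: string; b: string; shared: number }[] = [];
  for (let i = 0; i < aspects.length; i++) {
    const seen = new Set(aspects[i].nodes.map(key));
    for (let j = i + 1; j < aspects.length; j++) {
      const shared = aspects[j].nodes.filter(n => seen.has(key(n))).length;
      if (shared > 0) suggestions.push({ a: aspects[i].name, b: aspects[j].name, shared });
    }
  }
  return suggestions.sort((x, y) => y.shared - x.shared).slice(0, top);
}
```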
Build the graph JSON in the continuous integration pipeline. Store hyperlinked SVG in the repo alongside the other README files. Run graph recommendation client-side in the browser so it stays accessible during incidents.
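A sketch of what that CI step might look like, with assumptions the text doesn't spell out: each aspect graph is checked in as a *.graph.json file, and the step writes one combined graph.json plus a crude hyperlinked SVG (a stand-in for real diagram layout) whose labels link back to the README beside each fragment.

```typescript
// A sketch only: collect *.graph.json fragments, concatenate them into one
// graph.json, and emit a placeholder SVG whose labels link to nearby READMEs.
// The file naming and output paths are assumptions for this example.
import * as fs from "fs";
import * as path from "path";

interface Node { type: string; props: Record<string, string> }
interface Rel  { type: string; from: number; to: number; props: Record<string, string> }
interface Graph { nodes: Node[]; rels: Rel[] }

function findFragments(dir: string, found: string[] = []): string[] {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const p = path.join(dir, entry.name);
    if (entry.isDirectory() && entry.name !== ".git" && entry.name !== "node_modules") {
      findFragments(p, found);
    } else if (entry.name.endsWith(".graph.json")) {
      found.push(p);
    }
  }
  return found;
}

const merged: Graph = { nodes: [], rels: [] };
const links: string[] = [];                           // per-node link to its fragment's README

for (const file of findFragments(".")) {
  const g = JSON.parse(fs.readFileSync(file, "utf8")) as Graph;
  const offset = merged.nodes.length;                 // naive concatenation, not the seam merge above
  merged.nodes.push(...g.nodes);
  merged.rels.push(...g.rels.map(r => ({ ...r, from: r.from + offset, to: r.to + offset })));
  links.push(...g.nodes.map(() => path.join(path.dirname(file), "README.md")));
}

fs.writeFileSync("graph.json", JSON.stringify(merged, null, 2));

const rows = merged.nodes.map((n, i) =>
  `<a href="${links[i]}"><text x="10" y="${20 * (i + 1)}">${n.type}: ${n.props.name ?? i}</text></a>`);
fs.writeFileSync("graph.svg",
  `<svg xmlns="http://www.w3.org/2000/svg" width="400" height="${20 * (merged.nodes.length + 1)}">${rows.join("")}</svg>`);
```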
future
I have built and open-sourced what I have described. This comes from a decade exploring "data wiki", mutually informed by my day job.
I am at your service should you and your teams wish to apply what I have learned and speak openly of your experience.