To achieve a real understanding of software for the purpose of managing and changing it, we need a deeper level of observation, one as close as possible to what the software actually does. Software does not log. Developers write log calls to create an echo chamber.
This is a proposal for a different approach to application performance monitoring, one that is far more efficient, effective, extensible and eventual than traditional legacy approaches based on metrics and event logging. Instead of seeing logging and metrics as primary data sources for monitoring solutions, we should see them as a form of human inquiry into some software execution behavior that is happening or has happened. With this in mind, it becomes clear that logging and metrics do not serve as a complete, contextual and comprehensive representation of software execution behavior.
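To make the distinction concrete, here is a minimal sketch (all names hypothetical, not any particular monitoring product) contrasting a developer-written log line, which echoes only what was anticipated, with metering that records the execution behavior itself:

```python
import functools
import time

EVENTS = []  # the behavioral record: every metered call, not just logged ones

def meter(fn):
    """Record a begin/end event for every call to fn -- observing what the
    software actually does, rather than what a developer chose to say about it."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        event = {"fn": fn.__name__, "begin": time.perf_counter()}
        try:
            return fn(*args, **kwargs)
        finally:
            event["end"] = time.perf_counter()
            EVENTS.append(event)
    return wrapper

@meter
def handle_request():
    # A hand-written log call captures only the anticipated "echo":
    # logging.info("handling request")
    time.sleep(0.01)

handle_request()
print(EVENTS[0]["fn"])  # prints "handle_request"
```

The log statement is a fixed human utterance; the metered event stream is a record of the behavior itself, which later inquiry (including log-like views) can be derived from.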
The Good Regulator Theorem states that "every good regulator of a system must be a model of that system". But what exactly would such a model look like? What elements should the model contain, and how might they be related and reasoned about? The theorem itself does not address this, so in this article I present my own research findings covering dramatism, observational learning, experiential learning, activity theory, simulation theory and mirror neurons, as well as software activity metering and software performance measurement.
In engineering, and possibly life, there is always a tension between adaptability and structural stability. We want our designs to be highly adaptable, which many confuse with agility. With adaptation, our designs attempt to respond intelligently to change sensed within the environment, with more change of their own, though far more confined and possibly transient, at least initially. But there are limits to how far we can accelerate adaptation without putting incredible stress on both the environment and the very system contained within it. These restrictions apply to both structural and behavioral adaptation, as it is challenging to decouple the two due to system memory.
Structure is necessary, not because we need rigidity, but because structure is the basis for a system's memory. I see structure in code and software architecture as a formation of system memory. Sometimes the memory is formed, tainted, early in the development of software, in defining modules, classes, and methods. Bones. Other times we create these memory structures in the composing of code and data flows. Blood. If change were accelerated beyond these limits, there would be insufficient time for memory to form around such structures, ultimately leading to a breakdown in the system as prediction itself becomes chaotic in response to change. And it is not just predictability that is impacted but our ability to respond, because in many systems the structure is the means of response. Imagine if the cells in your body were constantly re-organizing in response to every stimulus. You would not be able to effect change in the physical world because you might not have an arm with which to do so, and that is before considering the effectiveness of this in an environment full of other participants undergoing their own sets of changes. The structure of our body is a kind of memory of evolution, but it is also how our memories form after birth.
So structural stability is important in the formation of a system's memory, but it also imposes limits that many will eventually find far too restrictive. Who does not want to be a superhuman, or an X-Men character with powers beyond what is humanly possible? This applies equally to software and to the systems that house and operate it. Over the course of the development, deployment and maintenance of software, memory structures build up within and around it. They harden, and in doing so begin to inhibit adaptability in other parts of the system, even as we attempt to create lubricating workarounds. Eventually, any change moves from adaptation to forced structural breakage, at which point no one wants to take the pain, especially as the risk to the overall integrity of the system is even greater.
But there is light at the end of the tunnel for software, because humans have already discovered and applied a solution to overcoming structural limitations (or injuries to such structures): simulation. In our dreams, we can perform at a level limited only by our imagination (assuming reasoning does not prevail too much in the simulation). Gaming is another form of simulation, one in which we leave aside our structure and the behavioral patterns and capabilities that form around it. We have extended our physical environments to sense (and invariably simulate) our behavior in order to assist us.
Why not add similar "super" capabilities to software by projecting the essence of the software, its behavior, into a simulation? Within the simulation (many can run in parallel) we extend the lifetime and reach of the software's adaptation, which in the typical enterprise setting amounts to how well the software integrates with other, newer services. We can imbue the software within the simulation with new system-wide behaviors that would be hard to achieve within the source (reality) software. We have tried this in the past with Aspect Oriented Programming (AOP), but there we still needed to operate within the structure of the software, as well as in the same spacetime. With simulation, that structure need not be present, only the coded (adapted) response to the simulated behavior. The simulation can be replicated across time and space, and with each incarnation different laws and rules can be applied, allowing newer structures to form. Remember that even in gaming worlds there is a kind of memory formation around spaces and skills.
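A minimal sketch of this idea, with all names hypothetical: behavior captured in the source software is projected as a stream of events, and each simulation replays that stream under its own laws, without needing the source software's structure:

```python
from dataclasses import dataclass

@dataclass
class Event:
    probe: str       # a named point in the execution flow, e.g. "checkout.payment"
    duration: float  # seconds observed in the source (reality) software

# Behavior recorded in the real system and projected outward.
recorded = [Event("checkout.payment", 0.20), Event("checkout.email", 0.05)]

def simulate(events, speedup=10.0, extra_rules=()):
    """Replay projected behavior under this simulation's own laws.
    Each incarnation can apply different rules (time dilation, injected
    policies) that the source software could never host itself."""
    total = 0.0
    for e in events:
        dilated = e.duration / speedup  # a law reality does not permit
        for rule in extra_rules:        # new system-wide behaviors
            dilated = rule(e, dilated)
        total += dilated
    return total

# Two parallel incarnations of the same projected behavior, different rules.
baseline = simulate(recorded)
throttled = simulate(recorded,
                     extra_rules=[lambda e, d: d * 2 if "payment" in e.probe else d])
print(baseline < throttled)  # prints True
```

Unlike AOP advice woven into the running program, the replayed rules here never touch the source software's structure or spacetime; they operate only on the projected behavioral record.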
So if we want to extend the lifetime of software, we must extend the observability and reach of the execution flow contained within it. By projecting such behavior across space and time, we give ourselves the chance to form new software systems that bridge the past, present and future.
Effectively, we move the reaction and response into the simulation, which can be enhanced with newer capabilities, wider connectivity and greater resource capacity.
Composable PaaS (or Cloud) is a very primitive form of mirroring and simulation. The end game should be a mirroring of worlds and universes.