Stenos is to application performance monitoring what Docker is to application deployment. Whereas a Docker container wraps up a piece of software in a complete filesystem that contains everything it needs to run (code, runtime, system tools, and system libraries), a Stenos recording captures the episodic memory of an application's execution, including threads, methods, frames, (partial) state, resource metering, and timing, in order to simulate a playback of the essence of the software's behavior for observation, analysis, visualization, diagnostics, and post-execution augmentation. Stenos is a time machine for software behavior. It is the past, present and future of application performance monitoring!
No state changes to the external computing environment occur with each repeated playback and simulation of a software memory. If a resource transaction was executed during the original recording, the playback will only simulate it, not re-execute the same external environment interaction.
An episodic memory remains completely intact and is replayed near-identically across each repeated recall. To alter a memory, such as by filtering it, one creates a brand-new recorded memory of the memory being simulated. Stenos can record and simulate in both real and simulated environments.
To effect a new change within an external environment, one need only plug an interceptor into the memory recall and, in the course of the simulated playback, perform some interaction with an external resource, much like how stateless applications deployed in Docker containers persist state across stateful user/system interactions.
Because Stenos does not record the complete application heap or stack state but instead focuses on the actual software behavior, namely the pushing and popping of method invocation frames on and off the thread stack, it can simulate the playback of a large enterprise application within a few MBs of memory.
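To make the frame-based model concrete, here is a minimal sketch of what replaying a recorded stream of method enter/exit events might look like. The names (`Replay`, `FrameEvent`, `inclusiveTime`) are illustrative assumptions, not the actual Stenos API; the point is that memory use during playback is bounded by the maximum stack depth, never by the original application's heap.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class Replay {
    // One recorded event: a method frame either pushed (enter = true)
    // onto a thread's stack or popped (enter = false) off it.
    public record FrameEvent(String method, boolean enter, long nanos) {}

    // Walks the recorded event stream, mirroring the original pushes and
    // pops on a small local stack. Derives inclusive wall-clock time for
    // a method without re-executing any of the original code.
    public static long inclusiveTime(String method, FrameEvent... events) {
        Deque<FrameEvent> stack = new ArrayDeque<>();
        long total = 0;
        for (FrameEvent e : events) {
            if (e.enter()) {
                stack.push(e);
            } else {
                FrameEvent entered = stack.pop();
                if (entered.method().equals(method)) {
                    total += e.nanos() - entered.nanos();
                }
            }
        }
        return total;
    }
}
```

The same event stream can be walked again and again, each recall producing an identical timeline at a fraction of the original cost.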
Post-execution simulated playback is done without access to the application code (bytecode). And because it does not execute the code but instead simulates, mimics and mirrors the entering and exiting of methods, it consumes very little CPU compared with the resources consumed when the application was actually executing.
Large-scale server applications, in terms of memory footprint and thread count, such as Apache Cassandra or Apache Hadoop, can be played back on relatively low-end workstations without taxing the hardware or perturbing the accuracy of the timeline reconstruction.
Using the Probes Open API, developers can inject code into the simulated playback of a software episodic memory for the purpose of filtering, visualizing, aggregating, and integrating with external systems across space (a different runtime) and time (a different window of execution). What's more, the same code can run unchanged within both real and simulated environments.
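As a sketch of the kind of code that could be injected into a playback, consider a probe that aggregates invocation counts. The class and callback names below (`CountingProbe`, `onEnter`) are hypothetical, chosen for illustration rather than drawn from the actual Probes Open API; what matters is that the same callback could fire during a live, metered run or a simulated recall.

```java
import java.util.HashMap;
import java.util.Map;

public class CountingProbe {
    private final Map<String, Long> counts = new HashMap<>();

    // Invoked once for every method entry, whether that entry is
    // happening live in-process or being simulated from a recording.
    public void onEnter(String method) {
        counts.merge(method, 1L, Long::sum);
    }

    // Aggregated result, ready to export to an external system.
    public long count(String method) {
        return counts.getOrDefault(method, 0L);
    }
}
```

Because the probe depends only on the callbacks it receives, the aggregation can be re-run later, offline, against the same recording in a completely different environment.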
Unlike with SaaS and traditional application performance monitoring solutions, all metering measurements and runtime metadata stored within a Stenos recording are completely accessible to developers and operations via the Probes Open API.
The Stenos Open Binary Storage format also allows tools not developed in Java (or another JVM language) to produce and inspect software memory recordings.
Stenos unlocks the dreams and episodic memories of metered software.
Stenos offers a revolutionary new way of isolating integrations with other systems and services by providing a simulated execution environment in which integration code can be executed as if it were in the real application environment. The real application code need not be changed, or even be aware of such integrations, as these can be executed in a different time and space.
Many cross-cutting concerns, such as auditing, security, business analytics, event notification, and logging, can now be performed in complete isolation from the real environment. More importantly, the interweaving of such concerns can be done offline, in different environments and different time windows.
Stenos offers a mock-like environment in which to test-drive integrations offline with real-world representative workloads before actual deployment to production.
Today it is common to replicate data across machine boundaries, but what of execution behavior? Whilst remote procedure call (RPC) middleware has allowed us to move execution across process and machine boundaries, at a very coarse granularity, these calls do not necessarily represent the replication of inherent software behavior, but merely a form of service delegation. The type of mirroring I am referring to here is the simulated playback, online or offline, of a software's execution behavior, in which a thread performing a local function or procedure call is near-simultaneously mirrored in one or more "paired" runtimes.
Imagine writing a method invocation interceptor class that is called within a process whenever a method is invoked by a thread. This has been possible for some time using various frameworks, such as Spring and JEE/CDI, and AOP technologies, such as AspectJ. Now imagine that same interceptor class being able to intercept method invocations across multiple Java runtimes and threads without actually being present within each of those runtimes, and without a single line of code change: a mirrored runtime, down to the thread and call stack as well as some environment state. Now let's go even further and imagine the very same interceptor class receiving the same callbacks, within the same mirrored threads, from a past recording that can be repeatedly run. Again, no changes, even though the space and time aspects of the entire environment have changed.
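Such an interceptor might look like the sketch below. The callback names (`enter`, `exit`) and the class itself are assumptions for illustration, not an actual Stenos or AOP-framework API; the key property is that the class depends only on the callbacks it receives, so the same code works whether those callbacks originate live in-process, from a mirrored paired runtime, or from a replayed recording.

```java
import java.util.ArrayList;
import java.util.List;

public class AuditInterceptor {
    // Ordered audit trail of frame entries and exits, per callback.
    private final List<String> trail = new ArrayList<>();

    // Fired when a (real, mirrored, or replayed) thread enters a method.
    public void enter(String thread, String method) {
        trail.add(thread + " > " + method);
    }

    // Fired when that thread exits the method.
    public void exit(String thread, String method) {
        trail.add(thread + " < " + method);
    }

    public List<String> trail() {
        return trail;
    }
}
```

Nothing in the class knows, or needs to know, which space or time its callbacks came from.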
A proposal for a different approach to application performance monitoring that is far more efficient, effective, extensible and eventual than traditional legacy approaches based on metrics and event logging. Instead of seeing logging and metrics as primary data sources for monitoring solutions, we should see them as a form of human inquiry over some software execution behavior that is happening or has happened. With this in mind, it becomes clear that logging and metrics do not serve as a complete, contextual and comprehensive representation of software execution behavior.