Software Performance Optimization Heuristics: Fast, Frugal and Factual

The following is a graphic I’ve used in the past to frame various software performance optimization techniques. It is not a comprehensive inventory of all such techniques (or concerns), but I’ve found it useful in managing the amount of effort that, in general, should be spent on each technique outside of extreme cases such as trading platforms (or profilers). The left side represents the typical localized, bottom-up approach to speeding up code execution steps. Most developers involved in performance improvement efforts start on the left side because it feels less abstract and more tangible than system dynamics or software adaptation, both of which require a much deeper understanding of what actually happens outside of the code editor.
Software Regulators! Mirror Outwards, Simulate Inwards.

The Good Regulator Theorem states that “every good regulator of a system must be a model of that system”. But what exactly would such a model look like? What elements should the model contain, and how might they be related and reasoned about? The theorem itself does not address this, so in this article I present my own research findings covering dramatism, observational learning, experiential learning, activity theory, simulation theory and mirror neurons, as well as software activity metering and software performance measurement.

Essentially, we need a model of human and software understanding based on activities performed by actors within an environment that supports observation and perception of such acts, including the situational context surrounding them, both before and after. An actor (not in the sense of the actor programming model) produces an action, beginning and ending it in response to, or in anticipation of, some stimulus (an action, signal or event). Such an actor could very well be mapped to a service, thread, process, system or human (by proxy).
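The shape of such a model can be sketched in code. The names below (`Stimulus`, `Act`, `Environment`) are illustrative assumptions, not an existing API: an actor begins and ends an action in response to a stimulus, and the environment retains each completed act so it can later be observed with its context.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: an actor (service, thread, process, or human by proxy) begins
// and ends actions in response to, or in anticipation of, a stimulus.
enum StimulusKind { ACTION, SIGNAL, EVENT }

record Stimulus(StimulusKind kind, String name) {}

record Act(String actor, String action, Stimulus cause, Instant begin, Instant end) {
    Duration duration() { return Duration.between(begin, end); }
}

final class Environment {
    private final Deque<Act> observed = new ArrayDeque<>();

    // Record a completed act so observers can perceive it, and its
    // surrounding context, after the fact.
    void perceive(Act act) { observed.addLast(act); }

    // Recall the most recent act attributed to a given actor.
    Act lastActBy(String actor) {
        return observed.stream()
                .filter(a -> a.actor().equals(actor))
                .reduce((first, second) -> second)   // keep the last match
                .orElse(null);
    }
}
```

A real model would also capture the situational state before and after each act; here only the causal stimulus and the begin/end instants are kept, which is the minimum needed to reason about ordering and cost.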



“They [autotelic people] are more autonomous and independent because they cannot be as easily manipulated with threats or rewards from the outside. At the same time, they are more involved with everything around them because they are fully immersed in the current of life.”



To fight current levels of complexity in IT systems, we must look to imbue software with the ability to sense, perceive, reason and act locally with immediacy.

Software must adapt, not simply react. Feedback signals need to flow freely across machine boundaries as well as man-and-machine interfaces.
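A minimal sketch of what "adapt, not simply react" can mean locally: a component that feeds an observed signal (here, latency) back into its own behavior (here, a concurrency limit). The class name, thresholds and adjustment policy are assumptions for illustration, not a real library.

```java
// Hypothetical local sense-and-adapt loop: instead of treating each
// request in isolation, the limiter adjusts its admission limit from
// the latency it observes, acting locally and with immediacy.
final class AdaptiveLimiter {
    private int limit;
    private final long targetNanos;

    AdaptiveLimiter(int initialLimit, long targetNanos) {
        this.limit = initialLimit;
        this.targetNanos = targetNanos;
    }

    // Feedback signal: an observed request latency.
    void observe(long latencyNanos) {
        if (latencyNanos > targetNanos && limit > 1) {
            limit--;                     // under pressure: shed load
        } else if (latencyNanos < targetNanos / 2) {
            limit++;                     // headroom: admit more work
        }
    }

    int limit() { return limit; }
}
```

The same feedback shape applies across machine boundaries: the signal observed need not originate in the process that adapts to it.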


By projecting software execution behavior and contextual state across space and time, software engineers can develop new and augmented systems that bridge the past, present and future, allowing software machines to transcend the structures formed in the early stages of design and over the course of extemporaneous, reactive change.



Your hardware has memory but your software has no memories.

What if software could recall past memories for the purpose of learning?

What if we could observe machine memories to more effectively reason about complex software execution behavior?
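One way to give software "memories" is a bounded episodic log of past execution episodes that can later be recalled and reasoned about. The `Episode` shape, the eviction policy and the cost-prediction helper below are assumptions sketched for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Sketch: a remembered execution episode and a bounded episodic memory
// that can recall past episodes of an activity for learning.
record Episode(String activity, long startNanos, long costNanos, String outcome) {}

final class EpisodicMemory {
    private final Deque<Episode> episodes = new ArrayDeque<>();
    private final int capacity;

    EpisodicMemory(int capacity) { this.capacity = capacity; }

    void remember(Episode e) {
        if (episodes.size() == capacity) {
            episodes.removeFirst();      // forget the oldest memory
        }
        episodes.addLast(e);
    }

    // Recall all remembered episodes of a given activity.
    List<Episode> recall(String activity) {
        return episodes.stream()
                .filter(e -> e.activity().equals(activity))
                .toList();
    }

    // Learn from memories: predict an activity's likely cost from its past.
    double expectedCostNanos(String activity) {
        return recall(activity).stream()
                .mapToLong(Episode::costNanos)
                .average()
                .orElse(Double.NaN);
    }
}
```

Observing such memories directly, rather than only their aggregate, is what lets an engineer (or the machine itself) reason about complex execution behavior after the fact.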


Clients are offered expertise in the performance tuning, monitoring and management of JVM runtimes executing applications developed in Java, Scala, Clojure, JavaScript (Nashorn/Rhino) and Ruby (JRuby), with particular experience in scaling and optimizing high-frequency, low-latency request processing systems.


Using self-adaptive instrumentation and measurement tooling, performance and scalability problem identification is all but guaranteed. Within a matter of minutes of measuring a representative workload, various potential bottlenecks and optimization call sites will be accurately identified.
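The self-adaptive idea can be sketched as a meter whose probes disable themselves once a call site proves too cheap to ever be a bottleneck, concentrating measurement overhead on hotspots. The class, thresholds and probation policy below are hypothetical illustrations, not the actual tooling described above.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical self-adaptive instrumentation: per-call-site metering
// that switches itself off for sites whose mean cost is negligible.
final class AdaptiveMeter {
    static final long MIN_MEAN_NANOS = 1_000;  // assumed "too cheap to matter" floor
    static final int PROBATION_CALLS = 3;      // calls observed before judging a site

    static final class Site {
        long calls, totalNanos;
        boolean enabled = true;
    }

    private final Map<String, Site> sites = new HashMap<>();

    // Callers check this before timing, skipping disabled probes entirely.
    boolean isEnabled(String name) {
        return sites.computeIfAbsent(name, k -> new Site()).enabled;
    }

    void record(String name, long elapsedNanos) {
        Site s = sites.computeIfAbsent(name, k -> new Site());
        if (!s.enabled) return;
        s.calls++;
        s.totalNanos += elapsedNanos;
        // Adapt: after probation, stop measuring sites that are too
        // cheap, on average, to be a candidate bottleneck.
        if (s.calls >= PROBATION_CALLS && s.totalNanos / s.calls < MIN_MEAN_NANOS) {
            s.enabled = false;
        }
    }

    long meanNanos(String name) {
        Site s = sites.get(name);
        return (s == null || s.calls == 0) ? 0 : s.totalNanos / s.calls;
    }
}
```

The sites still enabled after a representative workload are, by construction, the surviving candidate bottlenecks.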



Efficient data collection coupled with unique software execution visualizations ensures that all parties involved in a performance investigation gain unprecedented insight into the execution nature and resource consumption patterns of applications and, more importantly, a high degree of confidence in the report findings.


Through distributed software recording and simulated playback, the time spent measuring the performance of an application under observation and analysis is greatly reduced. This allows much of the investigative work to be moved outside of business-critical operating windows.
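The record-then-playback split can be sketched as follows: events are captured cheaply during the business window, then replayed later, offline, against any analysis without re-running the observed system. The `Event` shape and the `Recorder` API are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: a recorded execution event and a recorder that supports
// simulated playback of the captured tape in original time order.
record Event(long atNanos, String thread, String op) {}

final class Recorder {
    private final List<Event> tape = new ArrayList<>();

    // Capture phase: cheap append during the critical operating window.
    void record(Event e) { tape.add(e); }

    // Playback phase: drive any analysis over the tape, offline, as if
    // the execution were happening again.
    void playback(Consumer<Event> analysis) {
        tape.stream()
            .sorted((a, b) -> Long.compare(a.atNanos(), b.atNanos()))
            .forEach(analysis);
    }
}
```

Because the tape, not the live system, is what gets analyzed, the same recording can be replayed repeatedly under different questions at no cost to production.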