With the proliferation of IoT devices embedded in households under the banner of “smart home” initiatives, there is good cause for concern about the privacy afforded to consumers, especially when such devices share data, intentionally or not, with other devices or services. Unfortunately, the backlash this provokes might well hamper the development of truly intelligent systems. But there may be hope in an alternative approach that goes beyond big data feeds.
When it comes to IoT, I like to distinguish between smart and intelligent by way of a somewhat weak analogy with the brain. A smart device is much like a control system: it reacts relatively fast to the local changes it observes and reasons about such changes over a small time frame and within a very specific context (or coordinate). An intelligent system, on the other hand, adapts relatively slowly, responding to changes in behavior, and to some extent state (or achievement of goals), observed over a bigger time frame and a broader context (a global space). For the slow & big brain to develop a sort of collective intelligence, it needs to receive stimulus from the remote fast & small brains and then, over time, turn this into signals that better adapt the reactive mechanisms within those fast & small brains.
Today the stimulus to grow (machine) intelligence takes the form of sensory data, big data, transferred between device and cloud – the very data that concerns many consumers. But what if, instead of sending data pertaining to such things as a thermostat’s temperature set point, what was transmitted mostly concerned the actions taken by the embedded software machine – an episodic memory of the algorithm itself? Instead of sending measurements of state, possibly impinging on privacy, the smart device would stream in real time its internal actions, its function calls, to a backend simulation running in a “cloud” that mirrored many devices simultaneously. This mirrored machine world would then allow the providers of IoT devices to observe the “smartness” of their algorithms in action and to record, recall and learn from such simulations without intruding on privacy, at least to the extent feared today. There would still be data sharing of sorts, but at a higher order – not state, but the action of control activities.
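To make the idea concrete, here is a minimal sketch in Python of a device emitting an episodic stream of actions rather than state. All names here are illustrative, not part of any real product; a real device would ship these events to the backend simulation (say, over the wire) rather than hold them in memory:

```python
import time

class EpisodeStream:
    """Collects action events: what the algorithm did, never what it measured.
    In a deployed system these events would be streamed to a backend
    simulation mirroring many devices simultaneously."""
    def __init__(self):
        self.events = []

    def record(self, action):
        self.events.append({"action": action, "at": time.monotonic()})

stream = EpisodeStream()

def episodic(fn):
    """Decorator: emit the fact that an action ran, deliberately dropping its
    arguments and return value, so no sensor readings, set points or other
    state ever leave the device."""
    def wrapper(*args, **kwargs):
        stream.record(fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

@episodic
def raise_heating(delta):
    pass  # actuate hardware here

@episodic
def lower_heating(delta):
    pass  # actuate hardware here

raise_heating(0.5)
lower_heating(0.2)
print([e["action"] for e in stream.events])  # ['raise_heating', 'lower_heating']
```

Replayed against a model of the device, such a stream lets the provider observe the algorithm's behavior in the mirrored world while the private state (the 0.5 and 0.2 above) never left the home.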
The feed between the real world and the mirrored world need not be uni-directional. With bi-directional streaming between devices and the cloud software, developers could move from largely static and reactive control algorithms to far more dynamic and adaptive ones driven by signals sent back from the mirrored world. Many control algorithms have a number of static factors (hardwired parameters) that could very well be adjusted (adapted) online by the mirrored world via signals, themselves mirrors of factors, sent back to the device. A smart algorithm would move from local to distributed as well as global. A benefit of doing so is that the software on devices could remain largely unchanged; the smartness would be achieved remotely, from data shared across many mirrored devices and from enhancements made to the signal actuators (or generators) within the mirrored, simulated device world.
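As a sketch of what such online adaptation might look like, consider a control loop whose hardwired factors are exposed for adjustment by signals arriving from the mirrored world. The class, factor names and values below are all hypothetical:

```python
class ThermostatControl:
    """A reactive control loop whose formerly hardwired factors become
    tunable 'factors' that a mirrored world can adapt via signals,
    leaving the control logic on the device unchanged."""
    def __init__(self):
        self.factors = {"gain": 0.8, "deadband": 0.5}

    def on_signal(self, factor, value):
        # A signal from the mirrored world adapts one factor online.
        if factor in self.factors:
            self.factors[factor] = value

    def step(self, error):
        # Classic reactive control: respond only to the local error.
        if abs(error) <= self.factors["deadband"]:
            return 0.0
        return self.factors["gain"] * error

ctl = ThermostatControl()
out_before = ctl.step(2.0)    # 0.8 * 2.0 = 1.6
ctl.on_signal("gain", 0.5)    # mirrored world decides a gentler gain suits this home
out_after = ctl.step(2.0)     # 0.5 * 2.0 = 1.0
```

The device still runs the same `step` loop it always did; what changes over time are the factors, steered by what the mirrored world has learned from many devices at once.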
Another benefit of this approach is that any additional sharing of observations and extension of capabilities would be performed from within the mirrored world, without giving up the identity of, and access to, the real device itself. Smart devices would remain lean and focused on the control aspects, with expensive computation and extended capabilities offloaded to the cloud – a mirror world of machine behavior – affording other means of communication with the consumer. What’s more, offline machine memories and simulated recollection play more naturally to the immutability and isolation needs of computation, whilst solving one of the biggest issues I personally find with engineering big data systems: the disconnect between data collection in the form of telemetry data, as promoted by MQTT, and the perception of reality as performed by humans, which is grounded in motion and action.
We need to rethink the primary importance we give to big data and see it merely as a means to recreate reality in the form of machine action. When this is done the design of big data structures and storage will radically change.
Being realistic, I don’t believe it will be possible to train the big brain in the cloud solely on event data describing the sequencing of software execution, completely devoid of context. There will still be a need to transmit parameterized data associated with action and flow, but it won’t be “let’s just send everything we can collect until we figure out how to monetize it [data]”.
For consumers of IoT devices and services, there needs to be an appreciation of what it means to grow real intelligence that serves as an effective cognitive agency for human action (and objectives). The effectiveness of a smart device will always be limited to a point in time and to an average expected behavior and usage assumed by device/software engineers when continuous online stimulus and adaptation is impractical, if not impossible. Consumers should be as wary of devices claiming to share little as of those feared to share far too much. Device designers should not draw too sharp a distinction between the appliance and the service – the operation – which is distributed across many machines, minds, memories and mirrors.
The future will be simulated…eventually.

When I designed and developed Satoris, Sentris, Stenos, Simz and Signals, my goal was to drag application performance monitoring & management (APM) out of the stone age where it remains today. Now I see the technology, if not the approach, as being far more applicable to many challenges facing the industry. What is the next big thing after Big Data? I hope it is Big Behavior – the simulation of software episodic memories for recollection and adaptation across space and time.