The adoption of the Industrial Internet of Things (IIoT) will profoundly change the way things work. One significant aspect of that change will be edge analytics.
To paraphrase the opening of a popular movie from around the turn of the century: “The world is changed. Much that was known has been forgotten.” Few things will remain the same. The adoption of low-cost, commodity-based, consumer-grade hardware with discrete capabilities for remote sensing and distributed computation will spread into most areas of production – from the farmyard to the cooktop, the mine face to the manufacturing hub, the ATM and smartphone to the taxation offices and reserve banks. These new capabilities will radically change and improve efficiency, effectiveness and efficacy – but they come with a sting in the tail.
There is a quiet revolution taking place in most sectors of the productive economy: the adoption of remote sensor and distributed computing technology to push businesses’ decision-making closer to the ‘data points’ where the work is actually done. This is the next wave of Internet of Things (IoT) sensors, and they are popping up in all manner of places.
Generally, the solution involves executing specific, discrete business rules against the data acquired by the sensor. This acquire-evaluate-action loop is local to the sensor and does not require communication from the edge of the network back to a central processor to perform the work.
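As a rough illustration of such a loop, the Python sketch below polls a reading, evaluates a single local rule and triggers a local action. The `read_temperature` and `set_cooling` functions and the threshold values are hypothetical stand-ins for real device drivers and configured rules, not any particular vendor’s API.

```python
import random
import time

# Hypothetical values for a single local business rule; a real deployment
# would read these from configuration pushed down to the device.
TEMP_HIGH_C = 85.0
POLL_INTERVAL_S = 1.0

def read_temperature() -> float:
    """Stand-in for the real hardware read (e.g. an ADC or I2C driver call)."""
    return random.uniform(20.0, 100.0)

def set_cooling(enabled: bool) -> None:
    """Stand-in for the local corrective action, e.g. switching a relay."""
    print(f"cooling {'on' if enabled else 'off'}")

def acquire_evaluate_act(cycles: int = 10) -> None:
    """The local loop: acquire a reading, evaluate the rule, act.
    No communication back to a central processor is required."""
    for _ in range(cycles):
        temperature = read_temperature()       # acquire
        too_hot = temperature > TEMP_HIGH_C    # evaluate
        set_cooling(too_hot)                   # act locally
        time.sleep(POLL_INTERVAL_S)

if __name__ == "__main__":
    acquire_evaluate_act(cycles=3)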
Communication from the edge sensors back to the central application happens only when there are significant events to capture, or when the data indicates that conditions have reached a state the local business rules cannot handle. A layer of capability is also being introduced between the distributed remote sensors and the centralised corporate processing to provide the necessary computing power, reduce the flood of data the sensor array creates, and prevent the wholesale loss of critical data that an over-enthusiastic implementation of edge business processing could cause.
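One way to picture that intermediate layer is as a small gateway that buffers readings, forwards periodic summaries, and escalates only the events the local rules cannot absorb. The sketch below is a minimal illustration under that assumption; `send_to_central`, the thresholds and the batch size are placeholders, not any specific platform’s interface.

```python
from statistics import mean
from typing import List

# Illustrative limits; in practice these would be managed centrally and
# pushed down to the near-edge layer.
CRITICAL_HIGH_C = 95.0   # a condition the local rules cannot handle
BATCH_SIZE = 60          # readings summarised per upstream message

def send_to_central(payload: dict) -> None:
    """Stand-in for the uplink to the central application (MQTT, HTTPS, ...)."""
    print("uplink:", payload)

class NearEdgeGateway:
    """Buffers one sensor's readings, forwarding only summaries and
    significant events (one gateway instance per sensor stream, for simplicity)."""

    def __init__(self) -> None:
        self._buffer: List[float] = []

    def ingest(self, sensor_id: str, value: float) -> None:
        self._buffer.append(value)

        # Significant event: escalate immediately rather than waiting for a batch.
        if value > CRITICAL_HIGH_C:
            send_to_central({"sensor": sensor_id, "event": "critical", "value": value})

        # Otherwise reduce the flood: one compact summary per BATCH_SIZE readings.
        if len(self._buffer) >= BATCH_SIZE:
            send_to_central({
                "sensor": sensor_id,
                "event": "summary",
                "min": min(self._buffer),
                "max": max(self._buffer),
                "mean": round(mean(self._buffer), 2),
            })
            self._buffer.clear()
```

In an arrangement like this the central application still sees every critical event immediately, but routine readings arrive as compact summaries rather than a raw stream.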
This processing architecture is commonly referred to as edge analytics. Several companies are currently developing devices and applications to support this capability, including IBM, Cisco, Intel, Dell and HP, among others.
Figure 1: IBM Watson’s representation of edge analytics (© IBM)
The advantages of edge analytics are substantial:
- Smaller data sets transmitted to the central processing area
- Better use of telecommunications and limited data bandwidth
- Faster reaction to sensor data and better remediation of ‘out of bounds’ conditions
- Ability to ensure that geographically responsive rules are deployed only where they are required
- Reduction in overall hardware costs due to cheaper processing at end-points
Reserving the central processing facility for large-scale data lakes and full-spectrum Business Intelligence (BI) processing using cubes leverages the scale and cost of those facilities for the value they can derive. Simply using them to run a fleet of ‘dumb’ sensors is neither efficient nor cost-justifiable.
In many ways, these types of IoT networks mimic the mechanisms that the human central nervous system (CNS) and reflex arc (RA) use.
The BI layer is akin to the CNS – it has lots of ‘expensive’ processing power and can integrate a wide range of data elements, using previous patterns and heuristics to discern meaning and derive new information. The edge analytics, by contrast, are like our reflexes – simple stimulus-response loops developed for an extremely narrow range of sensed data and created to enact a specific set of actions.
The BI/CNS uses the central application server/brain to perform the tasks required. The edge analytics/reflex uses the edge servers/spinal cord to enact the stimulus-response action.
So where is the challenge in all of this? Is it simply a matter of developing edge analytics that are more like the reflex arc, and leaving the heavy lifting to the central computing facility?
Well, yes and no.
The challenge is coordinating the edge analytics and the stimulus-response pairs involved. Like some human reflexes, the edge analytics devices need to be trained so they respond to stimuli in the correct way. Similarly, edge analytics need to understand that a single instance of a stimulus may be acceptable, but 50 of the same stimulus in rapid succession is not. They may also need to know whether any of their cohort devices are responding to the same, or contradictory, stimuli. In fact, the further the deployment of edge analytics progresses, the more management facilities will need to be placed ‘near the edge’.
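To make one of those coordination problems concrete, the sketch below shows how a device near the edge might distinguish a single, acceptable stimulus from a rapid burst of the same stimulus and hand the burst off to the central layer. The window length, burst limit and ‘escalate’ outcome are all illustrative assumptions rather than a prescribed design.

```python
import time
from collections import deque

# Hypothetical coordination policy: one stimulus is fine, a burst is not.
WINDOW_S = 10.0     # length of the sliding window, in seconds
BURST_LIMIT = 50    # more events than this inside the window means trouble

class StimulusMonitor:
    """Tracks how often a stimulus fires and decides whether the normal local
    response still applies or the situation should be escalated."""

    def __init__(self) -> None:
        self._timestamps = deque()  # times of recent stimuli

    def register(self) -> str:
        now = time.monotonic()
        self._timestamps.append(now)

        # Discard events that have aged out of the sliding window.
        while self._timestamps and now - self._timestamps[0] > WINDOW_S:
            self._timestamps.popleft()

        # A single stimulus (or a modest number): handle it locally as usual.
        if len(self._timestamps) <= BURST_LIMIT:
            return "handle_locally"

        # A rapid burst of the same stimulus: the local rule no longer applies.
        return "escalate_to_central"
```

Extending a check like this to consider what cohort devices are seeing is exactly the kind of management facility that will need to live ‘near the edge’.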
This requirement will have the perverse effect of raising the cost of the edge sensors as they are required to perform more ‘intelligent’, autonomous, local operations. Yet this runs counter to the reason the network was deployed in the first place – lower cost and greater operational efficiency.
Those who have been in the IT industry for a while may recognise this pattern: it mimics the ‘centralised’ versus ‘decentralised’ and ‘mainframe’ versus ‘distributed computing’ models of decades past. The answer to the IIoT conundrum of edge analytics may end the same way the older models did – in a mixture of deployments based on fitness for purpose.
So for those industries on the cusp of the IIoT revolution – and that is just about all of them – careful architecture and design of the solution must start at the very beginning, because making changes to an operating business’s nervous system is difficult and expensive.