IoT system architects face the challenge of finding the optimum balance between edge and cloud computing as they scale up their deployments.
There are many reasons why cloud computing might be preferred. At first glance, cloud-based intelligence seems to offer several clear advantages:
- Cloud-based intelligence offers a more transparent view of past system behaviour since every piece of system sensor data can be stored for post-analysis and offline modelling. For example, digital twinning enables continuous optimisation of a system by digitally emulating the behaviour of physical assets.
- Sensor data can be integrated across different IoT sub-systems in a simple and efficient manner.
- Cloud-based intelligence allows IoT devices to be relatively simple with few hard-coded decisions. This feeds a perception that simple devices are cheaper, because they are easier to validate, test and deploy.
Simplicity seems to be the key motivation here. But simplicity is not always optimum, especially when IoT systems begin to scale.
'Data lakes' can grow to 100TB and beyond surprisingly quickly, while the cost of connecting additional sensors rises roughly linearly with the number of system nodes.
But value creation does not necessarily keep pace with the growing sensor population or 'data lake', and so operating margins start to be compressed.
Intelligent edge computing can go some way to reverse this trend.
More intelligent devices, with the ability to make local decisions, may also be able to throttle back their data backhaul as an IoT system matures.
Perhaps the analogy with a rookie employee is apposite. In the very early days, the supervisor will tend to require a high bandwidth dialogue, but as the employee’s skill and autonomy grow this will gradually decay to an optimum point. Within an IoT system, we might draw a comparison with devices that share adaptive statistical models of their sensor data sets, rather than flooding cloud-hosted 'data lakes' with raw numbers.
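To make the analogy concrete, here is a minimal Python sketch of that idea: a hypothetical edge node buffers raw readings locally and reports only a compact statistical summary, and only when the local distribution drifts from the last reported model. The class name, drift threshold and report format are illustrative assumptions, not any particular product's API.

```python
import statistics

class EdgeSummariser:
    """Hypothetical edge node that reports summary statistics, not raw data."""

    def __init__(self, drift_threshold=2.0):
        self.samples = []
        self.reported_mean = None
        self.reported_stdev = None
        self.drift_threshold = drift_threshold

    def ingest(self, reading):
        """Buffer a raw sensor reading locally instead of backhauling it."""
        self.samples.append(reading)

    def maybe_report(self):
        """Return a compact (mean, stdev) report only when the local
        distribution has drifted from the last reported model; otherwise
        return None and send nothing over the air."""
        if len(self.samples) < 10:
            return None            # not enough data to summarise yet
        mean = statistics.mean(self.samples)
        stdev = statistics.pstdev(self.samples)
        scale = self.reported_stdev or 1.0   # guard against zero spread
        if (self.reported_mean is None
                or abs(mean - self.reported_mean) > self.drift_threshold * scale):
            self.reported_mean, self.reported_stdev = mean, stdev
            self.samples.clear()
            return {"mean": mean, "stdev": stdev}
        self.samples.clear()       # distribution unchanged: report nothing
        return None
```

In this sketch a stable sensor transmits a handful of bytes per drift event rather than a constant stream of raw numbers - exactly the bandwidth decay the rookie-employee analogy describes.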
Enter the world of machine learning at the edge.
Take one example: predictive maintenance. Algorithms trained by machine learning at the edge discriminate real-time fail-safe events from routine wear-and-tear conditions that may require attention at the next scheduled service interval. This is achieved using a combination of low-latency edge compute tasks to drive local decision making, supervised by higher-order statistical models trained with a superset of sensor data.
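As a simple illustration of that partitioning (with entirely invented thresholds and units), a fast local rule could trigger fail-safe action immediately, while a slower running average flags gradual wear for the next scheduled service:

```python
# Illustrative sketch only: thresholds and units are arbitrary assumptions,
# not taken from any real predictive-maintenance system.

FAIL_SAFE_VIBRATION = 9.0   # immediate local shutdown threshold
WEAR_BASELINE = 3.0         # long-run mean above this suggests wear

def classify(window):
    """Classify a window of vibration samples captured at the edge.

    Returns 'fail_safe' for an immediate low-latency local action,
    'service_due' when the average suggests wear, else 'normal'.
    """
    if max(window) >= FAIL_SAFE_VIBRATION:       # fast local decision path
        return "fail_safe"
    if sum(window) / len(window) > WEAR_BASELINE:  # slower statistical check
        return "service_due"
    return "normal"
```

The point is the split of responsibilities: the first branch never waits on the cloud, while the statistical baseline it compares against can itself be refined centrally and pushed back down to the device.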
It is easy to grasp the benefits of machine learning in heavy industry, but what would be the cost to add similar degrees of intelligence into a very simple IoT device?
Moore’s Law has driven up CPU capability inside the lowliest of devices, to a point where very substantial edge computing tasks may be offloaded from the cloud whilst incurring no additional hardware cost in the IoT device. Leading CPU companies like Arm continue the march, adding advanced machine learning features across their portfolios.
With low-cost machine learning platforms being available at the edge, it seems that an optimum approach to defining highly scalable IoT systems will come from intelligent partitioning of cloud-centric and edge-centric sub-systems.
In addition to lowering the connectivity bandwidth requirements, there are of course other reasons why intelligent compute sub-systems at the edge might be deemed highly desirable:
- Edge compute inside wireless sensor nodes can improve the in-field battery life, since the nodes are required to transmit much less data
- Edge compute provides low-latency decisions that are resilient to connectivity service outages
Wherever such characteristics confer a competitive advantage, there will naturally be a stronger pull towards edge computing.
The most obvious beneficiaries may include high-mobility, latency-intolerant use cases (e.g. connected vehicles), or those suffering a relatively high cost of connectivity (e.g. satellite IoT deployments). But an IoT system deploying a large population of simple battery-powered sensor nodes can also illustrate the business case for edge computing.
In this example, nodes are configured to backhaul 20 x 100 byte payloads every day, permitting a 10-year battery life for sensors located within 1km of the IoT gateway. However, sensors sitting outside a 5km radius cannot manage even 4 years on a single charge. Simple remedies to this problem might be (i) using a bigger battery (additional capex) or (ii) implementing an in-field servicing regime (additional opex).
This IoT system uses supervised learning in year 1 to construct an updated firmware image for the nodes. The new firmware image requires each node to backhaul significantly less data, in exchange for running a lightweight edge compute task on its local data set. The system is now reconfigured to extend its in-field service life to 10 years. (In this example, the firmware is tweaked once again during year 5, at a small but discernible one-off energy cost.)
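A back-of-envelope energy-budget model shows how such numbers can arise. All constants below - battery capacity, per-payload energy, the distance-squared range term - are assumptions chosen purely to reproduce the figures in this example, not measured values:

```python
# Hypothetical energy budget: each 100-byte payload costs a fixed amount of
# energy plus a term growing with distance squared (free-space-like loss).
PAYLOADS_PER_DAY = 20
BATTERY_J = 10_000.0   # assumed usable battery capacity, joules
E_FIXED_J = 0.127      # assumed per-payload energy independent of range
K_RANGE_J = 0.010      # assumed per-payload energy per km^2

def battery_life_years(distance_km, payloads_per_day=PAYLOADS_PER_DAY):
    """Estimate battery life in years from the daily transmit energy spend."""
    per_payload = E_FIXED_J + K_RANGE_J * distance_km ** 2
    daily = payloads_per_day * per_payload
    return BATTERY_J / daily / 365

print(round(battery_life_years(1), 1))   # 10.0 years at 1km
print(round(battery_life_years(5), 1))   # 3.6 years at 5km
```

Under the same assumptions, cutting backhaul from 20 payloads a day to 7 - as the new firmware might - restores roughly a 10-year life even at 5km, which is precisely the trade the lightweight edge compute task buys.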
Whatever the driver, it seems that architects must embrace the edge to enhance the performance of their connected systems, in a way that is intelligent and scalable.
A ‘head-in-the-clouds’ strategy for IoT is a partial one, at best. Indeed, a default preference for cloud computing may even carry risks. With edge computing, smart devices will play a valuable role in IoT systems, bringing an intelligence of their own.