Edge computing is easy to sell, but difficult to define. Edge and cloud are more of a philosophy than any single architecture: the two sit on a spectrum, with today's cloud service model often relying on processing in the browser, and even the most far-flung edge deployments depending on some central infrastructure.
Edge's philosophy, as most Reg readers doubtless know, is to push as much processing and computation as possible out to where data is collected and consumed.
If biology is a guide, edge computing is a good evolutionary strategy. The octopus has a central brain, but each tentacle can analyze its environment, make decisions, and react to events. The human gut runs itself with roughly the same processing power as a deer has, while eyes and ears both do local processing before passing data onward. All of these natural systems gain efficiency, robustness and flexibility: attributes that IT edge implementations should also aspire to.
But these natural analogies also illustrate one of edge's most important aspects – its diversity. 5G is often held up as the poster child for edge. It owes much of its potential to a design based on edge principles, which devolve decisions about setting up and managing connections to distributed control systems. Its combination of high bandwidth, low latency and prioritized traffic management, all delivered to moving targets, cannot work unless as much processing as possible happens as close as possible to the radios (and therefore the users).
But another high-profile edge application, transport, demands a completely different approach. An airliner can generate a terabyte of performance and diagnostic data on a single flight, far more than its in-flight data links can carry.
Spread that across a fleet in constant global motion, and centralized control is not an option. Autonomous on-board processing, prioritization of safety-critical information such as moment-to-moment engine parameters onto whatever real-time links are available, and efficient offloading of bulk data when the opportunity arises all lead to design decisions far removed from 5G's.
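The triage described above can be sketched as a simple priority scheduler. This is an illustration, not any airline's actual system: the priority classes, message names and the idea of a per-link message budget are all invented for the example.

```python
import heapq

# Lower number = more urgent. SAFETY items must go over the live,
# narrow in-flight link; BULK waits for a fat link on the ground.
SAFETY, DIAGNOSTIC, BULK = 0, 1, 2

class TelemetryScheduler:
    def __init__(self):
        self._queue = []   # heap of (priority, sequence, payload)
        self._seq = 0      # tie-breaker keeps FIFO order within a priority

    def submit(self, priority, payload):
        heapq.heappush(self._queue, (priority, self._seq, payload))
        self._seq += 1

    def drain(self, link_budget_msgs):
        """Send at most link_budget_msgs messages, most urgent first."""
        sent = []
        while self._queue and len(sent) < link_budget_msgs:
            _, _, payload = heapq.heappop(self._queue)
            sent.append(payload)
        return sent

sched = TelemetryScheduler()
sched.submit(BULK, "airframe vibration log")
sched.submit(SAFETY, "engine 2 EGT excursion")
sched.submit(DIAGNOSTIC, "cabin pressure trend")

# A narrow in-flight link has room for only one message right now:
print(sched.drain(1))  # ['engine 2 EGT excursion']
```

The bulk log simply stays queued until a call to `drain` with a larger budget, standing in for the moment the aircraft reaches a gate with a high-bandwidth connection.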
Every sector touted as a natural fit for edge – IoT, digital health, manufacturing, energy, logistics – resists the idea of edge as a single discipline. As an OpenStack white paper, “Edge Computing: Next Steps in Architecture, Design and Testing”, puts it: “Although the interest in edge computing is undisputed, there is little consensus about a standard edge definition, solution or architecture.”
However, if we focus on what these deployments share, in both benefits and challenges, we can see where the journey might lead.
Edge computing requires scalable, flexible networks. Even if a particular deployment is stable in size and resource requirements over a long period, to be economical it must be built from general-purpose tools and techniques that can meet a wide variety of requirements. To that end, Software Defined Networking (SDN) has become a focus for future edge development, although a body of recent research has identified areas where it is not yet fully up to the task.
SDN's characteristic approach is to split networking into two jobs, control and data transfer, handled by a control plane and a data plane, the latter managed by the former through dynamic reconfiguration driven by a combination of rules and monitoring. This sounds like a good match for edge computing, but SDN usually has a centralized control plane that expects a global view of all network activity. As researchers at Imperial College London point out in a recent paper [PDF], that is neither scalable nor robust, two essential edge requirements. Alternative approaches – multiple control planes, more intelligence in edge switch hardware, dynamic partitioning of the network by demand, geography and flow control – are being examined, as are the interactions between security and SDN in edge management.
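The control/data split works roughly like this toy model (all names invented, and real controllers such as those speaking OpenFlow are far richer): switches forward purely by local flow-table match, and a table miss is punted to the controller, which answers from its global view and installs a rule so later packets never leave the fast path. The model also makes the scaling concern concrete: every miss anywhere lands on one central brain.

```python
class Controller:
    """Centralized control plane holding the global network view --
    the single component the article notes strains at edge scale."""
    def __init__(self, topology):
        # Global view: destination -> (switch name -> output port)
        self.topology = topology

    def packet_in(self, switch, dst):
        # Compute a forwarding decision and install it on the switch,
        # so subsequent packets for dst are handled locally.
        port = self.topology[dst][switch.name]
        switch.flow_table[dst] = port

class Switch:
    """Data plane: forwards by flow-table lookup only."""
    def __init__(self, name, controller):
        self.name = name
        self.controller = controller
        self.flow_table = {}  # destination -> output port

    def forward(self, dst):
        if dst not in self.flow_table:            # table miss:
            self.controller.packet_in(self, dst)  # ask the control plane
        return self.flow_table[dst]               # fast path thereafter

ctl = Controller({"10.0.0.7": {"edge-sw1": 3}})
sw = Switch("edge-sw1", ctl)
print(sw.forward("10.0.0.7"))  # first packet consults the controller: 3
print(sw.forward("10.0.0.7"))  # later packets match locally: 3
```

The multi-controller and partitioning schemes mentioned above amount to replacing that single `Controller` with several, each owning a slice of the topology.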
The conclusion here, as elsewhere, is that research in this area is very active, and although the problems are not yet fully solved, these techniques will form the basis of efficient edge networking.
That conclusion leads to another aspect of edge development: what are the common rules for building and managing infrastructure, services and apps?
Development and management on the edge
As edge architectures evolve, they are extending DevOps principles to infrastructure: more insight into how things are actually running, adoption of proven and popular open source components and approaches, and the practical benefits of rapid reconfiguration and deployment.
With an “everything as code” approach, exemplified by SDN and container management and deployment tools such as Kubernetes, the whole range of edge architectures from highly centralized to highly distributed can be managed with the same tools, an important consideration as the technologies mature and take their place in the market.
Kubernetes offers a common layer of abstraction on top of physical resources such as compute, storage and networking, enabling standardized deployment anywhere, even on heterogeneous edge devices across different infrastructures. This pairs well with the growing capability of cross-platform development tools, enabling a device-independent approach that fits edge's economics and its need to cultivate diverse ecosystems.
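What that abstraction looks like in practice: one Deployment spec, shown here as a Python dict printed as JSON (which Kubernetes accepts alongside YAML), targeted at a subset of heterogeneous nodes. The app and image names are invented for the example; `kubernetes.io/arch` is one of Kubernetes' standard well-known node labels, so the same manifest shape can be pointed at ARM gateways or x86 boxes just by changing the selector.

```python
import json

# Minimal Deployment manifest for a hypothetical edge agent.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "sensor-agent"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "sensor-agent"}},
        "template": {
            "metadata": {"labels": {"app": "sensor-agent"}},
            "spec": {
                # Pin this copy to arm64 edge gateways; a sibling
                # Deployment with arch "amd64" would cover x86 nodes.
                "nodeSelector": {"kubernetes.io/arch": "arm64"},
                "containers": [{
                    "name": "agent",
                    "image": "example.com/sensor-agent:1.4",
                }],
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```

`kubectl apply -f` would take this JSON unchanged; the point is that the scheduler, not the operator, works out which physical box each replica lands on.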
All of this must be backed by monitoring and testing. One approach to creating maintainable edge deployments is artifact review, where everything created as part of an overall system is documented well enough to be tested, and built on, by others with reproducible results.
In general, all the ideas of DevOps best practice – inter-team communication, standardization of practice, automation wherever possible, instrumentation – need to be reinforced to cope with the new scale, ever-changing makeup and diverse business needs that the edge brings with it.
The problem of managing edge deployments, which in many cases, IoT among them, have end nodes of varying ages, capabilities and technologies, quickly leads to hugely complex permutations of configurations. Mobile app developers know this all too well, constantly deciding which minimum configuration to support, how to handle different geographic regions, and how to support customers who deviate from the norm. They are a good, if unwitting, test bed for some aspects of edge reality.
Edge standards are being developed to get this under control. ETSI, the European telecoms standards organization, and the 3GPP mobile standards group have agreed to work together [PDF] on integrating cloud services and cellular networks at the edge, including how applications discover edge services. Internet-era systems such as DNS assume that the entities they point to stay where they are; edge, especially mobile edge, doesn't work that way.
Another major hub of activity is LF Edge, the Linux Foundation's edge group, which has just released EdgeX 2.0 “Ireland”, an important update to its nascent standards package. It includes secure APIs for connecting devices and networks and for managing data channels, as well as the Open Retail Reference Architecture (ORRA), a common delivery platform for managing apps, devices and services.
Although the EdgeX standard has changed in some respects, the project intends to use this release as the basis for a Long Term Support (LTS) version later in 2021. The package ships as Docker containers, underlining the consensus that edge must be built along DevOps lines to be viable.
Edge’s hidden vices
For edge to make a good business case, it has to be the most efficient way to solve a problem. For the poster children – 5G, transport, IoT – it is often the only viable solution. In more general cases, though, it has to beat the cloud-first, device-second model on efficiency. The big cloud providers, with their hyper-efficient internal management systems and economies of scale, are tough competition here.
One review [PDF] of the technological, economic and industrial future of edge computing across the European Union notes that Google claims its administrators monitor 10,000 servers each, compared with one administrator per hundred servers in standard enterprise-class data centers, and that Amazon's data centers come out three and a half times more energy efficient in a similar comparison. If your edge deployment aims to pull much of its computing out of the cloud, you may find yourself fighting those economies of scale. They are the raw economics that make the cloud so dominant, and they are not going away.
Security is also a major challenge. Moving data center workloads to the edge removes the physical safeguards against theft and tampering, and managing security credentials for thousands or hundreds of thousands of nodes, when connectivity or power to some may be intermittent, is no small matter. Done carefully, though, edge can be more secure than standard approaches: many IoT sensors have too few spare resources for strong cryptography, but a local control node can add it before data is sent on.
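That local-node pattern can be sketched with the standard library. This example uses an HMAC to give sensor batches integrity and authenticity rather than full encryption (which would need a third-party library), but the shape is the same: constrained sensors speak plaintext on the local link, and the gateway seals each batch before it leaves the site. The key, field names and readings are all placeholders; in practice the key would come from a provisioning system, not sit in code.

```python
import hmac, hashlib, json

SITE_KEY = b"demo-key-from-provisioning"  # placeholder for the example

def seal_batch(readings):
    """Gateway side: wrap raw sensor readings with an integrity tag
    before forwarding them upstream."""
    body = json.dumps(readings, sort_keys=True).encode()
    tag = hmac.new(SITE_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def verify_batch(sealed):
    """Upstream side: recompute the tag and compare in constant time."""
    expected = hmac.new(SITE_KEY, sealed["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["tag"])

sealed = seal_batch([{"sensor": "t1", "celsius": 21.4}])
print(verify_batch(sealed))  # True
```

Any tampering with the body in transit makes `verify_batch` return False; swapping the HMAC step for authenticated encryption at the gateway follows the same structure.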
However, an edge deployment increases a system's attack surface, so monitoring and active log scanning for signs of trouble must scale accordingly.
More than most emerging technologies, the future of edge computing depends on everyone involved. From high-level academic researchers to the DevOps engine room, every layer of the industry needs to know what the others are doing. For edge to work, a whole network of existing ideas in infrastructure, management, development, monitoring, security and architecture must explore the options together.
No organization, not even the tech giants, can push it where it doesn't belong, and none can hold it back where it delivers workable innovation. Even without the hype, life on the edge is going to be interesting. ®