The common misunderstandings that cloud the edge
by Delta India Consultant
As edge computing becomes more important to enterprises, myths have grown up around the edge. Let us examine the common misunderstandings and see how they stack up against reality.
MYTH 1: The edge will eat the cloud.
Distributed computing has been so ascendant that enterprise investors began to shift their priorities accordingly, with some issuing drastic forecasts. One notable prediction came in a 2017 talk titled “Return to the Edge and the End of Cloud Computing,” which declared that the machine-learning- and IoT-driven shift of computing from the cloud to the edge would see the cloud dissolve in “the near future.” Another analyst said that “the edge will eat the cloud.”
REALITY: Edge and cloud will enhance each other.
A recent IDC study predicts that by 2025, 30% of the world’s data will need real-time processing. Consider self-driving cars and connected vehicles (ones that exchange data with other vehicles but do not make decisions for the driver); they are intuitive edge use cases. If a connected or self-driving car’s sensors detect that children are playing in the road and another vehicle is likely to blow through a nearby red light, this information needs to be processed quickly. There are no milliseconds of latency to spare for a round trip to the cloud; the data must be acted upon right at that moment.
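The decision being described is essentially a routing policy: readings with a tight latency budget are handled locally at the edge, while everything else can go to the central cloud. The sketch below illustrates that idea only; the function, field names, and threshold are all invented for this example, not any vendor's API.

```python
# Illustrative sketch of latency-based edge/cloud routing.
# All names and thresholds here are hypothetical.

EDGE_LATENCY_BUDGET_MS = 20  # decisions that must land within this window stay local

def route_reading(reading: dict) -> str:
    """Return 'edge' for latency-critical readings, 'cloud' for the rest."""
    if reading.get("deadline_ms", float("inf")) <= EDGE_LATENCY_BUDGET_MS:
        return "edge"   # e.g. "child in the road": act immediately, on-device
    return "cloud"      # e.g. routine telemetry: aggregate centrally

readings = [
    {"type": "obstacle_detected", "deadline_ms": 10},
    {"type": "battery_telemetry", "deadline_ms": 60000},
]
print([route_reading(r) for r in readings])  # ['edge', 'cloud']
```

A reading with no deadline at all defaults to the cloud path, which matches the article's point that only latency-critical data has to stay at the edge.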
One can say that the processing of this life-critical data, often via machine learning, will need to happen at the endpoints. At the same time, one has to admit that “important information still gets stored in a centralized cloud,” with the cloud becoming a learning center of sorts that enables machine learning en masse, which requires a great deal of data and insights aggregated from the edge.
So no, the edge will not overtake the cloud. Instead, it will prompt the cloud to extend its fabric to the edge.
The hyperscale data center model continues to work well for applications that benefit from centralization: large-scale archiving, content distribution, application storage, and fast prototyping, among others.
Edge computing is bringing the best of the cloud and the best of telecommunications together: the best of the cloud because it takes all these cloud services and brings them closer to the user, and the best of telecommunications because it brings the immediacy of always-on, always-connected service, which is what telcos are known for.
MYTH 2: There’s only one edge.
After all, that’s how we refer to it, in the singular, no?
REALITY: There are many edges.
There are a growing number of networks and, therefore, an increasing number of outer network boundaries containing endpoints that run applications of interest to users.
Purpose-built edges are likely in the near future. Over time, the edges will become cloudified: customization will happen, but likely only as a software layer. The universal access and developer simplicity that were part of the cloud will have to become standard in any edge. If someone develops an app that works on one edge, it ought to be deployable on any network.
MYTH 3: Shrink the cloud, and you have the edge!
It should be understood that some storage and processing will need to take place at the edge. Certain attributes of the cloud environment would at least be desirable to replicate across a variety of edges: equal network access and compatibility of an app developed in one edge network across different edge networks. Doesn’t that make each edge a little cloud?
REALITY: The edge is not a tiny cloud.
Remember that it was data and its needs that gave rise to the edge(s), not the other way around. This means the edge is shaped by use cases that produce and process data close to end users.
And these use cases vary widely: utility regulation in smart cities, virtual reality scenarios, monitoring of aging bridges, robots making clothes in factories, virtual assistants, and so on. The data these scenarios produce, which needs processing at the edge, is just as diverse. That is why the edge infrastructure depends on the application.
As noted, the edge will have no room and no time for certain types of data. Archival data, and the data used to feed machine learning pipelines (data lakes, the big clusters of data that teach ML algorithms), belong in the hyperscale data center, per Lavin; they will be of no use at the edge.
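The division of labor described here, where the cloud trains on aggregated data and the edge makes cheap local decisions with the shipped model, can be sketched as follows. This is a deliberately toy sketch: the "model" is just a threshold, and every function name is invented for illustration.

```python
# Hypothetical sketch of the cloud-trains / edge-infers split described above.

def cloud_train(data_lake: list[tuple[float, int]]) -> float:
    """Centralized training: learn a decision threshold from aggregated
    labeled data. (A stand-in for real ML training on a data lake.)"""
    positives = [value for value, label in data_lake if label == 1]
    return min(positives)  # simplest possible "model": one threshold

def edge_infer(model_threshold: float, live_reading: float) -> bool:
    """Edge inference: a cheap, local, real-time decision using the
    model shipped down from the cloud."""
    return live_reading >= model_threshold

# The cloud aggregates data from many edges and trains once...
threshold = cloud_train([(0.2, 0), (0.7, 1), (0.9, 1)])
# ...then ships the model to the edge, which decides in real time.
print(edge_infer(threshold, 0.8))  # True
```

The point of the split is that the heavy, data-hungry step (training) stays centralized, while the latency-sensitive step (inference) runs at the edge with no round trip.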
Finally, the edge is not a mini cloud because it is a remote, lights-out, automated operation marked by physical proximity to the user. Unlike the cloud, the edge is identified by a location and by how near it is to the data. In contrast to the centralized, homogeneous, general-purpose data center hub, each edge focuses on solving a specific problem.
Created on May 19th 2020 03:32.