Communication service providers have started leaning towards edge computing technology to reap its advantages. In our previous blog post, we discussed the reasons for this shift in detail, along with the risks and rewards communication service providers face when investing in the edge enterprise. In this blog post, we will look at the evolving edge infrastructure and its use cases as enterprises begin to mainstream their edge functionalities.

Edge computing as a concept has been discussed for a long time, but its physical definition continues to evolve as it takes varied forms and comprises multiple layers. The edge is generally thought of as a mini data center that handles compute, processing, and storage near population centers, for example at a mobile phone mast or a retail outlet. But with the mainstreaming of application-specific integrated circuits, devices are more intelligent than ever, and the edge is expected to encompass a much wider variety of devices and endpoints, placing them in close proximity to consumers and their device sensors.

While devices, infrastructure, and connectivity continue to form the groundwork of an efficient edge deployment, orchestration, operational management, application development, and service creation have evolved into the supporting framework of the edge ecosystem. The edge ecosystem is taking shape in two main forms: the near edge and the outer edge. The near edge consists of conventional architecture with a physical hardware presence, including servers, storage, or hyper-converged infrastructure (HCI) devices that can be remotely managed, while the outer edge comprises gateway devices, which are fully managed and/or self-managing units connected through 4G and 5G networks.

A middle edge is also starting to appear for scenarios where compute, storage, and networking are placed in a remote location, but the form factor of the facility is much smaller. Positioned between the fully managed near edge and the self-managed outer edge, the middle edge will combine remote management with an immutable operating environment.
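To summarise the tiering, here is a minimal sketch that models the three tiers and their management modes as plain Python data structures. The class names, field names, and labels are illustrative only and are not part of any specific edge platform.

```python
from dataclasses import dataclass
from enum import Enum

class Management(Enum):
    REMOTE = "remotely managed"                 # near edge: conventional servers/HCI
    REMOTE_IMMUTABLE = "remote, immutable OS"   # middle edge: smaller form factor
    SELF_MANAGED = "fully managed / self-managing"  # outer edge: gateways on 4G/5G

@dataclass
class EdgeTier:
    name: str
    hardware: str
    management: Management

# Illustrative taxonomy of the tiers described above
EDGE_TIERS = [
    EdgeTier("near edge", "servers, storage, HCI devices", Management.REMOTE),
    EdgeTier("middle edge", "small-form-factor compute/storage/networking", Management.REMOTE_IMMUTABLE),
    EdgeTier("outer edge", "gateway devices", Management.SELF_MANAGED),
]

for tier in EDGE_TIERS:
    print(f"{tier.name}: {tier.hardware} ({tier.management.value})")
```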

Use Cases of Edge Computing

Various potential use cases are being identified, including automated machinery and processes (Industry 4.0), intelligent and autonomous vehicles, augmented and virtual reality (AR/VR), streaming of high-quality video and gaming, and AI/ML applications.

  • Data-intensive usability – Applications where it is simply not feasible to shuttle data back and forth over the network between the user location and the central cloud. Examples include smart cities, smart factories, smart homes/buildings, restricted-connectivity scenarios, virtual reality, high-definition content distribution, high-performance computing, etc.
  • Human-latency sensitive cases – Applications where even the slightest latency in data delivery can hamper the user experience or render the use case void, for example language processing, smart retail, etc.
  • Machine-to-machine transmission – Machines process data faster than humans and are highly sensitive to latency, so this category covers use cases where speed is the distinguishing factor, like arbitrage markets, smart grids, smart security, real-time analytics, low-latency content distribution, and defense force simulation.
  • Life-critical – Applications where any compromise on speed and reliability directly impacts human safety and health, like smart transportation, digital health, autonomous cars, autonomous robots, etc. (a brief placement sketch follows this list).
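To illustrate how these categories could translate into placement decisions, the sketch below routes a workload to an edge tier based on an assumed end-to-end latency budget. The thresholds, tier names, and the place_workload function are illustrative assumptions, not part of any particular platform or of the categorisation above.

```python
def place_workload(latency_budget_ms: float, life_critical: bool = False) -> str:
    """Return an edge tier for a workload, based on its latency budget."""
    if life_critical or latency_budget_ms < 10:
        # Machine-to-machine and life-critical cases: keep compute on or next
        # to the device (outer edge) to avoid backhaul round trips.
        return "outer edge"
    if latency_budget_ms < 50:
        # Human-latency-sensitive cases (AR/VR, smart retail): a nearby
        # near/middle edge site is usually close enough.
        return "near edge"
    # Data-intensive but latency-tolerant work can still run in the central cloud.
    return "central cloud"

print(place_workload(5))                       # -> outer edge
print(place_workload(30))                      # -> near edge
print(place_workload(200))                     # -> central cloud
print(place_workload(80, life_critical=True))  # -> outer edge
```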

The edge computing design allows applications and services that benefit from proximity to the customer to be hosted in a multi-vendor, mobile-edge computing environment. This is how edge computing can work around bandwidth shortcomings and bring high-value, revenue-generating services to life.
