With the widespread deployment of 5G (fifth generation technology standard for broadband cellular networks) in all parts of the world, the quantity of digital data circulating is expected to continue to increase in the future (Figure 1).
Figure 1: Worldwide mobile subscriptions
Source: Ericsson Mobility Visualizer
The amount of digital data used by IoT devices has already more than quadrupled compared with 2015 (Figure 2).
Figure 2: Data used by companies for analysis
Source: Ministry of Internal Affairs and Communications (MIC) (Japan)
With the spread of 5G, digital transformation initiatives are expected to accelerate globally. Edge computing, capable of processing data in real time with a balanced load, will become increasingly important as the vast quantities of digital data generated by various IoT devices must be processed and managed efficiently, safely, and at the right time.
- What is edge computing?
- Why do we need edge computing?
- The call for edge computing
- Safety and security
- Real-time analysis, decision making and immediate action
- eMCOS RTOS software platform
1. What is edge computing?
In traditional IoT systems, most of the data collected by IoT devices is sent over the Internet to the cloud where it is aggregated and processed (cloud computing). In edge computing, the IoT devices at the “edge of the network” and near the user (edge devices) process and manage the collected data on their own and send only the required data for cloud processing. As shown in Figure 3 below, edge computing also uses a distributed system architecture to process data locally in a distributed manner rather than just centrally.
Figure 3: Cloud computing versus edge computing
2. Why do we need edge computing?
With the expanding number of edge devices, the quantity of data to be processed inevitably increases. With this growth, traditional cloud computing would place a heavy load on the network, reduce real-time capability and security, and ultimately generate very high communication costs. Edge computing addresses these challenges by pre-processing data at the edge of the network and sending only the necessary data to the cloud. By distributing data processing to the edge, data can be processed in real time with a low load on the Internet and at much lower communication cost. Edge computing is becoming essential in the current trend toward digital transformation.
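The pre-processing idea can be illustrated with a minimal sketch (the threshold filter and payload format are illustrative assumptions, not a specific eSOL API): the device keeps the raw sensor stream local and forwards only the readings that matter to the cloud.

```python
# Minimal sketch of edge pre-processing: upload only anomalous readings
# instead of streaming every raw sample to the cloud.
# The threshold and record format are illustrative assumptions.

def edge_filter(readings, threshold=90.0):
    """Keep raw data local; return only anomalous readings for upload."""
    return [r for r in readings if r["value"] > threshold]

raw_stream = [
    {"sensor": "temp-1", "value": 21.5},
    {"sensor": "temp-1", "value": 95.2},   # anomaly
    {"sensor": "temp-2", "value": 22.1},
    {"sensor": "temp-2", "value": 98.7},   # anomaly
]

to_cloud = edge_filter(raw_stream)
reduction = 1 - len(to_cloud) / len(raw_stream)
print(f"uploading {len(to_cloud)} of {len(raw_stream)} samples "
      f"({reduction:.0%} traffic saved)")
```

In a real deployment the filter would typically be a richer model (aggregation, anomaly detection, inference), but the principle is the same: the bulk of the raw data never leaves the edge.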
Table 1: Why do we need edge computing?

| Technical challenges of digital transformation | Edge computing solution |
| --- | --- |
| **Network traffic and communication costs.** Using automated driving as an example, we estimate that automated driving generates 4TB of data per day (not including driver monitoring, see Figure 4). Sending and receiving such large amounts of data puts a huge strain on the network. Even if we assume that 5G increases bandwidth by 20 times and improves cost efficiency by 20 times, the cost of transmitting 4TB of data per day would be US$9,600 per month. | If data processing is undertaken at the edge of the network and only the necessary data is sent to the cloud, network usage can be greatly reduced, which also greatly reduces communication costs. |
| **Latency.** The latency between collecting the data, transmitting it to the cloud, processing it in the cloud, and sending the results back to the edge device is critical in IoT systems where mission-critical, real-time performance is required. | Processing data at the edge of the network without involving the cloud can achieve low latency and ensure safe real-time performance. |
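As a sanity check on the US$9,600 figure, the implied per-gigabyte rate can be worked out (assuming a 30-day month and 1 TB = 1,000 GB; these conversion assumptions are ours, not stated in the source):

```python
# Back-of-the-envelope check of the monthly transmission cost figure.
# Assumptions (not from the source): 30-day month, 1 TB = 1,000 GB.
tb_per_day = 4
days_per_month = 30
monthly_gb = tb_per_day * days_per_month * 1000   # 120,000 GB per month
monthly_cost_usd = 9600
cost_per_gb = monthly_cost_usd / monthly_gb
print(f"{monthly_gb} GB/month -> US${cost_per_gb:.2f}/GB")
```

So even at an optimistic US$0.08/GB, continuously streaming all raw vehicle data to the cloud remains prohibitively expensive.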
Figure 4: Autonomous vehicles generate 4TB of data per day
Source: Self-Driving Vehicles (SDVs) & Geo-Information
3. The call for edge computing
At present, we can observe rising power consumption due to cloud computing. If traditional computing technology does not change, current and future generations will face the problem of this gigantic energy waste (Figure 5).
Figure 5: Expected case scenario
Source: Total Consumer Power Consumption Forecast
A variety of edge devices, used for example in automobiles and in industrial or medical equipment, are becoming increasingly intelligent, as is the amount of data they process. In these intelligent systems, in addition to the real-time performance and low power consumption required by traditional embedded systems, it is essential to process large quantities of data at the edge (instead of in the cloud) while ensuring low latency and high security.
In addition, the future of edge computing in the dynamic IoT will require the evolution of edge devices - from their traditional role of uploading collected data to the cloud, to intelligent edge devices that can autonomously analyze collected data, make real-time decisions and take immediate action.
Edge computing will require High Performance Computing (HPC). The advanced data processing that used to take place in the cloud must now take place at the edge of the network, which requires sophisticated and powerful hardware. However, processor technology is reaching its limits: clock-frequency scaling has stalled, and Pollack's rule implies diminishing performance returns from making a single core ever larger and more complex.
The following measures are effective solutions to these problems:
Different kinds of processors achieve the best computational performance and power efficiency for different types of computation: CPUs, DSPs, DFPs (dataflow processors), FPGAs, GPUs, and dedicated IP blocks for specific AI algorithms. Highly efficient HPC therefore requires effective use of these different architectures.
To achieve high performance, tens to hundreds of processor cores can be implemented on a single piece of hardware, with workloads executed in parallel on many of these cores simultaneously. The best way to optimize performance and energy efficiency is heterogeneous multi/manycore computing, where multi/manycore technology is combined with different processor architectures.
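As a generic illustration of this data-parallel idea (not eMCOS-specific code), a workload can be split into per-core chunks and dispatched to a pool of workers; on a manycore system each worker would map to its own core, while here a thread pool simply stands in for the cores:

```python
# Generic sketch of splitting one workload across parallel workers.
# On a manycore system each worker would run on its own core; a thread
# pool stands in for the cores purely for illustration.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Stand-in for per-core processing of one slice of the data."""
    return sum(x * x for x in chunk)

data = list(range(1000))
num_workers = 4
chunks = [data[i::num_workers] for i in range(num_workers)]

with ThreadPoolExecutor(max_workers=num_workers) as pool:
    partials = list(pool.map(process_chunk, chunks))

total = sum(partials)
print(total)  # identical to the sequential result
```

The point is the decomposition pattern: independent chunks, parallel execution, and a cheap final reduction, which is exactly what manycore hardware rewards.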
Looking at heterogeneous computing from a software perspective, a traditional single-microkernel operating system architecture designed for the single-core era cannot handle different processor architectures in an integrated manner and thus cannot deliver optimized performance. To realize highly efficient HPC with heterogeneous multi/manycore computing, an operating system with a modern multikernel architecture (distributed microkernel) is required: the operating system itself is highly parallel, and each processor runs its own independent microkernel optimized for heterogeneous computing.
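The multikernel idea can be sketched as a toy model (purely conceptual, not the eMCOS implementation): each core runs its own kernel instance with private state, and kernels coordinate only by exchanging explicit messages rather than by sharing memory.

```python
# Toy model of a multikernel (distributed microkernel) OS: one kernel
# instance per core, private state, coordination via message passing only.
from queue import Queue

class CoreKernel:
    def __init__(self, core_id):
        self.core_id = core_id     # each kernel owns exactly one core
        self.inbox = Queue()       # explicit message channel
        self.local_tasks = []      # private state, never shared

    def send(self, other, message):
        other.inbox.put((self.core_id, message))

    def handle_messages(self):
        while not self.inbox.empty():
            sender, message = self.inbox.get()
            if message[0] == "spawn_task":
                self.local_tasks.append(message[1])

k0, k1 = CoreKernel(0), CoreKernel(1)
k0.send(k1, ("spawn_task", "sensor_fusion"))  # migrate work via a message
k1.handle_messages()
print(k1.local_tasks)
```

Because no kernel ever touches another's state directly, each per-core kernel can be specialized for its processor architecture while the system still behaves as one coordinated whole.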
4. Safety and security
Aggregating various functions at the edge and processing data at the edge also requires functional safety and security. The operating system must isolate software modules from each other to ensure that if there is a problem in one module, the other modules continue to function, and the system as a whole remains safe.
In addition to functional safety and security against hacker attacks, the following functions are important to consolidate multiple functions and house multiple operating systems on a single system:
Memory isolation
When running multiple functions on a system, it is necessary to separate the memory area so that the functions do not interfere with each other. By separating the memory used by each module, it is possible to prevent any abnormal behavior by one module from spreading to other modules.
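On a POSIX system, separate address spaces per process are the classic analogue of this isolation (a general-purpose illustration, not the eMCOS mechanism): a crash in one process cannot corrupt another's memory.

```python
# Illustration of memory isolation via separate address spaces (POSIX
# processes). The child dereferences a null pointer and crashes; the
# parent's data is untouched because the two share no memory.
import subprocess
import sys

parent_state = {"status": "healthy"}

# Child process: deliberately trigger a segmentation fault.
child = subprocess.run(
    [sys.executable, "-c", "import ctypes; ctypes.string_at(0)"],
    capture_output=True,
)

print("child exit code:", child.returncode)   # nonzero: child crashed
print("parent state:", parent_state)          # unaffected by the crash
```

An RTOS with hardware memory protection enforces the same guarantee between software modules, so one faulty module cannot overwrite the memory of another.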
CPU time budgeting
By setting an upper limit on the CPU time each module may consume, a faulty program can be prevented from making other programs unusable.
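POSIX provides a general-purpose form of such a CPU-time budget via `RLIMIT_CPU` (again an illustration of the principle, not the eMCOS mechanism): a runaway loop is forcibly stopped once its CPU allowance is spent, while other processes keep running.

```python
# Illustration of a CPU-time budget using the POSIX RLIMIT_CPU limit
# (Unix-only). The child sets a 1-second CPU limit on itself and then
# spins forever; the kernel terminates it when the budget is exhausted,
# and the parent process continues normally.
import subprocess
import sys

runaway = (
    "import resource\n"
    "resource.setrlimit(resource.RLIMIT_CPU, (1, 1))  # 1s CPU budget\n"
    "while True:\n"
    "    pass  # runaway computation\n"
)

child = subprocess.run([sys.executable, "-c", runaway])
print("runaway child terminated, exit code:", child.returncode)
print("parent still running")
```

In an RTOS the equivalent budget is enforced per partition or per task by the scheduler, so a misbehaving module cannot starve the rest of the system of CPU time.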
5. Real-time analysis, decision making and immediate action
Future dynamic IoT systems must evolve from devices that merely upload collected data to the cloud into intelligent devices that autonomously analyze and assess the data they collect and act on it in real time, and with confidence.
From a software perspective, a real-time operating system that meets these requirements should achieve true parallel processing and support established software standards and frameworks such as POSIX, AUTOSAR, and ROS.
It must be able to maximize performance while maintaining strong isolation in heterogeneous multicore applications. To achieve this, it should be based on a Service-Oriented Architecture (SOA). All of this together enables real-time analytics, decision making, and immediate intervention at the edge.
In recent years, IoT systems have been required to ensure overall system safety while supporting higher levels of functionality and intelligence, which makes a highly scalable platform necessary.
6. eMCOS RTOS software platform
eSOL's scalable RTOS eMCOS (embedded multi/manycore operating system) ensures safety through load balancing and separation in edge computing. It is based on an advanced multikernel architecture (distributed microkernel), which differs from traditional real-time operating systems, and on a unique, patented scheduling technology: Semi-Priority-Based Scheduling. This enables both the high throughput and the safe real-time performance required for embedded systems. In addition to supporting high core counts, eMCOS supports heterogeneous hardware configurations with different architectures, including single-core processors, multi/manycore processors, on-chip flash microcontrollers, GPUs, and FPGAs.
With eMCOS, distributed computing can be realized securely in the digital transformation era, meeting edge computing requirements such as highly efficient true parallel processing and complete separation of mixed-criticality applications, while ensuring real-time performance and security.
The advantages of using eMCOS for edge computing are:
- High throughput due to the high real-time performance and high parallelism of the operating system itself
- POSIX-compliant multiprocess environment that resembles a general-purpose operating system and allows reuse of already available general-purpose operating system (e.g. Linux) source code
- Ready-to-use platform support for ROS, Autoware and AUTOSAR
- High scalability due to multikernel architecture (distributed microkernel)
- High reliability and functional safety as anomalies in the kernel of one core do not propagate to the kernels of the other cores
Marketing Communications team