In this article, Tomonori Kaneko, Technical Sales Director and RTOS Product Manager in eSOL's Software Division, introduces the benchmarks commonly used to compare and evaluate RTOSes, and explains the new benchmarks that recent trends in embedded system development call for.
Where conventional benchmarks fall short
When considering the introduction of an RTOS and comparing multiple candidates, the following benchmarks are commonly used:
- Network/Storage throughput
- Footprint (ROM/RAM size)
- Boot time
- API performance (time from API call to return)
Recently, however, relying solely on these long-standing evaluation criteria may no longer be enough to find the RTOS best suited to a given system.
One major factor behind this is that hardware such as CPUs and memory has become both more capable and cheaper. With high-performance hardware available at low cost, the old performance bottlenecks have largely disappeared. As a result, the benchmarks listed above have become a matter of "good enough if within the normal range," with small differences in speed treated as margin of error.
Furthermore, with the spread of multi-core CPUs, the bottleneck in real systems is increasingly the "composite score" rather than a "single-item score" such as the processing speed of an API within a single thread. By "composite score" I mean, for example, the overall speed of complex, interleaved cooperative processing across multiple threads.
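As an illustration, the following is a minimal sketch of such a composite benchmark, assuming only a POSIX-like thread and semaphore API (a given RTOS's native primitives would be used in practice). Rather than timing a single call in isolation, it times a full hand-off between two cooperating threads, which is where scheduling and synchronization costs actually surface:

```c
/*
 * Composite benchmark sketch: measures the average time for a full
 * request/response hand-off between two threads, not one API call.
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <time.h>

#define ITERATIONS 10000

static sem_t ping, pong;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        sem_wait(&ping);   /* wait for work from the main thread */
        sem_post(&pong);   /* immediately hand the result back   */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    struct timespec t0, t1;

    sem_init(&ping, 0, 0);
    sem_init(&pong, 0, 0);
    pthread_create(&tid, NULL, worker, NULL);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERATIONS; i++) {
        sem_post(&ping);   /* issue a request                    */
        sem_wait(&pong);   /* wait for the cooperating thread    */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("round trips: %d, avg hand-off: %.0f ns\n",
           ITERATIONS, ns / ITERATIONS);

    pthread_join(tid, NULL);
    return 0;
}
```

Extending the pipeline to more threads, or pinning the threads to different cores, turns this into a closer approximation of the interleaved cooperative processing described above.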
Moreover, development increasingly takes place in non-real environments, such as model-based development and development in virtual or simulator environments. This creates a growing demand for consistent behavior between non-real and real environments: developers want assurance that a system tested thoroughly in a simulated environment will run the same way on the actual production hardware. Meeting that need requires guaranteeing logical equivalence in application behavior across both environments.
New benchmarks
From what perspectives, then, should we consider additional benchmarks? Some aspects may be difficult to quantify immediately, but let me give examples drawn from recent business discussions.
・Deterministic behavior
One example is determinism.
In general, determinism is described in several ways: the ability to complete a process within a specified time (the meaning of "real-time" in the RTOS context); the presence of pre-emption, a mechanism by which the OS forcibly takes the CPU away from a running low-priority thread and gives it to a high-priority one; or consistently producing the same output for the same input.
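As a brief illustration of the pre-emption mechanism just described, here is a minimal sketch assuming a POSIX-like RTOS with SCHED_FIFO fixed-priority scheduling (on a general-purpose OS, setting real-time priorities typically requires elevated privileges; error checking is omitted for brevity):

```c
/*
 * Pre-emption sketch: the high-priority thread sleeps briefly; the
 * moment it becomes runnable again, the scheduler takes the CPU away
 * from the busy-looping low-priority thread and hands it over.
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static volatile int high_ran = 0;

static void *high_prio(void *arg)
{
    (void)arg;
    usleep(1000);      /* becomes runnable again after ~1 ms          */
    high_ran = 1;      /* runs immediately: low_prio is pre-empted    */
    return NULL;
}

static void *low_prio(void *arg)
{
    (void)arg;
    while (!high_ran)  /* busy work; never yields voluntarily         */
        ;
    puts("low-priority thread observed pre-emption");
    return NULL;
}

static pthread_t spawn(void *(*fn)(void *), int prio)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = prio };
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    pthread_create(&tid, &attr, fn, NULL);
    return tid;
}

int main(void)
{
    pthread_t hi = spawn(high_prio, 20);  /* the sleeper, created first  */
    pthread_t lo = spawn(low_prio, 10);   /* the busy-looping thread     */
    pthread_join(hi, NULL);
    pthread_join(lo, NULL);
    return 0;
}
```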
However, let us take this a step further and define deterministic behavior as the ability to always execute a given processing sequence in a predetermined order and timeframe, at any given moment. Especially in cooperative processing among multiple threads, or in the interaction sequence between devices (hardware) and drivers, guaranteeing that execution follows a predetermined order becomes crucial in scenarios such as the non-real-environment testing described above.
As concrete benchmarks for the mechanisms supporting this deterministic behavior, one could measure whether OS APIs run jitter-free, i.e. at a constant rate (for example: the time from API call to return is always constant, excluding jitter from hardware sources), and whether the process in question is free from temporal or spatial interference by other processes (FFI, Freedom From Interference; for example: the above time remains constant even when another core is in an overloaded state).
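To make this concrete, the following is a minimal sketch of such a jitter measurement, again assuming only a POSIX-like API: it times the same uncontended operation many times and reports the spread. For an FFI check, the same loop would be rerun while a load generator saturates another core, and the two spreads compared:

```c
/*
 * Jitter benchmark sketch: times an uncontended mutex lock/unlock
 * pair repeatedly and reports min/max/spread. On a strongly
 * deterministic RTOS the spread should stay near zero.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define SAMPLES 100000

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    uint64_t min = UINT64_MAX, max = 0;

    for (int i = 0; i < SAMPLES; i++) {
        uint64_t t0 = now_ns();
        pthread_mutex_lock(&m);
        pthread_mutex_unlock(&m);
        uint64_t dt = now_ns() - t0;
        if (dt < min) min = dt;
        if (dt > max) max = dt;
    }
    printf("min %llu ns, max %llu ns, jitter %llu ns\n",
           (unsigned long long)min, (unsigned long long)max,
           (unsigned long long)(max - min));
    return 0;
}
```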
・Boot process customization flexibility
Another example is what might be called boot-process customization flexibility.
Boot time is a benchmark that rarely allows apples-to-apples comparison. Naturally, the more OS components (functions) that are activated at boot, the longer the boot time. The ideal boot-time figures an OS vendor discloses under optimal conditions may therefore mean little when introducing an RTOS into a large-scale system. What matters more is the flexibility to insert one's own processing at any point in the OS's boot sequence. For example, a user who needs CAN communication early can prioritize bringing up the CAN stack ahead of TCP/IP.
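The sketch below is purely illustrative: the boot-step table and every function name in it are hypothetical, not an actual eSOL or standard API. It shows the shape of the flexibility in question: the integrator, not the OS vendor, decides where each component and each user hook sits in the boot sequence:

```c
/*
 * Hypothetical boot-sequence table: the integrator orders the steps,
 * deliberately bringing CAN up before TCP/IP and inserting a user
 * hook mid-sequence.
 */
#include <stddef.h>
#include <stdio.h>

typedef int (*boot_step_fn)(void);

/* hypothetical component initializers */
static int clock_and_memory_init(void) { puts("clocks/memory up"); return 0; }
static int can_stack_init(void)        { puts("CAN up");           return 0; }
static int app_early_hook(void)        { puts("app early hook");   return 0; }
static int tcpip_stack_init(void)      { puts("TCP/IP up");        return 0; }

static const boot_step_fn boot_sequence[] = {
    clock_and_memory_init,
    can_stack_init,      /* time-critical bus brought up first      */
    app_early_hook,      /* user code inserted mid-sequence         */
    tcpip_stack_init,    /* non-critical networking brought up last */
};

int main(void)          /* stands in for the RTOS boot driver */
{
    for (size_t i = 0; i < sizeof boot_sequence / sizeof boot_sequence[0]; i++)
        if (boot_sequence[i]() != 0)
            return -1;  /* abort the boot on a failed step */
    return 0;
}
```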
In this article, I have presented two examples of new benchmark perspectives to consider when introducing an RTOS. Others include parallel-processing performance on multi-core CPUs and support for scheduling policies beyond the priority-based scheduling typical of an RTOS. These benchmarks should be evaluated together when choosing the most suitable RTOS.
At eSOL, we offer the following RTOS products:
- eMCOS®
A high-performance, scalable RTOS and hypervisor platform designed for the next generation of software-defined computing.
- eT-Kernel™
A microkernel-type RTOS offering high performance and safety, based on the TRON T-Kernel API.
Please feel free to contact eSOL whenever you are considering RTOS options. We will provide you with optimal proposals tailored to the systems you develop.
Tomonori Kaneko,
Technical Sales Director