eSOL Blog

What is Essential Scheduling for Embedded Systems? A Practical Understanding of Multitasking Programming

Jun 18, 2025 10:00:00 AM

The use of multicore processors is widely accepted in the world of embedded systems.

Among these, CPUs and OSes that support load-distributing SMP (symmetric multiprocessing) are commonly employed.

Just taking software developed in a single-core environment and running it in an SMP environment, however, will not automatically lead to improved performance. You could even find your program not working properly at all.

In a multicore processor environment, multiple processes execute truly in parallel. Running software correctly therefore requires proper mutual exclusion and synchronization controls.
This does not mean, however, that multicore support requires special new technology. The key, rather, is “multitasking programming.”

This article takes a look at scheduling, one of the fundamentals of multitasking programming.

*This article is a reproduction of the article written by eSOL and featured in Chapter 1 of the “Parallel Processing Technology in the Age of Multitasking/Multicore” special edition of the November 2007 issue of Interface, published by CQ Publishing.

Table of Contents

  1. What is multitasking programming?

  2. Criteria for and categories of scheduling

  3. The main scheduling algorithms

  4. What is “multitasking programming” for embedded systems?

  5. Conclusion

What is multitasking programming?

Historically, computer OSes were single-tasking: they could only perform one process at a time. If you wanted to perform multiple processes, your only choice was to wait until one process finished before starting the next.
Multitasking is, by contrast, a feature that lets a single computer system (processing unit) execute multiple processes at the same time, in parallel.

For example, on a single computer you could browse a website and write a report in a word processor while listening to music from a CD.
In this example, because the computer would be running the music playback software, the web browser, and the word processor all at the same time, you would be using a multitasking environment.
It has become standard for OSes in recent years to be multitasking, not only on personal computers, but also in embedded systems. This is because there is growing demand for performing multiple processes simultaneously.
Next we will look at what principles multitasking programming uses to simultaneously execute multiple processes.

The easiest way to implement multitasking is to “have just enough processors for the number of tasks.”
If the computer system on which you want to implement multitasking has at least as many processors (cores) as the maximum number of tasks that will execute simultaneously, then you can achieve multitasking simply by assigning each task to an idle processor (see Figure 1).


Figure 1: If you have lots of processors, you can easily implement multitasking
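
Although the article predates it, this idea maps directly onto modern SMP programming. Below is a minimal sketch, assuming a Linux system with at least two cores, that pins one thread per core using the glibc-specific pthread_setaffinity_np(); it illustrates the “one task per processor” idea and is our own example, not eSOL’s implementation.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* A trivial "task": each thread just reports which core it was given. */
static void *task(void *arg) {
    long core = (long)arg;
    printf("task assigned to core %ld is running\n", core);
    /* ...the real work of this task would go here... */
    return NULL;
}

int main(void) {
    enum { NUM_TASKS = 2 };              /* assumes >= 2 cores available */
    pthread_t threads[NUM_TASKS];

    for (long i = 0; i < NUM_TASKS; i++) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET((int)i, &set);           /* task i -> core i */
        pthread_create(&threads[i], NULL, task, (void *)i);
        /* Pin the thread so the tasks really run on separate cores
           (for illustration only; production code would set this via
           thread attributes before the thread starts). */
        pthread_setaffinity_np(threads[i], sizeof(set), &set);
    }
    for (int i = 0; i < NUM_TASKS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```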


With that said, due to limitations of cost, energy consumption, and physical size, typical computers and embedded systems essentially never have that many processors installed.
Multitasking is therefore implemented by rapidly switching between multiple tasks, giving the appearance that they are running simultaneously.

Specifically, by breaking up the processing time of one processor and assigning it to multiple tasks, systems achieve simultaneous execution of multiple tasks.
Managing which portion of processor time is allotted to which task is called “scheduling,” and it is an important concept in multitasking programming.
In other words, scheduling is “telling the processor which process to do out of the multiple processes before it.” You can think of it as having the same meaning as “setting your schedule” in day-to-day life.
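
As a deliberately simplified model of this idea (our own toy example, not code from the original article), the following C sketch hands out “slices” of a single processor to two hypothetical tasks in turn until both finish. Real schedulers are interrupt-driven, but the interleaving principle is the same.

```c
#include <stdio.h>
#include <stdbool.h>

/* Each "task" does a small unit of work per slice and reports
   whether it still has work left. */
typedef bool (*task_fn)(void);

static int a_left = 3, b_left = 2;
static bool task_a(void) { printf("A runs a slice\n"); return --a_left > 0; }
static bool task_b(void) { printf("B runs a slice\n"); return --b_left > 0; }

int main(void) {
    task_fn tasks[]  = { task_a, task_b };
    bool runnable[]  = { true, true };
    int  remaining   = 2;

    /* One processor's time, handed out slice by slice: this
       interleaving is what makes the tasks appear simultaneous. */
    while (remaining > 0) {
        for (int i = 0; i < 2; i++) {
            if (runnable[i] && !tasks[i]()) {
                runnable[i] = false;
                remaining--;
            }
        }
    }
    return 0;
}
```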

Criteria for and categories of scheduling

Various methods of scheduling for multitasking programming (scheduling algorithms) have been proposed. It would be impractical, of course, to learn every possible scheduling algorithm.
For that reason, we will go over the major scheduling algorithms and the key considerations for choosing between them.
To begin with, Table 1 shows the criteria we use when comparing scheduling algorithms.

Table 1: Scheduling criteria

CPU utilization: CPU running time ÷ system running time (system running time = CPU running time + idle time)

Throughput: Number of tasks completed per unit of time

Turnaround time: Time from task execution request to completion (CPU time + various waiting times)

Waiting time: Time spent waiting in an executable state until completion

Response time: Time from task execution request to first response (not the time at which the response is output)


The reason that there exists a variety of approaches among scheduling algorithms is that the requirements placed on multitasking programming differ depending on the system being built.
A variety of algorithms have been devised to fulfill those requirements.
When engineers select scheduling algorithms, they look for algorithms that fulfill the system’s requirements, based on these criteria.
Looking at Table 1, if we simply wanted to make the most efficient possible use of the CPU, we would adopt a scheduling algorithm that focuses on CPU utilization. For embedded systems, we often adopt algorithms that focus on response time.
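
To make the definitions in Table 1 concrete, here is a small C sketch (ours; the timestamps, field names, and system-window figures are hypothetical) that computes each criterion for a single task and for a ten-time-unit system window.

```c
#include <stdio.h>

/* Hypothetical per-task timestamps, all in the same time unit. */
typedef struct {
    double request;     /* execution requested (arrival)         */
    double first_run;   /* first gained the CPU (first response) */
    double completion;  /* finished                              */
    double cpu_time;    /* total time actually spent on the CPU  */
} task_times;

int main(void) {
    task_times t = { .request = 0.0, .first_run = 2.0,
                     .completion = 10.0, .cpu_time = 5.0 };

    double turnaround = t.completion - t.request;  /* Table 1: turnaround */
    double waiting    = turnaround - t.cpu_time;   /* Table 1: waiting    */
    double response   = t.first_run - t.request;   /* Table 1: response   */

    /* System-wide criteria, assuming the CPU was busy for 8 of 10
       time units and 4 tasks completed in that window. */
    double utilization = 8.0 / 10.0;               /* CPU ÷ (CPU + idle)  */
    double throughput  = 4.0 / 10.0;               /* tasks per time unit */

    printf("turnaround=%.1f waiting=%.1f response=%.1f\n",
           turnaround, waiting, response);
    printf("utilization=%.2f throughput=%.2f\n", utilization, throughput);
    return 0;
}
```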

Next we will take a look at the categorization of scheduling algorithms.
Scheduling algorithms can be divided into preemptive and non-preemptive algorithms, as in Table 2.

Table 2: Scheduling categories

Non-preemptive scheduling algorithms: Once the processor is assigned to a task, it cannot be assigned to another task until the first task releases it

Preemptive scheduling algorithms: Even while the processor is executing the task it has been assigned, it can be taken away (preempted) by another task and reassigned to that task

So, what is this “preemption” that we have just encountered?

Let us begin our explanation from the simple case of non-preemption.
In non-preemptive scheduling, the OS manages the order in which tasks are executed, but the timing by which tasks are switched is managed by the task being executed.
Other tasks cannot preempt processing as long as the task itself does not release the processor.
In other words, “processing cannot be interrupted by other tasks part way through.”

Conversely, under preemptive scheduling, the OS also manages the timing by which tasks are switched, so it can suspend a running task in favor of another at any time.
Preemptive scheduling thus means that “processing can be stopped partway through by another task, and the current task made to wait.”
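
The practical difference is where the switch decision lives. The schematic C sketch below (our own; do_some_work(), yield(), and schedule() are hypothetical stand-ins for kernel and application code) contrasts a cooperative task, which runs until it explicitly yields, with a timer interrupt handler, which lets the OS invoke the scheduler no matter what the running task is doing.

```c
#include <stdio.h>

/* Stubs standing in for real kernel/application code (hypothetical). */
static void do_some_work(void) { puts("working"); }
static void schedule(void)     { puts("scheduler picks the next task"); }
static void yield(void)        { schedule(); }

/* Non-preemptive: the running task itself decides when to give up the
   CPU; until it calls yield(), no other task can run. */
static void cooperative_task(void) {
    for (int i = 0; i < 3; i++) {
        do_some_work();
        yield();                  /* the only switch point */
    }
}

/* Preemptive: a periodic timer interrupt invokes the scheduler, so the
   running task can be suspended between almost any two instructions,
   whether or not it ever yields. */
static void timer_interrupt_handler(void) {
    /* ...acknowledge the timer hardware here... */
    schedule();
}

int main(void) {
    cooperative_task();
    timer_interrupt_handler();    /* in a real system, hardware calls this */
    return 0;
}
```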

The main scheduling algorithms

Now that we have understood the criteria and categories for scheduling, let us take a look at the main scheduling algorithms presented in Table 3.
The scheduling algorithms presented here are just the basic ones.

A multitasking OS does not simply adopt one of these as-is.
Instead, it refines the scheduling approaches presented here and combines different scheduling algorithms to produce its own variations.

Table 3: The main scheduling algorithms

Non-preemptive

First Come First Served (FCFS) Scheduling
  • Processors are assigned to tasks in the order in which they began waiting for execution
  • Once the running task releases the processor, the next task is executed
  • If individual tasks take a long time to process, this does not differ from single-task processing

Shortest Job First (SJF) Scheduling
  • Processors are assigned to tasks in order of shortest processing time
  • Once the running task releases the processor, the next task is executed
  • This minimizes the average turnaround time
  • It is difficult to determine processing times accurately in advance

Preemptive

Priority Scheduling
  • Each task is assigned a priority, and the processor is assigned to the highest-priority task among those waiting to be executed
  • A high-priority task is executed in preference to others even if it becomes ready while another task is running
  • Low-priority tasks spend a longer time waiting

Round-Robin Scheduling
  • Fixed amounts of processor time set by the system (time slices, or time quanta) are assigned equally to the tasks waiting to be executed
  • If a task finishes processing within this time, it releases the processor
  • Behavior varies significantly depending on the time slice chosen:
    - If the time slice is well chosen, average throughput improves
    - If the time slice is too long, this approach is no different from FCFS
    - If the time slice is too short, task-switching overhead grows
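
To give a feel for how preemptive priority scheduling is typically realized, here is a minimal sketch (ours, not from the article) of a ready bitmap, a data structure many RTOS kernels use in some form: the scheduler always picks the highest-priority ready task, which is what makes preemption decisions cheap.

```c
#include <stdio.h>
#include <stdint.h>

/* 8 priority levels; by the RTOS convention assumed here, a lower
   number means a higher priority. Bit n of the bitmap is set when a
   task of priority n is ready to run. */
static uint8_t ready_bitmap;

static void make_ready(int prio)   { ready_bitmap |=  (uint8_t)(1u << prio); }
static void make_waiting(int prio) { ready_bitmap &= (uint8_t)~(1u << prio); }

/* Pick the highest-priority ready task: the lowest set bit. */
static int pick_next(void) {
    for (int prio = 0; prio < 8; prio++)
        if (ready_bitmap & (1u << prio))
            return prio;
    return -1;  /* nothing ready: run the idle loop */
}

int main(void) {
    make_ready(5);                      /* a low-priority task becomes ready */
    printf("running prio %d\n", pick_next());
    make_ready(2);                      /* a higher-priority task becomes    */
    printf("running prio %d\n", pick_next());  /* ready: it preempts         */
    make_waiting(2);                    /* it blocks; the low one resumes    */
    printf("running prio %d\n", pick_next());
    return 0;
}
```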


Broadly speaking, the Windows and UNIX OSes used on personal computers employ preemptive round-robin scheduling, while embedded OSes employ preemptive priority scheduling.
The software (or module) that actually performs the scheduling is called a “scheduler.” Such schedulers are a core feature of multitasking OSes.

What is “multitasking programming” for embedded systems?

So far, we have looked at scheduling as part of a basic overview of multitasking programming.
With the basics in hand, we will next go over multitasking programming in embedded systems.

In embedded systems, it is exceptionally important that the software appear to work in real time.
Multitasking programming for such systems therefore also focuses on real-time performance, which is why these devices adopt not mere multitasking OSes, but real-time multitasking OSes.
From here, we will use the concrete example of μITRON, which is widely employed in embedded systems, to look at multitasking programming in embedded systems with real-time performance in mind.

Most real-time multitasking OSes employ event-driven preemptive scheduling.
This new term, “event-driven,” means “using the occurrence of an event as a trigger.”
Events in a computer include things like a key being pressed, a mouse being clicked, or a signal pin going from “H” to “L.”
In other words, being “event-driven” means causing some action to occur in response to such events occurring.

Event-driven preemption is a scheduling method in which the occurrence of an event serves as the trigger for scheduling, and high-priority tasks preempt low-priority ones (see Figure 2).
In this method, time limits tied to events are an important element in achieving real-time performance.


Figure 2: Event-driven preemption
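
In μITRON terms, the triggering event is typically an interrupt, and the handler wakes a waiting high-priority task. The sketch below uses the μITRON service calls slp_tsk() and iwup_tsk() as specified, but the task ID, header name, and interrupt wiring are hypothetical and depend on the specific kernel and board.

```c
#include <kernel.h>          /* μITRON kernel interface (toolchain-specific) */

#define KEY_TASK_ID  1       /* hypothetical ID for the key-handling task */

/* High-priority task: sleeps until an event wakes it. */
void key_task(VP_INT exinf)
{
    for (;;) {
        slp_tsk();           /* wait for the event */
        /* ...handle the key press with minimal latency... */
    }
}

/* Interrupt handler for the key: the event that triggers scheduling.
   iwup_tsk() is the interrupt-context version of wup_tsk(); if key_task
   has a higher priority than the interrupted task, the kernel dispatches
   to it as soon as the handler returns. */
void key_interrupt_handler(void)
{
    iwup_tsk(KEY_TASK_ID);
}
```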

In time-sharing methods that assign processor time equally, by contrast, the task corresponding to an event is not processed until its turn comes around, even if the event has already occurred.
Furthermore, if the task is not completed within the apportioned processor time, the task incurs waiting time until the next processing period is apportioned (see Figure 3).
This does not allow for achieving real-time performance.


Figure 3: Real-time performance is not attainable through a time-sharing method

In event-driven preemption, an event first serves as a trigger before task switching is performed. Here, high-urgency tasks are given high priorities so that they can be executed immediately.

This also allows a task to continue executing until its processing is complete, as long as no higher-priority task needs to run (see Figure 4).
Event-driven preemption scheduling is thus essential for real-time multitasking OSes.


Figure 4: Even if an interrupt occurs, preemption does not occur if the priority of the current task is higher

Event-driven preemption is a scheduling method for achieving real-time performance. However, in practice, without proper task division and priority setting, it will not be possible to achieve this.
That is to say, you must consider “which processes should be handled with priority?” and “how should the processing be divided into task units to achieve that?”

For example, suppose that two processes which must be executed in series but which have different priorities, such as process A and process C in Figure 5, are aggregated into a single task, task A, and that this task’s priority is set to that of the higher-priority process.
In this case, while the lower-priority process within the task is executing, another task whose task priority is lower than task A’s, but higher than that of the process currently being executed, cannot run.
To avoid this situation, we need to separate process A and process C into different tasks and assign an appropriate priority to each.


Figure 5: Task division and priority assignment are important

In this example, process A and process C are divided into task A and task C, respectively.
Through this operation, the processes that were executed in order of A→C→B are now executed in order of A→B→C.
However, this does not mean that it is always better to divide processing as finely as possible: if the number of tasks grows too large, task switching produces significant overhead.
For this reason, it is important to perform optimal task division, based on an accurate understanding of the processing requirements.
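
To round off the Figure 5 example, here is a μITRON-style sketch of the corrected design: process A and process C become separate tasks with their own priorities. The cre_tsk() call and T_CTSK fields follow the μITRON 4.0 specification, but the IDs, priority values, and stack size used here are hypothetical.

```c
#include <kernel.h>          /* μITRON kernel interface (toolchain-specific) */

#define TASK_A_ID 1          /* hypothetical task IDs */
#define TASK_C_ID 3

void task_a(VP_INT exinf) { /* ...process A (urgent)...   */ }
void task_c(VP_INT exinf) { /* ...process C (can wait)... */ }

/* In μITRON, a smaller itskpri value means a higher priority.
   Splitting A and C into separate tasks lets a medium-priority
   task B run in between, giving the A -> B -> C order of Figure 5. */
void create_tasks(void)
{
    T_CTSK ctsk;

    ctsk.tskatr = TA_HLNG | TA_ACT;  /* high-level language, start active */
    ctsk.exinf  = 0;
    ctsk.stksz  = 1024;              /* hypothetical stack size */
    ctsk.stk    = NULL;              /* let the kernel allocate the stack */

    ctsk.task    = (FP)task_a;
    ctsk.itskpri = 1;                /* high priority for process A */
    cre_tsk(TASK_A_ID, &ctsk);

    ctsk.task    = (FP)task_c;
    ctsk.itskpri = 5;                /* lower priority for process C */
    cre_tsk(TASK_C_ID, &ctsk);
}
```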

Conclusion

In this article, we have presented the basics of scheduling as a foundation for multitasking programming.

At eSOL, we leverage our long track record in real-time OS development to offer support services for smoothly migrating software assets developed in a single-core environment to a multicore environment.

If you are investigating the optimal multitasking design and implementation for developing embedded systems that need real-time performance, please do not hesitate to reach out to us.


 

T.M.
Technical Sales