

2.7 Thread Scheduling

Threads are scheduled according to their scheduling priority and how the scheduling policy treats those priorities. To follow the discussion in this section, you must understand the concepts of thread scheduling policies and scheduling priorities.

To specify the minimum or maximum priority, use the appropriate symbol (for example, PRI_OTHER_MIN or PRI_OTHER_MAX). Priority values are integers, so you can specify a value between the minimum and maximum using an appropriate arithmetic expression. For example, to specify a priority midway between the minimum and maximum for the SCHED_OTHER scheduling policy, express the following computation in your programming language's syntax:

pri_other_mid = (PRI_OTHER_MIN + PRI_OTHER_MAX) / 2

Avoid specific numerical values, because the range of priorities can vary from implementation to implementation. Specifying a value outside the minimum-to-maximum range results in an error.

To show results of the different scheduling policies, consider the following example: A program has four threads, called A, B, C, and D. For each scheduling policy, three scheduling priorities have been defined: minimum, middle, and maximum. The threads have the following priorities:


A      minimum
B      middle
C      middle
D      maximum


On a uniprocessor system, only one thread can run at any given time. The order of execution depends on the relative scheduling policies and priorities of the threads. Given a set of threads with fixed priorities such as the previous list, their execution behavior is typically predictable. In a multiprocessor (SMP) system, however, the execution behavior is much less predictable. Although the four threads have differing priorities, a 4-processor SMP system may execute all four simultaneously.

When you design an application that uses thread priorities, it is critical to remember that scheduling is not the same as synchronization. You cannot assume that a high-priority thread can access shared data without interference from low-priority threads. For example, a thread running under the FIFO policy at the highest priority and a thread running under the background policy at the lowest priority may still run at the same time. Conversely, you cannot assume that on a 4-processor system your four highest-priority threads are the ones executing at any given time.

The following figures show uniprocessor execution flows depending on whether the first-in/first-out (FIFO), round-robin (RR), or throughput (default) scheduling policy is in effect. Assume that all waiting threads are ready to execute when the current thread waits or terminates, and that no higher-priority thread is awakened while a thread is executing (during the flow shown in each figure).

Figure 2-5 shows a flow with FIFO scheduling.

Figure 2-5 Flow with FIFO Scheduling

Thread D executes until it waits or terminates; then Thread B starts because it has been waiting longer than Thread C, and it executes until it waits or terminates; then Thread C executes until it waits or terminates; then Thread A executes.

Figure 2-6 shows a flow with RR scheduling.

Figure 2-6 Flow with RR Scheduling

Thread D executes until it waits or terminates; then Threads B and C are timesliced, because they both have the same priority; then Thread A executes.

Figure 2-7 shows a flow with Default scheduling.

Figure 2-7 Flow with Default Scheduling

Threads D, B, C, and A are timesliced, even though Thread A has a lower priority than the others. Thread A receives less execution time than Threads D, B, and C if any of those are ready to execute as often as Thread A. However, the default scheduling policy ensures that Thread A is not blocked from executing indefinitely.

Because low-priority threads eventually run, the default scheduling policy protects against the problems of starvation and priority inversion, discussed in Section 3.5.1.


