Xentara v2.0.4
User Manual
Scheduling Tasks

Xentara provides a sophisticated mechanism for scheduling and coordinating tasks. Tasks are provided by different elements in the Xentara model.

Tasks can be executed in two different ways:

  • at regular intervals using a timer, or
  • in response to an event, like a change in the value of a data point.

Execution Pipelines

Structure of a Pipeline

Tasks are grouped together into pipelines. Each pipeline is attached to a single timer or event, and executed when the timer fires or the event is raised.

An execution pipeline consists of a number of check points connected by segments. Each segment consists of one or more tasks, executed in sequence. The simplest possible pipeline just consists of a single segment joining two check points:

Image 1: simple pipeline

Pipelines support parallel execution of segments. A pipeline using parallel execution may look like this, for example:

Image 2: parallel pipeline

Pipelines can have multiple check points. A more complex pipeline may look like this:

Image 3: complex pipeline

Pipeline Segments

A pipeline segment is simply a group of tasks that are executed one after the other in a single thread. A pipeline segment always starts and ends at a check point.

In image 3 above, there are three pipeline segments:

  1. Segment 1: Task 1a → Task 1b → Task 1c
  2. Segment 2: Task 2a → Task 2b
  3. Segment 3: Task 3a → Task 3b

Individual segments (like Segment 1 and Segment 2) can be executed in parallel. The tasks within each segment are always executed sequentially, however.

Segments are executed using a thread pool located in the corresponding execution track (see below). The number of threads in the thread pool determines how many pipeline segments can be executed in parallel. If the thread pool does not have enough threads, some pipeline segments will be executed sequentially, even if the structure of the pipeline would otherwise allow them to be executed in parallel.
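
Conceptually, a segment is just a list of callables that is run from front to back in a single thread, while the segments of a group are handed to the worker threads independently. The following C++ sketch is not Xentara code; it only illustrates this relationship. The Task and Segment types are purely hypothetical, and std::async stands in for a real thread pool, so the degree of parallelism is left to the standard library:

    #include <functional>
    #include <future>
    #include <vector>

    // A task is just a callable; a segment is a list of tasks run in order.
    using Task = std::function<void()>;
    using Segment = std::vector<Task>;

    // Run the tasks of a single segment sequentially, in a single thread.
    void runSegment(const Segment &segment)
    {
        for (const auto &task : segment)
            task();
    }

    // Hand a group of segments to worker threads. How many of them actually
    // run at the same time depends on how many threads are available.
    void runSegmentGroup(const std::vector<Segment> &segments)
    {
        std::vector<std::future<void>> pending;
        for (const auto &segment : segments)
            pending.push_back(std::async(std::launch::async, runSegment, std::cref(segment)));
        for (auto &future : pending)
            future.wait();
    }

With a real thread pool of, say, two threads, at most two of the segments passed to runSegmentGroup would actually execute at the same time; any further segments would have to wait for a free thread.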

Check Points

A check point is a synchronization point that waits for a group of pipeline segments to finish. It then starts another group of segments. In image 3 above, there are three check points:

  1. Check Point 1: This check point starts Segment 1 and Segment 2 in parallel
  2. Check Point 2: This check point waits for Segment 1 and Segment 2 to finish, and then starts Segment 3
  3. Check Point 3: This check point waits for Segment 3 to finish

After the last check point (Check Point 3) has been reached, the pipeline is finished, and the next pipeline can start.
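
A check point can be pictured as a countdown latch that is released once all of the segments it waits for have finished. The following C++20 sketch is not Xentara code; it merely models the pipeline from image 3 using std::latch, with hypothetical placeholder functions standing in for the three segments:

    #include <latch>
    #include <thread>

    // Hypothetical placeholders for the segments from image 3.
    void segment1() { /* Task 1a, Task 1b, Task 1c */ }
    void segment2() { /* Task 2a, Task 2b */ }
    void segment3() { /* Task 3a, Task 3b */ }

    void runPipeline()
    {
        std::latch checkPoint2(2); // released once Segment 1 and Segment 2 have finished
        std::latch checkPoint3(1); // released once Segment 3 has finished

        // Check Point 1: start Segment 1 and Segment 2 in parallel.
        std::jthread worker1([&] { segment1(); checkPoint2.count_down(); });
        std::jthread worker2([&] { segment2(); checkPoint2.count_down(); });

        // Check Point 2: wait for Segment 1 and Segment 2, then start Segment 3.
        checkPoint2.wait();
        std::jthread worker3([&] { segment3(); checkPoint3.count_down(); });

        // Check Point 3: wait for Segment 3. The pipeline is now finished.
        checkPoint3.wait();
    }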

Segments can skip check points, if desired:

Image 4: segment that skips a check point

In the image above, Segment 4 extends between Check Point 1 and Check Point 3, skipping Check Point 2. Check Point 2 will start Segment 3 even if the new Segment 4 has not completed yet. Check Point 3 will then wait for Segment 3 and Segment 4 to finish, before ending the pipeline.

Execution Tracks

The individual pipelines are divided into one or more execution tracks. The pipelines within each track are always executed one after the other, and are never allowed to overlap. Pipelines in different tracks are allowed to execute simultaneously.

Image 5: separate execution tracks

The image above shows two separate execution tracks, each with a timer. The pipelines of the two timers are allowed to overlap, because they are in different tracks. The event pipeline, however, does not overlap the timer pipeline in the same track. Instead, the event pipeline is delayed until the timer pipeline has finished. Similarly, the next execution of the timer pipeline is delayed until the event pipeline has finished.
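
This serialization can be pictured as each track owning a single lock that a pipeline must hold for as long as it runs. The following C++ sketch is only an illustration of that behaviour, not Xentara's actual implementation:

    #include <functional>
    #include <mutex>

    // Hypothetical model of an execution track: pipelines in the same track
    // take the same lock, so they can never overlap. Pipelines in different
    // tracks use different locks and may run at the same time.
    struct ExecutionTrack
    {
        std::mutex running;
    };

    void runPipelineInTrack(ExecutionTrack &track, const std::function<void()> &pipeline)
    {
        // If another pipeline of this track is still running, wait for it to finish.
        std::lock_guard<std::mutex> lock(track.running);
        pipeline();
    }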

Thread Pool

Each execution track contains a thread pool that is used to execute the pipeline segments of the track’s pipelines. The number of threads in the thread pool determines how many pipeline segments can be executed in parallel.

Normally, the number of threads in an execution track’s thread pool is calculated automatically, so that all parallel pipeline segments can be executed in separate threads. You can, however, reduce the number of threads to force some pipeline segments to be executed sequentially, even if the structure of the pipeline would otherwise allow them to be executed in parallel. This can prevent pipelines with a large number of parallel segments from using up too many CPU cores at the same time.

You can also specify a thread count greater than necessary, but this only consumes resources and does not provide any advantage.

Timing Precision (Linux only)

Under Linux, the threads in an execution track’s thread pool can be configured to use one of three timing precision settings. The timing precision setting is ignored under Windows.

Relaxed Timing

Relaxed timing just uses the default behaviour of the operating system.

The only performance optimization done for execution tracks with relaxed timing is to request a CPU wakeup latency of 0. This will essentially prevent all CPUs from going into sleep mode. See also Power Management Quality of Service for CPUs in the Linux Kernel documentation.
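
Under Linux, such a wakeup latency request is made through the kernel’s PM QoS interface, by writing the desired latency to /dev/cpu_dma_latency and keeping the file descriptor open. The following C++ sketch shows the general mechanism only; it is not Xentara’s own code:

    #include <cstdint>
    #include <fcntl.h>
    #include <unistd.h>

    // Request a CPU wakeup latency of 0 using the Linux PM QoS interface.
    // The request remains in effect only as long as the file descriptor is
    // open, so the descriptor is returned and must be kept open by the caller.
    int requestZeroCpuWakeupLatency()
    {
        int fd = ::open("/dev/cpu_dma_latency", O_WRONLY);
        if (fd < 0)
            return -1; // insufficient privileges, or interface not available

        std::int32_t latencyInMicroseconds = 0;
        if (::write(fd, &latencyInMicroseconds, sizeof(latencyInMicroseconds)) != sizeof(latencyInMicroseconds))
        {
            ::close(fd);
            return -1;
        }
        return fd;
    }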

Note
Xentara requires special privileges to be able to request the desired CPU wakeup latency. If you run Xentara using the provided systemd Service, then the necessary privileges are automatically granted, and no further action is needed. If you want to run Xentara from the command line, or from a script not started by systemd(1), then Xentara must be given realtime privileges as described under Giving Xentara Realtime Privileges under Configuring Realtime Linux.

Precise Timing

Precise timing does everything relaxed timing does, but additionally sets the so-called “timer slack” of the threads in the thread pool to the minimum possible value (1ns). The timer slack is a value that tells the Linux kernel how precise wakeups due to timers and events should be. Setting the timer slack to 1ns will instruct the kernel to wake up the threads in the thread pool as close to the scheduled time as possible.
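
The timer slack is set per thread using the Linux prctl(2) interface. The following C++ sketch shows what such a call looks like in general; it is not Xentara’s own code:

    #include <sys/prctl.h>

    // Set the calling thread's timer slack to the minimum of 1 nanosecond,
    // so that timer and event wakeups happen as close to the scheduled time
    // as possible. (A value of 0 would reset the slack to the default.)
    bool setMinimumTimerSlack()
    {
        return ::prctl(PR_SET_TIMERSLACK, 1UL) == 0;
    }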

Realtime Timing

Realtime timing does everything precise timing does, but additionally applies optimizations needed for realtime tasks. For realtime execution tracks, Xentara sets the priority of the threads in the thread pool to the maximum supported realtime priority. This will cause pipelines in this execution track to have priority over all other threads in the system, including non-realtime execution tracks. Xentara will also disable all Linux signals for realtime threads, preventing signal handlers from interrupting the scheduled pipelines.
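
The underlying POSIX interfaces for these two optimizations are pthread_setschedparam(3) and pthread_sigmask(3). The following C++ sketch shows how a single thread could be configured in this way; it is an illustration only, not Xentara’s own code:

    #include <pthread.h>
    #include <sched.h>
    #include <signal.h>

    // Give the calling thread the highest supported realtime priority, and
    // block all signals so that signal handlers cannot interrupt it.
    bool makeThreadRealtime()
    {
        // Use the highest priority of the SCHED_FIFO realtime scheduling policy.
        sched_param parameters{};
        parameters.sched_priority = ::sched_get_priority_max(SCHED_FIFO);
        if (::pthread_setschedparam(::pthread_self(), SCHED_FIFO, &parameters) != 0)
            return false; // requires realtime privileges

        // Block all signals for this thread.
        sigset_t allSignals;
        ::sigfillset(&allSignals);
        return ::pthread_sigmask(SIG_BLOCK, &allSignals, nullptr) == 0;
    }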

In order for Xentara to be able to set realtime priority, the system must be running with a Linux kernel containing the optimizations from the PREEMPT_RT patch. See Configuring Realtime Linux on how to install a PREEMPT_RT kernel, and how to configure the system for realtime operation.

Note
Xentara requires special privileges to be able to set realtime priority for threads. If you run Xentara using the provided systemd Service, then the necessary privileges are automatically granted, and no further action is needed. If you want to run Xentara from the command line, or from a script not started by systemd(1), then Xentara must be given realtime privileges as described under Giving Xentara Realtime Privileges under Configuring Realtime Linux.

CPU Affinity

CPU affinity allows you to select one or more CPUs on which the tasks in this track will be executed. CPU affinity is available on both Linux and Windows, but is most useful on Linux systems running a realtime kernel, in combination with core isolation and realtime timing. Realtime execution tracks that are bound to isolated cores on a realtime Linux system will never be interrupted by other programs, by kernel tasks, or by pipelines from other execution tracks, and thus enable strong realtime timing guarantees.

Note
Core isolation is a feature of the Linux kernel and is not available under Windows. The Windows operating system also has a feature called core isolation, but this is a security feature and has nothing to do with isolating CPU cores.

Xentara allows you to specify CPU affinity on a per-execution-track basis, meaning that different execution tracks can be bound to different sets of CPU cores. This allows you to isolate time-critical tasks from non-time-critical tasks even within Xentara, and also allows you to isolate different time-critical Xentara tasks from each other. This makes the Xentara CPU affinity feature more powerful than system tools like the Linux taskset(1) command, which only allow you to specify a single CPU affinity for the entire Xentara process.
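
On Linux, per-thread CPU affinity is set using pthread_setaffinity_np(3). The following C++ sketch pins the calling thread to a single core; it only illustrates the underlying interface for one thread and is not Xentara’s own code:

    #include <pthread.h>
    #include <sched.h>   // cpu_set_t; pthread_setaffinity_np() is a GNU extension

    // Pin the calling thread to the given CPU core (e.g. an isolated core).
    bool pinThreadToCpu(int cpu)
    {
        cpu_set_t cpus;
        CPU_ZERO(&cpus);
        CPU_SET(cpu, &cpus);
        return ::pthread_setaffinity_np(::pthread_self(), sizeof(cpus), &cpus) == 0;
    }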

Timers

A timer allows you to execute tasks at regular intervals. You can define any number of timers, each with different scheduling parameters.

Timing Parameters

Timers are scheduled using two parameters:

  • The period determines how often the timer will fire.
  • The offset determines when exactly the timer will fire relative to other timers.

The offset is used to define exactly when the timer will fire. A timer with a period of 1 second, for example, might always fire at the full second, so at 12:00:00.000, 12:00:01.000, 12:00:02.000, etc. Or, it might fire on every half second, so at 12:00:00.500, 12:00:01.500, 12:00:02.500, etc. Both these timers would have the same period, but different offsets.

A timer with an offset of 0 will be scheduled as if it had started at exactly midnight, January 1, 2001, UTC. For example, a timer with a period of 1 second and an offset of 0 will always fire on the whole second, a timer with a period of 1 minute and an offset of 0 will always fire on the whole minute, and a timer with a period of 1 hour and an offset of 0 will always fire on the whole hour. A timer with a period of 200ms and an offset of 0 will always fire on the whole second, and at 200ms, 400ms, 600ms, and 800ms after each full second, and so on.

A timer with a positive offset will fire that much later than the same timer with an offset of 0. For example, a timer with a period of 1 second and an offset of 10ms will always fire 10ms after each second, so at 12:00:00.010, 12:00:01.010, 12:00:02.010, etc. A timer with a negative offset will fire before the same timer with an offset of 0. For example, a timer with a period of 1 second and an offset of -10ms will always fire 10ms before each second, so at 11:59:59.990, 12:00:00.990, 12:00:01.990, etc.

A timer triggers the execution of a pipeline of tasks when it fires.
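
As a worked example of this scheduling rule, the following C++20 sketch (not Xentara code) computes the next fire time of a timer from its period and offset, using midnight, January 1, 2001, UTC as the reference point, as described above:

    #include <chrono>

    // Compute the next time a timer fires after a given point in time. Fire
    // times are epoch + n * period + offset, where the epoch is midnight,
    // January 1, 2001, UTC. (Illustrative arithmetic only.)
    std::chrono::sys_time<std::chrono::nanoseconds> nextFireTime(
        std::chrono::sys_time<std::chrono::nanoseconds> now,
        std::chrono::nanoseconds period,
        std::chrono::nanoseconds offset)
    {
        using namespace std::chrono;
        const auto epoch = sys_days{ year{ 2001 } / January / 1 };

        // Number of whole periods that have already elapsed since the epoch.
        const auto elapsed = now - epoch - offset;
        const auto periodsElapsed = elapsed.count() / period.count();

        return epoch + offset + period * (periodsElapsed + 1);
    }

For a period of 1 second and an offset of 10ms, this yields fire times such as 12:00:00.010, 12:00:01.010, 12:00:02.010, and so on, matching the example above.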

Execution Sequence of Concurrent Timers

If two timers in the same track are scheduled to fire at the same time, their pipelines are not executed in parallel, but sequentially one after the other.

By default, timers with shorter periods are executed first, and timers with longer periods afterwards. For example, if a track contains one timer scheduled to fire every second, and another timer scheduled to fire every 10 seconds, then the timers will fire together once every ten seconds. In that case, the pipeline of the one-second timer will be executed before the pipeline of the 10-second timer, because it has a shorter period.

You can influence the order in which timers are run by assigning a sequence number to the timer. Timers with lower sequence numbers are always executed before timers with higher sequence numbers, regardless of period. For example, timers with a sequence number of -1 will run before timers with sequence number 0, which will in turn run before timers with sequence numbers of 1 or higher.

Sequence numbers are 32-bit signed integers, and can be negative, zero, or positive. By default, timers have a sequence number of 0.
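
When several timers of the same track fire at the same instant, the resulting execution order can therefore be thought of as a sort by sequence number first and period second. The following C++ sketch only illustrates this ordering rule, using a hypothetical Timer structure; it is not Xentara’s implementation:

    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <vector>

    // Hypothetical description of a timer, reduced to the two fields that
    // determine the execution order of timers firing at the same time.
    struct Timer
    {
        std::int32_t sequenceNumber = 0;    // lower sequence numbers run first
        std::chrono::nanoseconds period{};  // shorter periods run first
    };

    // Sort timers that fire at the same instant into execution order.
    void sortIntoExecutionOrder(std::vector<Timer> &timers)
    {
        std::sort(timers.begin(), timers.end(), [](const Timer &left, const Timer &right) {
            if (left.sequenceNumber != right.sequenceNumber)
                return left.sequenceNumber < right.sequenceNumber;
            return left.period < right.period;
        });
    }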

Timers in different tracks are independent of each other, and their pipelines will run in parallel if they are scheduled for the same time.