When a User Mode AST becomes deliverable to a DECthreads process,
the OpenVMS scheduler makes an upcall to DECthreads, passing
the information that is required to deliver the AST (service
routine address, argument, and target user thread ID). DECthreads
stores this information and queues the AST to be delivered to the
appropriate user thread. That thread is made runnable (if it is not
already), and executes the AST routine the next time it is scheduled
to run. This has the following consequences:
- A per-thread AST will interrupt the user thread that
requested it, regardless of which virtual processor the thread
is running on.
- The AST will be run at the priority of the target thread,
so that low-priority threads' ASTs do not preempt or interfere
with the execution of high-priority threads.
- The AST routine executes in the context of the target
thread, so that the danger of surprise stack overflows is
diminished, and stack-walks and exception propagation work as
they should.
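To make per-thread delivery concrete, here is a minimal sketch (the function and variable names are illustrative, and the code assumes DEC C on an OpenVMS system where per-thread delivery applies): a thread requests a timer AST with $SETIMR, and the AST routine later executes in that same thread's context and at its priority.

```c
/* OpenVMS-specific sketch; compiles with DEC C on OpenVMS only. */
#include <starlet.h>   /* sys$bintim, sys$setimr */
#include <descrip.h>   /* $DESCRIPTOR */

static volatile int timer_fired = 0;

/* AST service routine: runs in the context of the requesting thread,
 * the next time that thread is scheduled. It performs only a simple
 * store, since thread synchronization is not available at AST level. */
static void timer_ast(void *astprm)
{
    *(volatile int *)astprm = 1;
}

int request_timer_ast(void)
{
    $DESCRIPTOR(delta_str, "0 00:00:01.00");   /* 1-second delta time */
    long long delta;
    int status;

    status = sys$bintim(&delta_str, (struct _generic_64 *)&delta);
    if (!(status & 1))
        return status;

    /* For $SETIMR, the reqidt value is handed back to the AST routine
     * as its parameter. The scheduler upcall described above queues
     * timer_ast to this thread for delivery. */
    status = sys$setimr(0, (struct _generic_64 *)&delta,
                        (void (*)())timer_ast,
                        (unsigned __int64)&timer_fired, 0);
    return status;
}
```

A thread that calls request_timer_ast() will, roughly one second later, be interrupted (on whichever virtual processor it is then running) to execute timer_ast.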
In addition to per-thread ASTs, there are also User Mode ASTs
that are directed to the process as a whole, or to no thread
in particular, or to a thread that has since terminated. These
"process" ASTs are queued to the initial thread, making the thread
runnable in a fashion similar to per-thread ASTs. They are executed
in the context of the initial thread, for the following reasons:
- The initial thread has an expandable stack, unlike the
other threads, so this minimizes the danger of stack space
problems.
- Any code that is making assumptions about particular
characteristics of AST delivery is most likely running in the
initial thread, so delivering the AST to the initial thread is
least likely to cause problems.
- The initial thread gets a boost to the top scheduling
priority, to ensure that the process ASTs are executed promptly.
Since these ASTs cannot be ascribed to any particular thread,
their priority cannot be assessed, so it is important that they
be delivered promptly in the event that a high-priority thread is
waiting to be signalled by one of them.
Note: In OpenVMS Version 7.0, all ASTs are directed to the
process as a whole. In future releases, AST delivery will be made
per thread, as individual services are updated.
The following implications must be considered for application
development:
- If an application makes heavy use of ASTs, it may (to a
certain extent) starve the initial thread, since only that thread
executes the ASTs that are directed to the process as a whole (as
opposed to the pre-Version 7.0 behavior of starving all threads
equally).
- There are also implications for controlling AST delivery.
$SETAST generates an upcall similar to the one for AST delivery.
This allows DECthreads to note the request by a thread to block
(or unblock) AST delivery. When a thread has requested that ASTs
be blocked, it will not receive delivery of any per-thread ASTs;
nor will the process receive delivery of any process ASTs. This
is, in effect, the same behavior as in pre-Version 7.0, except
that a second thread cannot undo a block requested by a previous
thread. Avoid using any mechanism other than $SETAST to block
ASTs; any other mechanism interferes with AST delivery for the
process as a whole and may produce undesirable results.
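The $SETAST pattern can be sketched as follows (a minimal, OpenVMS-specific illustration; the function name is hypothetical). $SETAST with an argument of 0 disables user-mode AST delivery and returns a status indicating whether delivery was previously enabled, which lets the caller restore the prior state rather than unconditionally re-enabling:

```c
#include <starlet.h>   /* sys$setast */
#include <ssdef.h>     /* SS$_WASSET, SS$_WASCLR */

void critical_region(void)
{
    /* Disable user-mode AST delivery; DECthreads observes this
     * through the $SETAST upcall and withholds both per-thread
     * ASTs and process ASTs. */
    unsigned int previous = sys$setast(0);

    /* ... work that must not be interleaved with an AST routine ... */

    /* Restore the prior state: SS$_WASSET means delivery was enabled
     * before this call. Blindly calling sys$setast(1) here could undo
     * a block requested earlier by other code. */
    if (previous == SS$_WASSET)
        sys$setast(1);
}
```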
- Another implication is that it is possible for a thread
to be executing on one virtual processor at the same time that
an AST is executing on another virtual processor. In general,
this should not pose a significant problem for multithreaded
applications. Such applications should have already minimized
their AST use, since ASTs and threads can be difficult to use
together reliably. In addition, AST routines should already be
performing only atomic operations, since thread synchronization
is not available to code executing at AST level. Any "legacy"
code (such as a nonthreaded application using threaded libraries)
is executed in the initial thread, where the normal assumptions
about AST delivery are maintained. If a piece of code cannot
tolerate concurrent execution with an AST routine, it should
disable AST delivery during its execution.
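As a sketch of the "atomic operations only" guideline for AST routines (names hypothetical, OpenVMS-specific): instead of using mutexes, which are unavailable at AST level, an AST routine can hand work to thread-level code through an interlocked queue.

```c
#include <lib$routines.h>   /* lib$insqhi, lib$remqhi */

/* Self-relative interlocked queue header; LIB$INSQHI requires it to
 * be quadword-aligned and initialized to zero. */
static long long work_queue[2] = {0, 0};

typedef struct work_item {
    long long links[2];   /* queue linkage used by the LIB$ routines;
                           * entries must also be quadword-aligned */
    int payload;
} work_item;

/* AST routine: a single interlocked insert is atomic with respect to
 * both other ASTs and thread-level code, so no mutex is needed. The
 * item passed as astprm must remain allocated until a thread removes
 * and processes it. */
static void completion_ast(void *astprm)
{
    lib$insqhi(astprm, work_queue);
}
```

A worker thread can later drain the queue with LIB$REMQHI at thread level, where ordinary thread synchronization is available.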