The Connection Management Module (CMM) is essentially a switch that connects the various components of the Digital UNIX ATM subsystem. The CMM provides no queuing mechanisms in either the transmit or receive data paths. This means that each component is free to implement a queuing policy that is appropriate for its protocol or hardware. This chapter discusses guidelines for ATM component developers to consider when designing queuing policies for a component.
Device driver writers must implement a queuing policy on the transmit path and might implement a queuing policy on the receive path. The following sections describe the characteristics of each.
Device drivers must do all queuing of outgoing data. Under normal conditions, convergence modules pass data to the device drivers as the data comes down from the protocol stacks. The device driver must either queue the data or reject it, but should not drop it unless an error occurs after the driver has accepted the data for transmission. There is no mechanism other than the flow-control mechanism for a driver to request more data from a convergence module.
Most ATM devices provide at least one hardware queue or ring in which a small number of packets or raw cells can be queued to the hardware. The size of this queue is usually limited by the hardware. Some adapters might be able to schedule the processing of cells from these queues based on quality of service (QOS) parameters associated with each queue to provide QOS-based servicing of virtual circuits (VCs). Simpler hardware generally implements only a single first-in, first-out (FIFO) queue. Because of the limited size of these queues, the drivers should provide a software queue that provides some buffering of data from the CMM to the hardware queues. The software queues permit the driver to better handle bursts of data without having to use flow-control techniques on the sender.
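The arrangement of a small hardware ring backed by a larger software queue can be sketched as follows. All names, sizes, and the hw_post hook are assumptions for illustration, not part of the Digital UNIX ATM interfaces; a real driver would post descriptors to its adapter's transmit ring and manage mbuf chains.

```c
#include <stddef.h>

/* Hypothetical sizes; real limits come from the adapter and driver tuning. */
#define HW_RING_SLOTS  8     /* small hardware transmit ring              */
#define SW_QUEUE_SLOTS 64    /* larger software queue that absorbs bursts */

struct pkt;                  /* opaque packet (an mbuf chain in practice) */

struct tx_state {
    int hw_count;                          /* packets posted to hardware */
    struct pkt *sw_queue[SW_QUEUE_SLOTS];  /* FIFO overflow queue        */
    int sw_head, sw_tail, sw_count;
};

/* Stand-in for posting a descriptor to the adapter's transmit ring. */
void (*hw_post)(struct pkt *p);

/* Accept a packet from the CMM: post directly while the hardware ring has
 * room, spill to the software queue otherwise. Returns 0 on success and
 * -1 when both are full (the caller must flow-control the sender). */
int tx_enqueue(struct tx_state *tx, struct pkt *p)
{
    if (tx->hw_count < HW_RING_SLOTS) {
        hw_post(p);
        tx->hw_count++;
        return 0;
    }
    if (tx->sw_count < SW_QUEUE_SLOTS) {
        tx->sw_queue[tx->sw_tail] = p;
        tx->sw_tail = (tx->sw_tail + 1) % SW_QUEUE_SLOTS;
        tx->sw_count++;
        return 0;
    }
    return -1;
}

/* Transmit-complete interrupt: refill the hardware ring from the
 * software queue as descriptors free up. */
void tx_complete(struct tx_state *tx, int slots_freed)
{
    tx->hw_count -= slots_freed;
    while (tx->hw_count < HW_RING_SLOTS && tx->sw_count > 0) {
        hw_post(tx->sw_queue[tx->sw_head]);
        tx->sw_head = (tx->sw_head + 1) % SW_QUEUE_SLOTS;
        tx->sw_count--;
        tx->hw_count++;
    }
}
```

The software queue lets the driver ride out a burst of 64 extra packets before it has to report a full queue to the sender.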
Device drivers can completely hide their queuing mechanisms and policies or advertise some or all of them to the CMM. For example, a driver could advertise a single queue and handle all queuing internally, or it could advertise multiple virtual queues to which the CMM assigns VCs.
Regardless of the method implemented, device drivers should always provide a software queuing mechanism to handle conditions when the hardware queue is full in order to delay using flow-control techniques on the sender.
A driver that advertises virtual queues to the CMM can implement them internally in any number of ways. The CMM handles only the virtual queues, viewing each queue as supporting a single QOS and as being serviced by the driver according to the QOS parameters that the CMM sets.
Consistent with its policy of providing no queuing, the CMM does not enqueue data to the driver. When the CMM has data to transmit, it calls the driver's transmit routine, which must place the data on the correct queue. Thus, the transmit queuing policy is contained entirely within the device driver. This applies both to drivers that advertise a single queue and to those that advertise multiple queues.
The CMM assists drivers with multiple queues by assigning the VCs to the queues. In this case, the queue to which the CMM has assigned the VC is contained in the atm_vc_services structure referenced by the atm_vc structure. The driver uses this information to determine how to queue each packet of data.
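A multiple-queue driver's transmit routine might use that queue assignment as in the following sketch. The structure layouts and field names here are assumptions for the example only; the real atm_vc and atm_vc_services definitions come from the ATM subsystem headers.

```c
#include <stddef.h>

/* Illustrative subset of the CMM structures; these fields are
 * assumptions for the sketch, not the real header definitions. */
struct atm_vc_services {
    int queue;                   /* transmit queue assigned by the CMM */
};
struct atm_vc {
    struct atm_vc_services *services;
};

#define MAX_QUEUES  4
#define QUEUE_SLOTS 64

struct pkt;                      /* opaque packet */
struct txq { struct pkt *slots[QUEUE_SLOTS]; int count; };
struct drv_softc { struct txq queues[MAX_QUEUES]; };

/* Simplified transmit entry point: the CMM calls this with the data and
 * its VC; the driver reads the CMM-assigned queue number from the
 * services structure and enqueues there. Returns 0 on success, -1 when
 * that queue is full. */
int drv_xmit(struct drv_softc *sc, struct atm_vc *vc, struct pkt *p)
{
    struct txq *q = &sc->queues[vc->services->queue];
    if (q->count == QUEUE_SLOTS)
        return -1;               /* reported to the CMM as a full queue */
    q->slots[q->count++] = p;
    return 0;
}
```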
If a device driver advertises only a single queue when it registers with the CMM, the CMM expects the device driver to provide its own internal scheduling of packet transmission.
If a device driver advertises more than one queue, the CMM assigns VCs to the queues in an effort to achieve the QOS requested for each VC. The CMM informs the driver of its intention to send data to a specific queue, so the driver need not schedule a queue for servicing until notified by the CMM that the queue will be used. The CMM selects one queue for best-effort available bit rate (ABR) traffic; all ABR VCs are assigned to this queue as they are created. Queues for VCs that require a QOS other than ABR are chosen by the CMM as the VCs are created.
When the CMM needs to use a previously unused queue, it sends the driver an ATM_DRVMGMT_SETQ command through the driver's management interface. The argument to this command is a pointer to an atm_vc_services structure that contains all the information about the various QOS parameters for the queue. The driver can maintain a reference to this structure for as long as the queue is in use by the CMM, and uses the information in the structure to schedule the servicing of its active queues. The driver must schedule this servicing so that the QOS specified for every queue is met. The CMM also uses this command to change a queue's parameters, for example, when another VC is attached to the queue and the aggregate reserved bandwidth for the queue must be increased.
When the CMM closes the last VC associated with a queue, it notifies the device driver that the queue is no longer in use by sending it the ATM_DRVMGMT_CLEARQ command through the driver's management interface. The argument to this command is the number of the queue to clear; queues are numbered from 0 to the maximum number the driver supports minus 1. When the driver receives this command, it can stop servicing the queue because the CMM will not send any more data to that queue.
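The two management commands might be dispatched as in the following sketch. The command names ATM_DRVMGMT_SETQ and ATM_DRVMGMT_CLEARQ are from the interface described above, but their values, the structure layout, and all other names are assumptions for illustration.

```c
#include <stddef.h>

/* Hypothetical command values and structure layout; the real definitions
 * come from the Digital UNIX ATM kernel headers. */
enum { ATM_DRVMGMT_SETQ, ATM_DRVMGMT_CLEARQ };

struct atm_vc_services {          /* illustrative subset only */
    int  queue;                   /* queue number assigned by the CMM   */
    long reserved_bandwidth;      /* aggregate reserved bandwidth (bps) */
};

#define MAX_QUEUES 4

struct drv_softc {
    const struct atm_vc_services *qparams[MAX_QUEUES]; /* NULL = unused */
};

/* Simplified management entry point: the CMM activates a queue with
 * SETQ (passing the services structure) and retires it with CLEARQ
 * (passing the queue number, 0 through MAX_QUEUES - 1). Returns 0 on
 * success, -1 on bad input. */
int drv_manage(struct drv_softc *sc, int cmd, void *arg)
{
    switch (cmd) {
    case ATM_DRVMGMT_SETQ: {
        const struct atm_vc_services *s = arg;
        if (s->queue < 0 || s->queue >= MAX_QUEUES)
            return -1;
        sc->qparams[s->queue] = s;   /* keep reference while queue active */
        /* ...recompute the queue-servicing schedule here...             */
        return 0;
    }
    case ATM_DRVMGMT_CLEARQ: {
        int qno = *(int *)arg;
        if (qno < 0 || qno >= MAX_QUEUES)
            return -1;
        sc->qparams[qno] = NULL;     /* stop servicing this queue */
        return 0;
    }
    }
    return -1;
}
```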
Device drivers might supply a small amount of queuing on the receive path to reduce interrupt overhead, batching up several receive packets for processing on a single interrupt. However, driver writers should be careful not to introduce significant latency on incoming data unless they are capable of scheduling receive processing in a way that gives priority to continuous bit rate (CBR) types of traffic.
Like device drivers, convergence modules can implement any queuing mechanism on both the transmit and receive data paths. You can tailor the queuing mechanism implemented to the protocol's needs and the QOS requirements of the convergence module. Convergence modules are not required to queue in any direction, but doing so might improve performance by reducing data losses when there is congestion on the outgoing line.
Since device driver writers must implement some sort of queuing on the transmit path, you can write convergence modules to leave all queuing of transmit data to device drivers. However, if a device driver queue becomes full, convergence modules are responsible for deciding the disposition of any data that cannot be queued to the driver. If a driver cannot accept more data for transmission because of a full queue, the transmitting convergence module can simply discard the data and try again later. However, this approach is not appropriate for all protocols, and can cause performance problems even for protocols that permit the arbitrary dropping of data. In such cases, the convergence module must provide some queuing mechanism to hold the outgoing data until the driver is ready to accept more data for transmission (see Chapter 10 for an explanation of flow control in the ATM subsystem).
Since device driver queues are not visible to convergence modules, convergence modules must queue on a per-VC basis. At the convergence module interface, all flow control takes place per VC, so convergence modules must handle queuing for each VC (and can even elect to provide queuing on only certain connections and discard data on others). An ATM_CAUSE_QWARN return from the atm_cmm_send call means that the driver's queue is almost full; an ATM_CAUSE_QFULL return means that the queue is full and the driver cannot accept the data. In either case, the VC is considered flow controlled. The convergence module must either queue subsequent transmit data or discard it until the module receives a flow-control notification from the CMM.
The size of a convergence module's per-VC queues is implementation dependent; it should reflect the amount of data expected to flow on the VC and the ability of the protocol to retransmit data. When the device driver is ready to accept more data for transmission, it notifies the CMM, which in turn notifies the convergence module. The convergence module can then send queued data until its queue is drained or until it receives another indication from the CMM that the driver's queue is full. Once the queue has been drained, further data can be sent directly to the CMM for transmission, avoiding the overhead of queuing and dequeuing.
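The per-VC queuing and drain logic described above can be sketched as follows. The return codes mirror the queue-warning and queue-full causes, but every name, the queue depth, and the cmm_send hook are assumptions for this example, not the real CMM interface.

```c
#include <stddef.h>

/* Hypothetical status codes modeled on the atm_cmm_send causes. */
enum send_status { SEND_OK, SEND_QWARN, SEND_QFULL };

#define VC_QUEUE_SLOTS 32         /* per-VC queue depth: a tuning choice */

struct pkt;                       /* opaque packet */

struct vc_state {
    int flow_controlled;          /* set on QFULL, cleared on restart */
    struct pkt *queue[VC_QUEUE_SLOTS];
    int head, tail, count;
};

/* Stand-in for atm_cmm_send(); a real module calls the CMM here. */
enum send_status (*cmm_send)(struct pkt *p);

static int vc_queue_pkt(struct vc_state *vc, struct pkt *p)
{
    if (vc->count == VC_QUEUE_SLOTS)
        return -1;                /* per-VC queue full: this module drops */
    vc->queue[vc->tail] = p;
    vc->tail = (vc->tail + 1) % VC_QUEUE_SLOTS;
    vc->count++;
    return 0;
}

/* Transmit path: send directly unless the VC is flow controlled. */
int vc_send(struct vc_state *vc, struct pkt *p)
{
    if (vc->flow_controlled)
        return vc_queue_pkt(vc, p);
    switch (cmm_send(p)) {
    case SEND_OK:
    case SEND_QWARN:              /* warning only; data went through */
        return 0;
    case SEND_QFULL:              /* rejected: queue it and back off */
        vc->flow_controlled = 1;
        return vc_queue_pkt(vc, p);
    }
    return -1;
}

/* Flow-control relief notification from the CMM: drain the queue until
 * it is empty or the driver fills up again. */
void vc_restart(struct vc_state *vc)
{
    vc->flow_controlled = 0;
    while (vc->count > 0 && !vc->flow_controlled) {
        struct pkt *p = vc->queue[vc->head];
        if (cmm_send(p) == SEND_QFULL) {
            vc->flow_controlled = 1;   /* leave the packet queued */
            return;
        }
        vc->head = (vc->head + 1) % VC_QUEUE_SLOTS;
        vc->count--;
    }
}
```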
Convergence modules are generally passed incoming data while the processor is still running on the device driver's interrupt stack (from the receive interrupt). This means that convergence module receive routines run at a high priority and should process the data and return to the CMM as quickly as possible. Receive data is delivered in this fashion so that the CMM receive path introduces no additional latency, which could be unacceptable for CBR traffic.
The Digital UNIX ATM subsystem attempts to deliver data from the adapter to the convergence module as quickly as possible, with a minimum of extra processing and no additional latency. That way, convergence modules that handle CBR traffic are passed incoming data as quickly as possible, without interference from the ATM infrastructure or from the scheduling latencies of the operating system. In return, convergence modules must implement a receive queuing mechanism that is appropriate to their protocol and QOS needs and that allows the modules to cooperate in a multiprotocol, multi-QOS environment. For example, a convergence module that handles ABR traffic should not hold the processor longer than necessary simply to improve its own protocol's performance at the expense of other protocols.
In general, convergence modules should queue incoming data as soon as they can and schedule a kernel thread to process the data later. Modules that process CBR traffic, of course, should not queue data, but should do as much processing on the interrupt stack as is appropriate. Too much processing on the interrupt stack, however, could lead to livelock conditions on the processor.
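A minimal sketch of this queue-and-defer receive pattern follows. All names and the wakeup hook are invented for illustration; a real module would use the kernel's thread and synchronization primitives and would protect the queue with appropriate locking.

```c
#include <stddef.h>

#define RX_QUEUE_SLOTS 128        /* receive queue depth: a tuning choice */

struct pkt;                       /* opaque packet */

struct rx_state {
    struct pkt *queue[RX_QUEUE_SLOTS];
    int head, tail, count;
    int thread_scheduled;         /* a wakeup is already pending */
    long drops;                   /* packets dropped when queue full */
};

/* Hypothetical hook that wakes the module's kernel thread. */
void (*wakeup_thread)(void);

/* Receive entry point: runs on the driver's interrupt stack, so it only
 * queues the packet and schedules deferred processing. */
void rx_input(struct rx_state *rx, struct pkt *p)
{
    if (rx->count == RX_QUEUE_SLOTS) {
        rx->drops++;              /* best-effort traffic: drop under load */
        return;
    }
    rx->queue[rx->tail] = p;
    rx->tail = (rx->tail + 1) % RX_QUEUE_SLOTS;
    rx->count++;
    if (!rx->thread_scheduled) {  /* avoid redundant wakeups */
        rx->thread_scheduled = 1;
        wakeup_thread();
    }
}

/* Body of the kernel thread: does the protocol work at normal priority,
 * off the interrupt stack. process() is the module's protocol handler. */
void rx_thread(struct rx_state *rx, void (*process)(struct pkt *))
{
    while (rx->count > 0) {
        struct pkt *p = rx->queue[rx->head];
        rx->head = (rx->head + 1) % RX_QUEUE_SLOTS;
        rx->count--;
        process(p);
    }
    rx->thread_scheduled = 0;
}
```

A CBR-handling module would instead call its protocol handler directly from rx_input, accepting the interrupt-stack cost to avoid the queuing latency.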
Convergence modules cannot use flow control on device drivers. The only methods available to enforce flow control on the sender are protocol-specific flow control and ATM flow control (as yet undefined). Device drivers will use ATM flow control when they start running out of receive buffer space.