The Peripheral Component Interconnect (PCI) Local Bus is a physical interconnect mechanism for use between highly integrated peripheral controller components and processor/memory systems.
For detailed information on PCI bus architectures, see the PCI Local Bus Specification Revision 2.1 and the PCI to PCI Bridge Architecture Specification.
The PCI bus is a 32-bit or 64-bit address and data bus with support for 8-, 16-, 24-, 32-, and 64-bit cycles. The following describes PCI bus hardware architecture topics relevant to the device driver writer:
PCI Local Bus Specification Revision 2.1 defines bus commands a master device can transmit to a target device to indicate the type of bus transaction it is requesting. A bus command consists of a number of bus states that are required to complete a given transaction.
A processor's power-up self-test (POST) software builds a consistent address map before starting the Digital UNIX operating system. POST software first determines how much system memory exists and how much address space the system's I/O controllers require. It then maps the I/O controllers into reasonable locations and proceeds with system startup. By placing the base registers for this mapping in the predefined header portion of configuration space, POST software can map them in a device-independent, platform-dependent fashion. The PCI bus configuration code then determines which devices are actually present, builds a consistent address map, and matches each device to the appropriate driver. (See Section 5.3 for a discussion of how the Digital UNIX operating system matches devices to drivers.)
PCI bus address space is used in the following manner. Note that, on Digital UNIX Alpha systems, any transaction to a device's PCI bus configuration, memory, or I/O space is mapped into the processor's address space.
Configuration space consists of 256 bytes of configuration registers. The PCI Local Bus Specification Revision 2.1 divides this space into a predefined header region and a device-dependent region. Fields in the predefined header region uniquely identify the device and allow it to be generically controlled.
Multifunction devices provide a configuration space for each function.
A device's configuration space is accessible at all times, not just during system startup. Typically, only configuration, initialization, and catastrophic error-handling software access configuration space. Other code, such as a driver's probe and slave interfaces, can read a copy of the predefined header region from the pci_config_hdr data structure (discussed in Section 4.1) that the PCI bus configuration code passes to the driver.
Memory space consists of mapped device control status registers (CSRs) and mapped device buffers, as well as system memory space.
I/O space consists of mapped device CSRs for use by POST software. It is strongly recommended that driver writers use the CSRs that are mapped in memory space.
As described in Section 3.2.1 and Section 3.2.2, the PCI bus configuration code passes to the driver's xxprobe and xxslave interfaces a pointer to a software pci_config_hdr data structure (discussed in Section 4.1). The probe and slave interfaces read this structure to obtain I/O handles that give them access to the base addresses of a device's memory, I/O, and configuration spaces.
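The following is a minimal sketch of an xxprobe interface that uses the pci_config_hdr structure passed in by the PCI bus configuration code. The typedef name pci_config_hdr_t, the header file paths, and the bar0 member spelling are assumptions made for illustration; see Section 4.1 for the actual structure definition and Section 3.2.1 for the exact xxprobe calling sequence.

/*
 * Sketch of an xxprobe interface for a PCI device.  The structure, member,
 * and header names here are assumptions; verify them against Section 4.1.
 */
#include <sys/types.h>
#include <io/common/devdriver.h>    /* struct controller, io_handle_t (assumed path) */
#include <io/dec/pci/pci.h>         /* pci_config_hdr definition (assumed path) */

io_handle_t xx_csr_handle;          /* I/O handle saved for later register access */

int
xxprobe(pci_config_hdr_t *pch, struct controller *ctlr)
{
    /*
     * bar0 (base address register 0) holds an I/O handle for the device's
     * registers as mapped in memory space; save it for use with
     * read_io_port and write_io_port.
     */
    xx_csr_handle = pch->bar0;

    /*
     * The intr_line member is informational only; register the interrupt
     * service interface with handler_add instead of using this value.
     */

    return (1);                     /* nonzero indicates the device is present */
}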
The PCI bus supports 8-bit, 16-bit, 24-bit, 32-bit, and 64-bit data sizes.
The PCI bus uses the little-endian byte-ordering format, as do Digital systems.
According to PCI Local Bus Specification Revision 2.1, the PCI bus defines one interrupt line (INTA) for single-function devices and up to four interrupt lines (INTA, INTB, INTC, and INTD) for multifunction devices. Digital UNIX allows a single-function device to use any of the interrupt lines.
The contents of the intr_line member of the pci_config_hdr data structure do not necessarily indicate to which input of the system interrupt controllers the device's interrupt pin is connected. You must use the ihandler_id_t key returned from the handler_add interface to enable, disable, and delete an interrupt service interface in the operating system.
See Writing Device Drivers: Tutorial and Writing Device Drivers: Reference for information on the interrupt handler registration-related interfaces.
Each function of a multifunction device is considered to be a separate device connected directly to a PCI bus and is therefore given a unique configuration space and unique data structures. For example, a multidevice module, such as a quad-Ethernet card, is viewed as four separate PCI-to-Ethernet interfaces on a bridged PCI bus, each of which is associated with a separate pci_config_hdr data structure.
Each device function on a multifunction controller typically uses a discrete interrupt pin (INTA, INTB, INTC, or INTD). If multifunction device functions share an interrupt line, it is the interrupt service interface's responsibility to distinguish among the device functions when an interrupt is asserted.
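The following hypothetical sketch shows one way an interrupt service interface can distinguish between two device functions that share an interrupt line. The XX_CSR_INTR_STATUS offset, the XX_INTR_PENDING bit, and the xx_softc layout are inventions for illustration; a real device defines its own interrupt status indication.

/*
 * Hypothetical interrupt service interface for a two-function device whose
 * functions share one interrupt line.
 */
#include <io/common/devdriver.h>    /* io_handle_t, read_io_port (assumed path) */

#define XX_CSR_INTR_STATUS  0x10    /* assumed per-function status register offset */
#define XX_INTR_PENDING     0x01    /* assumed "interrupt pending" bit */

struct xx_softc {
    io_handle_t csr_handle;         /* I/O handle saved by xxprobe for this function */
};

extern struct xx_softc xx_softc[];  /* one entry per device function */

void
xxintr(int unit0)
{
    int fn;

    /* Poll each function that shares the line and service only those that
     * actually have an interrupt pending. */
    for (fn = 0; fn < 2; fn++) {
        struct xx_softc *sc = &xx_softc[unit0 + fn];
        long status = read_io_port(sc->csr_handle + XX_CSR_INTR_STATUS, 4, 0);

        if (status & XX_INTR_PENDING) {
            /* ...service this function and clear its interrupt condition... */
        }
    }
}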
There is no fixed relationship of the interrupt line (INTx) a device uses (or the PCI slot in which the device resides) to the interrupt priority at which the device's interrupts are serviced in the system, relative to interrupts from other devices. Typically, INTA of slot x has a higher priority than INTB of the same slot x. Additionally, slot x typically has higher service priority than slot (x + n). However, the relationship of the service priorities of interrupt lines from slot x to slot (x + n) is not guaranteed. If these relationships are critical to proper device operation, consult the appropriate system technical manual.
Before writing device drivers that operate on the PCI bus, you need to consider the following topics associated with the PCI bus software architecture:
Although the registers of devices on the PCI bus are typically mapped in both I/O space and memory space, it is strongly recommended that device drivers use the mappings in memory space. A device driver's xxprobe or xxslave interface can locate the base location of a device's registers in memory space by obtaining the I/O handle from the appropriate barx member of the pci_config_hdr data structure. (See Section 4.1.9 for further discussion.) Typically a driver uses the I/O handle (with an appropriate register offset) in calls to the kernel interfaces read_io_port and write_io_port to read from or write to a device register.
See Writing Device Drivers: Tutorial and Writing Device Drivers: Reference for more information on how a driver accesses device registers and how it calls these kernel interfaces.
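As an illustration, the following sketch uses the I/O handle obtained from a barx member to read and then write a 4-byte device register. The XX_CSR_CONTROL offset and XX_GO bit are hypothetical; the width and flags arguments to read_io_port and write_io_port are described in Writing Device Drivers: Reference.

#include <io/common/devdriver.h>    /* io_handle_t, read_io_port, write_io_port (assumed path) */

#define XX_CSR_CONTROL  0x00        /* hypothetical 4-byte control register offset */
#define XX_GO           0x01        /* hypothetical "start" bit in that register */

void
xx_start(io_handle_t csr_handle)
{
    long ctl;

    /* Read the 4-byte control register; the final argument is flags. */
    ctl = read_io_port(csr_handle + XX_CSR_CONTROL, 4, 0);

    /* Set the start bit and write the register back to the device. */
    write_io_port(csr_handle + XX_CSR_CONTROL, 4, 0, ctl | XX_GO);
}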
Digital UNIX provides generic kernel interfaces to the system-level interfaces that device drivers require to perform a direct memory access (DMA) operation. These generic interfaces are typically called ``mapping interfaces'' because, historically, they acquire the hardware and software resources needed to map contiguous I/O bus addresses and accesses into discontiguous system memory addresses and accesses. Because these interfaces are designed to be independent of the CPU and bus hardware, their use makes a driver more portable across different CPU architectures, across more than one CPU type within the same architecture, and across I/O buses.
Table 2-1 summarizes the interfaces all PCI device driver writers should use to perform DMA data transfer operations.
See Writing Device Drivers: Reference for reference page descriptions of these generic DMA-related interfaces.
Kernel Interface          Summary Description
dma_map_alloc             Allocates resources for DMA data transfers.
dma_map_load              Loads and sets allocated system resources for DMA data transfers.
dma_map_dealloc           Releases and deallocates resources for DMA data transfers.
dma_map_unload            Unloads system DMA resources.
dma_get_next_sgentry      Returns a pointer to the next sg_entry.
dma_get_curr_sgentry      Returns a pointer to the current sg_entry.
dma_get_private           Gets a data element from the DMA private storage space.
dma_put_private           Stores a data element in the DMA private storage space.
dma_kmap_buffer           Returns a kernel segment (kseg) address of a DMA buffer.
dma_min_boundary          Returns system-level information.
dma_put_curr_sgentry      Puts a new bus address/byte count in the linked list of sg_entry structures.
dma_put_prev_sgentry      Updates an internal pointer index to the linked list of sg_entry structures, and then places a new bus address/byte count pair in the linked list.
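The following sketch shows a typical calling sequence for the mapping interfaces summarized in Table 2-1. The argument lists and return conventions shown here are abbreviated assumptions; consult the reference page descriptions in Writing Device Drivers: Reference for the exact calling sequences.

#include <sys/types.h>              /* u_long, vm_offset_t (assumed) */
#include <sys/errno.h>
#include <io/common/devdriver.h>    /* struct controller, dma_handle_t (assumed path) */

struct proc;                        /* forward declaration for this sketch */

int
xx_dma_transfer(struct controller *ctlr, vm_offset_t buf, u_long bytes,
                struct proc *procp) /* assumed NULL for kernel-space buffers */
{
    dma_handle_t dma;

    /* Allocate the system resources needed to map a transfer of this size. */
    if (dma_map_alloc(bytes, ctlr, &dma, 0) == 0)
        return (ENOMEM);

    /* Load the map: obtain bus addresses the device can use to master DMA
     * to or from the (possibly discontiguous) system memory pages. */
    if (dma_map_load(bytes, buf, procp, ctlr, &dma, 0, 0) == 0) {
        dma_map_dealloc(dma);
        return (EIO);
    }

    /* ...program the device with the bus address/byte count pairs returned
     * by dma_get_curr_sgentry and dma_get_next_sgentry, start the transfer,
     * and wait for it to complete... */

    /* Unload, and then release, the DMA resources. */
    dma_map_unload(0, dma);
    dma_map_dealloc(dma);

    return (0);
}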
For devices attached to certain I/O buses, you can define the device interrupt handlers for single binary module device drivers in the /etc/sysconfigtab database file by supplying a sysconfigtab file fragment. The information in this file is added to /etc/sysconfigtab during driver installation.
For devices attached to a PCI bus, you must register the interrupt handlers within the driver's xxprobe interface by calling the handler_add interface; you then manage the registered handlers with the handler_enable, handler_disable, and handler_delete interfaces.
See Writing Device Drivers: Tutorial for examples on how to use the interrupt handler registration-related interfaces. See Writing Device Drivers: Reference for reference page descriptions of these interfaces.
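The following sketch registers an interrupt service interface for a PCI device from driver code called at probe time. The ihandler_t and handler_intr_info member names follow the pattern shown in Writing Device Drivers: Tutorial, but treat them, along with the header paths and the ctlr_num and bus_hd member names, as assumptions to be verified against that manual and the reference pages.

#include <sys/types.h>
#include <io/common/devdriver.h>    /* struct controller (assumed path) */
#include <io/common/handler.h>      /* ihandler_t, handler_intr_info, handler_add (assumed path) */

static ihandler_id_t xx_id;         /* key returned by handler_add */

extern void xxintr();               /* the driver's interrupt service interface */

int
xx_register_intr(struct controller *ctlr)
{
    ihandler_t handler;
    struct handler_intr_info info;

    info.configuration_st = (caddr_t)ctlr;      /* controller being registered */
    info.config_type = CONTROLLER_CONFIG_TYPE;  /* interrupts belong to a controller */
    info.intr = xxintr;                         /* interrupt service interface */
    info.param = (caddr_t)ctlr->ctlr_num;       /* argument later passed to xxintr */

    handler.ih_bus = ctlr->bus_hd;              /* bus on which the controller resides */
    handler.ih_bus_info = (char *)&info;

    xx_id = handler_add(&handler);              /* register the handler; save the key */
    if (xx_id == (ihandler_id_t)NULL)
        return (0);                             /* registration failed */

    handler_enable(xx_id);                      /* allow interrupt delivery to xxintr */
    return (1);
}

The saved ihandler_id_t key is the value you later pass to handler_disable and handler_delete when the driver must block interrupt delivery or remove the handler, as discussed earlier in this section.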