Compaq Multimedia Services
for OpenVMS Alpha
Programmer's Guide



4.2.1 Querying Video Capture and Playback Devices

Before capturing and playing back video data, determine the capabilities of the video hardware present in the system. Video capability varies from one multimedia computer to another. Do not assume that video hardware is present in any given system. An application must determine the number of video devices present in a system and then query each device for its capabilities. The following sections describe these tasks.

4.2.1.1 Getting the Number of Video Devices

Use the videoGetNumDevs function to determine the number of video capture and playback devices installed in a system. Video devices are identified by a device identifier (ID). The device ID is determined implicitly from the number of video devices present in a given system. Device IDs range from 0 to the number of video devices present minus 1. For example, if there are three video devices in a system, valid device IDs are 0, 1, and 2.

See the description of the videoGetNumDevs function in Section 4.7 for more information about determining the number of video devices installed in a system.
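
The following minimal sketch (not part of the original manual) shows one way an application might enumerate the installed video devices. It assumes that videoGetNumDevs takes no arguments and returns the device count, as described above; the header file name shown is an assumption, so check the sample programs in MMOV$EXAMPLES:[VIDEO] for the exact include.

    #include <stdio.h>
    #include <mme/mme_api.h>    /* assumed header name; see the MMOV$EXAMPLES:[VIDEO] samples */

    int main(void)
    {
        unsigned int numDevs = videoGetNumDevs();   /* number of video devices in the system */
        unsigned int id;

        if (numDevs == 0) {
            printf("No video capture or playback devices are installed.\n");
            return 1;
        }

        /* Valid device IDs range from 0 to numDevs - 1. */
        for (id = 0; id < numDevs; id++)
            printf("Video device ID %u is present.\n", id);

        return 0;
    }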

4.2.1.2 Getting the Capabilities of a Video Device

Use the videoGetChannelCaps function to obtain a description of the capabilities of a video device channel. Channel capabilities include overlaying video, scaling images with the source and destination rectangles, and clipping images with the source and destination rectangles.

The videoGetChannelCaps function has an input argument that is a pointer to a CHANNEL_CAPS data structure, which the function fills with information about the capabilities of a specified video device channel.

See Section 4.4.1 and the description of the videoGetChannelCaps function in Section 4.7 for more information about getting the capabilities of a video device channel.

Note

You must open a channel with the videoOpen function before calling the videoGetChannelCaps function.
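
As a hedged illustration of this note, the fragment below queries the capabilities of a channel that has already been opened with videoOpen (see Section 4.2.2). The CHANNEL_CAPS fields are described in Section 4.4.1; the size argument shown here is an assumption, so confirm the exact videoGetChannelCaps argument list against Section 4.7.

    /* hVideoIn is a handle returned by videoOpen for a VIDEO_IN channel (Section 4.2.2). */
    CHANNEL_CAPS caps;
    DWORD        status;

    status = videoGetChannelCaps(hVideoIn, &caps, sizeof(caps));  /* size argument is an assumption */
    if (status != DV_ERR_OK) {
        /* The channel did not report its capabilities; see Section 4.7 for the error codes. */
    }
    /* Examine the returned CHANNEL_CAPS fields (Section 4.4.1) to determine whether the
     * channel supports the overlay, scaling, and clipping operations the application needs. */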

4.2.2 Opening a Video Device

To use the capabilities of a video device, the device must already be open. Open a device before attempting to use its channels.

Video devices are not guaranteed to be shareable, so a particular device might not be available when you request it. Use the videoOpen function to open a channel on the specified video device. The channel can be VIDEO_IN for capture or VIDEO_OUT for playback.

Each function that opens a video device takes a device ID, a pointer to a memory location (which is filled with a device handle), and flags for opening the device. Use the handle to identify the open video device when calling other video functions.

See the description of the videoOpen function in Section 4.7 for more information about opening a video device.
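
The fragment below is a minimal, hedged sketch of opening capture and playback channels on device 0. The argument order shown (handle location first, then device ID, then flags) is an assumption based on the description above; verify it against the videoOpen description in Section 4.7.

    HVIDEO hVideoIn  = NULL;
    HVIDEO hVideoOut = NULL;
    DWORD  status;
    DWORD  deviceId  = 0;            /* first video device; see Section 4.2.1.1 */

    /* Open a capture channel on the device. */
    status = videoOpen(&hVideoIn, deviceId, VIDEO_IN);
    if (status != DV_ERR_OK) {
        /* The device may be in use or may not support capture. */
    }

    /* Open a playback channel on the same device. */
    status = videoOpen(&hVideoOut, deviceId, VIDEO_OUT);
    if (status != DV_ERR_OK) {
        /* The device may be in use or may not support playback. */
    }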


4.2.3 Configuring Video Capture and Playback Devices

Use the videoConfigure function to set and retrieve configurable video device options. The configurable video device options include image format and palette.

The following sections describe how to set and retrieve these configurable video device options and provide a procedure for setting up format and palette information for 8-bit X image format images.

4.2.3.1 Setting and Obtaining Video Capture and Playback Format

Use the videoConfigure function with the msg argument set to DVM_FORMAT to indicate that video capture or playback format information is being sent to or retrieved from a video device channel.

Note

The desired video format must be set with the videoConfigure function before performing any video capture or playback operations, including setting up the palette.

The video capture format defines the attributes of the images transferred from the capture hardware frame buffer to system memory buffers through the VIDEO_IN channel. The video playback format defines the attributes of the images transferred from system memory buffers to the playback hardware frame buffer through the VIDEO_OUT channel. Attributes include image dimensions, color depth, and the compression format of the transferred images.

An application must specify one or more of the following flags to indicate the purpose of the DVM_FORMAT message:

VIDEO_CONFIGURE_QUERY
Determines if the video device supports the DVM_FORMAT message.

VIDEO_CONFIGURE_QUERYSIZE
Requests the size of the format (BITMAPINFOHEADER) data structure.

Note

The BITMAPINFOHEADER data structure is defined in Chapter 7 and can have standard or extended format. BITMAPINFOHEADER refers to whatever format is appropriate for the data type specified by the biCompression field (see Example 7-1).


VIDEO_CONFIGURE_SET
Indicates that format values are being sent to the video device.

VIDEO_CONFIGURE_GET
Indicates that the application is requesting the current format from the video device.

To determine if a video device supports the DVM_FORMAT message, an application sends the DVM_FORMAT message with the VIDEO_CONFIGURE_GET and VIDEO_CONFIGURE_QUERY flags set. The DV_ERR_OK error code is returned if the DVM_FORMAT message is supported. The DV_ERR_NOTSUPPORTED error code is returned if the DVM_FORMAT message is not supported.

To determine the amount of memory to allocate for the format, an application sends the DVM_FORMAT message with the VIDEO_CONFIGURE_GET and VIDEO_CONFIGURE_QUERYSIZE flags set. The format data structure size (in this case, BITMAPINFOHEADER) is returned in the lpdwReturn argument of the videoConfigure function. In some cases, this call may be unsupported.

To set the desired format, an application sends the DVM_FORMAT message with the VIDEO_CONFIGURE_SET flag set and passes pointers to the appropriate BITMAPINFOHEADER data structures.

See the description of the videoConfigure function in Section 4.7 for more information about setting and obtaining the desired video format.

See also the sample video programs in the MMOV$EXAMPLES:[VIDEO] directory. Routines in the viddualrecord.c, vidstreamin.c, and vidframein.c programs show how to set up video formats.
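
In addition to the sample programs, the following hedged fragment illustrates the sequence described in this section: it queries whether DVM_FORMAT is supported, asks for the size of the format structure, and then sets a capture format on an open VIDEO_IN channel. The videoConfigure argument order shown (handle, message, flags, lpdwReturn, lpData1, dwSize1, lpData2, dwSize2) and the use of mmeAllocMem for the format structure are assumptions; confirm them against Section 4.7, Chapter 2, and the sample programs.

    DWORD status;
    DWORD formatSize = 0;
    LPBITMAPINFOHEADER lpFormat;

    /* 1. Determine whether the channel supports the DVM_FORMAT message. */
    status = videoConfigure(hVideoIn, DVM_FORMAT,
                            VIDEO_CONFIGURE_GET | VIDEO_CONFIGURE_QUERY,
                            NULL, NULL, 0, NULL, 0);
    if (status != DV_ERR_OK) {
        /* DV_ERR_NOTSUPPORTED: the channel does not accept format messages. */
    }

    /* 2. Ask how much memory to allocate for the format structure.  Some devices
     *    do not support this query; fall back to the standard BITMAPINFOHEADER
     *    size in that case. */
    status = videoConfigure(hVideoIn, DVM_FORMAT,
                            VIDEO_CONFIGURE_GET | VIDEO_CONFIGURE_QUERYSIZE,
                            &formatSize, NULL, 0, NULL, 0);
    if (status != DV_ERR_OK || formatSize == 0)
        formatSize = sizeof(BITMAPINFOHEADER);

    /* 3. Fill in the desired dimensions, color depth, and compression format
     *    (Chapter 7), then send the format to the device. */
    lpFormat = (LPBITMAPINFOHEADER) mmeAllocMem(formatSize);
    /* ... set biWidth, biHeight, biBitCount, biCompression, and so on ... */
    status = videoConfigure(hVideoIn, DVM_FORMAT, VIDEO_CONFIGURE_SET,
                            NULL, lpFormat, formatSize, NULL, 0);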

4.2.3.2 Setting and Obtaining a Video Capture and Playback Palette

Use the videoConfigure function with the msg argument set to DVM_PALETTE to indicate that video capture or playback palette information is being sent to or retrieved from a video device channel.

The videoConfigure function gives an application the ability to control and modify the palette used for video sequences. The DVM_PALETTE message applies only to the VIDEO_IN and VIDEO_OUT channels.

An application must specify one or more of the following flags to indicate the purpose of the DVM_PALETTE message:

VIDEO_CONFIGURE_QUERY
Determines if the video device supports the DVM_PALETTE message.

VIDEO_CONFIGURE_QUERYSIZE
Requests the size in bytes of the palette (RGBQUAD) array.

VIDEO_CONFIGURE_SET
Indicates that a palette is being sent to the video device.

VIDEO_CONFIGURE_GET
Indicates that the application is requesting the current palette from the video device.

To determine if a video device supports the DVM_PALETTE message, an application sends the DVM_PALETTE message with the VIDEO_CONFIGURE_GET and VIDEO_CONFIGURE_QUERY flags set. The DV_ERR_OK error code is returned if the DVM_PALETTE message is supported. The DV_ERR_NOTSUPPORTED error code is returned if the DVM_PALETTE message is not supported.

To determine the size of the palette, an application sends the DVM_PALETTE message with the VIDEO_CONFIGURE_GET and VIDEO_CONFIGURE_QUERYSIZE flags set. The palette size is returned in the lpdwReturn argument of the videoConfigure function.

Note

The palette or colormap used with the video functions must always be a 256-element RGBQUAD array.

If the DVM_PALETTE message is sent with the VIDEO_CONFIGURE_SET flag set, the lpData1 argument points to the RGBQUAD array that contains the new palette. The dwSize1 argument specifies the size of the memory allocated for the RGBQUAD array. The dwSize2 argument is set to the number of colors to use when setting the palette. This use of dwSize2 is a Compaq extension to the videoConfigure function. Normally, this field is not used.

If the DVM_PALETTE message is sent with the VIDEO_CONFIGURE_GET flag set, the lpData1 argument points to the RGBQUAD array used to retrieve the palette. The dwSize1 argument specifies the size of the memory allocated for the RGBQUAD array. The queried video device writes the palette to the RGBQUAD array. Check the RGBQUAD array returned from VIDEO_CONFIGURE_GET to determine the number of colors actually used by the device.

Note

For 8-bit X image format images on the FullVideo Supreme and FullVideo Supreme JPEG devices, a call to the videoConfigure function is necessary to set the palette. The device uses as many of the specified colors as it can and fills in additional colors where possible and necessary. The application must then retrieve the palette information in order to set up the screen (graphics display buffer) colormap.

See the description of the videoConfigure function in Section 4.7 for more information about setting and obtaining the video capture palette.

See also the sample video programs in the MMOV$EXAMPLES:[VIDEO] directory. Routines in the viddualrecord.c, vidstreamin.c, and vidframein.c programs show how to set up the palettes and the X Window System colormaps.
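
As a hedged sketch of these palette operations, the fragment below sends a 256-entry palette to an open channel and then reads back the palette the device is actually using. The videoConfigure argument order is the same assumption as in the format example in Section 4.2.3.1, and whether the RGBQUAD array must be allocated with the mme memory functions is not shown here; see Section 4.7 and the sample programs.

    RGBQUAD palette[256];          /* the palette must always be a 256-element RGBQUAD array */
    DWORD   status;
    DWORD   numColors = 256;       /* dwSize2: number of colors to use (Compaq extension) */

    /* ... fill in the desired palette entries ... */

    /* Send the palette to the video device. */
    status = videoConfigure(hVideoIn, DVM_PALETTE, VIDEO_CONFIGURE_SET,
                            NULL, palette, sizeof(palette), NULL, numColors);

    /* Retrieve the palette the device is actually using, for example to build
     * the X Window System colormap for an 8-bit display. */
    status = videoConfigure(hVideoIn, DVM_PALETTE, VIDEO_CONFIGURE_GET,
                            NULL, palette, sizeof(palette), NULL, 0);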

4.2.4 Setting and Obtaining the Video Input Standard Type

Compaq has extended the API specification to allow applications to specify the video input standard type. Use the videoGetStandard and videoSetStandard functions to obtain and set the current video input standard type. The supported video input standard types are NTSC, PAL, SECAM, S-video 525, and S-video 625.

To set the video input standard type, call the videoSetStandard function immediately after the call to the videoOpen function that opens the video device. If the videoSetStandard function is not called immediately after the videoOpen function, the specified input standard type is not set and the DV_ERR_BADFORMAT error code is returned.

See the descriptions of the videoGetStandard and videoSetStandard functions in Section 4.7 for more information about obtaining and setting the current video input standard type.
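
This section does not show the videoSetStandard argument list, so the fragment below is only a hedged sketch of the ordering requirement; the handle-plus-standard call form and the placeholder value are assumptions, and the real signature and the constants that name the supported standards are documented in Section 4.7.

    HVIDEO hVideoIn = NULL;
    DWORD  status;

    status = videoOpen(&hVideoIn, 0, VIDEO_IN);      /* open the capture channel */
    if (status == DV_ERR_OK) {
        DWORD standard = 0;   /* placeholder; use the Section 4.7 constant for NTSC, PAL, SECAM, ... */

        /* Set the input standard immediately after videoOpen; a later call
         * fails with DV_ERR_BADFORMAT and leaves the standard unchanged. */
        status = videoSetStandard(hVideoIn, standard);
    }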

4.2.5 Controlling Video Data Frame and Field Modes

Compaq has extended the API specification to allow applications to better control frame and field modes and, in field mode, to specify which field is the target of the operation. Use the videoGetFieldMode and videoSetFieldMode functions to obtain and set the current video field mode.

See the descriptions of the videoGetFieldMode and videoSetFieldMode functions in Section 4.7 for more information about obtaining and setting the video field mode. Also, see the sample code vidstreamin.c and vidscreenout.c for examples of these functions in use.

4.2.6 Setting and Obtaining the Video Device Port Number

Compaq has extended the API specification to allow applications to specify the video device port number. Use the videoGetPortNum and videoSetPortNum functions to obtain and set the current video device port number. These functions can also be used for video option modules that allow the S-video port to be used as two separate ports.

See the descriptions of the videoGetPortNum and videoSetPortNum functions in Section 4.7 for more information about obtaining and setting the video device port number.

4.2.7 Transferring Data to or from the Video Frame Buffer

Use the videoFrame function to transfer a single frame from or to a video device channel. The videoFrame function has an input argument that is a pointer to a data structure, which the function uses to identify a video data buffer. The data structure is VIDEOHDR and is described in Section 4.4.2.

Transferring data from a video frame buffer is the simplest form of video capture. Applications can use this form of video capture to record animated sequences created frame-by-frame or to capture a single still image such as a photograph.

Transferring data to a video frame buffer is the simplest form of video playback. It can be used to play back single frames of an animated sequence on an X window screen. The following sequence of operations occurs when an application requests the transfer of a single video frame:

  1. The application allocates memory for the data buffer using the mmeAllocBuffer function.
  2. For capture, the application sets a pointer to the empty data buffer in the VIDEOHDR data structure. For playback, the application sets a pointer to the filled buffer in the VIDEOHDR data structure.
  3. The application calls the videoFrame function, which sends a pointer to the VIDEOHDR data structure. (The destination channel must be a VIDEO_IN or VIDEO_OUT channel.)
  4. For capture, the data buffer is filled with information from the frame buffer and the VIDEOHDR data structure is updated. (Note that the buffer might not have been prepared.) For playback, the data that the application placed in the buffer is transferred to the video device and played.
  5. The application reuses the buffer or frees it. For video capture, the buffer contents may be processed before the buffer is reused or freed.
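
The fragment below is a hedged sketch of single-frame capture following the steps above. The lpData and dwFlags fields are mentioned in this chapter, but the other VIDEOHDR field names and the mmeAllocMem and mmeAllocBuffer argument lists are assumptions; see Section 4.4.2, Chapter 2, and vidframein.c for the exact definitions.

    LPVIDEOHDR lpVHdr;
    DWORD      status;
    DWORD      bufferSize = 0;     /* set to the image size implied by the current DVM_FORMAT */

    /* Step 1: allocate the header and the data buffer. */
    lpVHdr = (LPVIDEOHDR) mmeAllocMem(sizeof(VIDEOHDR));

    /* Step 2: point the header at an empty buffer (capture) or a filled buffer (playback). */
    lpVHdr->lpData = (char *) mmeAllocBuffer(bufferSize);
    lpVHdr->dwBufferLength = bufferSize;               /* field name is an assumption */

    /* Step 3: transfer one frame from the VIDEO_IN channel. */
    status = videoFrame(hVideoIn, lpVHdr);

    /* Steps 4 and 5: on success the buffer holds the captured image and the header
     * has been updated; process the data, then reuse or free the buffer. */
    if (status != DV_ERR_OK) {
        /* handle the error; see Section 4.7 for the videoFrame error codes */
    }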

See Section 4.4.2 and the description of the videoFrame function in Section 4.7 for more information about transferring data from the video frame buffer.

4.2.8 Streaming Video Data Capture and Playback

An application uses the video streaming functions to stream full-motion video between the application and a VIDEO_IN or VIDEO_OUT channel. The following sequence of operations occurs when streaming video data between a video device channel and an application:

  1. The application allocates memory for the video data buffers with the mmeAllocBuffer function.
  2. The application requests that the data buffers be prepared with the videoStreamPrepareHeader function.
  3. The application initializes the data stream with the videoStreamInit function and sets up a callback function.
  4. The application sends the empty or filled data buffers to the video device channel with the videoStreamAddBuffer function. The data buffers are placed in the device channel input/output queue.
  5. The application starts the video streaming operation with the videoStreamStart function. For capture, the data buffer is filled and returned to the callback function with the dwFlags of the VIDEOHDR data structure set to VHDR_DONE. For playback, the data buffer is played and returned to the callback function with the dwFlags of the VIDEOHDR data structure set to VHDR_DONE. The buffer is then released from the queue, and the application proceeds to add the next buffer in the queue.
  6. The callback function is used to see if the data in a buffer is ready to be passed back to the application for processing.
  7. When the data buffer is returned to the application, the application empties the contents of the buffer if needed (the data will correspond to the data that is played back or being captured), clears the VHDR_DONE flag in the dwFlags field of the VIDEOHDR data structure, and requeues the buffer to the device channel queue with the videoStreamAddBuffer function.

Once data streaming starts, the data buffers are filled at the rate specified by the application in the call to the videoStreamInit function. The buffers are filled without waiting for any synchronization signal from the application as long as buffers are available and streaming is not paused or stopped by the application. The buffers are filled and returned in the order in which they are placed in the queue. (If a device runs out of buffers, it will set an error flag. An application can use the videoStreamGetError function to test for this condition. See Section 4.2.8.7 for more information about returning video streaming errors.)

The data buffers are received back by the application in the order in which they were queued. When the application is ready to capture or play back more data, it waits to enter the callback and checks the VHDR_DONE flag of the next buffer it expects to receive. If the VHDR_DONE flag is set, the application continues the streaming operation with that buffer.

Video streaming continues until the application stops it. The following sequence of operations occurs when the application is finished capturing or playing back data:

  1. The application stops the video streaming with the videoStreamStop function.
  2. If the application wants to restart streaming, it calls the videoStreamStart function. If the application is finished streaming, it requests that the data buffers be unprepared with the videoStreamUnprepareHeader function.
  3. The application releases the data stream with the videoStreamFini function and frees the memory allocated for the video data.

The following sections describe the steps involved in video data streaming in more detail.
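
The outline below is a hedged sketch of the complete capture streaming sequence described above. It compresses the error handling and the callback mechanism into comments, and the argument orders, the VIDEOHDR field names other than lpData and dwFlags, and the callback declaration are assumptions. Note that videoStreamInit is placed before the other videoStream calls, as Section 4.2.8.1 requires. See vidstreamin.c for a complete, working version.

    #define NUM_BUFFERS 4

    extern void myCallback();      /* hypothetical callback; the real signature is defined in Section 4.7 */

    LPVIDEOHDR vhdr[NUM_BUFFERS];
    DWORD      bufferSize = 0;             /* image size implied by the current DVM_FORMAT */
    DWORD      microSecPerFrame = 33333;   /* about 30 frames per second */
    int        i, next = 0, capturing = 1;

    /* Initialize the stream, the capture rate, and the callback (Section 4.2.8.1).
     * videoStreamInit must precede the other videoStream calls on this channel. */
    videoStreamInit(hVideoIn, microSecPerFrame, (DWORD) myCallback,
                    0 /* dwCallbackInst */, 0 /* flags; see Section 4.7 */);

    /* Allocate and prepare the buffers, then queue them (Section 4.2.8.2). */
    for (i = 0; i < NUM_BUFFERS; i++) {
        vhdr[i] = (LPVIDEOHDR) mmeAllocMem(sizeof(VIDEOHDR));
        vhdr[i]->lpData = (char *) mmeAllocBuffer(bufferSize);
        vhdr[i]->dwBufferLength = bufferSize;          /* field name is an assumption */
        videoStreamPrepareHeader(hVideoIn, vhdr[i], sizeof(VIDEOHDR));
        videoStreamAddBuffer(hVideoIn, vhdr[i], sizeof(VIDEOHDR));
    }

    /* Start streaming; the device fills buffers at the requested rate. */
    videoStreamStart(hVideoIn);

    while (capturing) {
        /* Wait for the callback (see Chapter 2), then check the oldest queued buffer. */
        if (vhdr[next]->dwFlags & VHDR_DONE) {
            /* ... process the captured data in vhdr[next]->lpData ... */
            /* ... set capturing = 0 when enough frames have been captured ... */
            vhdr[next]->dwFlags &= ~VHDR_DONE;         /* clear the done flag */
            videoStreamAddBuffer(hVideoIn, vhdr[next], sizeof(VIDEOHDR));
            next = (next + 1) % NUM_BUFFERS;
        }
    }

    /* Finished: stop the stream, unprepare the buffers, and release the stream. */
    videoStreamStop(hVideoIn);
    for (i = 0; i < NUM_BUFFERS; i++)
        videoStreamUnprepareHeader(hVideoIn, vhdr[i], sizeof(VIDEOHDR));
    videoStreamFini(hVideoIn);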

4.2.8.1 Initializing the Video Data Stream

Before beginning video capture or playback, initialize a video device channel for streaming using the videoStreamInit function. This function must precede all other video streaming functions for a video device channel.

For VIDEO_EXTERNALIN channels, the videoStreamInit function enables the capture of images into the capture hardware frame buffer. VIDEO_EXTERNALIN channels ignore the dwMicroSecPerFrame, dwCallback, and dwCallbackInst arguments. For VIDEO_IN channels, the videoStreamInit function sets the capture rate and the callback information. For VIDEO_IN and VIDEO_OUT channels, a dwCallback is required.

See the description of the videoStreamInit function in Section 4.7 for more information about initializing a video data stream.

4.2.8.2 Allocating and Preparing a Data Buffer for Video Streaming

The data buffer for video streaming is pointed to by the lpData field in a VIDEOHDR data structure. The VIDEOHDR data structure must be allocated with the mmeAllocMem function, and the data buffer must be allocated with the mmeAllocBuffer or the mmeAllocBufferAndGetShminfo function before being passed to the videoStreamPrepareHeader function to be prepared for video streaming.

Use the videoStreamPrepareHeader function to prepare a data buffer for video streaming. Once a data buffer is allocated and prepared for video streaming, it can be sent to the video device channel with the videoStreamAddBuffer function.

See Chapter 2 for more information about the memory allocation functions and optimizations. See the description of the videoStreamPrepareHeader function in Section 4.7 for more information about preparing a data buffer for video streaming.
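
The fragment below is a hedged sketch of allocating and preparing a single streaming buffer as just described. The mmeAllocMem and mmeAllocBuffer argument lists and the dwBufferLength field name are assumptions; check Chapter 2 and Section 4.4.2 for the exact definitions.

    LPVIDEOHDR lpVHdr;
    DWORD      status;
    DWORD      bufferSize = 0;     /* image size implied by the current DVM_FORMAT */

    /* Allocate the VIDEOHDR and the data buffer it points to. */
    lpVHdr = (LPVIDEOHDR) mmeAllocMem(sizeof(VIDEOHDR));
    lpVHdr->lpData = (char *) mmeAllocBuffer(bufferSize);
    lpVHdr->dwBufferLength = bufferSize;               /* field name is an assumption */

    /* Prepare the buffer for streaming; it can then be queued with videoStreamAddBuffer. */
    status = videoStreamPrepareHeader(hVideoIn, lpVHdr, sizeof(VIDEOHDR));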

