MORE INFORMATION
Token Ring performance depends heavily on the type of machine, bus, and
network card used for the measurements. Using a PS/2-IBMtok (MCA) as a
server increases the total throughput by 56 percent over that of a
Compaq386/25-IBMtok (ISA). Other studies on this subject indicate that
the total throughput of the server can increase as much as 9.6 times
by using Advance Micro Channel (AMC) instead of ISA. Therefore, given
a choice, one should use a higher bandwidth, DMA bus-master card and
choose a better bus architecture, such as AMC or MCA, rather than ISA.
These results should be verified in-house.
[Sniffer data shows that back-to-back maximum-frame (2000-byte)
transmission takes 3 milliseconds. This translates into a total maximum
throughput of 651 KBPS (kilobytes per second). Considering the other
delays involved in transmitting data, this number is consistent with
the maximum effective throughput obtained (536 KBPS) for all six
workstations. The Sniffer's timer granularity of 1 millisecond could
also have contributed to the error.]
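The bracketed arithmetic above can be checked directly: one 2000-byte frame every 3 milliseconds works out to the quoted 651 KBPS. A minimal sketch:

```python
# Back-of-envelope check of the Sniffer figures quoted above:
# one 2000-byte frame transmitted back-to-back every 3 milliseconds.
frame_bytes = 2000
frame_time_s = 0.003  # 3 milliseconds per frame

# bytes/second divided by 1024 gives kilobytes per second (KBPS)
throughput_kbps = frame_bytes / frame_time_s / 1024
print(round(throughput_kbps))  # 651, matching the figure above
```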
Shared Memory
The IBM Token Ring card has 64K of RAM on board. The card can be
configured to use 8K, 16K (with RAM paging, 64K), 32K, or 64K of RAM.
It is better to use a larger RAM size because this allows the user
to specify larger buffer (xmit/recv) sizes. It was discovered during
these measurements that whenever less memory is available than the
total buffer size specified:
- No error is indicated during driver load/bind time.
- "NET3100: The operation failed because a network software error
occurred" is displayed during any network activity.
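The failure mode above (a clean load followed by a runtime NET3100 error) amounts to the configured buffer pool not fitting in the adapter's shared RAM window. A hypothetical sizing check, with illustrative parameter names not taken from the driver itself:

```python
# Sketch of the constraint described above: the transmit and receive
# buffer pools together must fit in the adapter's configured RAM window,
# or network activity fails later even though the driver loads cleanly.
# Parameter names here are illustrative, not actual driver keywords.
def buffers_fit(ram_bytes, xmit_bufs, xmit_buf_size,
                recv_bufs, recv_buf_size):
    """Return True if the requested buffer pool fits in shared RAM."""
    needed = xmit_bufs * xmit_buf_size + recv_bufs * recv_buf_size
    return needed <= ram_bytes

# 16K RAM window with two 2K transmit and two 2K receive buffers fits:
print(buffers_fit(16 * 1024, 2, 2048, 2, 2048))  # True
# An 8K window cannot hold two 2K transmit plus four 2K receive buffers:
print(buffers_fit(8 * 1024, 2, 2048, 4, 2048))   # False
```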
Transmit Buffer Size
This parameter dictates the frame size used. Token Ring allows a
maximum frame size of 4K at 4 MBPS (megabits per second) and 17952
bytes at 16 MBPS. The frame size used on a session is equal to the
minimum of the transmit buffer sizes specified (or used) for the
server and the client. The default transmit buffer size is the minimum
of 25 percent of RAM and the maximum size allowed.
An interesting fact is that NetBEUI currently (LM 2.0C) uses a
maximum frame size of 2000 bytes. Therefore, specifying an xmitbufsize
greater than 2K only wastes memory. In the future, this limit will be
changed to 18000 bytes.
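The frame-size rule above can be sketched as follows: a session uses the smaller of the two sides' transmit buffer sizes, further capped by the NetBEUI 2000-byte limit. The function name is illustrative:

```python
# Sketch of the session frame-size rule described above: the effective
# frame is the minimum of the server's and client's transmit buffer
# sizes, capped at NetBEUI's current (LM 2.0C) 2000-byte frame limit.
NETBEUI_MAX_FRAME = 2000  # LM 2.0C limit quoted above

def session_frame_size(server_xmitbufsize, client_xmitbufsize):
    """Effective frame size negotiated for a session (illustrative)."""
    return min(server_xmitbufsize, client_xmitbufsize, NETBEUI_MAX_FRAME)

print(session_frame_size(4096, 2048))  # 2000: capped by the NetBEUI limit
print(session_frame_size(1024, 2048))  # 1024: smaller buffer size wins
```

This also illustrates why an xmitbufsize above 2K is wasted under LM 2.0C: the cap always wins.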
Changing the xmit buffer size from 256 bytes to 1K increases the
throughput by 36-153 percent; changing it from 1K to 2K results in a
16 percent throughput increase. Increasing the xmit buffer size beyond
this value does not increase the throughput. At higher transmit buffer
sizes, write (client-to-server) performance degrades because less
memory is available for receive buffers. This degradation occurs only
when less RAM is specified/used.
Number of Transmit Buffers
Increasing this value from the default of 1 to the maximum value (2)
increases the server throughput by 13-18 percent for large I/O.
Therefore, it's better to use two xmit buffers if there is enough
memory onboard.
Receive Buffer Size
Changing this among 256 bytes, 1K, and 2K did not make any significant
difference in the total throughput. Write performance for large I/O
sizes degrades 8-9 percent with larger receive buffer sizes on
workstations. For a single workstation, a higher receive buffer size
(2000) improves read performance by 7-9 percent. Specifying a
recvbufsize value higher than 2000 bytes results in a network software
error during any network activity (DCR).
Number of Receive Buffers
It is OK to leave this value at the default (2) because all the memory
available is used for receive buffers.
Early Release
This parameter is supposed to increase the overall throughput by
releasing the token after transmitting a data frame. Performance tests
conducted with six workstations did not show much difference between
early releases and without it. However, this will definitely become a
significant factor in cases where the number of machines on a network
is large (causing larger token/frame forwarding overhead and also
longer propagation delay). Even with this setup, early releases write
performance can degrade because a large number of back-to-back frames
may cause the server to drop some frames. This problem may not occur
for higher RAM (64K) and faster net cards (MCA, busmaster).
Machines in Test
Servers:
Compaq 386/25 (9 MB, 4 MB Cache), IBMtok
16/4A (16 MBPS), OS/2 1.21, LM2.0C
PS/2-80 (12 MB, 6 MB Cache), IBMtok 16/4A
MCA (16 MBPS), OS/2 1.21, LM2.0C
Wkstas:
PS/2-30, IBMtok 16/4A (16 MBPS), IBM DOS
3.30, LM2.0C
(using 6 and 1 workstation PF test suite)
small exclusive read and write: 64 and 1K
large shared read and write: 16K and 62K