Experiment: Improved MAC scheduling to enhance throughput in 802.11

(Zhibin Wu)


1. Experiment Design
2. How to synchronize the test nodes, and other test methodology...
3. How to analyze the load of the link?
4. How to modify the MAC driver to add a scheduling mechanism?
5. Test Result Analysis
6. Test program source code
7. Test data


Experiment Design


All test nodes run 802.11b in ad-hoc mode.
All test nodes share the same channel.
Test scheme: one master node and 2-3 slave nodes, with one-way communication only (slave node ----> master node), which means the slave nodes do not talk to each other.
All nodes use the RTS/CTS mechanism.

Space environment: all nodes are within only around 10 feet of each other. One slave is 3 ft from the master, while the other slave node is 10 ft away from that slave.

Packet generation model: each node periodically generates a packet of random size between 1 and 1500 bytes (mean value: 750 bytes) with totally random contents.
Test period: 60 seconds
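
For illustration only, a minimal sketch of a slave-side generator that follows this traffic model is given below. It is not the actual test_tx program: the UDP transport, the port number 5000, the command-line arguments and the fixed inter-packet gap are all assumptions made for the sketch.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <master-ip> <packets-per-second>\n", argv[0]);
        return 1;
    }

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5000);                 /* assumed test port */
    inet_pton(AF_INET, argv[1], &dst.sin_addr);

    int rate = atoi(argv[2]);                   /* target packets per second */
    useconds_t gap = 1000000 / rate;
    char buf[1500];
    srand(time(NULL));

    time_t start = time(NULL);
    while (time(NULL) - start < 60) {           /* 60-second test period */
        int len = 1 + rand() % 1500;            /* uniform 1..1500 bytes, mean ~750 */
        for (int i = 0; i < len; i++)
            buf[i] = rand() & 0xff;             /* totally random contents */
        sendto(sock, buf, len, 0, (struct sockaddr *)&dst, sizeof(dst));
        usleep(gap);                            /* crude rate control */
    }
    return 0;
}

A slave would run it as, for example, ./test_tx 10.0.0.1 500 (the rate value here is only an example).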


Experiment Steps

1. Determine the throughput in a good-link-quality environment

In this step, saturate the link with packets from the forwarding nodes (FNs, i.e. the slave nodes); that is, utilize the channel as fully as possible while the link quality is good. Normally this cannot reach 100% of 11 Mbps, only 40%-50%. From this we obtain the parameter of how many packets per second are needed to maximize the throughput; at this packet rate, no additional packets can be transferred. It also means that if a packet takes a little longer to transmit (for example when a re-transmission occurs), the total throughput decreases immediately: "zero tolerance" to re-transmission.
We can then see the performance degrade as the link quality (BER) worsens. The test system is thus tuned to a state which is very sensitive to link quality, where any re-transmission will compromise the performance (throughput).
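
As an illustration of the measurement in this step, the master-side counter can be as simple as the sketch below. It is not the actual test_rx program: the UDP socket on port 5000 and the way the 60-second window is timed are assumptions for the sketch. It just counts packets and bytes and reports packets per second and Mbit/s at the end.

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);                /* assumed test port */
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    char buf[2048];
    long packets = 0, bytes = 0;
    time_t start = time(NULL);
    while (time(NULL) - start < 60) {           /* 60-second test period */
        ssize_t n = recv(sock, buf, sizeof(buf), 0);  /* blocking receive */
        if (n > 0) {
            packets++;
            bytes += n;
        }
    }
    printf("%ld packets, %ld bytes in 60 s -> %.1f pkt/s, %.2f Mbit/s\n",
           packets, bytes, packets / 60.0, bytes * 8.0 / (60.0 * 1e6));
    return 0;
}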

2. Once the "optimal packet rate" is found, all the following experiments are done at this rate.
      I intentionally degrade the link quality of one forwarding node, by means of increasing the distance
      and decreasing the transmit power. The goal is to push the BER into an "unreliable" region.
      What is the unreliable region?
      For PER = 0.9:
      if the packet length is 10 bytes, this amounts to a BER of 1.4e-3;
      if the packet length is 100 bytes, this amounts to a BER of 1.4e-4;
      if the packet length is 1500 bytes, this amounts to a BER of 1.4e-5;
      if the packet length is 2000 bytes, this amounts to a BER of 0.88e-5.

    When the PER falls below 0.9, re-transmissions are sure to happen, which will degrade the throughput.
     What is the relationship between BER and throughput when one FN's link quality is descending?
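
The figures above can be checked with a short calculation, under the assumption of independent bit errors and reading the 0.9 as the per-packet success probability: P_success = (1 - BER)^(8L), so BER = 1 - P_success^(1/(8L)). The sketch below simply evaluates this formula for the packet lengths listed above; it gives the same order of magnitude as the table, though the exact values depend on the error model assumed.

#include <math.h>
#include <stdio.h>

/* BER threshold for a target per-packet success probability, assuming
 * independent bit errors: P_success = (1 - BER)^(8 * len_bytes). */
static double ber_for_success_prob(double p_success, int len_bytes)
{
    return 1.0 - pow(p_success, 1.0 / (8.0 * len_bytes));
}

int main(void)
{
    const int lengths[] = { 10, 100, 1500, 2000 };
    for (int i = 0; i < 4; i++)
        printf("L = %4d bytes -> BER ~ %.2e\n",
               lengths[i], ber_for_success_prob(0.9, lengths[i]));
    return 0;
}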

3. See whether the MAC scheduling scheme helps to enhance throughput in the above-mentioned sensitive state.
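
The core idea behind the scheme tested here (the RSSI-based discarding mentioned in the data section) can be sketched roughly as follows. This is an illustration only, not the actual airo.c modification: the structure, the threshold value and the function names are all hypothetical.

/* Hypothetical sketch of the RSSI-based discarding idea: frames on a link
 * whose recent RSSI is below a threshold are dropped instead of being sent
 * and almost certainly re-transmitted.  Names and values are illustrative
 * only, not actual airo.c code. */

#define RSSI_DISCARD_THRESHOLD  15      /* assumed threshold, driver RSSI units */

struct link_state {
    int recent_rssi;                    /* smoothed RSSI of the peer link */
};

/* Update the smoothed RSSI with a new sample (e.g. taken on each received frame). */
static void sched_update_rssi(struct link_state *ls, int sample)
{
    ls->recent_rssi = (3 * ls->recent_rssi + sample) / 4;
}

/* Return nonzero if a frame on this link should be discarded rather than queued. */
static int sched_should_discard(const struct link_state *ls)
{
    return ls->recent_rssi < RSSI_DISCARD_THRESHOLD;
}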



Source code

The program consists of two parts: a test program in user space and a driver in kernel space. There are 3 files:
1. C program on the master test node (test_rx)
2. C program on the slave test nodes (test_tx)
3. C device driver for the Aironet 340 card (airo.c)

See Detail Design.....


Data results are listed as follows:

1. Packet throughput with one slave ---> master simplex communication (consistently good link quality, low BER)
Rx Profile in  Master node (10.0.0.1)
Tx Profile in  Slave Node 1 ( 10.0.0.5)
2. Packet throughput with 2 slaves ---> master simplex communication (consistently good link quality, low BER)
Rx Profile in  Master node  (10.0.0.1)
Tx Profile in  Slave Node 1 (10.0.0.5)
Tx Profile in Slave Node 2  (10.0.0.4)
3. Packet throughput when one slave node encounters bad link quality (high BER)
Rx Profile in  Master node  (10.0.0.1)
Tx Profile in  Slave Node 1 (10.0.0.5)
Tx Profile in Slave Node 2  (10.0.0.4)
4. Packet throughput when one slave node encounters bad link quality (high BER, but with MAC scheduling, RSSI-based discarding, ...)
Rx Profile in  Master node  (10.0.0.1)
Tx Profile in  Slave Node 1 (10.0.0.5)
Tx Profile in Slave Node 2  (10.0.0.4)