Author: Dan Singletary [email protected]
Translator: 陳敏劍 [email protected]  Last updated: 2002-09-26
Converted to Wiki: Ping (ping 'at' pingyeh 'dot' net), 27 November 2003
This document describes how to configure a Linux router to perform bandwidth
management, making effective use of ADSL and similar links (cable modem, ISDN, etc.).
1. Introduction
The purpose of this document is to suggest a practical way to manage outbound traffic on an ADSL (or cable modem) connection.
1.1. New Versions of This Document
The latest version of this document can be found at http://www.tldp.org.
1.2. Email List
For questions and information about ADSL bandwidth management, please subscribe to the email list at: jared.sonicspike.net
1.3. Disclaimer
Neither the author, the distributors, nor any contributors to this HOWTO accept any responsibility for damage to equipment or any other loss caused by applying the methods it describes.
1.4. Copyright and License
This HOWTO is copyright by Dan Singletary:
This document is copyright 2002 by Dan Singletary, and is released under the terms of the GNU Free Documentation License, which is hereby incorporated by reference.
1.5. Feedback and Corrections
If you have questions or comments about this HOWTO, please feel free to email the author at: [email protected]
2. Background
2.1. Prerequisites
Note: although these methods have not been tested on other distributions, they should work there without much trouble. The environment used was:
Red Hat Linux 7.3
Kernel 2.4.18-5 with full QoS support (modules are fine), including the following patches (which may eventually be merged into newer kernels):
HTB queue - http://luxik.cdi.cz/~devik/qos/htb/  Note: the Mandrake (8.1, 8.2) kernels have included the HTB patch since 2.4.18-3.
IMQ device - http://luxik.cdi.cz/~patrick/imq/
iptables version 1.2.6a or newer (the version of iptables distributed with Red Hat 7.3 is missing the length module)
Note: Previous versions of this document specified a method of bandwidth control that involved patching the existing sch_prio queue. It was found later that this patch was entirely unnecessary. Regardless, the newer methods outlined in this document will give you better results (although at the writing of this document 2 kernel patches are now necessary. Happy patching.)
2.2. Layout
To keep things simple, all examples in this document assume the following layout:
                  <-- 128kbit/s    --------------   <-- 10Mbit -->
  Internet <----------------------| ADSL Modem |----------------------
                  1.5Mbit/s -->    --------------        |
                                                         | eth0
                                                         V
                                                 -----------------
                                                |                 |
                                                |  Linux Router   |
                                                |                 |
                                                 -----------------
                                                  | .. | eth1..ethN
                                                  V    V
                                                Local Network
2.3. Packet Queues
A packet queue is a container that temporarily holds data that a network device cannot send right away. Unless configured otherwise, packets are queued FIFO (first in, first out: the packet that entered the queue first is sent first).
2.3.1. The Upstream
ADSL bandwidth is asymmetric: 1.5Mbit/s downstream and 128kbit/s upstream. The Linux router talks to the ADSL modem at about 10Mbit/s. If the router's link to the local network is also about 10Mbit/s, no queue forms between the router and the local network. But packets arrive at the ADSL modem at 10Mbit/s while the modem can only send them to the Internet at 128kbit/s, so a queue builds up in the modem; when the modem can no longer cope, packets are dropped. TCP is designed to handle exactly this situation: it adjusts its transmission window to make the best possible use of the available bandwidth.
TCP exploits queues to fill the available bandwidth, but a large FIFO queue increases the time every packet spends in transit.
Another queue discipline, somewhat similar to FIFO, is the n-band priority queue. Instead of a single queue, it sorts packets by class into several FIFO queues, each of which is assigned a priority; packets are always dequeued from the highest-priority queue that contains any. With this discipline, when an FTP upload and a telnet session send packets at the same time, the telnet packets get the higher priority: a lone telnet packet is sent immediately.
Linux uses a newer queue discipline: the Hierarchical Token Bucket (HTB). It resembles the n-band priority queue, but adds the ability to limit the rate of traffic in each class; on top of that, it can build new classes of traffic beneath an existing class. For more information see: http://www.lartc.org/
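As a sketch of what a small HTB hierarchy looks like in tc commands (the device name and rates here are hypothetical, and the commands require a kernel and tc built with HTB support):

```shell
# Root HTB qdisc on eth0; unclassified traffic falls into class 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20
# One parent class caps the total rate...
tc class add dev eth0 parent 1: classid 1:1 htb rate 90kbit
# ...and two child classes share it, each guaranteed half but allowed
# to borrow up to the full rate; prio 0 is dequeued ahead of prio 1.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 45kbit ceil 90kbit prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 45kbit ceil 90kbit prio 1
```

The `rate`/`ceil` pair is what distinguishes HTB from a plain n-band queue: each class has a guaranteed minimum and may borrow unused bandwidth from its parent.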
2.3.2. The Downstream
Traffic arriving from the Internet at your ADSL modem is queued much like outbound traffic. However, the queue sits at your ISP, so you probably cannot directly control how packets are queued there or which types of traffic are given priority. The only way to keep latency down here is to arrange for people not to send you data too fast. Unfortunately, you cannot directly control the rate at which packets arrive, but there are ways to slow the sender down:
Intentionally drop inbound packets. TCP is designed to take full advantage of the available bandwidth while also avoiding congestion of the link. This means that during a bulk data transfer TCP will send more and more data until eventually a packet is dropped. TCP detects this and reduces its transmission window. This cycle continues throughout the transfer and assures data is moved as quickly as possible.
Manipulate the advertised receive window. During a TCP transfer, the receiver sends back a continuous stream of acknowledgment (ACK) packets. Included in the ACK packets is a window size advertisement which states the maximum amount of unacknowledged data the sender should send. By manipulating the window size of outbound ACK packets we can intentionally slow down the sender. At the moment there is no (free) implementation for this type of flow-control on Linux (however I may be working on one!).
3. How It Works
Several steps together optimize the upstream bandwidth. The first is to limit the rate at which the Linux router sends to the ADSL modem to slightly below the modem's rate to the Internet, so that the packet queue forms on the Linux router rather than in the modem.
Second, set up priority queuing on the router and organize the queues.
We judge queue priority by how well it serves telnet, multi-player games and other interactive applications.
By using HTB for queuing we can configure rate limiting and priority at the same time, without the priority classes starving one another.
Third, configure the firewall to classify packets by setting fwmark.
3.1. Throttling Outbound Traffic with Linux HTB
We will use HTB to limit the rate at which packets are delivered to the ADSL modem. To keep latency down we must ensure that not even a single packet is ever queued in the modem itself.
Note: previous claims in this section (originally named N-band priority queuing) were later found to be incorrect. It actually WAS possible to classify packets into the individual bands of the priority queue using only the fwmark field; however, this was poorly documented at the writing of version 0.1 of this document.
3.2. Priority Queuing with HTB
At this point we have not actually improved performance at all; we have merely moved the queue from the ADSL modem onto the Linux router. Indeed, with the current default queue of 100 packets the results would probably be dreadful, but this is only a momentary crisis.
Each sibling class in an HTB queue can be assigned a priority, so different types of traffic go into different classes. Because we can also give each class a minimum guaranteed rate, we gain control over the order in which packets are dequeued and sent. HTB does this well, without letting one priority starve another.
Once the classes are set up, we use filters to place traffic into them. There are several ways to do this, but this document uses the familiar iptables/ipchains approach: we will set up iptables rules that direct different types of traffic into different classes.
3.3. Classifying Outbound Traffic with iptables
Note: originally this document used ipchains to classify packets. The newer iptables is now used.
Here is a simple outline of how outbound packets, which all start with a mark of 0x00, are classified into four different classes:
Mark ALL packets as 0x03, the lowest-priority class.
Mark ICMP packets as 0x00; we want ping to show the latency enjoyed by the highest-priority packets.
Mark all packets destined for port 25 (SMTP) as 0x03. If someone sends an e-mail with a large attachment, we don't want it to swamp our interactive traffic.
Mark all packets destined for a game server as 0x02. This will give games decent latency but will keep them from swamping out the system applications that require low latency.
Mark all packets to destination ports 1024 or below as 0x01, giving priority to system services such as telnet and SSH. The FTP ports also fall within this range.
Mark any small packets as 0x02. Outbound ACK packets from inbound downloads should be sent promptly to assure efficient downloads. This is possible using the iptables length module.
Of course, all of this can be adjusted to your own needs.
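The outline above might be expressed with iptables rules along these lines (the chain name, interface, and the game server address/port are hypothetical; since later MARK rules overwrite earlier ones, more specific rules come last):

```shell
iptables -t mangle -N CLASSIFY-OUT                     # hypothetical chain name
iptables -t mangle -I POSTROUTING -o eth0 -j CLASSIFY-OUT
iptables -t mangle -A CLASSIFY-OUT -j MARK --set-mark 0x03                       # everything: lowest prio
iptables -t mangle -A CLASSIFY-OUT -p tcp --dport 0:1024 -j MARK --set-mark 0x01 # system services
iptables -t mangle -A CLASSIFY-OUT -p tcp --dport 25 -j MARK --set-mark 0x03     # bulk e-mail back to lowest
iptables -t mangle -A CLASSIFY-OUT -p udp -d 192.0.2.1 --dport 27960 \
         -j MARK --set-mark 0x02                                                 # hypothetical game server
iptables -t mangle -A CLASSIFY-OUT -p tcp -m length --length :64 \
         -j MARK --set-mark 0x02                                                 # small packets, mostly ACKs
iptables -t mangle -A CLASSIFY-OUT -p icmp -j MARK --set-mark 0x00               # ping: highest prio
```

The myshaper script later in this document uses the same technique with a different mark numbering.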
3.4. A Few More Tweaks
There are at least two more things you can do to reduce latency. First, lower the maximum transmission unit (MTU) below the default of 1500 bytes. Lowering it shortens the average time a priority packet must wait behind a full-size packet already being sent, at the cost of slightly lower usable throughput, since every packet carries 40 bytes of IP and TCP header overhead. Second, shorten the queue length from the default of 100 packets, which with a 1500-byte MTU could otherwise take as long as 10 seconds to drain over ADSL.
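Both tweaks are one-liners; the device name and exact values below are only plausible examples (the script later in this document uses similar ones):

```shell
ip link set dev eth0 mtu 1000   # smaller MTU: lower latency, slightly more header overhead
ip link set dev eth0 qlen 30    # shorter queue: roughly 2s of low-priority backlog at ~128kbit/s
```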
3.5. Attempting to Throttle Inbound Traffic
By using the Intermediate Queuing Device (IMQ), we can run all incoming packets through a queue in just the same way as outbound packets. Packet priority is much simpler in this case: non-TCP traffic is marked 0x00 and TCP traffic 0x01, with small TCP packets (mostly ACKs) also marked 0x00. Class 0x00 gets a standard FIFO queue, while class 0x01 gets a Random Early Drop (RED) queue. RED begins slowing senders down by dropping packets before the queue actually gets out of control (i.e. before it overflows). We'll also rate-limit both classes to some maximum inbound rate which is less than your true inbound speed over the ADSL modem.
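A minimal sketch of the IMQ hookup (this assumes the IMQ kernel patch and the iptables IMQ target are present; the rates and RED parameters are illustrative, borrowed from the full script below):

```shell
modprobe imq numdevs=1                  # create the imq0 pseudo-device
ip link set imq0 up
# HTB on imq0 caps the total accepted inbound rate below the line's real speed.
tc qdisc add dev imq0 handle 1: root htb default 21
tc class add dev imq0 parent 1: classid 1:1 htb rate 700kbit
tc class add dev imq0 parent 1:1 classid 1:21 htb rate 700kbit prio 1
# RED starts dropping TCP before the queue overflows.
tc qdisc add dev imq0 parent 1:21 handle 21: red limit 1000000 min 5000 max 100000 avpkt 1000 burst 50
# Divert inbound packets through imq0.
iptables -t mangle -A PREROUTING -i eth0 -j IMQ
```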
3.6. Why Inbound Throttling Isn't So Great
We have to limit inbound traffic to keep the ISP's queue, which can buffer around 5 seconds of data, from filling up. The problem is that right now the only way to do this is to drop packets, and those packets have already consumed some of the ADSL modem's bandwidth before we drop them; dropped packets get retransmitted, eating still more bandwidth. What we are limiting is the rate of packets we accept and pass on to the local network; the actual inbound rate is somewhat higher than that because of the packets we drop. So we must set the inbound limit well below what the ADSL modem can actually achieve: in practice I have to limit my 1.5Mbit/s downstream to 700kbit/s to keep latency acceptable with 5 concurrent downloads. The more TCP sessions there are, the more bandwidth is wasted on dropped packets, and the further the usable rate falls below the limit.
A better way to throttle inbound TCP traffic would be to manipulate the TCP window, but that is beyond the scope of this document (and I know of no free implementation).
4. Implementation
4.1. Caveats
Limiting the rate at which data is sent to the DSL modem is not as simple as it seems. Most DSL modems are really Ethernet bridges between your ISP's gateway and your linux box, and most of them use ATM as the link layer. ATM always sends data in 53-byte cells: 5 bytes of header and 48 bytes of payload. Even if you send a single byte of data, a whole 53-byte cell is consumed. Consider a TCP ACK packet: 0 bytes of data + 20 bytes TCP header + 20 bytes IP header + 18 bytes Ethernet header. Although the packet carries only 40 bytes of Ethernet payload (the TCP and IP headers), the minimum Ethernet payload is 46 bytes, so the other 6 bytes are padding. The Ethernet packet plus header is therefore 18 + 46 = 64 bytes. Under ATM's rules, sending 64 bytes takes two cells occupying 106 bytes of bandwidth, so every TCP ACK wastes 42 bytes. This would be fine if Linux accounted for the encapsulation the DSL modem uses, but Linux counts only the TCP header, IP header and 14 bytes of MAC address (it does not count the 4-byte CRC, which is handled at the hardware level). Linux accounts neither for the 46-byte Ethernet payload minimum nor for the fixed ATM cell size.
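The overhead arithmetic above can be checked with a few lines of shell (the frame and cell sizes are exactly those described in the text):

```shell
payload=40                            # TCP ACK: 20B TCP + 20B IP headers, 0B data
if [ "$payload" -lt 46 ]; then        # Ethernet pads payloads up to a 46B minimum
    eth_payload=46
else
    eth_payload=$payload
fi
frame=$(( eth_payload + 18 ))         # + 18B Ethernet header = bytes on the wire
cells=$(( (frame + 47) / 48 ))        # ATM cells needed (48B payload per 53B cell)
atm_bytes=$(( cells * 53 ))           # bandwidth actually consumed on the ATM link
wasted=$(( atm_bytes - frame ))
echo "$frame bytes become $cells cells = $atm_bytes bytes ($wasted wasted)"
```

For a TCP ACK this reports a 64-byte frame becoming two cells, 106 bytes, matching the 42 wasted bytes claimed above.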
All of this means that you must cap your outbound bandwidth somewhat below the nominal rate, and experiment to find the value that suits your line. If latency jumps to 3 seconds or more whenever you upload a large file, Linux's under-accounting of consumed bandwidth is the likely cause.
I have been working on a solution to this problem for a few months and have almost settled on a solution that I will soon release to the public for further testing. The solution involves using a user-space queue instead of linux's QoS to rate-limit packets. I've basically implemented a simple HTB queue using linux user-space queues. This solution (so far) has been able to regulate outbound traffic SO WELL that even during a massive bulk download (several streams) and bulk upload (gnutella, several streams) the latency PEAKS at 400ms over my nominal no-traffic latency of about 15ms. For more information on this QoS method, subscribe to the email list for updates or check back on updates to this HOWTO.
4.2. Script: myshaper
Below is the script I use to shape traffic on my own Linux router. Outbound traffic is placed into one of 7 queues depending on type. Inbound traffic is placed into one of two queues, with TCP data (dropped if the inbound rate exceeds the limit) in the lowest-priority one. The rates given in the script seem to work well for my setup; your results may differ.
This script is based on the ADSL WonderShaper; see the LARTC website.
#!/bin/bash
#
# myshaper - DSL/Cable modem outbound traffic shaper and prioritizer.
#            Based on the ADSL/Cable wondershaper (www.lartc.org)
#
# Written by Dan Singletary (8/7/02)
#
# NOTE!! - This script assumes your kernel has been patched with the
#          appropriate HTB queue and IMQ patches available here:
#          (subnote: future kernels may not require patching)
#
#          http://luxik.cdi.cz/~devik/qos/htb/
#          http://luxik.cdi.cz/~patrick/imq/
#
# Configuration options for myshaper:
#  DEV    - set to ethX that connects to DSL/Cable Modem
#  RATEUP - set this to slightly lower than your
#           outbound bandwidth on the DSL/Cable Modem.
#           I have a 1500/128 DSL line and setting
#           RATEUP=90 works well for my 128kbps upstream.
#           However, your mileage may vary.
#  RATEDN - set this to slightly lower than your
#           inbound bandwidth on the DSL/Cable Modem.
#
#
#  Theory on using imq to shape inbound traffic:
#
#     It's impossible to directly limit the rate of data that will
#  be sent to you by other hosts on the internet.  In order to shape
#  the inbound traffic rate, we have to rely on the congestion avoidance
#  algorithms in TCP.  Because of this, WE CAN ONLY ATTEMPT TO SHAPE
#  INBOUND TRAFFIC ON TCP CONNECTIONS.  This means that any traffic that
#  is not tcp should be placed in the high-prio class, since dropping
#  a non-tcp packet will most likely result in a retransmit which will
#  do nothing but unnecessarily consume bandwidth.
#     We attempt to shape inbound TCP traffic by dropping tcp packets
#  when they overflow the HTB queue which will only pass them on at
#  a certain rate (RATEDN) which is slightly lower than the actual
#  capability of the inbound device.  By dropping TCP packets that
#  are over-rate, we are simulating the same packets getting dropped
#  due to a queue-overflow on our ISP's side.  The advantage of this
#  is that our ISP's queue will never fill because TCP will slow its
#  transmission rate in response to the dropped packets in the assumption
#  that it has filled the ISP's queue, when in reality it has not.
#     The advantage of using a priority-based queuing discipline is
#  that we can specifically choose NOT to drop certain types of packets
#  that we place in the higher priority buckets (ssh, telnet, etc).  This
#  is because packets will always be dequeued from the lowest priority class
#  with the stipulation that packets will still be dequeued from every
#  class fairly at a minimum rate (in this script, each bucket will deliver
#  at least its fair share of 1/7 of the bandwidth).
#
#  Reiterating main points:
#   * Dropping a tcp packet on a connection will lead to a slower rate
#     of reception for that connection due to the congestion avoidance algorithm.
#   * We gain nothing from dropping non-TCP packets.  In fact, if they
#     were important they would probably be retransmitted anyways so we want to
#     try to never drop these packets.  This means that saturated TCP connections
#     will not negatively affect protocols that don't have a built-in retransmit like TCP.
#   * Slowing down incoming TCP connections such that the total inbound rate is less
#     than the true capability of the device (ADSL/Cable Modem) SHOULD result in little
#     to no packets being queued on the ISP's side (DSLAM, cable concentrator, etc).  Since
#     these ISP queues have been observed to queue 4 seconds of data at 1500Kbps or 6 megabits
#     of data, having no packets queued there will mean lower latency.
#
#  Caveats (questions posed before testing):
#   * Will limiting inbound traffic in this fashion result in poor bulk TCP performance?
#     - Preliminary answer is no!  Seems that by prioritizing ACK packets (small 64b)
#       we maximize throughput by not wasting bandwidth on retransmitted packets
#       that we already have.
#

# NOTE: The following configuration works well for my
# setup: 1.5M/128K ADSL via Pacific Bell Internet (SBC Global Services)

DEV=eth0
RATEUP=90
RATEDN=700 # Note that this is significantly lower than the capacity of 1500.
           # Because of this, you may not want to bother limiting inbound traffic
           # until a better implementation such as TCP window manipulation can be used.

#
# End Configuration Options
#

if [ "$1" = "status" ]
then
        echo "[qdisc]"
        tc -s qdisc show dev $DEV
        tc -s qdisc show dev imq0
        echo "[class]"
        tc -s class show dev $DEV
        tc -s class show dev imq0
        echo "[filter]"
        tc -s filter show dev $DEV
        tc -s filter show dev imq0
        echo "[iptables]"
        iptables -t mangle -L MYSHAPER-OUT -v -x 2> /dev/null
        iptables -t mangle -L MYSHAPER-IN -v -x 2> /dev/null
        exit
fi

# Reset everything to a known state (cleared)
tc qdisc del dev $DEV root 2> /dev/null > /dev/null
tc qdisc del dev imq0 root 2> /dev/null > /dev/null
iptables -t mangle -D POSTROUTING -o $DEV -j MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -F MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -X MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -D PREROUTING -i $DEV -j MYSHAPER-IN 2> /dev/null > /dev/null
iptables -t mangle -F MYSHAPER-IN 2> /dev/null > /dev/null
iptables -t mangle -X MYSHAPER-IN 2> /dev/null > /dev/null
ip link set imq0 down 2> /dev/null > /dev/null
rmmod imq 2> /dev/null > /dev/null

if [ "$1" = "stop" ]
then
        echo "Shaping removed on $DEV."
        exit
fi

###########################################################
#
# Outbound Shaping (limits total bandwidth to RATEUP)

# set queue size to give latency of about 2 seconds on low-prio packets
ip link set dev $DEV qlen 30

# changes mtu on the outbound device.  Lowering the mtu will result
# in lower latency but will also cause slightly lower throughput due
# to IP and TCP protocol overhead.
ip link set dev $DEV mtu 1000

# add HTB root qdisc
tc qdisc add dev $DEV root handle 1: htb default 26

# add main rate limit classes
tc class add dev $DEV parent 1: classid 1:1 htb rate ${RATEUP}kbit

# add leaf classes - We grant each class at LEAST its fair share of bandwidth.
#                    this way no class will ever be starved by another class.  Each
#                    class is also permitted to consume all of the available bandwidth
#                    if no other classes are in use.
tc class add dev $DEV parent 1:1 classid 1:20 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 0
tc class add dev $DEV parent 1:1 classid 1:21 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 1
tc class add dev $DEV parent 1:1 classid 1:22 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 2
tc class add dev $DEV parent 1:1 classid 1:23 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 3
tc class add dev $DEV parent 1:1 classid 1:24 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 4
tc class add dev $DEV parent 1:1 classid 1:25 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 5
tc class add dev $DEV parent 1:1 classid 1:26 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 6

# attach qdisc to leaf classes - here we attach SFQ to each priority class.  SFQ ensures that
#                                within each class connections will be treated (almost) fairly.
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $DEV parent 1:21 handle 21: sfq perturb 10
tc qdisc add dev $DEV parent 1:22 handle 22: sfq perturb 10
tc qdisc add dev $DEV parent 1:23 handle 23: sfq perturb 10
tc qdisc add dev $DEV parent 1:24 handle 24: sfq perturb 10
tc qdisc add dev $DEV parent 1:25 handle 25: sfq perturb 10
tc qdisc add dev $DEV parent 1:26 handle 26: sfq perturb 10

# filter traffic into classes by fwmark - here we direct traffic into priority class according to
#                                         the fwmark set on the packet (we set fwmark with iptables
#                                         later).  Note that above we've set the default priority
#                                         class to 1:26 so unmarked packets (or packets marked with
#                                         unfamiliar IDs) will be defaulted to the lowest priority
#                                         class.
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:20
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 21 fw flowid 1:21
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 22 fw flowid 1:22
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 23 fw flowid 1:23
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 24 fw flowid 1:24
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 25 fw flowid 1:25
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 26 fw flowid 1:26

# add MYSHAPER-OUT chain to the mangle table in iptables - this sets up the table we'll use
#                                                          to filter and mark packets.
iptables -t mangle -N MYSHAPER-OUT
iptables -t mangle -I POSTROUTING -o $DEV -j MYSHAPER-OUT

# add fwmark entries to classify different types of traffic - Set fwmark from 20-26 according to
#                                                             desired class. 20 is highest prio.
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport 0:1024 -j MARK --set-mark 23 # Default for low port traffic
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 0:1024 -j MARK --set-mark 23 # ""
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 20 -j MARK --set-mark 26     # ftp-data port, low prio
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 5190 -j MARK --set-mark 23   # aol instant messenger
iptables -t mangle -A MYSHAPER-OUT -p icmp -j MARK --set-mark 20               # ICMP (ping) - high prio, impress friends
iptables -t mangle -A MYSHAPER-OUT -p udp -j MARK --set-mark 21                # DNS name resolution (small packets)
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport ssh -j MARK --set-mark 22    # secure shell
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport ssh -j MARK --set-mark 22    # secure shell
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport telnet -j MARK --set-mark 22 # telnet (ew...)
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport telnet -j MARK --set-mark 22 # telnet (ew...)
iptables -t mangle -A MYSHAPER-OUT -p ipv6-crypt -j MARK --set-mark 24         # IPSec - we don't know what the payload is though...
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport http -j MARK --set-mark 25   # Local web server
iptables -t mangle -A MYSHAPER-OUT -p tcp -m length --length :64 -j MARK --set-mark 21 # small packets (probably just ACKs)
iptables -t mangle -A MYSHAPER-OUT -m mark --mark 0 -j MARK --set-mark 26      # redundant- mark any unmarked packets as 26 (low prio)

# Done with outbound shaping
#
####################################################

echo "Outbound shaping added to $DEV.  Rate: ${RATEUP}Kbit/sec."

# uncomment following line if you only want upstream shaping.
# exit

####################################################
#
# Inbound Shaping (limits total bandwidth to RATEDN)

# make sure imq module is loaded
modprobe imq numdevs=1
ip link set imq0 up

# add qdisc - default low-prio class 1:21
tc qdisc add dev imq0 handle 1: root htb default 21

# add main rate limit classes
tc class add dev imq0 parent 1: classid 1:1 htb rate ${RATEDN}kbit

# add leaf classes - TCP traffic in 21, non TCP traffic in 20
#
tc class add dev imq0 parent 1:1 classid 1:20 htb rate $[$RATEDN/2]kbit ceil ${RATEDN}kbit prio 0
tc class add dev imq0 parent 1:1 classid 1:21 htb rate $[$RATEDN/2]kbit ceil ${RATEDN}kbit prio 1

# attach qdisc to leaf classes - here we attach SFQ to the high-prio class and RED to
#                                the TCP class.  SFQ ensures that within the class
#                                connections will be treated (almost) fairly.
tc qdisc add dev imq0 parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev imq0 parent 1:21 handle 21: red limit 1000000 min 5000 max 100000 avpkt 1000 burst 50

# filter traffic into classes by fwmark - here we direct traffic into priority class according to
#                                         the fwmark set on the packet (we set fwmark with iptables
#                                         later).  Note that above we've set the default priority
#                                         class to 1:21 so unmarked packets (or packets marked with
#                                         unfamiliar IDs) will be defaulted to the lowest priority
#                                         class.
tc filter add dev imq0 parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:20
tc filter add dev imq0 parent 1:0 prio 0 protocol ip handle 21 fw flowid 1:21

# add MYSHAPER-IN chain to the mangle table in iptables - this sets up the table we'll use
#                                                         to filter and mark packets.
iptables -t mangle -N MYSHAPER-IN
iptables -t mangle -I PREROUTING -i $DEV -j MYSHAPER-IN

# add fwmark entries to classify different types of traffic - Set fwmark to 20 or 21 according to
#                                                             desired class. 20 is highest prio.
iptables -t mangle -A MYSHAPER-IN -p ! tcp -j MARK --set-mark 20               # Set non-tcp packets to highest priority
iptables -t mangle -A MYSHAPER-IN -p tcp -m length --length :64 -j MARK --set-mark 20 # short TCP packets are probably ACKs
iptables -t mangle -A MYSHAPER-IN -p tcp --dport ssh -j MARK --set-mark 20     # secure shell
iptables -t mangle -A MYSHAPER-IN -p tcp --sport ssh -j MARK --set-mark 20     # secure shell
iptables -t mangle -A MYSHAPER-IN -p tcp --dport telnet -j MARK --set-mark 20  # telnet (ew...)
iptables -t mangle -A MYSHAPER-IN -p tcp --sport telnet -j MARK --set-mark 20  # telnet (ew...)
iptables -t mangle -A MYSHAPER-IN -m mark --mark 0 -j MARK --set-mark 21       # redundant- mark any unmarked packets as 21 (low prio)

# finally, instruct these packets to go through the imq0 we set up above
iptables -t mangle -A MYSHAPER-IN -j IMQ

# Done with inbound shaping
#
####################################################

echo "Inbound shaping added to $DEV.  Rate: ${RATEDN}Kbit/sec."
5. Testing
The easiest test is to saturate the upstream with low-priority traffic. What counts as low priority depends on your classification; suppose ping and telnet have been given higher priority (a lower fwmark). If you saturate the upstream with an FTP upload, ping times to your gateway (on the other side of the DSL line) should increase only slightly compared with the unloaded link: staying under 100ms is typical, depending on your configuration. If pings suddenly take 1-2 seconds longer, something is wrong.
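One way to run this test from the router (the remote host name, port, and gateway below are placeholders for your own machines; the bulk stream goes to a high TCP port so it lands in a low-priority class):

```shell
# Saturate the upstream in the background with a bulk transfer to a high port...
dd if=/dev/zero bs=1k count=2048 | nc remote.example.com 5000 &
# ...then watch high-priority latency; expect only a modest rise over the idle value.
ping -c 20 gateway.example.com
wait
```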
6. OK It Works!! Now What?
Now pull out every trick you can think of and enjoy the benefits!
Now that you've successfully started to manage your bandwidth, you should start thinking of ways to use it. After all, you're probably paying for it!
Use a Gnutella client and SHARE YOUR FILES without adversely affecting your network performance
Run a web server without having web page hits slow you down in Quake