InfiniBand Network Performance Testing
1. Bandwidth test

Run on the server side:

[ibtests]# ib_send_bw -a -c UD -d mlx4_0 -i 1
------------------------------------------------------------------
                    Send BW Test
 Connection type : UD
 Inline data is used up to 1 bytes message
 local address:  LID 0x0b, QPN 0x28004c, PSN 0xfaa100
 remote address: LID 0x02, QPN 0x70004b, PSN 0xc14da8
 Mtu : 2048
------------------------------------------------------------------
 #bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]
------------------------------------------------------------------
Run on the client side:

[ibtests]# ib_send_bw -a -c UD -d mlx4_0 -i 1 10.10.11.8
------------------------------------------------------------------
                    Send BW Test
 Connection type : UD
 Inline data is used up to 1 bytes message
 local address:  LID 0x02, QPN 0x70004b, PSN 0xc14da8
 remote address: LID 0x0b, QPN 0x28004c, PSN 0xfaa100
 Mtu : 2048
------------------------------------------------------------------
 #bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]
 2          1000           7.51               7.21
 4          1000           15.29              14.19
 8          1000           30.66              30.45
 16         1000           60.33              59.95
 32         1000           119.53             113.20
 64         1000           233.75             233.16
 128        1000           414.95             413.64
 256        1000           794.90             698.20
 512        1000           1600.46            774.67
 1024       1000           2011.81            804.29
 2048       1000           2923.29            2919.91
------------------------------------------------------------------
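Starting the two ends by hand gets tedious when the test is repeated. The snippet below is a minimal sketch that drives the same ib_send_bw pair from the client, assuming passwordless ssh to the server as root and perftest installed on both sides (the log path is illustrative):

#!/bin/bash
# Sketch: run the ib_send_bw server/client pair in one go.
# Assumes passwordless ssh to $SERVER and perftest on both hosts.
SERVER=10.10.11.8      # IPoIB address of the server side
DEV=mlx4_0
PORT=1

# Start the receiver on the server in the background and give it a moment to listen.
ssh root@"$SERVER" "ib_send_bw -a -c UD -d $DEV -i $PORT" &
sleep 2

# Run the sender locally and keep a copy of the result table.
ib_send_bw -a -c UD -d "$DEV" -i "$PORT" "$SERVER" | tee /tmp/ib_send_bw.client.log

wait    # reap the server-side process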
2. Latency test

Run on the server side:

[ibtests]# ib_send_lat -a -c UD -d mlx4_0 -i 1
------------------------------------------------------------------
                    Send Latency Test
 Inline data is used up to 400 bytes message
 Connection type : UD
 local address:  LID 0x0b QPN 0x2c004c PSN 0xa1be86
 remote address: LID 0x02 QPN 0x74004b PSN 0x6ea837
------------------------------------------------------------------
 #bytes     #iterations    t_min[usec]    t_max[usec]    t_typical[usec]
 2          1000           1.41           4.45           1.43
 4          1000           1.41           3.84           1.43
 8          1000           1.41           2.75           1.43
 16         1000           1.41           3.01           1.42
 32         1000           1.49           3.92           1.50
 64         1000           1.55           3.96           1.57
 128        1000           1.70           2.58           1.71
 256        1000           2.41           5.73           2.45
 512        1000           2.82           4.07           2.90
 1024       1000           3.28           4.95           3.31
 2048       1000           4.11           11.74          4.14
------------------------------------------------------------------

Run on the client side:

[ibtests]# ib_send_lat -a -c UD -d mlx4_0 -i 2 10.10.11.8
------------------------------------------------------------------
                    Send Latency Test
 Inline data is used up to 400 bytes message
 Connection type : UD
 local address:  LID 0x02 QPN 0x74004b PSN 0x6ea837
 remote address: LID 0x0b QPN 0x2c004c PSN 0xa1be86
------------------------------------------------------------------
 #bytes     #iterations    t_min[usec]    t_max[usec]    t_typical[usec]
 2          1000           1.41           9.97           1.43
 4          1000           1.38           5.31           1.43
 8          1000           1.41           2.78           1.43
 16         1000           1.40           4.01           1.42
 32         1000           1.49           3.67           1.50
 64         1000           1.55           5.20           1.56
 128        1000           1.69           3.13           1.71
 256        1000           2.40           5.72           2.45
 512        1000           2.83           4.13           2.90
 1024       1000           3.28           4.95           3.31
 2048       1000           4.11           11.68          4.14
------------------------------------------------------------------
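When comparing latency before and after tuning, usually only the t_typical column matters. Below is a minimal sketch that filters it out of the client run, assuming the server side above is already listening:

#!/bin/bash
# Sketch: run the client-side latency test and print only size + typical latency.
# Assumes ib_send_lat is already waiting on the server (10.10.11.8 as above).
SERVER=10.10.11.8

ib_send_lat -a -c UD -d mlx4_0 -i 1 "$SERVER" \
  | awk '/^[ ]*[0-9]+[ ]+[0-9]+/ { printf "%6s bytes  t_typical = %s usec\n", $1, $5 }'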
3. Other test tools
qperf

Run on the server side:

[root@server ~]# qperf

Run on the client side:

[root@client ~]# qperf 172.26.2.41 ud_lat ud_bw rc_rdma_read_bw rc_rdma_write_bw uc_rdma_write_bw tcp_bw tcp_lat udp_bw udp_lat
ud_lat:
    latency  =  4.41 us
ud_bw:
    send_bw  =  2.63 GB/sec
    recv_bw  =  2.63 GB/sec
rc_rdma_read_bw:
    bw  =  3.31 GB/sec
rc_rdma_write_bw:
    bw  =  3.41 GB/sec
uc_rdma_write_bw:
    send_bw  =  3.4 GB/sec
    recv_bw  =  3.36 GB/sec
tcp_bw:
    bw  =  2.11 GB/sec
tcp_lat:
    latency  =  8.56 us
udp_bw:
    send_bw  =  2.84 GB/sec
    recv_bw  =  699 MB/sec
udp_lat:
    latency  =  8.03 us
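To check several nodes with the same test list, the client side can simply be looped; a small sketch, assuming the qperf daemon (qperf with no arguments) is already running on every target, with an illustrative host list:

#!/bin/bash
# Sketch: run the same qperf tests against a list of servers.
# Assumes "qperf" is already running on each target host.
TESTS="ud_lat ud_bw rc_rdma_read_bw rc_rdma_write_bw tcp_bw tcp_lat"

for host in 172.26.2.41 172.26.2.42 172.26.2.43; do
    echo "=== $host ==="
    qperf "$host" $TESTS
done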
iperf3

Run on the server side:

[root@server ~]# iperf3 -s -p 10081

Run on the client side:

[tpsa@client ~]$ iperf3 -c 172.26.2.41 -t 300 -p 10081
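A single TCP stream often cannot saturate an IPoIB link, so it is worth repeating the run with parallel streams and machine-readable output; a minimal sketch (the stream count, duration, and output path are illustrative, not from the original post):

#!/bin/bash
# Sketch: multi-stream iperf3 run against the same server, saved as JSON
# for later parsing. -P sets the number of parallel streams, -J selects JSON output.
iperf3 -c 172.26.2.41 -p 10081 -t 60 -P 4 -J > /tmp/iperf3.result.json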
4. Network tuning
Enable connected mode (the default is datagram mode; datagram mode gives lower latency, connected mode gives higher bandwidth). This roughly doubles the interface bandwidth:

echo connected > /sys/class/net/ib0/mode
# or make the setting persistent and restart the stack:
sed -i 's/SET_IPOIB_CM=.*/SET_IPOIB_CM=yes/' /etc/infiniband/openib.conf
/etc/init.d/openibd restart

System parameter tuning (CentOS 7):

systemctl status tuned.service          # check whether the tuned service is running
tuned-adm profile network-throughput    # optimize for network bandwidth
tuned-adm profile network-latency       # optimize for network latency
tuned-adm active                        # show the active profile

Stop the irqbalance service:

systemctl stop irqbalance && systemctl disable irqbalance

Find which NUMA node the IB interface is attached to:

numa_num=$(cat /sys/class/net/ib0/device/numa_node)

Pin the IB NIC interrupts to that node:

/usr/sbin/set_irq_affinity_bynode.sh $numa_num ib0

[root@server ~]$ rpm -qf /usr/sbin/set_irq_affinity_bynode.sh
mlnx-ofa_kernel-3.3-OFED.3.3.1.0.0.1.gf583963.rhel7u2.x86_64

Verify the pinning. First list the interrupt numbers used by ib0:

[root@server ~]# ls /sys/class/net/ib0/device/msi_irqs
100 102 104 55 57 59 61 63 65 67 69 71 75 77 79 81 83 85 87 89 91 93 95 97 99 101 103 54 56 58 60 62 64 66 68 70 74 76 78 80 82 84 86 88 90 92 94 96 98

Then check the smp_affinity value of one of those interrupts:

[root@server ~]# cat /proc/irq/100/smp_affinity
0000,00001000

And compare it with the default mask:

[root@server ~]# cat /proc/irq/default_smp_affinity
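A quick way to confirm the tuning took effect is to read the relevant sysfs entries back; a minimal sketch using the same interface name as above (the expected MTU value is an assumption for connected mode):

#!/bin/bash
# Sketch: verify IPoIB mode, MTU, tuned profile, and per-IRQ affinity for ib0.
echo "ipoib mode : $(cat /sys/class/net/ib0/mode)"    # expect "connected"
echo "ib0 mtu    : $(cat /sys/class/net/ib0/mtu)"     # connected mode allows a large MTU (typically 65520)
echo "tuned      : $(tuned-adm active)"

# Show the affinity mask of every interrupt owned by ib0.
for irq in $(ls /sys/class/net/ib0/device/msi_irqs); do
    printf "irq %-4s -> %s\n" "$irq" "$(cat /proc/irq/$irq/smp_affinity)"
done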
The same tuning can also be applied automatically with the mlnx_tune tool shipped with Mellanox OFED:

# mlnx_tune -h
Usage: mlnx_tune [options]

Options:
  -h, --help            show this help message and exit
  -d, --debug_info      dump system debug information without setting a profile
  -r, --report          Report HW/SW status and issues without setting a profile
  -c, --colored         Switch using colored/monochromed status reports. Only applicable with --report
  -p PROFILE, --profile=PROFILE
                        Set profile and run it. choose from:
                        ['HIGH_THROUGHPUT',
                         'IP_FORWARDING_MULTI_STREAM_THROUGHPUT',
                         'IP_FORWARDING_MULTI_STREAM_PACKET_RATE',
                         'IP_FORWARDING_SINGLE_STREAM',
                         'IP_FORWARDING_SINGLE_STREAM_0_LOSS',
                         'IP_FORWARDING_SINGLE_STREAM_SINGLE_PORT',
                         'LOW_LATENCY_VMA']
  -q, --verbosity       print debug information to the screen [default False]
  -v, --version         print tool version and exit [default False]
  -i INFO_FILE_PATH, --info_file_path=INFO_FILE_PATH
                        info_file path. [default %s]

# Show the current status:
# mlnx_tune -r

# Apply a profile:
# mlnx_tune -p HIGH_THROUGHPUT

[root@server ~]# rpm -qf `which mlnx_tune`
mlnx-ofa_kernel-3.3-OFED.3.3.1.0.0.1.gf583963.rhel7u2.x86_64
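Because mlnx_tune changes several kernel and driver settings at once, it is worth capturing its report before and after; a minimal sketch using only the options shown above (the temporary file paths are illustrative):

#!/bin/bash
# Sketch: record the mlnx_tune report before and after applying a profile,
# then diff the two so the changes are easy to review.
mlnx_tune -r > /tmp/mlnx_tune.before.txt
mlnx_tune -p HIGH_THROUGHPUT
mlnx_tune -r > /tmp/mlnx_tune.after.txt
diff -u /tmp/mlnx_tune.before.txt /tmp/mlnx_tune.after.txt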
5. Viewing interface information
[root@gz-cs-gpu-3-8 eden]# ibstat
CA 'mlx4_0'
    CA type: MT26428
    Number of ports: 1
    Firmware version: 2.9.1000
    Hardware version: b0
    Node GUID: 0x0002c9030059ddda
    System image GUID: 0x0002c9030059dddd
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 40
        Base lid: 58
        LMC: 0
        SM lid: 1
        Capability mask: 0x02510868
        Port GUID: 0x0002c9030059dddb
        Link layer: InfiniBand

[root@gz-cs-gpu-3-8 eden]# ibstatus
Infiniband device 'mlx4_0' port 1 status:
    default gid:     fe80:0000:0000:0000:0002:c903:0059:dddb
    base lid:        0x3a
    sm lid:          0x1
    state:           4: ACTIVE
    phys state:      5: LinkUp
    rate:            40 Gb/sec (4X QDR)
    link_layer:      InfiniBand
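On a larger cluster the same check is easily looped over all nodes to catch any port that is not ACTIVE or has trained at a lower rate; a minimal sketch, assuming passwordless ssh, the mlx4_0 device name used in this post, and an illustrative host list:

#!/bin/bash
# Sketch: report IB port state and rate for a list of nodes.
# Assumes ibstatus (infiniband-diags) is installed on every host.
HOSTS="node1 node2 node3"

for h in $HOSTS; do
    out=$(ssh "$h" ibstatus mlx4_0 2>/dev/null)
    state=$(echo "$out" | awk '/^[ \t]*state:/ {print $NF}')
    rate=$(echo "$out"  | awk '/rate:/ {print $2, $3}')
    echo "$h: state=$state rate=$rate"
done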
For reference, the relationship between InfiniBand link width, signaling rate, and data rate:

| InfiniBand Link | Signal Pairs | Signaling Rate | Data Rate (Full Duplex) |
| 1X-SDR | 2 | 2.5 Gbps | 2.0 Gbps |
| 4X-SDR | 8 | 10 Gbps (4 x 2.5 Gbps) | 8 Gbps (4 x 2 Gbps) |
| 12X-SDR | 24 | 30 Gbps (12 x 2.5 Gbps) | 24 Gbps (12 x 2 Gbps) |
| 1X-DDR | 2 | 5 Gbps | 4.0 Gbps |
| 4X-DDR | 8 | 20 Gbps (4 x 5 Gbps) | 16 Gbps (4 x 4 Gbps) |
| 12X-DDR | 24 | 60 Gbps (12 x 5 Gbps) | 48 Gbps (12 x 4 Gbps) |
| 1X-QDR | 2 | 10 Gbps | 8.0 Gbps |
| 4X-QDR | 8 | 40 Gbps (4 x 10 Gbps) | 32 Gbps (4 x 8 Gbps) |
| 12X-QDR | 24 | 120 Gbps (12 x 10 Gbps) | 96 Gbps (12 x 8 Gbps) |
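The data-rate column is simply the signaling rate reduced by the 8b/10b encoding used on SDR/DDR/QDR links, i.e. data rate = lanes x per-lane signaling rate x 0.8; a tiny sketch of the arithmetic (example values are for 4X-QDR):

#!/bin/bash
# Sketch: compute usable data rate from lane count and per-lane signaling rate,
# assuming 8b/10b encoding (valid for SDR/DDR/QDR).
lanes=4
per_lane_gbps=10

awk -v n="$lanes" -v s="$per_lane_gbps" \
    'BEGIN { printf "signaling: %g Gbps, data: %g Gbps\n", n*s, n*s*0.8 }'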
Reposted from: https://www.cnblogs.com/edenlong/p/10273433.html