A Case Study of a Smart NIC (Mellanox) Failure
Background: this issue was reproduced on CentOS 7.6.1810. Smart NICs are now standard equipment on many cloud servers, and at OPPO they are mainly used for VPC and similar scenarios. As features keep being added, the smart NIC code keeps growing more complex, and driver bugs have always made up a large share of kernel bugs. When a problem like this shows up, kernel developers who are not familiar with the driver code can find it hard to track down. The background knowledge involved includes dma_pool, dma_page, net_device, the mlx5_core_dev device, device unloading, and UAF (use-after-free) issues. In addition, as far as we can tell this bug is still not fixed in the latest Linux baseline. We single this case out because the UAF involved is fairly unusual.
Below is how we investigated and resolved the problem.
1. Fault symptoms
The OPPO Cloud kernel team received a connectivity alarm report and found that a machine had reset:
UPTIME: 00:04:16            ------------- very short uptime
LOAD AVERAGE: 0.25, 0.23, 0.11
TASKS: 2027
RELEASE: 3.10.0-1062.18.1.el7.x86_64
MEMORY: 127.6 GB
PANIC: "BUG: unable to handle kernel NULL pointer dereference at (null)"
PID: 23283
COMMAND: "spider-agent"
TASK: ffff9d1fbb090000  [THREAD_INFO: ffff9d1f9a0d8000]
CPU: 0
STATE: TASK_RUNNING (PANIC)

crash> bt
PID: 23283  TASK: ffff9d1fbb090000  CPU: 0  COMMAND: "spider-agent"
 #0 [ffff9d1f9a0db650] machine_kexec at ffffffffb6665b34
 #1 [ffff9d1f9a0db6b0] __crash_kexec at ffffffffb6722592
 #2 [ffff9d1f9a0db780] crash_kexec at ffffffffb6722680
 #3 [ffff9d1f9a0db798] oops_end at ffffffffb6d85798
 #4 [ffff9d1f9a0db7c0] no_context at ffffffffb6675bb4
 #5 [ffff9d1f9a0db810] __bad_area_nosemaphore at ffffffffb6675e82
 #6 [ffff9d1f9a0db860] bad_area_nosemaphore at ffffffffb6675fa4
 #7 [ffff9d1f9a0db870] __do_page_fault at ffffffffb6d88750
 #8 [ffff9d1f9a0db8e0] do_page_fault at ffffffffb6d88975
 #9 [ffff9d1f9a0db910] page_fault at ffffffffb6d84778
    [exception RIP: dma_pool_alloc+427]              //caq: the exception address
    RIP: ffffffffb680efab  RSP: ffff9d1f9a0db9c8  RFLAGS: 00010046
    RAX: 0000000000000246  RBX: ffff9d0fa45f4c80  RCX: 0000000000001000
    RDX: 0000000000000000  RSI: 0000000000000246  RDI: ffff9d0fa45f4c10
    RBP: ffff9d1f9a0dba20  R8:  000000000001f080  R9:  ffff9d00ffc07c00
    R10: ffffffffc03e10c4  R11: ffffffffb67dd6fd  R12: 00000000000080d0
    R13: ffff9d0fa45f4c10  R14: ffff9d0fa45f4c00  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#10 [ffff9d1f9a0dba28] mlx5_alloc_cmd_msg at ffffffffc03e10e3 [mlx5_core]    // the module involved
#11 [ffff9d1f9a0dba78] cmd_exec at ffffffffc03e3c92 [mlx5_core]
#12 [ffff9d1f9a0dbb18] mlx5_cmd_exec at ffffffffc03e442b [mlx5_core]
#13 [ffff9d1f9a0dbb48] mlx5_core_access_reg at ffffffffc03ee354 [mlx5_core]
#14 [ffff9d1f9a0dbba0] mlx5_query_port_ptys at ffffffffc03ee411 [mlx5_core]
#15 [ffff9d1f9a0dbc10] mlx5e_get_link_ksettings at ffffffffc0413035 [mlx5_core]
#16 [ffff9d1f9a0dbce8] __ethtool_get_link_ksettings at ffffffffb6c56d06
#17 [ffff9d1f9a0dbd48] speed_show at ffffffffb6c705b8
#18 [ffff9d1f9a0dbdd8] dev_attr_show at ffffffffb6ab1643
#19 [ffff9d1f9a0dbdf8] sysfs_kf_seq_show at ffffffffb68d709f
#20 [ffff9d1f9a0dbe18] kernfs_seq_show at ffffffffb68d57d6
#21 [ffff9d1f9a0dbe28] seq_read at ffffffffb6872a30
#22 [ffff9d1f9a0dbe98] kernfs_fop_read at ffffffffb68d6125
#23 [ffff9d1f9a0dbed8] vfs_read at ffffffffb684a8ff
#24 [ffff9d1f9a0dbf08] sys_read at ffffffffb684b7bf
#25 [ffff9d1f9a0dbf50] system_call_fastpath at ffffffffb6d8dede
    RIP: 00000000004a5030  RSP: 000000c001099378  RFLAGS: 00000212
    RAX: 0000000000000000  RBX: 000000c000040000  RCX: ffffffffffffffff
    RDX: 000000000000000a  RSI: 000000c00109976e  RDI: 000000000000000d   --- fd number of the file being read
    RBP: 000000c001099640  R8:  0000000000000000  R9:  0000000000000000
    R10: 0000000000000000  R11: 0000000000000206  R12: 000000000000000c
    R13: 0000000000000032  R14: 0000000000f710c4  R15: 0000000000000000
    ORIG_RAX: 0000000000000000  CS: 0033  SS: 002b

From the stack, some process reading a file triggered a NULL pointer dereference in kernel mode.
2. Analysis of the fault
From the stack trace:
1. At the time, the process was reading the file with fd 13, which can be seen from the value of RDI (0xd).
2. speed_show and __ethtool_get_link_ksettings show that it was reading the NIC's speed value.
Let's look at which file had been opened.
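One way to map the fd back to a file in the vmcore is crash's files command, which lists a task's open file descriptors, for example:

crash> files 23283

Here fd 13 resolved to the speed attribute under the sysfs directory of one of the mlx5 ports (p3p2, as the analysis below will show).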
Note the correspondence between the PCI ID and the NIC name above; it will be used later.
Opening the file and reading speed should itself be a very common flow.
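For reference, the sysfs read path looks roughly like the sketch below; it is simplified from the upstream speed_show() in net/core/net-sysfs.c, and the details differ slightly between kernel versions:

/* simplified sketch of speed_show(), net/core/net-sysfs.c */
static ssize_t speed_show(struct device *dev,
			  struct device_attribute *attr, char *buf)
{
	struct net_device *netdev = to_net_dev(dev);
	int ret = -EINVAL;

	if (!rtnl_trylock())
		return restart_syscall();

	/* only __LINK_STATE_START is checked, via netif_running() */
	if (netif_running(netdev)) {
		struct ethtool_link_ksettings cmd;

		if (!__ethtool_get_link_ksettings(netdev, &cmd))
			ret = sprintf(buf, "%d\n", cmd.base.speed);
	}
	rtnl_unlock();

	return ret;
}

The detail worth remembering for later is that this path only checks netif_running(), i.e. __LINK_STATE_START; nothing here checks whether the device has already been detached.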
Next, starting from exception RIP: dma_pool_alloc+427, let's analyze further why a NULL pointer dereference was triggered.
The expanded stack is as follows:
From the stack, the corresponding mlx5_core_dev is ffff9d0fa3c800c0.
crash> mlx5_core_dev.cmd ffff9d0fa3c800c0 -xo
struct mlx5_core_dev {
  [ffff9d0fa3c80138] struct mlx5_cmd cmd;
}
crash> mlx5_cmd.pool ffff9d0fa3c80138
  pool = 0xffff9d0fa45f4c00      ------ this is the dma_pool that driver developers deal with all the time

The offending line of code is:
crash> dis -l dma_pool_alloc+427 -B 5
/usr/src/debug/kernel-3.10.0-1062.18.1.el7/linux-3.10.0-1062.18.1.el7.x86_64/mm/dmapool.c: 334
0xffffffffb680efab <dma_pool_alloc+427>:        mov (%r15),%ecx

And R15, as the register dump above shows, is indeed NULL.

305 void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
306                      dma_addr_t *handle)
307 {
...
315         spin_lock_irqsave(&pool->lock, flags);
316         list_for_each_entry(page, &pool->page_list, page_list) {
317                 if (page->offset < pool->allocation)   ---//caq: this condition holds here
318                         goto ready;                    //caq: jump to ready
319         }
320
321         /* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
322         spin_unlock_irqrestore(&pool->lock, flags);
323
324         page = pool_alloc_page(pool, mem_flags & (~__GFP_ZERO));
325         if (!page)
326                 return NULL;
327
328         spin_lock_irqsave(&pool->lock, flags);
329
330         list_add(&page->page_list, &pool->page_list);
331  ready:
332         page->in_use++;                                //caq: mark the page as in use
333         offset = page->offset;                         // continue from where the last allocation left off
334         page->offset = *(int *)(page->vaddr + offset); //caq: the offending line
...
}

From the code above, page->vaddr must be NULL (with offset equal to 0) for a NULL pointer to be dereferenced. The page comes from one of two places:
the first is taking an existing page from the pool's page_list;
the second is allocating one on the fly via pool_alloc_page, which of course then links it into the pool's page_list as well.
Let's take a look at this page_list.
crash> dma_pool ffff9d0fa45f4c00 -x
struct dma_pool {
  page_list = {
    next = 0xffff9d0fa45f4c80,
    prev = 0xffff9d0fa45f4c00
  },
  lock = {{rlock = {raw_lock = {val = {counter = 0x1}}}}},
  size = 0x400,
  dev = 0xffff9d1fbddec098,
  allocation = 0x1000,
  boundary = 0x1000,
  name = "mlx5_cmd\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000",
  pools = {
    next = 0xdead000000000100,
    prev = 0xdead000000000200
  }
}
crash> list dma_pool.page_list -H 0xffff9d0fa45f4c00 -s dma_page.offset,vaddr
ffff9d0fa45f4c80
  offset = 0
  vaddr = 0x0
ffff9d0fa45f4d00
  offset = 0
  vaddr = 0x0

From the logic of dma_pool_alloc, pool->page_list is indeed not empty, and the condition
if (page->offset < pool->allocation) holds, so the first page should be ffff9d0fa45f4c80,
which means the page was taken via the first path.
At this point a question comes up: a page in a dma_pool always has its vaddr initialized right after it is allocated, normally inside pool_alloc_page, so how could it possibly be NULL?
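For reference, pool_alloc_page() fills in vaddr immediately; the sketch below is simplified from mm/dmapool.c (exact details vary slightly between kernel versions):

static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
{
	struct dma_page *page;

	page = kmalloc(sizeof(*page), mem_flags);      /* small bookkeeping header */
	if (!page)
		return NULL;
	/* vaddr is set right here, before the page ever reaches page_list */
	page->vaddr = dma_alloc_coherent(pool->dev, pool->allocation,
					 &page->dma, mem_flags);
	if (page->vaddr) {
		pool_initialise_page(pool, page);
		page->in_use = 0;
		page->offset = 0;
	} else {
		kfree(page);
		page = NULL;
	}
	return page;
}

So a dma_page that is reachable from pool->page_list but has vaddr == 0 should be impossible in a healthy pool.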
Next, let's look at this address:
Having used similar DMA functions before, I remembered that a dma_page is not this large, so let's look at the second dma_page as well:
crash> kmem ffff9d0fa45f4d00
CACHE            NAME          OBJSIZE  ALLOCATED   TOTAL  SLABS  SSIZE
ffff9d00ffc07900 kmalloc-128       128       8963   14976    234     8k
  SLAB              MEMORY            NODE  TOTAL  ALLOCATED  FREE
  ffffe299c0917d00  ffff9d0fa45f4000     0     64         29    35
  FREE / [ALLOCATED]
   ffff9d0fa45f4d00
      PAGE       PHYSICAL   MAPPING  INDEX             CNT  FLAGS
ffffe299c0917d00 10245f4000  0       ffff9d0fa45f4c00    1  2fffff00004080 slab,head

crash> dma_page ffff9d0fa45f4d00
struct dma_page {
  page_list = {
    next = 0xffff9d0fa45f5000,
    prev = 0xffff9d0fa45f4d00
  },
  vaddr = 0x0,        ----------- caq: also NULL
  dma = 0,
  in_use = 0,
  offset = 0
}
crash> list dma_pool.page_list -H 0xffff9d0fa45f4c00 -s dma_page.offset,vaddr
ffff9d0fa45f4c80
  offset = 0
  vaddr = 0x0
ffff9d0fa45f4d00
  offset = 0
  vaddr = 0x0
ffff9d0fa45f5000
  offset = 0
  vaddr = 0x0
.........

So it is not just the first dma_page that is broken; every dma_page element in this pool looks the same.
So let's check directly what the normal size of a dma_page is.
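For reference, struct dma_page is just a small bookkeeping header defined in mm/dmapool.c:

struct dma_page {                     /* cacheable header for 'allocation' bytes */
	struct list_head page_list;   /* 16 bytes */
	void *vaddr;                  /*  8 bytes */
	dma_addr_t dma;               /*  8 bytes */
	unsigned int in_use;          /*  4 bytes */
	unsigned int offset;          /*  4 bytes */
};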
By rights it is only 40 bytes long; even when allocated from the slab it should only be rounded up to 64 bytes, so how could it be a 128-byte object like the dma_page above? To clear up this doubt, let's compare against a healthy node:
crash> net
   NET_DEVICE     NAME   IP ADDRESS(ES)
ffff8f9e800be000  lo     127.0.0.1
ffff8f9e62640000  p1p1
ffff8f9e626c0000  p1p2
ffff8f9e627c0000  p3p1   -----//caq: take this one as the example
ffff8f9e62100000  p3p2

Then, following the code, find the mlx5e_priv from the net_device:

static int mlx5e_get_link_ksettings(struct net_device *netdev,
                                    struct ethtool_link_ksettings *link_ksettings)
{
...
        struct mlx5e_priv *priv = netdev_priv(netdev);
...
}

static inline void *netdev_priv(const struct net_device *dev)
{
        return (char *)dev + ALIGN(sizeof(struct net_device), NETDEV_ALIGN);
}

crash> px sizeof(struct net_device)
$2 = 0x8c0
crash> mlx5e_priv.mdev ffff8f9e627c08c0      --- computed using the offset above
  mdev = 0xffff8f9e67c400c0
crash> mlx5_core_dev.cmd 0xffff8f9e67c400c0 -xo
struct mlx5_core_dev {
  [ffff8f9e67c40138] struct mlx5_cmd cmd;
}
crash> mlx5_cmd.pool ffff8f9e67c40138
  pool = 0xffff8f9e7bf48f80
crash> dma_pool 0xffff8f9e7bf48f80
struct dma_pool {
  page_list = {
    next = 0xffff8f9e79c60880,      //caq: one of the dma_pages
    prev = 0xffff8fae6e4db800
  },
.......
  size = 1024,
  dev = 0xffff8f9e800b3098,
  allocation = 4096,
  boundary = 4096,
  name = "mlx5_cmd\000\217\364{\236\217\377\377\300\217\364{\236\217\377\377\200\234>\250\217\217\377\377",
  pools = {
    next = 0xffff8f9e800b3290,
    prev = 0xffff8f9e800b3290
  }
}
crash> dma_page 0xffff8f9e79c60880              //caq: look at this dma_page
struct dma_page {
  page_list = {
    next = 0xffff8f9e79c60840,                  ------- one of the dma_pages
    prev = 0xffff8f9e7bf48f80
  },
  vaddr = 0xffff8f9e6fc9b000,                   //caq: a normal vaddr is never NULL
  dma = 69521223680,
  in_use = 0,
  offset = 0
}
crash> kmem 0xffff8f9e79c60880
CACHE            NAME          OBJSIZE  ALLOCATED   TOTAL  SLABS  SSIZE
ffff8f8fbfc07b00 kmalloc-64         64     667921  745024  11641     4k     -- the normal size
  SLAB              MEMORY            NODE  TOTAL  ALLOCATED  FREE
  ffffde5140e71800  ffff8f9e79c60000     0     64         64     0
  FREE / [ALLOCATED]
  [ffff8f9e79c60880]
      PAGE       PHYSICAL   MAPPING  INDEX  CNT  FLAGS
ffffde5140e71800 1039c60000  0       0        1  2fffff00000080 slab

The operations above require some familiarity with net_device and the mlx5 driver code.
Compared with the abnormal dma_page, a normal dma_page is a 64-byte slab object, so clearly
this is either a memory-overwrite problem or a UAF (use-after-free) problem.
When an investigation reaches this point, how do we quickly tell which of the two it is? Both kinds involve corrupted memory and are generally hard to track down, so it helps to step back and first look at what the other running processes were doing. We found one process of particular interest, a reboot that was in progress:
Why pay attention to this process? Because over the years we have debugged no fewer than 20 UAF issues caused by module or device teardown: sometimes a reboot, sometimes an unload, sometimes resources being freed from a work item. So intuition said this teardown was very likely related. Let's analyze how far the reboot flow had progressed.
void device_shutdown(void)
{
        struct device *dev, *parent;

        spin_lock(&devices_kset->list_lock);
        /*
         * Walk the devices list backward, shutting down each in turn.
         * Beware that device unplug events may also start pulling
         * devices offline, even as the system is shutting down.
         */
        while (!list_empty(&devices_kset->list)) {
                dev = list_entry(devices_kset->list.prev, struct device,
                                 kobj.entry);
........
                if (dev->device_rh && dev->device_rh->class_shutdown_pre) {
                        if (initcall_debug)
                                dev_info(dev, "shutdown_pre\n");
                        dev->device_rh->class_shutdown_pre(dev);
                }
                if (dev->bus && dev->bus->shutdown) {
                        if (initcall_debug)
                                dev_info(dev, "shutdown\n");
                        dev->bus->shutdown(dev);
                } else if (dev->driver && dev->driver->shutdown) {
                        if (initcall_debug)
                                dev_info(dev, "shutdown\n");
                        dev->driver->shutdown(dev);
                }
}

The code above shows two things:
1. Each device is linked into devices_kset->list via its kobj.entry member.
2. Judging from device_shutdown, the shutdown of the devices is performed serially, one device after another.
From the reboot stack, tearing down one mlx device includes the following flow:
pci_device_shutdown -> shutdown -> mlx5_unload_one -> mlx5_detach_device
-> mlx5_cmd_cleanup -> dma_pool_destroy
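The last two steps are the interesting ones. A rough sketch of mlx5_cmd_cleanup(), simplified from drivers/net/ethernet/mellanox/mlx5/core/cmd.c (the exact calls vary by kernel version), shows that the command DMA pool is destroyed here while dev->cmd.pool keeps pointing at the freed memory:

/* sketch only, not the exact RHEL source */
void mlx5_cmd_cleanup(struct mlx5_core_dev *dev)
{
	struct mlx5_cmd *cmd = &dev->cmd;

	clean_debug_files(dev);
	destroy_workqueue(cmd->wq);
	destroy_msg_cache(dev);
	free_cmd_page(dev, cmd);
	dma_pool_destroy(cmd->pool);   /* the pool and all of its dma_pages are freed,
	                                * but cmd->pool itself is left dangling */
}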
The flow under mlx5_detach_device branches as follows:
void dma_pool_destroy(struct dma_pool *pool)
{
.......
        while (!list_empty(&pool->page_list)) {        //caq: remove the dma_pages from the pool one by one
                struct dma_page *page;
                page = list_entry(pool->page_list.next,
                                  struct dma_page, page_list);
                if (is_page_busy(page)) {
.......
                        list_del(&page->page_list);
                        kfree(page);
                } else
                        pool_free_page(pool, page);    // free each dma_page
        }

        kfree(pool);                                   //caq: free the pool itself
.......
}

static void pool_free_page(struct dma_pool *pool, struct dma_page *page)
{
        dma_addr_t dma = page->dma;

#ifdef  DMAPOOL_DEBUG
        memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
#endif
        dma_free_coherent(pool->dev, pool->allocation, page->vaddr, dma);
        list_del(&page->page_list);                    //caq: the page_list member is poisoned after deletion
        kfree(page);
}

Now look at the corresponding information in the reboot stack:
#4 [ffff9d0f95d7fb08] cmd_exec at ffffffffc03e41c9 [mlx5_core]
    ffff9d0f95d7fb10: ffffffffb735b580 ffff9d0f904caf18
    ffff9d0f95d7fb20: ffff9d00ff801da8 ffff9d0f23121200
    ffff9d0f95d7fb30: ffff9d0f23121740 ffff9d0fa7480138
    ffff9d0f95d7fb40: 0000000000000000 0000001002020000
    ffff9d0f95d7fb50: 0000000000000000 ffff9d0f95d7fbe8
    ffff9d0f95d7fb60: ffff9d0f00000000 0000000000000000
    ffff9d0f95d7fb70: 00000000756415e3 ffff9d0fa74800c0   ---- the mlx5_core_dev device, corresponding to p3p1
    ffff9d0f95d7fb80: ffff9d0f95d7fbf8 ffff9d0f95d7fbe8
    ffff9d0f95d7fb90: 0000000000000246 ffff9d0f8f3a20b8
    ffff9d0f95d7fba0: ffff9d0f95d7fbd0 ffffffffc03e442b
#5 [ffff9d0f95d7fba8] mlx5_cmd_exec at ffffffffc03e442b [mlx5_core]
    ffff9d0f95d7fbb0: 0000000000000000 ffff9d0fa74800c0
    ffff9d0f95d7fbc0: ffff9d0f8f3a20b8 ffff9d0fa74bea00
    ffff9d0f95d7fbd0: ffff9d0f95d7fc38 ffffffffc03f085d
#6 [ffff9d0f95d7fbd8] mlx5_core_destroy_mkey at ffffffffc03f085d [mlx5_core]

Note that the mlx5_core_dev being released by the reboot is ffff9d0fa74800c0, whose net_device is
p3p1, whereas the mlx5_core_dev being accessed by process 23283 is ffff9d0fa3c800c0, which corresponds to p3p2.
Let's see which devices still remain in devices_kset:
crash> p devices_kset
devices_kset = $4 = (struct kset *) 0xffff9d1fbf4e70c0
crash> p devices_kset.list
$5 = {
  next = 0xffffffffb72f2a38,
  prev = 0xffff9d0fbe0ea130
}
crash> list -H -o 0x18 0xffffffffb72f2a38 -s device.kobj.name > device.list

We found that neither p3p1 nor p3p2 is in device.list:

[root@it202-seg-k8s-prod001-node-10-27-96-220 127.0.0.1-2020-12-07-10:58:06]# grep 0000:5e:00.0 device.list
//caq: not found; this is p3p1, which the reboot flow is unloading right now
[root@it202-seg-k8s-prod001-node-10-27-96-220 127.0.0.1-2020-12-07-10:58:06]# grep 0000:5e:00.1 device.list
//caq: not found; this is p3p2, which has already been unloaded
[root@it202-seg-k8s-prod001-node-10-27-96-220 127.0.0.1-2020-12-07-10:58:06]# grep 0000:3b:00.0 device.list
  kobj.name = 0xffff9d1fbe82aa70 "0000:3b:00.0",      //caq: this mlx5 device has not been unloaded yet
[root@it202-seg-k8s-prod001-node-10-27-96-220 127.0.0.1-2020-12-07-10:58:06]# grep 0000:3b:00.1 device.list
  kobj.name = 0xffff9d1fbe82aae0 "0000:3b:00.1",      //caq: this mlx5 device has not been unloaded yet

Since neither p3p2 nor p3p1 is in device.list, and pci_device_shutdown tears devices down serially with p3p1 being the one currently in progress, we can be certain that process 23283 was accessing a cmd pool that had already been torn down. Per the unload flow described earlier:
pci_device_shutdown -> shutdown -> mlx5_unload_one -> mlx5_cmd_cleanup -> dma_pool_destroy
the pool had already been freed at that point, and every dma_page in it was invalid.
We then searched for the corresponding bug and found a Red Hat case whose symptoms are extremely similar to ours: https://access.redhat.com/solutions/5132931
However, although Red Hat considers the UAF solved in that link, the patch that was actually merged is:
commit 4cca96a8d9da0ed8217cfdf2aec0c3c8b88e8911
Author: Parav Pandit <parav@mellanox.com>
Date:   Thu Dec 12 13:30:21 2019 +0200

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 997cbfe..05b557d 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -6725,6 +6725,8 @@ void __mlx5_ib_remove(struct mlx5_ib_dev *dev,
                      const struct mlx5_ib_profile *profile,
                      int stage)
 {
+       dev->ib_active = false;
+
        /* Number of stages to cleanup */
        while (stage) {
                stage--;

Let me stress this, three times over:
this merge cannot fix the bug in question: dev->ib_active only gates the mlx5_ib (RDMA) entry points, while the crashing path comes in through the netdev/ethtool side and never looks at it. Consider, for example, the following concurrency.
Let's depict the concurrent processing with a simple diagram:
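Reconstructed from the vmcore analysis above, the race looks roughly like this:

CPU 1 (reboot -> device_shutdown, serial)      CPU 2 (spider-agent, pid 23283)
-----------------------------------------      -------------------------------
pci_device_shutdown(p3p2)                      open .../p3p2/speed  (fd 13)
  mlx5_unload_one
    mlx5_cmd_cleanup
      dma_pool_destroy(cmd->pool)
      (pool and dma_pages kfree'd,
       cmd->pool left dangling)
pci_device_shutdown(p3p1)   <-- in progress    read(fd 13)
  ...                                            speed_show
                                                   __ethtool_get_link_ksettings
                                                     mlx5e_get_link_ksettings (p3p2)
                                                       mlx5_cmd_exec -> mlx5_alloc_cmd_msg
                                                         dma_pool_alloc(stale pool)  <-- UAF, vaddr == NULL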
So to really fix this problem, we would also need netif_device_detach to clear the __LINK_STATE_START bit, or have speed_show check the __LINK_STATE_PRESENT bit. If we want to limit the blast radius and avoid touching common code paths, then the check
on __LINK_STATE_PRESENT should go into mlx5e_get_link_ksettings, for example along the lines of the sketch below.
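A minimal sketch of that last option, assuming we only guard the mlx5e ethtool entry point; the guard uses the netif_device_present() helper, which tests __LINK_STATE_PRESENT (this is an idea sketch, not a vetted patch):

static int mlx5e_get_link_ksettings(struct net_device *netdev,
				    struct ethtool_link_ksettings *link_ksettings)
{
	struct mlx5e_priv *priv = netdev_priv(netdev);

	/* hypothetical guard: once the teardown path has detached the
	 * device (netif_device_detach clears __LINK_STATE_PRESENT),
	 * bail out instead of issuing a command against a cmd pool
	 * that may already have been destroyed */
	if (!netif_device_present(netdev))
		return -ENODEV;

	/* ... original mlx5_query_port_ptys() / speed handling, using priv ... */
	return 0;
}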
I'll leave refining that to colleagues who enjoy working with the upstream community.
3. Reproducing the fault
1. Since this is a race, it can be reproduced by constructing a CPU1-versus-CPU2 contention scenario like the one in the diagram above, for example with the loop shown below.
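A rough recipe, assuming the reading process survives into device_shutdown (for example via a forced reboot) and that p3p2 is one of the mlx5 ports as in this case:

# keep hammering the sysfs speed attribute of the mlx5 port in a tight loop
while :; do cat /sys/class/net/p3p2/speed >/dev/null 2>&1; done &

# force an immediate reboot so device_shutdown runs while the reader is still alive
reboot -f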
4. Avoiding or fixing the fault
Possible solutions are:
1. Do not simply upgrade along the lines of the Red Hat article https://access.redhat.com/solutions/5132931, since that patch does not address this race.
2. Apply a standalone patch, along the lines of the sketch in section 2.
About the author
Anqing
Currently responsible for the Linux kernel, containers, virtual machines, and other virtualization work at OPPO Hybrid Cloud.