Linux – software RAID-1 kernel failure caused by read errors on a single drive
I'm running Fedora 19 (kernel 3.11.3-201.fc19.x86_64) on two identical 1TB Seagate hard drives set up as a software RAID-1 with mdadm:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb3[1] sda3[0]
973827010 blocks super 1.2 [2/2] [UU]
unused devices:
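For reference, the same array can be inspected in more detail with mdadm itself (a minimal example; /dev/md1 is the array shown above):
# mdadm --detail /dev/md1
This prints the array state, the member devices (/dev/sda3 and /dev/sdb3 here) and whether a resync is in progress.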
Recently, one of the two drives started showing errors: smartd detected "1 Currently unreadable (pending) sectors" and "1 Offline uncorrectable sectors", and the RAID array "rescheduled" a number of sectors. Then, about a day later, the kernel produced a variety of I/O messages/exceptions:
Oct 18 06:39:20 x smartd[461]: Device: /dev/sdb [SAT], 1 Currently unreadable (pending) sectors
Oct 18 06:39:20 x smartd[461]: Device: /dev/sdb [SAT], 1 Offline uncorrectable sectors
...
Oct 18 07:09:20 x smartd[461]: Device: /dev/sdb [SAT], 1 Currently unreadable (pending) sectors
Oct 18 07:09:20 x smartd[461]: Device: /dev/sdb [SAT], 1 Offline uncorrectable sectors
...
Oct 18 07:30:28 x kernel: [467502.192792] md/raid1:md1: sdb3: rescheduling sector 1849689328
Oct 18 07:30:28 x kernel: [467502.192822] md/raid1:md1: sdb3: rescheduling sector 1849689336
Oct 18 07:30:28 x kernel: [467502.192846] md/raid1:md1: sdb3: rescheduling sector 1849689344
Oct 18 07:30:28 x kernel: [467502.192870] md/raid1:md1: sdb3: rescheduling sector 1849689352
Oct 18 07:30:28 x kernel: [467502.192895] md/raid1:md1: sdb3: rescheduling sector 1849689360
Oct 18 07:30:28 x kernel: [467502.192919] md/raid1:md1: sdb3: rescheduling sector 1849689368
Oct 18 07:30:28 x kernel: [467502.192943] md/raid1:md1: sdb3: rescheduling sector 1849689376
Oct 18 07:30:28 x kernel: [467502.192966] md/raid1:md1: sdb3: rescheduling sector 1849689384
Oct 18 07:30:28 x kernel: [467502.192991] md/raid1:md1: sdb3: rescheduling sector 1849689392
Oct 18 07:30:28 x kernel: [467502.193035] md/raid1:md1: sdb3: rescheduling sector 1849689400
...
Oct 19 06:26:08 x kernel: [550035.944191] ata3.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Oct 19 06:26:08 x kernel: [550035.944224] ata3.01: BMDMA stat 0x64
Oct 19 06:26:08 x kernel: [550035.944248] ata3.01: failed command: READ DMA EXT
Oct 19 06:26:08 x kernel: [550035.944274] ata3.01: cmd 25/00:08:15:fb:9c/00:00:6c:00:00/f0 tag 0 dma 4096 in
Oct 19 06:26:08 x kernel: [550035.944274] res 51/40:00:1c:fb:9c/40:00:6c:00:00/10 Emask 0x9 (media error)
Oct 19 06:26:08 x kernel: [550035.944322] ata3.01: status: { DRDY ERR }
Oct 19 06:26:08 x kernel: [550035.944340] ata3.01: error: { UNC }
Oct 19 06:26:08 x kernel: [550036.573438] ata3.00: configured for UDMA/133
Oct 19 06:26:08 x kernel: [550036.621444] ata3.01: configured for UDMA/133
Oct 19 06:26:08 x kernel: [550036.621507] sd 2:0:1:0: [sdb] Unhandled sense code
Oct 19 06:26:08 x kernel: [550036.621516] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550036.621523] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Oct 19 06:26:08 x kernel: [550036.621530] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550036.621537] Sense Key : Medium Error [current] [descriptor]
Oct 19 06:26:08 x kernel: [550036.621555] Descriptor sense data with sense descriptors (in hex):
Oct 19 06:26:08 x kernel: [550036.621562] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
Oct 19 06:26:08 x kernel: [550036.621606] 6c 9c fb 1c
Oct 19 06:26:08 x kernel: [550036.621626] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550036.621638] Add. Sense: Unrecovered read error - auto reallocate failed
Oct 19 06:26:08 x kernel: [550036.621646] sd 2:0:1:0: [sdb] CDB:
Oct 19 06:26:08 x kernel: [550036.621653] Read(10): 28 00 6c 9c fb 15 00 00 08 00
Oct 19 06:26:08 x kernel: [550036.621692] end_request: I/O error, dev sdb, sector 1822227228
Oct 19 06:26:08 x kernel: [550036.621719] raid1_end_read_request: 9 callbacks suppressed
Oct 19 06:26:08 x kernel: [550036.621727] md/raid1:md1: sdb3: rescheduling sector 1816361448
Oct 19 06:26:08 x kernel: [550036.621782] ata3: EH complete
Oct 19 06:26:08 x kernel: [550041.155637] ata3.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Oct 19 06:26:08 x kernel: [550041.155669] ata3.01: BMDMA stat 0x64
Oct 19 06:26:08 x kernel: [550041.155694] ata3.01: failed command: READ DMA EXT
Oct 19 06:26:08 x kernel: [550041.155719] ata3.01: cmd 25/00:08:15:fb:9c/00:00:6c:00:00/f0 tag 0 dma 4096 in
Oct 19 06:26:08 x kernel: [550041.155719] res 51/40:00:1c:fb:9c/40:00:6c:00:00/10 Emask 0x9 (media error)
Oct 19 06:26:08 x kernel: [550041.155767] ata3.01: status: { DRDY ERR }
Oct 19 06:26:08 x kernel: [550041.155785] ata3.01: error: { UNC }
Oct 19 06:26:08 x kernel: [550041.343437] ata3.00: configured for UDMA/133
Oct 19 06:26:08 x kernel: [550041.391438] ata3.01: configured for UDMA/133
Oct 19 06:26:08 x kernel: [550041.391501] sd 2:0:1:0: [sdb] Unhandled sense code
Oct 19 06:26:08 x kernel: [550041.391510] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550041.391517] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Oct 19 06:26:08 x kernel: [550041.391525] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550041.391532] Sense Key : Medium Error [current] [descriptor]
Oct 19 06:26:08 x kernel: [550041.391546] Descriptor sense data with sense descriptors (in hex):
Oct 19 06:26:08 x kernel: [550041.391553] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
Oct 19 06:26:08 x kernel: [550041.391596] 6c 9c fb 1c
Oct 19 06:26:08 x kernel: [550041.391616] sd 2:0:1:0: [sdb]
Oct 19 06:26:08 x kernel: [550041.391624] Add. Sense: Unrecovered read error - auto reallocate failed
Oct 19 06:26:08 x kernel: [550041.391636] sd 2:0:1:0: [sdb] CDB:
Oct 19 06:26:08 x kernel: [550041.391643] Read(10): 28 00 6c 9c fb 15 00 00 08 00
Oct 19 06:26:08 x kernel: [550041.391681] end_request: I/O error, dev sdb, sector 1822227228
Oct 19 06:26:08 x kernel: [550041.391737] ata3: EH complete
Oct 19 06:26:08 x kernel: [550041.409686] md/raid1:md1: read error corrected (8 sectors at 1816363496 on sdb3)
Oct 19 06:26:08 x kernel: [550041.409705] handle_read_error: 9 callbacks suppressed
Oct 19 06:26:08 x kernel: [550041.409709] md/raid1:md1: redirecting sector 1816361448 to other mirror: sda3
The machine kept logging entries like these to the syslog for roughly another hour and then became completely unresponsive. No kernel oops was recorded in the syslog. The machine had to be rebooted, and on reboot the RAID array went into a resync. After the resync finished everything looked fine and the drive appeared to be working normally.
I also noticed that all the rescheduled sectors are exactly 8 sectors apart, which struck me as odd.
Finally, a day or two after the reboot, but noticeably after the RAID resync had completed, the drive reset its unreadable (pending) and offline uncorrectable sector counts. I assume this is normal, since the drive has taken those sectors offline and remapped them:
Oct 20 01:05:42 x kernel: [ 2.186400] md: bind
Oct 20 01:05:42 x kernel: [ 2.204826] md: bind
Oct 20 01:05:42 x kernel: [ 2.209618] md: raid1 personality registered for level 1
Oct 20 01:05:42 x kernel: [ 2.210079] md/raid1:md1: not clean -- starting background reconstruction
Oct 20 01:05:42 x kernel: [ 2.210087] md/raid1:md1: active with 2 out of 2 mirrors
Oct 20 01:05:42 x kernel: [ 2.210122] md1: detected capacity change from 0 to 997198858240
Oct 20 01:05:42 x kernel: [ 2.210903] md: resync of RAID array md1
Oct 20 01:05:42 x kernel: [ 2.210911] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Oct 20 01:05:42 x kernel: [ 2.210915] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
Oct 20 01:05:42 x kernel: [ 2.210920] md: using 128k window, over a total of 973827010k.
Oct 20 01:05:42 x kernel: [ 2.241676] md1: unknown partition table
...
Oct 20 06:33:10 x kernel: [19672.235467] md: md1: resync done.
...
Oct 21 05:35:50 x smartd[455]: Device: /dev/sdb [SAT], No more Currently unreadable (pending) sectors, warning condition reset after 1 email
Oct 21 05:35:50 x smartd[455]: Device: /dev/sdb [SAT], No more Offline uncorrectable sectors, warning condition reset after 1 email
smartctl -a /dev/sdb now shows:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
...
5 Reallocated_Sector_Ct 0x0033 096 096 036 Pre-fail Always - 195
...
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
...
This raises a few questions:
1) Why would all the rescheduled sectors be exactly 8 apart?
2) Why would the kernel become unresponsive and require a reboot? Isn't this exactly the kind of situation RAID-1 is supposed to handle without the whole system falling over?
3) Why would the unreadable and offline uncorrectable counts reset only 23 hours after the RAID resync completed?
Answer:
1) Why would all the rescheduled sectors be exactly 8 apart?
A gap like this between the sector numbers is to be expected; the only question is how big the gaps are (4k or larger). 8 × 512 bytes is 4k, which is the block size most filesystems use. So the filesystem presumably asked the RAID to read 4k, and the RAID asked /dev/sdb for that data. The first sector of that read failed (that is the sector number you see in the log), so the RAID switched over to /dev/sda and served the 4k from there. The filesystem then requested the next 4k, which went back to /dev/sdb and failed again at a sector number 8 higher, which is also what you see in the log...
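The arithmetic can be checked directly against the log excerpts above (a quick shell sanity check, nothing more):
# echo $((8 * 512))
4096
# echo $((1849689336 - 1849689328))
8
Each rescheduled read covers one 4 KiB filesystem block, and consecutive rescheduled sector numbers in the log differ by exactly 8 sectors, i.e. one such block.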
2) Why would the kernel become unresponsive and require a reboot?
This should not normally happen. The problem is that the reallocation case is about the most expensive one you can get: every failed read has to be redirected to the other disk, rewritten on the original disk, and so on. If, at the same time, this is filling up your log files, that generates new write requests, which in turn may also have to be reallocated... In a situation like this it would be cheaper to drop the disk from the array entirely, as sketched below.
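Dropping the failing member by hand would look roughly like this with mdadm (device names taken from the array above; only do this if you are prepared to run degraded until a replacement is added):
# mdadm /dev/md1 --fail /dev/sdb3
# mdadm /dev/md1 --remove /dev/sdb3
The array then runs degraded on /dev/sda3 alone until a replacement partition is added back with --add.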
It is also a question of how the rest of the hardware, such as the SATA controller, copes with the failing drive. If the controller itself hiccups, the impact on performance is even bigger.
Without log entries it is hard to say what exactly happened; it is a weakness of the Linux kernel that there is no straightforward way to preserve the very last messages when things really go down.
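One common workaround, assuming a second machine is available to receive the messages, is to send kernel messages over the network with netconsole; the ports, addresses and interface below are placeholders, not values from this system:
# modprobe netconsole netconsole=6665@192.168.1.50/eth0,6666@192.168.1.10/00:11:22:33:44:55
The receiving host only needs to listen on the target UDP port (for example with netcat), so the last kernel messages survive a local hang.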
3) Why would the unreadable and offline uncorrectable counts reset only 23 hours after the raid resync was complete?
Some values are only updated when offline data collection runs (the UPDATED column shows "Offline"), and that can take a while. Whether the disk does this automatically, and how often (for example every four hours), depends on the disk. If you do not want to rely on the disk's own schedule, you can set it up with smartmontools.
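For example, a line like the following in /etc/smartd.conf (a sketch; the schedule itself is an arbitrary choice) enables automatic offline data collection and regular self-tests for the drive:
/dev/sdb -a -o on -s (S/../.././02|L/../../6/03)
Here -o on turns on automatic offline data collection, and -s schedules a short self-test every day at 02:00 and a long one every Saturday at 03:00. A one-off offline data collection can also be triggered manually with smartctl -t offline /dev/sdb.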
Tags: linux, fedora, software-raid, raid1
Source: https://codeday.me/bug/20190813/1649017.html