Re: Question about a hard drive error




On Wed, 10 Nov 2010, John R Pierce wrote:

> On 11/10/10 6:58 PM, Gilbert Sebenste wrote:
>>  Hey everyone,
>>
>>  I just got one of these today:
>>
>>  Nov 10 16:07:54 stormy kernel: sd 0:0:0:0: SCSI error: return code =
>>  0x08000000
>>  Nov 10 16:07:54 stormy kernel: sda: Current: sense key: Medium Error
>>  Nov 10 16:07:54 stormy kernel:     Add. Sense: Unrecovered read error
>>  Nov 10 16:07:54 stormy kernel:
>>  Nov 10 16:07:54 stormy kernel: Info fld=0x0
>>  Nov 10 16:07:54 stormy kernel: end_request: I/O error, dev sda, sector
>>  3896150669
>
> see where it says "dev sda"?   that's physical drive zero, which has a read
> error on that sector.
>
>
>>  Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743752)
>>  Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743760)
>>  Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743768)
>>  Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743776)
>>  Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743784)
>>  Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743792)
>>  Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743800)
>>  Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743808)
>>
>>  My question is this: I have RAID 0 set up, but don't really understand
>>  it well. This is how my disks are set up:
>>
>>  Filesystem           1K-blocks      Used Available Use% Mounted on
>>  /dev/mapper/VolGroup00-LogVol00
>>                         1886608544 296733484 1492495120  17% /
>>  /dev/sda1               101086     19877     75990  21% /boot
>>  tmpfs                  1684312   1204416    479896  72% /dev/shm
>> 
>
> that is not how your disks are set up, that's how your FILE SYSTEMS are set up.

Correct, apologies for the incorrect wording.
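
For anyone who hits the same thing in the archives: a quick way to confirm
the bad sector John pointed out would be something like this (assuming
smartmontools and hdparm are installed; I haven't run these on this exact
box):

  # overall SMART health verdict for the drive
  smartctl -H /dev/sda

  # full attribute dump; watch Reallocated_Sector_Ct and
  # Current_Pending_Sector
  smartctl -a /dev/sda

  # try to read the exact sector from the kernel log above
  hdparm --read-sector 3896150669 /dev/sda

If the errors keep arriving through swap, 'swapoff -a' at least stops the
machine from touching that region until the drive is replaced.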

> that /dev/mapper thing is an LVM volume.  you can display the physical volumes
> behind an LVM volume with the command 'pvs'.

Thank you! That was helpful.
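
In case it helps anyone else decode the same logs, something along these
lines worked (all standard LVM2/device-mapper tools):

  # physical volumes behind the volume group
  pvs

  # which physical device(s) each logical volume sits on
  lvs -o +devices

  # map the (253:1) major:minor from the swap errors to an LV name
  dmsetup ls

The (253:1) numbers in the swap read-errors are a device-mapper
major:minor pair, so 'dmsetup ls' ties them back to a named logical
volume.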

>>  Which one is having the trouble? Any ideas so I can swap it out?
>
> raid0 is not suitable for reliability.  if any one drive in the raid0 fails
> (or is removed) the whole volume fails and becomes unusable.

Thanks John, I appreciate it! Both drives are being replaced after a nearby
55 kV power line shorted to ground and blew a manhole cover 50' into the air,
damaging a lot of equipment over here, even gear on UPSes. Nobody was
hurt, thank goodness. But I'll be looking into RAID 5 in the future.
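
For future reference, a minimal RAID 5 sketch with mdadm (the device names
below are placeholders, not this machine's actual disks):

  # create a 3-disk RAID 5 array (survives one drive failure)
  mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1

  # watch the initial sync progress
  cat /proc/mdstat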

*******************************************************************************
Gilbert Sebenste                                                     ********
(My opinions only!)                                                  ******
*******************************************************************************
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos

