Re: Question about a hard drive error

On 11/10/10 6:58 PM, Gilbert Sebenste wrote:
> Hey everyone,
>
> I just got one of these today:
>
> Nov 10 16:07:54 stormy kernel: sd 0:0:0:0: SCSI error: return code =
> 0x08000000
> Nov 10 16:07:54 stormy kernel: sda: Current: sense key: Medium Error
> Nov 10 16:07:54 stormy kernel:     Add. Sense: Unrecovered read error
> Nov 10 16:07:54 stormy kernel:
> Nov 10 16:07:54 stormy kernel: Info fld=0x0
> Nov 10 16:07:54 stormy kernel: end_request: I/O error, dev sda, sector
> 3896150669

See where it says "dev sda"?  That's physical drive zero, which has a read 
error on that sector.
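
If you want to confirm which physical drive that is before swapping it, 
smartctl from the smartmontools package will report the drive's model and 
serial number along with its reallocated/pending sector counts (a rough 
sketch, assuming smartmontools is installed or installable via yum):

  # install the tools if they aren't already there
  yum install smartmontools

  # identity (model, serial number) plus SMART health for the failing disk
  smartctl -a /dev/sda

  # optionally kick off a short self-test, then read the result later
  smartctl -t short /dev/sda
  smartctl -l selftest /dev/sda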


> Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743752)
> Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743760)
> Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743768)
> Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743776)
> Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743784)
> Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743792)
> Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743800)
> Nov 10 16:07:54 stormy kernel: Read-error on swap-device (253:1:743808)
>
> My question is this: I have RAID00 set up, but don't really understand
> it well. This is how my disks are set up:
>
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/VolGroup00-LogVol00
>                        1886608544 296733484 1492495120  17% /
> /dev/sda1               101086     19877     75990  21% /boot
> tmpfs                  1684312   1204416    479896  72% /dev/shm
>

That is not how your disks are set up; that's how your FILE SYSTEMS are set up.

That /dev/mapper device is an LVM logical volume.  You can display the 
physical volumes behind LVM with the command 'pvs'.
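
For example, something like the following should show how the physical 
disks, volume group, and logical volumes fit together (standard LVM2 
commands; exact output depends on your setup):

  # physical volumes -- the real disks/partitions underneath LVM
  pvs

  # volume groups and logical volumes
  vgs
  lvs

  # show which physical device(s) each logical volume actually lives on
  lvs -o +devices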




> Which one is having the trouble? Any ideas so I can swap it out?


RAID0 is not suitable for reliability.  If any one drive in the RAID0 
fails (or is removed), the whole volume fails and becomes unusable.
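
It is also worth confirming what kind of RAID0 you actually have.  If it 
is Linux software RAID (md), /proc/mdstat will show it; if the LVM volume 
is simply striped across several physical volumes, lvs will show that 
instead.  A sketch (standard commands, but which one applies depends on 
your setup; a hardware controller would need its vendor tool):

  # software RAID (md) status, if any arrays exist
  cat /proc/mdstat

  # check whether the logical volume is striped, and across which devices
  lvs -o +stripes,devices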


_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos

