Re: anaconda with lvm default

On Fri, 2005-03-11 at 17:05 +0000, Timothy Murphy wrote:
> On Fri 11 Mar 2005 13:54, Michael Honeyfield wrote:
> 
> > > LVM is good.
> >
> > On a desktop PC with a single disk? Makes little sense.
> 
> I agree.
> 
> Also, what is the probability of a disk error causing serious trouble,
> compared with classical partitions?

Uh, exactly the same?

To elaborate, I'll assume the task whose probability of failure (from
disk errors alone) we're trying to estimate is "booting the system to
the point where init is executed".

Given that, here's basically what it is without LVM.  The numbers are
very approximate.

boot_sectors is how many sectors BIOS must read to boot anything (1000)
grub_sectors is for grub.conf (2) and stage2 (200)
kernel_sectors is vmlinuz (3000) and initrd (800)
fs_sectors is fs metadata, i.e. superblock+dirents+inodes (1000)
sector_p is the probability of a given sector failing catastrophically.

p0 = probability of catastrophic disk failure during early boot
   = probability of failure in a window of roughly 5 minutes
   = ( boot_sectors + grub_sectors + kernel_sectors + fs_sectors
     ) * sector_p
   = (1000 + 2 + 200 + 3000 + 800 + 1000) * sector_p
   = 6002 * sector_p
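
If you'd rather plug in your own guesses, here's the same sum as a
throwaway Python snippet (the sector counts are the same rough guesses
as above, nothing measured):

    # Sector counts for the classical (no LVM) boot path, all rough guesses
    boot_sectors   = 1000          # what the BIOS must read to boot anything
    grub_sectors   = 2 + 200       # grub.conf + stage2
    kernel_sectors = 3000 + 800    # vmlinuz + initrd
    fs_sectors     = 1000          # superblock + dirents + inodes

    sectors_read = boot_sectors + grub_sectors + kernel_sectors + fs_sectors
    # p0 = sectors_read * sector_p, for whatever sector_p you believe in
    print(sectors_read)            # 6002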

sector_p is *incredibly* small, because most failures on modern disks
are correctable read failures.  A correctable read failure will cause
the sector to be remapped, and so sectors likely to experience
catastrophe are typically not in use *at all*.

The MTBF of a random Seagate drive Google told me about is 1200000
hours, which puts the probability of failure during any given hour at
roughly 1/(1200000 - previous_runtime_hours), and the probability of
failure in a 5 minute period at roughly
1/(14400000 - previous_runtime_5_minute_periods).  That's a mean, so
take it with a grain of salt, but it's the best estimate we're going to
get.

So p0 is roughly 6002/14400000 .
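
For the curious, the same back-of-the-envelope estimate in Python,
taking sector_p to be the crude per-5-minute figure above (a big
simplification, obviously):

    # Crude conversion of the quoted MTBF into a per-5-minute figure
    mtbf_hours       = 1200000
    five_min_periods = mtbf_hours * 12       # 14400000
    sector_p         = 1.0 / five_min_periods
    p0 = 6002 * sector_p
    print(p0)                                # ~0.00042, i.e. ~4 failed boots in 10000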

If we now consider the LVM setup we're using by default (i.e. / and swap
on LVM, /boot on /dev/hda1), still with a single disk, that becomes:

p1 = p0 + lvm_metadata_sectors * sector_p
   = 6002 * sector_p + 100 * sector_p
   = 6102 * sector_p

Which is to say, the difference is zilch.

So in 14400000 boots, 6100 of them will be failures, as opposed to 6000
of them the classical way.  Roughly.
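
Or, spelled out in Python one more time (same rough numbers, same
disclaimers):

    p0 = 6002 / 14400000.0         # classical partitions
    p1 = 6102 / 14400000.0         # /boot on hda1, / and swap on LVM
    print(p1 - p0)                 # ~7e-06 extra risk per boot
    print(p1 / p0)                 # ~1.017, i.e. under a 2% relative increase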

I hope you know exactly how meaningless this number is.

-- 
        Peter

