Re: KISS (was disappearing luks header and other mysteries)

On Mon, September 22, 2014 11:41, Arno Wagner wrote:
> On Sun, Sep 21, 2014 at 16:51:09 CEST, Sven Eschenberg wrote:
>> Hi Arno,
>>
>> On Sun, September 21, 2014 11:58, Arno Wagner wrote:
>> > On Sat, Sep 20, 2014 at 02:29:43 CEST, Sven Eschenberg wrote:
>> >> Well, it is not THAT easy.
>> >
>> > Actually it is.
>> >
>> >> If you want resilience/availability, you'll need RAID. Now what do
>> >> you put on top of the RAID when you need to slice it?
>> >
>> > And there the disaster starts: Don't slice RAID. It is not a good
>> > idea.
>>
>> While in principle this is true, in practice you cannot have different
>> filesystems on the same RAID in such a setup. So you'll need as many
>> RAIDs as filesystems, which in turn means you will have to go for RAID
>> on partitions, given current disk sizes.
>
> Yes? So? Is there a problem anywhere here?
>
>> The second aspect I mentioned is spanning filesystems over RAIDs. The
>> reasonable number of disks in a single RAID is quite limited and as such
>> really huge filesystems need to span multiple RAIDs. I know, as long as
>> single files don't exceed the size of a reasonable RAID you could still
>> use multiple FSes.
>
> If you do huge storage installations, your needs change. This list
> kind of assumes as default that you stay in conventional sizes, say no
> more than 8 disks in an array. Things that you need to do for really
> large storage do not translate well to normal sizes. We can still discuss
> huge installations of course, but please mark this clearly, as users
> that never will have to deal with these may get confused and think
> it applies to them. My personal experience also ends at about 8 disks
> per RAID, as that was the maximum I could cram into my research servers
> and budget, so please take anything I say without an explicit
> storage-size qualification to be said in that context.
>
> Now, this is not meant in any way as discrimination against people
> that have to deal with huge storage volumes, please feel free to
> discuss anything you want here, but please always state that this is
> from a huge-storage perspective so as to not confuse people. The same
> applies when you talk about things needed to automate changes
> for multiple machines, which again is not the "default" perspective
> for most people. There, I have a little more experience, as I had
> a cluster with 25 machines and that is already enough to not want
> to do anything manually.
>
> And of course, the possibility of EB-sized arrays with hundreds of
> disks does not justify putting LVM on a laptop with a single disk.
> One size does not fit all.

You are absolutely right, there never is a one-size-fits-all. I was really
thinking about things in a completely generic way. On a laptop you can
easily live without snapshotting of any kind (just one example). On a
server snapshotting can be handy, though it depends on how open files are
handled. This is probably off topic, though, and would be quite an intense
and deep discussion, I guess.

Anyway, I think many distributions adopted LVM as a flexible replacement
for a partitioning scheme. This way, you open a single crypto target (for
example), have LVM on top, and then have all the different filesystems in
there. The question, though, is: do you need different filesystems for
/home, /usr (you name it) on a laptop? Probably not at all. Maybe you
could even live with dm-crypt just for /home.
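As a sketch, that distribution-style layout looks roughly like this. All
device names, volume names, and sizes below are made-up examples, and the
commands are destructive and need root against a real block device, so
treat this as an illustration only:

```shell
# One LUKS container; everything else lives inside it.
# /dev/sda2 and all names/sizes here are hypothetical.
cryptsetup luksFormat /dev/sda2
cryptsetup open /dev/sda2 cryptroot

# One physical volume on the decrypted mapping, one volume group.
pvcreate /dev/mapper/cryptroot
vgcreate vg0 /dev/mapper/cryptroot

# Slice the group into logical volumes instead of partitions.
lvcreate -L 30G -n root vg0
lvcreate -L 8G  -n swap vg0
lvcreate -l 100%FREE -n home vg0

mkfs.ext4 /dev/vg0/root
mkfs.ext4 /dev/vg0/home
mkswap    /dev/vg0/swap
```

The point being: a single passphrase prompt at boot unlocks one container,
and all the slicing flexibility lives above it in LVM.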

After all, it most probably was the one-size-fits-all concept that led to
the decision (a wrong one IMHO, but understandable, as a single
deployment/setup eases maintenance for distributors; laziness won, I
assume :-) ).
>
>> >
>> >
>> >> Put a disklabel/partitions on top of it and stick with a static
>> >> setup, or use LVM, which can span multiple RAIDs (and types),
>> >> supports snapshotting, etc. Depending on your needs and usage you
>> >> will end up with LVM in the end. If you want encryption, you'll
>> >> need a crypto layer (or you put it in the FS alongside volume
>> >> slicing). Partitions underneath the RAID are not necessary if the
>> >> RAID implementation can subslice physical devices and arrange for
>> >> different levels on the same disk. Except, unfortunately, when you
>> >> need a bootloader.
>> >>
>> >> I don't see any alternative which would be KISS enough, except
>> >> merging the layers to avoid collisions due to stacking order, etc.
>> >> Simple usage and debugging for the user, but the actual single
>> >> merged layer would be anything but KISS.
>> >
>> > You miss one thing: LVM breaks layering, and rather badly so. That
>> > is a deadly sin. Partitioning should only ever be done on
>> > monolithic devices. There is a good reason for that, namely that
>> > partition-RAID, filesystems and LUKS all respect partitioning by
>> > default, and hence it actually takes work to break the container
>> > structure.
>>
>> That is true, usually slicing, RAIDs and subvolumes are all part of
>> the RAID layer, and as such RAID subvolumes are monolithic devices
>> from an OS point of view (read: with HW-RAID HBAs). AFAIK with DDF
>> metadata mdraid takes this path, and LVM could (except for
>> spanning/snapshotting) be taken out of the equation.
>
> One problem here is that dmraid on partitions is conceptually
> on the wrong layer compared to hardware RAID. But quite
> frankly, hardware RAID never reached any reasonable degree
> of sophistication and was more of a "magic box" solution
> that you could not look into. I do not think there is any problem
> doing RAID on partitions and not partitioning the array
> again, but it is different from what people used to hardware
> RAID expect.

I agree. I do see advantages in RAID over partitions with software RAID
(and use it): it needs no special hardware, saves on replacement parts,
and is flexible. It is just a pity that it does not give the black-box
experience, especially within the OS after setup. From a normal user's
point of view, after starting a RAID over partitions, it would be more
consistent if the device nodes for the RAID members would magically
vanish instead of only being marked as in use (yes, I do see the downside
of this as well).

And then there is that chipset softraid rake, which can drive people
nuts. (Conceptually it is RAID on whole disks, with the metadata at the
end, and then GPT on top: tools don't see the secondary GPT, as they can
and do access the disks individually ...)
I see quite some room for improvement in many places ;-) .
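To make that GPT point concrete, here is a small numeric sketch (the
sector counts are made-up values, purely illustrative). GPT keeps its
backup header in the last sector of whatever device it was written to;
with firmware-RAID metadata at the end of each member disk, the array
device is a bit shorter than the raw disk, so a tool scanning the raw
disk looks for the backup header in the wrong place:

```shell
#!/bin/sh
# Hypothetical sector counts -- illustrative only.
disk_sectors=1953525168      # one raw member disk
raid_meta_sectors=2048       # firmware-RAID metadata at the END of the disk
array_sectors=$((disk_sectors - raid_meta_sectors))

# GPT stores its backup header in the last addressable sector.
backup_gpt_lba=$((array_sectors - 1))      # written via the assembled array
raw_tool_lba=$((disk_sectors - 1))         # where a raw-disk scan looks

echo "backup GPT at LBA $backup_gpt_lba, raw tools expect LBA $raw_tool_lba"
echo "offset mismatch: $((raw_tool_lba - backup_gpt_lba)) sectors"
```

The two locations differ by exactly the metadata size, which is why the
backup GPT appears "missing" when the members are probed individually.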

>
>> >
>> > LVM rides all over that, and hence it is absolutely no surprise
>> > at all that people keep breaking things using it. It is like
>> > a chainsaw without safety features. Until those safety features
>> > are present and work reliably, LVM should be avoided in all
>> > situations where there is an alternative. There almost always is.
>> >
>>
>> I doubt you'll ever get foolproofness and sophistication/flexibility
>> at the same time; just look at cryptsetup and the libgcrypt/Whirlpool
>> issues. Foolproof mostly means a lack of choice or 'features' ;-).
>
> It is a balance. My take is that most people do not need the
> flexibility LVM gives them and at the same time cannot really
> master its complexity, and then they end up sawing off a foot.
> There are cases where you do need it, I do not dispute that.
> But an ordinary, self-administrated end-user installation is
> not one of them. And yes, even the "chainsaw without safety
> features" has valid applications, but you will never, ever give
> it to a non-expert and you will only use it if there is no
> better way.

Simply put: agreed. As I said, I was reflecting generically ...
>
>> > But please, be my guest shooting yourself in the foot all
>> > you like. I will just not refrain from telling you "I told
>> > you so".
>>
>> In a way you are right, then again, at some point in time, you'll let
>> kids
>> use forks, knives and fire, you know ;-).
>
> Indeed. But only when you see them being able to handle it. What
> I see happening is that people keep breaking things with LVM in
> situations where there was no need for it in the first place.

I agree again. You don't use something just because it is available but
because there is a reasonable need.

>
> Arno
>
> --
> Arno Wagner,     Dr. sc. techn., Dipl. Inform.,    Email: arno@xxxxxxxxxxx
> GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D 9718
> ----
> A good decision is based on knowledge and not on numbers. -- Plato
>
> If it's in the news, don't worry about it.  The very definition of
> "news" is "something that hardly ever happens." -- Bruce Schneier

Regards

-Sven

_______________________________________________
dm-crypt mailing list
dm-crypt@xxxxxxxx
http://www.saout.de/mailman/listinfo/dm-crypt



