Re: ceph-volume simple disk scenario without LVM for OSD on PVC

My two cents, likely controversial, but it has to be said:

>> So maybe a bare-bones bluestore mode makes sense.  In the simple case, it
>> really should be *very* simple.  But its scope pretty quickly explodes:
>> what about wal and db devices?  We have labels for those, so we could
>> support those, also easily... if the user has to partition the devices
>> beforehand manually.  

How is foisting volume management off on the admin any different from foisting off partition management?  How many years after BlueStore's release are we now?  And we *still* don’t have documentation for planning and managing it that is either complete or correct.  Yes 4%, I’m looking at *you*.  And don’t get me started on the rbd-mirror ambiguity.
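
For the record, the guidance in question: the docs recommend that block.db be no smaller than 4% of the data device.  A back-of-envelope check (a hedged illustration; the device size is hypothetical):

    # 4% of a 12 TB spinner, in GB, per the documented rule of thumb
    $ echo $(( 12000 * 4 / 100 ))
    480

480 GB of flash per OSD, much of which RocksDB’s level sizing may never touch.  Hence the grumbling.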

We already have bare-bones BlueStore, I’m running thousands of Luminous OSDs on it today.

Re WAL and DB: their management, like that of journals before them, has indeed been a pain.  But those who have to do it have already written tools that work for their scenarios.  Moving the target yet again will just mean more time that people have to sink into reinventing the wheel instead of doing interesting things.  Or updating their clusters.  Changes for the sake of change have these results:

* Time taken away from our burgeoning backlogs to retool for the change du jour
* New releases don’t get installed because of the tooling and testing work, and fear of what bizarre pivot-related bug will next break thousands of paying users.  There are reasons I’m still running 12.2.2.
* Existing bugs persist because devs are working on replacing understood things with non-understood things
* New bugs appear for the same reason
* We often hear “Why is Ceph so slow?  Solution X on the same hardware gives us N times the performance.”  Perhaps we could do something about that if we weren’t chasing trends.

I feel the same way about containers.  What does that bandwagon *really* gain us other than trendiness?  How much does it *cost* countless admins who have to throw away years of accumulated know-how and tooling?

Too often it seems that Ceph attends to whatever’s whizzy for greenfield deployments, at the expense of jerking around existing, production clusters.

> I don't think keeping a simple or barebones approach will survive contact with 
> real-world deployments. Imho if we want a raw mode, we better be prepared to 
> deal with multi-device OSDs and multi-OSD devices and the partitioning this 
> requires.

Real-world deployments have been doing these things for a long time.  It is, if not a perfectly solved problem, at least a familiar one.  The devil you know.
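
The familiar pattern, sketched with hypothetical device names (one shared NVMe carrying DB partitions for two spinner OSDs; this is a sketch of common practice, not a prescription):

    # Carve two DB partitions out of the shared NVMe
    sgdisk --new=1:0:+64G --change-name=1:osd-0-db /dev/nvme0n1
    sgdisk --new=2:0:+64G --change-name=2:osd-1-db /dev/nvme0n1

    # Hand each data device and its DB partition to ceph-volume
    ceph-volume lvm prepare --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
    ceph-volume lvm prepare --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2

Wrap that in a loop and some bookkeeping and you have the tooling most shops wrote years ago.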

>> Since we can't cover all of that, and at a minimum, we can't cover
>> dm-crypt,

dm-crypt is another thing that real-world deployments already have covered.  Don’t fix what ain’t broken.
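
The roll-your-own version looks roughly like this (device names and key path hypothetical; key management deliberately elided, since that’s exactly the part every shop has already built to its own taste):

    # Layer dm-crypt under the OSD; keys come from local tooling
    cryptsetup luksFormat --key-file /etc/ceph/osd-sdb.key /dev/sdb
    cryptsetup open --type luks --key-file /etc/ceph/osd-sdb.key /dev/sdb osd-sdb
    ceph-volume lvm prepare --bluestore --data /dev/mapper/osd-sdb

(ceph-volume’s own --dmcrypt flag does much the same dance, stashing the key with the mons.)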

That said, dm-crypt is itself a band-aid for hardware shortcomings.  SED (self-encrypting drives) sure wasn’t manageable in practice.  Maybe OPAL will be, if manufacturers can surprise us all and, unlike with SMART, deliver usable and interoperable behavior.

— aad
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx