Re: Can we deprecate FileStore in Quincy?

Sat, Jun 26, 2021 at 10:54, Stuart Longland <stuartl@xxxxxxxxxxxxxxxxxx>:
>
> On Tue, 1 Jun 2021 12:24:12 -0700
> Neha Ojha <nojha@xxxxxxxxxx> wrote:
>
> > Given that BlueStore has been the default and more widely used
> > objectstore since quite some time, we would like to understand whether
> > we can consider deprecating FileStore in our next release, Quincy and
> > remove it in the R release. There is also a proposal [0] to add a
> > health warning to report FileStore OSDs.
>
> I'd consider this:
>
> - Bluestore requires OSD hosts with 8GB+ of RAM,
...
> There are very few single-board computers that have 8GB+ of RAM

I have mixed feelings about this.

That 8GB+ figure is not really true. Yes, the default OSD memory
target is 4 GB, and OSDs sometimes overshoot it. They especially like
to overshoot during the shallow fsck that runs during (or after)
upgrades, and the amount of RAM consumed during that fsck does not
really depend on the OSD memory target. With a 14 TB HDD, it could
easily eat 10 GB of RAM.

But still - you could set the OSD memory target lower than the default
(performance will suffer due to insufficient caching, but the OSD will
still work). And for the shallow fsck phase to complete successfully,
you could add swap during upgrades. In fact, I had to do so (with
zram) while upgrading a Luminous cluster to Nautilus a while back, and
that was on a beefy server with 128 GB of RAM and 16 OSDs, serving a
lot of CephFS. So I don't really see this as an obstacle: swap/zram is
needed only during upgrades, and anyway, I wouldn't connect a 14 TB
drive to a Raspberry Pi, because of its slow Ethernet and the long
time required to resync a new HDD.
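For reference, a rough sketch of that workaround. The `osd_memory_target` option and the `ceph config set` / `zramctl` commands are real, but the specific values (2 GiB target, 8 GiB zram) are illustrative assumptions, not recommendations:

```shell
# Lower the OSD memory target cluster-wide (default is 4 GiB).
# 2 GiB here is an example value for a low-RAM host.
ceph config set osd osd_memory_target 2147483648

# Temporary zram-backed swap for the fsck-heavy upgrade window
# (zramctl is part of util-linux).
modprobe zram
zramctl --algorithm zstd --size 8G /dev/zram0
mkswap /dev/zram0
swapon --priority 100 /dev/zram0

# After the upgrade completes, tear it down again:
swapoff /dev/zram0
zramctl --reset /dev/zram0
```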

So, a 4 GB board could still be made to work as a cheap and bad OSD,
and even survive upgrades, but I wouldn't do it at home. Simply
because Ceph never made sense for small clusters, no matter what the
hardware is - for such use cases, you could always do software RAID
over iSCSI or over AoE, with less overhead.
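To illustrate that alternative, a minimal sketch of mirroring two iSCSI-attached disks with mdadm. The portal addresses and device names are placeholders, and it assumes open-iscsi and mdadm are installed:

```shell
# Discover and log in to the remote targets (addresses are made up).
iscsiadm --mode discovery --type sendtargets --portal 192.168.1.10
iscsiadm --mode discovery --type sendtargets --portal 192.168.1.11
iscsiadm --mode node --login

# Mirror the two remote disks; /dev/sdb and /dev/sdc stand in for
# whatever the iSCSI sessions appear as on this initiator.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/storage
```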

-- 
Alexander E. Patrakov
CV: http://u.pc.cd/wT8otalK
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



