Re: Can we deprecate FileStore in Quincy?

If you want to go cheap and somewhat questionable, there are some
ASRock mainboards with a soldered-in Atom CPU that support up to 32GB
of memory (officially only 8GB, but the controller handles more) and
have two SATA ports on-board plus a free 16x PCIe slot.  Those boards
usually cost less than 90€; not as cheap as a Raspberry Pi, but you can
build considerably larger nodes thanks to the faster CPU and greater
memory.  You could even add additional network ports via USB 3.  That
said, I would not use something like this for anything more serious
than a proof-of-concept system or home NAS.

Greetings


On 6/28/21 2:05 AM, Stuart Longland wrote:
> On Sat, 26 Jun 2021 08:01:46 -0500
> Mark Nelson <mnelson@xxxxxxxxxx> wrote:

>> FWIW, you can lower the osd_memory_target and tweak a couple of other
>> settings to reduce bluestore memory usage.  A 2GB target is about the
>> lowest you can reasonably set it to (and you'll likely hurt
>> performance due to cache misses), but saying you need a host with
>> 8+GB of RAM is probably a little excessive.  There's also a good
>> chance that filestore memory usage isn't as consistently low as you
>> think it is.  Yes, you can avoid the in-memory caches that bluestore
>> has, since filestore relies more heavily on the page cache, but
>> things like the osdmap, pglog, and various other buffers are still
>> going to use memory in filestore just like in bluestore.  You might
>> find yourself working fine 99% of the time and then going OOM during
>> recovery or something if you try to deploy filestore on a low-memory
>> SBC.
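
For concreteness, a minimal sketch of what that tuning might look like
in ceph.conf; the option names are stock, but the values are only
illustrative (2147483648 bytes = 2 GiB):

    [osd]
    # Ask the autotuner to keep each OSD near a 2 GiB footprint, the
    # practical floor described above; the value is in bytes.
    osd_memory_target = 2147483648
    # Allow the caches to shrink further under memory pressure.
    osd_memory_cache_min = 134217728

The same target can also be set at runtime via the monitor-managed
config store:

    ceph config set osd osd_memory_target 2147483648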
> To be honest, the smallest of my nodes has 8GB RAM presently, but I'd
> like to scale out, and most of my cost-effective options for
> scale-out have 4GB or less.  The Raspberry Pi 4 is the only one I've
> seen that's available to mere mortals like myself and exceeds this
> limit, and even then it's far from an ideal system.

> PC Engines APU3s look good as nodes, as they have SATA on-board,
> multiple Ethernet interfaces that can be bonded for faster
> networking, and they use CoreBoot managed over a serial port, but
> needing 8GB is a killer.
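
As an aside, bonding those ports is straightforward with iproute2; a
minimal sketch, assuming hypothetical interface names enp1s0/enp2s0
and a switch that speaks LACP:

    # Create an 802.3ad (LACP) bond and enslave both NICs.
    ip link add bond0 type bond mode 802.3ad
    ip link set enp1s0 down
    ip link set enp1s0 master bond0
    ip link set enp2s0 down
    ip link set enp2s0 master bond0
    ip link set bond0 up
    # Example address from the documentation range; use your own.
    ip addr add 192.0.2.10/24 dev bond0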

> I presently use FileStore on the HDD-based OSDs because I've found it
> gives me the best performance there.  The SSD-based OSDs are running
> BlueStore, since they seem to be able to keep up better.  I've never
> tried FileStore on less than 8GB, so you could well be right, but my
> experience with BlueStore on those HDD OSDs was abysmal.
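
If anyone wants to double-check which backend a given OSD is actually
running, something like this should work on recent releases (osd.0 is
just a placeholder id):

    # Reports "filestore" or "bluestore" for the given OSD.
    ceph osd metadata 0 | grep osd_objectstore
    # On Mimic and later, show the memory target the daemon is using.
    ceph config show osd.0 osd_memory_target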

>> Having said all of that, you can get 16GB of ECC DDR4 RAM new in the
>> US for around $70-100USD.  A quick search on Google makes it look
>> like you'll pay about twice that new in AU, but there's plenty of
>> stuff on the used market for ~$60-100AUD (like $50-70USD).  I don't
>> think that's super unreasonable, and frankly it would be far more
>> reliable than running on SBCs with non-ECC memory.  I would love to
>> see SBCs become more prolific, but memory has always been a big
>> constraint (especially before the 8GB devices came out), and not
>> only for Ceph.
> Yep, but remember that when you buy an SBC, the RAM comes soldered to
> the board in most cases.  If you want removable RAM, you're looking
> at a small-form-factor server board of some kind, like the Supermicro
> A1SAi boards that have been my storage nodes since 2016.

> Regards,
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



