Re: Ideal Bluestore setup

Hi Ean,

I don't have any experience with fewer than 8 drives per OSD node, and
the setup depends heavily on what you want to use it for.  Assuming a
small proof of concept without much of a performance requirement (due
to the low spindle count), I would do this:

On Mon, Jan 22, 2018 at 1:28 PM, Ean Price <ean@xxxxxxxxxxxxxx> wrote:
> Hi folks,
>
> I’m not sure of the ideal setup for BlueStore given the hardware I have to work with, so I figured I would ask the collective wisdom of the Ceph community. It is a small deployment, so the hardware is not all that impressive, but I’d still like some feedback on what would be the preferred and most maintainable setup.
>
> We have 5 ceph OSD hosts with the following setup:
>
> 16 GB RAM
> 1 PCI-E NVRAM 128GB
> 1 SSD 250 GB
> 2 HDD 1 TB each
>
> I was thinking to put:
>
> OS on NVRAM with 2x20 GB partitions for bluestore’s WAL and rocksdb

I would put the OS on the SSD and not colocate it with the WAL/DB.  I
would put the WAL/DB on the NVMe device, since it is the fastest.
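
For example, here is a minimal sketch of that layout with ceph-volume
(the device names /dev/nvme0n1, /dev/sdb and /dev/sdc are assumptions;
adjust them to your hardware):

  # Carve two 20 GB logical volumes for DB/WAL out of the NVMe device
  vgcreate nvme-vg /dev/nvme0n1
  lvcreate -L 20G -n db-0 nvme-vg
  lvcreate -L 20G -n db-1 nvme-vg

  # One OSD per HDD; when only --block.db is given, BlueStore places
  # the WAL on the same (NVMe) device automatically
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db nvme-vg/db-0
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db nvme-vg/db-1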

> And either use bcache with the SSD to cache the 2x HDDs, or possibly use Ceph’s built-in cache tiering.

Ceph cache tiering is likely beyond the scope of this setup, and it
requires a very clear understanding of the workload.  I would not use
it.

I have no experience with bcache, but again it seems to be overkill
for a small setup like this.  Simple = stable.

>
> My questions are:
>
> 1) Is a 20 GB logical volume adequate for the WAL and DB with a 1 TB HDD, or should it be larger?

I believe so, yes.  If the DB spills over, the excess data will simply
go onto the HDDs (at HDD speed).
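
If you want to keep an eye on it, the bluefs counters on the OSD admin
socket show DB usage vs. spillover onto the slow device (a sketch;
osd.0 is a placeholder OSD id):

  # slow_used_bytes > 0 means RocksDB data has spilled onto the HDD
  ceph daemon osd.0 perf dump bluefs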

>
> 2) Or should I put the RocksDB on the SSD and just leave the WAL on the NVRAM device?

You are likely better off with both the WAL and DB on the NVRAM.
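
If you did want to split them anyway, ceph-volume lets you place them
separately (a sketch; the volume group/LV names are assumptions):

  # DB on the SSD, WAL on the NVMe: explicit separate placement
  ceph-volume lvm create --bluestore --data /dev/sdc \
      --block.db ssd-vg/db-0 --block.wal nvme-vg/wal-0

With the DB already on NVRAM, though, there is nothing to gain from
this split.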

>
> 3) Lastly, what are the downsides of bcache vs. Ceph’s cache tiering? I see both are used in production, so I’m not sure which is the better choice for us.
>
> Performance is, of course, important but maintainability and stability are definitely more important.

I would avoid both bcache and cache tiering to simplify the
configuration, and I would seriously consider larger nodes and more
OSD drives if possible.

HTH,
--
Alex Gorbachev
Storcium

>
> Thanks in advance for your advice!
>
> Best,
> Ean
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com