cache tiering or bluestore partitions

We are planning our Ceph architecture and I have a question:

How should the NVMe drives be used when our spinning storage devices
use BlueStore?

1. Block WAL and DB partitions (sketched after this list)
(https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/)
2. Cache tier
(https://docs.ceph.com/docs/nautilus/rados/operations/cache-tiering/)
3. Something else?
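
To make option 1 concrete, this is roughly what I have in mind per
OSD; it is only a sketch, the device and partition names are
placeholders, and it assumes the 450 GB NVMe is split into three
partitions of roughly 150 GB each:

    # One OSD per HDD, with its RocksDB/WAL on an NVMe partition.
    # /dev/sda and /dev/nvme0n1p1 are example names only.
    ceph-volume lvm create --bluestore \
        --data /dev/sda \
        --block.db /dev/nvme0n1p1

    # Without a separate --block.wal, the WAL lives on the DB device,
    # so a dedicated WAL partition should not be needed.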

Hardware: each node has:
3x 8 TB HDD
1x 450 GB NVMe drive
192 GB RAM
2x Xeon CPUs (24 cores total)

I plan to have three OSD daemons running on each node, one per HDD.
There are 95 nodes in total, all with the same hardware.
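
For planning purposes, the per-OSD split of that hardware works out
roughly as follows (just arithmetic from the numbers above, assuming
the DB/WAL partition layout from option 1):

    NVMe: 450 GB / 3 OSDs   -> ~150 GB of DB+WAL space per OSD
    RAM:  192 GB / 3 OSDs   -> ~64 GB per OSD, well above the 4 GiB
                               default osd_memory_target
    CPU:  24 cores / 3 OSDs -> 8 cores per OSD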

Use Case:

The plan is to create a CephFS filesystem and use it to store people's
home directories and data. I anticipate more read operations than
writes.

Regarding cache tiering: the online documentation says cache tiering
will often degrade performance, but when I read various threads on
this ML there do seem to be people using it with success. I gather
that it is heavily dependent on one's use case. In 2019, are there any
updated recommendations as to whether to use cache tiering?
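
For completeness, my understanding is that option 2 would involve
something like the commands below (a sketch only; the pool names are
made up, and the cache pool would need a CRUSH rule that places it on
the NVMe devices):

    # Attach an NVMe-backed pool as a writeback cache in front of the
    # CephFS data pool (pool names are examples).
    ceph osd tier add cephfs_data cephfs_data_cache
    ceph osd tier cache-mode cephfs_data_cache writeback
    ceph osd tier set-overlay cephfs_data cephfs_data_cache

    # The cache pool also needs hit-set tracking and a size limit, e.g.:
    ceph osd pool set cephfs_data_cache hit_set_type bloom
    ceph osd pool set cephfs_data_cache target_max_bytes 100000000000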

If people have a third suggestion, I would be interested in hearing
it. Thanks in advance.

Sincerely,
Shawn Kwang
