30 GB is one of the sweet spots during normal operation. But during compaction, Ceph writes the new data before removing the old, hence the 60 GB.
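For what it's worth, here is a rough sketch of where those numbers tend to come from, assuming Ceph's default RocksDB level settings (a 256 MB level base and a 10x multiplier). Those defaults and the level count are assumptions on my part, not values read from a live cluster:

# Why ~30 GB is a sweet spot, and why compaction can briefly need ~60 GB.
# Assumed RocksDB defaults: max_bytes_for_level_base = 256 MB, multiplier = 10.
base_mb = 256        # assumed L1 target size in MB
multiplier = 10      # assumed size ratio between levels
levels = 3           # L1..L3, the levels a ~30 GB DB partition can hold fully

total_mb = sum(base_mb * multiplier ** i for i in range(levels))
print(f"DB space to hold L1-L3 fully: ~{total_mb / 1024:.1f} GB")   # ~28 GB, i.e. the ~30 GB sweet spot

# Compaction writes the merged SSTs before deleting the old ones,
# so peak usage can be roughly double the steady-state size:
print(f"Peak during compaction: ~{2 * total_mb / 1024:.1f} GB")     # roughly the 60 GB figure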
Hello all.
Sorry for the beginner questions...
I am in the process of setting up a small (3 nodes, 288 TB) Ceph cluster to store some research data. It is expected that this cluster will grow significantly in the next year, possibly to multiple petabytes and tens of nodes. At this time I expect a relatively small number of clients, with only one or two actively writing collected data, albeit at a high volume per day.
Currently I'm deploying on Debian 9 via ceph-ansible.
Before I put this cluster into production I have a couple questions based on my experience to date:
Luminous, Mimic, or Nautilus? I need stability for this deployment, so I am sticking with Debian 9 since Debian 10 is fairly new, and I have been hesitant to go with Nautilus. Yet Mimic seems to have had a hard road on Debian but for the efforts at Croit.
- Statements on the Releases page are now making more sense to me, but I would like to confirm that Nautilus is the right choice at this time?
Bluestore DB size: My nodes currently have 8 x 12TB drives (plus 4 empty bays) and a PCIe NVMe drive. If I understand the suggested calculation correctly, the DB size for a 12 TB Bluestore OSD would be 480GB. If my NVMe isn't big enough to provide this size, should I skip provisioning the DBs on the NVMe, or should I give each OSD 1/12th of what I have available? Also, should I try to shift budget a bit to get more NVMe as soon as I can, and redo the OSDs when sufficient NVMe is available?
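To make the arithmetic concrete, here is a small sketch of the two sizings I am comparing. The 4% figure is the rule-of-thumb from the docs; the NVMe capacity below is a made-up placeholder rather than my actual card:

# Hedged sketch: ~4% guideline per OSD vs. simply splitting the available NVMe.
# The NVMe size and the 12-OSD count (8 drives now + 4 empty bays) are assumptions.
osd_size_gb = 12 * 1000          # 12 TB data drive
guideline_pct = 0.04             # ~4% rule of thumb
print(f"Guideline DB size: {osd_size_gb * guideline_pct:.0f} GB per OSD")   # 480 GB

nvme_capacity_gb = 1600          # hypothetical NVMe size, not my real drive
osds_per_node = 12               # a fully populated node
per_osd_db_gb = nvme_capacity_gb / osds_per_node
print(f"Splitting {nvme_capacity_gb} GB NVMe across {osds_per_node} OSDs: "
      f"~{per_osd_db_gb:.0f} GB per OSD")                                   # ~133 GB each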
Thanks.
-Dave
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com