If you're at all concerned with performance and the budget is set, drop a storage node and replace some OSDs with SSDs in the other nodes. Our storage nodes are 32x 4TB plus 4x SSDs, with 192GB of memory; 128GB wasn't enough. If you try to do this setup without SSD journals, you are going to scream at yourself in the future for not doing it.
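(Rough math on the memory: the usual rule of thumb of ~1GB of RAM per 1TB of OSD storage already puts these nodes at 32 x 4TB = 128GB, with nothing left over for the OS, page cache, or recovery spikes, which matches our experience that 128GB wasn't enough.)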
We have 4x 200GB SSDs but only carve 8x 10GB journal partitions on each one. When deciding how many OSD journals to put on an SSD, pay attention to the SSD's speed, not its size: every write to an OSD is written to the journal first and only then to the data drive.
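As a rough sizing sketch (the throughput numbers here are illustrative, not measured): an SSD that sustains ~400 MB/s of sequential writes in front of spinners that each absorb ~50 MB/s saturates at about 400 / 50 = 8 journals, which is the kind of arithmetic behind our 8-per-SSD layout. The journal size itself is just a ceph.conf setting (value is in MB), e.g.:

    [osd]
    # 10 GB journal partitions; the docs' guideline is
    # 2 * (expected throughput * filestore max sync interval)
    osd journal size = 10240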
From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Sergio A. de Carvalho Jr. [scarvalhojr@xxxxxxxxx]
Sent: Thursday, April 07, 2016 12:18 PM
To: Alan Johnson
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Ceph performance expectations

Thanks, Alan.
Unfortunately, we currently don't have much flexibility in terms of the hardware we can get, so adding SSDs might not be possible in the near future. What is the best practice here: allocating, for each OSD, one disk just for data and one disk just for the journal? Since the journals are rather small (in our setup a 5GB partition is created on every disk), wouldn't this be a bit of a waste of disk space?
I was wondering if it would make sense to give each OSD one full 4TB disk and use one of the 900GB disks for all journals (12 journals in this case). Would that cause even more contention, since different OSDs would then be trying to write their journals to the same disk?
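To make it concrete, the setup I'm describing would look something like this with ceph-disk (the device names are just examples: /dev/sdb one of the 4TB data disks, /dev/sdm the shared 900GB journal disk):

    # ceph-disk carves a fresh journal partition out of /dev/sdm
    # for each OSD it prepares
    ceph-disk prepare /dev/sdb /dev/sdm

Repeating that for all 12 data disks would put all 12 journals on /dev/sdm, so every OSD's writes would funnel through that single spindle.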
On Thu, Apr 7, 2016 at 4:13 PM, Alan Johnson <alanj@xxxxxxxxxxxxxx> wrote: