Re: Improving Performance with more OSD's?

Hi Udo,

Lindsay did this for performance reasons, so that the data is spread evenly
over the disks. I believe it has been accepted that the remaining 2TB on the
3TB disks will not be used.

Nick


-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Udo
Lembke
Sent: 05 January 2015 07:15
To: Lindsay Mathieson
Cc: ceph-users@xxxxxxxx
Subject: Re:  Improving Performance with more OSD's?

Hi Lindsay,

On 05.01.2015 06:52, Lindsay Mathieson wrote:
> ...
> So two OSD Nodes had:
> - Samsung 840 EVO SSD for Op. Sys.
> - Intel 530 SSD for Journals (10GB Per OSD)
> - 3TB WD Red
> - 1 TB WD Blue
> - 1 TB WD Blue
> - Each disk weighted at 1.0
> - Primary affinity of the WD Red (slow) set to 0
the weight should reflect the size of the filesystem. With a weight of 1 for
all disks, you will run into trouble as your cluster fills, because the
1TB disks will be full before the 3TB disks!

You should have something like 0.9 for the 1TB disks and 2.82 for the 3TB
disks, i.e. the size in TiB as reported by
"df -k | grep osd | awk '{print $2/(1024^3)}'".
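To make the conversion concrete, here is a minimal sketch of turning a
filesystem size from "df -k" (reported in 1 KiB blocks) into a CRUSH weight
in TiB. The osd.2 name and mount path are hypothetical examples, not taken
from Lindsay's setup:

```shell
# Convert a size in 1 KiB blocks (df -k column 2) to a CRUSH weight in TiB.
# 1 TiB = 1024^3 KiB.
kib_to_weight() {
  awk -v k="$1" 'BEGIN { printf "%.2f", k / (1024^3) }'
}

# Example usage (paths and OSD id are illustrative):
#   kib=$(df -k /var/lib/ceph/osd/ceph-2 | awk 'NR==2 {print $2}')
#   weight=$(kib_to_weight "$kib")
#   ceph osd crush reweight osd.2 "$weight"
```

A ~3 TB (3*10^12 byte) disk works out to roughly 2.73 TiB by this formula;
the exact value depends on the filesystem overhead, which is why reading it
from "df" on the mounted OSD is the safer approach.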

Udo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com