Re: New Ceph cluster design

Hi,
    Same experience here: we had OutOfMemory kills of the OSD processes on nodes with ten 8 TB disks. After an upgrade to 128 GB of RAM these problems disappeared.

The memory recommendations are not overestimated.
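
As a rough back-of-the-envelope check of the per-OSD memory budget on our nodes after the upgrade (a sketch only; the OS reservation and the per-daemon footprint below are assumptions, not measured values):

# Rough per-OSD memory budget after the upgrade -- a sketch, not a sizing tool.
NODE_RAM_GB = 128        # RAM per node after the upgrade
OS_OVERHEAD_GB = 8       # reservation for the OS and other daemons (assumption)
NUM_OSDS = 10            # one OSD per 8 TB disk
TYPICAL_OSD_GB = 4       # assumed steady-state OSD footprint; recovery needs more

budget_per_osd = (NODE_RAM_GB - OS_OVERHEAD_GB) / NUM_OSDS
print(f"{budget_per_osd:.1f} GB per OSD vs >= {TYPICAL_OSD_GB} GB typically needed")
# 12.0 GB per OSD leaves headroom for recovery and backfill; with less RAM the
# budget shrinks quickly, which matches the OOM kills we saw before the upgrade.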

Regards,
Tristan


On 09/03/2018 11:31, Eino Tuominen wrote:
> On 09/03/2018 12.16, Ján Senko wrote:
>
>> I am planning a new Ceph deployment and I have a few questions that I could not find good answers to yet.
>>
>> Our nodes will be Xeon-D machines with 12 HDDs and 64 GB of RAM each.
>> Our target is to use 10 TB drives, for 120 TB of capacity per node.
>
> We ran into problems with 20 x 6 TB drives and 64 GB of memory, which we then increased to 128 GB. In my experience the recommendation of 1 GB of memory per 1 TB of disk space has to be taken seriously.
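
For reference, the same 1 GB of RAM per 1 TB of raw capacity rule applied to the planned nodes above (a quick sketch; the small OS reservation is an assumption, the drive counts and sizes are from the thread):

# 1 GB of RAM per 1 TB of raw OSD capacity, plus a rough OS reservation.
def recommended_ram_gb(num_disks, disk_tb, os_overhead_gb=8):
    return num_disks * disk_tb + os_overhead_gb

print(recommended_ram_gb(12, 10))  # 12 x 10 TB -> 128 GB, far above the planned 64 GB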

