Re: 800TB - Ceph Physical Architecture Proposal

Hi,

*snipsnap*

On 09.04.2016 08:11, Christian Balzer wrote:
>> 3 MDS nodes:
>> -SuperMicro 1028TP-DTR (one node from scale-out chassis)
>> --2x E5-2630v4
>> --128GB RAM
>> --2x 120GB SSD (RAID 1 for OS)
> Not using CephFS, but if the MDS are like all the other Ceph bits
> (MONs in particular) they are likely to do happy writes to leveldbs or
> the likes, do verify that.
> If that's the case, fast and durable SSDs will be needed.
The MDS does not keep any local state (similar to the RADOS gateway). All information, including the metadata itself and the MDS journal, is stored in the metadata pool. If you need a fast CephFS, this pool should definitely be placed on SSD-backed storage.
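For illustration: assuming a CRUSH rule selecting only SSD-backed OSDs already exists with ruleset id 1 (the pool name, PG count and ruleset id here are all assumptions for the sketch), the metadata pool could be pinned to it like this:

  # hypothetical pool name and PG count; ruleset 1 = SSD-only rule
  ceph osd pool create cephfs_metadata 128
  ceph osd pool set cephfs_metadata crush_ruleset 1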

The first data pool is also special, since it contains part of the metadata (the backtrace information for each file). The ideal setup IMHO is one SSD pool for the metadata, another (small) SSD pool for the first data pool, and one or more additional data pools. You can change the association between files/directories and pools after setting up CephFS. A simple setup would have one additional data pool (replicated, or erasure-coded behind a cache tier on a Ceph release whose cache tiering is not troublesome). After initialization and the first mount you would change the layout of the CephFS root directory to use the second data pool, as sketched below. All files and directories created _afterwards_ will be placed on the second pool.
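As a sketch, that layout change could look like this (pool name, PG count and mount point are assumptions; the pool has to be added to the filesystem before a layout may reference it):

  # hypothetical second data pool
  ceph osd pool create cephfs_data2 512
  ceph fs add_data_pool cephfs cephfs_data2
  # new files/dirs under / will be placed on cephfs_data2
  setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs

Files written before the setfattr keep their old layout, which is why you would do this right after the first mount.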

MDS instances are memory hogs. The memory requirements rise with the number of files/directories, the number of clients, and the number of files accessed in parallel. But 128 GB should be more than enough for most use cases. We have the MDS instances running in parallel with OSDs on our hosts (128 GB RAM, 12 OSDs), which works fine for us.
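The main knob here is the MDS cache size. A minimal ceph.conf sketch, assuming you want to cache 4 million inodes (the value is an assumption to size against your RAM; the option counts inodes and defaults to 100000):

  [mds]
  # inodes to keep in cache; raise with available RAM
  mds cache size = 4000000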

Keep in mind that running multiple servers is recommended in an active/standby configuration only. There is also support for an active/active setup, but that is marked as experimental at the moment.
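For faster failover the standby can additionally tail the journal of the active MDS (standby-replay). A minimal ceph.conf sketch, assuming two daemons named a and b:

  [mds.b]
  # follow rank 0 and replay its journal to allow a quick takeover
  mds standby replay = true
  mds standby for rank = 0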

Regards,
Burkhard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


