Re: MDS configuration

Based on empirical measurements, starting with a set of (real)
filesets and focusing mainly on small files, I found a rough
relationship. All filesets had the same size distribution, i.e. ~80%
of files under 4MB (mostly small), and all data at 1x replication.
For MDS RAM:

You need about:
0.6GB of RAM to store 0.03 million files (fileset volume: 1.2TB)
1.2GB to store 0.065 million files (fileset volume: 2.4TB)
1.8GB to store 0.13 million files (fileset volume: 4.8TB)

Fortunately, the ratio of files per GB increases with scale (about
0.07 million files per GB at the 0.13-million-file point), so
hopefully each GB will support 0.1 million files as more files are
stored. Extrapolating at that rate:
18GB = 1.8 million files,
180GB = 18 million,
1800GB = 180 million (about 6.64PB at this distribution).
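As a quick sketch of that ratio trend, here is the arithmetic on the three measured data points quoted above (the numbers are from the measurements; the variable names are mine):

```python
# Measured MDS memory footprint for filesets with ~80% of files
# under 4MB, 1x replication (figures quoted in the text above).
measurements = [
    # (RAM in GB, millions of files stored)
    (0.6, 0.03),
    (1.2, 0.065),
    (1.8, 0.13),
]

# Files-per-GB ratio for each data point; it increases with scale,
# from ~0.05M/GB toward ~0.07M/GB at the largest fileset.
for ram_gb, mfiles in measurements:
    print(f"{ram_gb:>4} GB -> {mfiles} M files "
          f"({mfiles / ram_gb:.3f} M files per GB)")
```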

So to support that many files, we would need 100 MDS nodes with 18GB
each purely for cmds; including memory for the OS etc., figure
20-24GB per node.
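The node-count estimate above works out as follows (a back-of-envelope sketch; the 0.1 million files per GB figure is the hoped-for steady-state ratio, not a measured one):

```python
import math

# Assumed steady-state capacity: 0.1 million files per GB of MDS RAM
# (optimistic extrapolation from the measured ~0.07M/GB).
files_per_gb = 0.1e6
target_files = 180e6      # 180 million files, roughly 6.64PB here
ram_per_node_gb = 18      # RAM dedicated to cmds on each MDS node

total_ram_gb = target_files / files_per_gb          # 1800 GB total
nodes = math.ceil(total_ram_gb / ram_per_node_gb)
print(nodes)  # -> 100
```

With 2-6GB more per node for the OS and everything else, that gives the 20-24GB-per-node figure.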

Cheers

On Sat, Jun 11, 2011 at 06:47, Fyodor Ustinov <ufm@xxxxxx> wrote:
>
> Hi!
>
> Which configuration would you recommend for a cluster with 50-80 million files?
>
> WBR,
>    Fyodor.
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

