Re: MDS configuration

On Sun, 12 Jun 2011, djlee064 wrote:
> ah, the previous message was an example of 'during the operation', i.e.
> random-reading the entire fileset; sorry about that.
> cmds uses only a tiny amount, ~100MB, once the files are stored.
> each cosd uses 150-200MB during operation (both write and read).
> 
> still, when considering performance, the previous message stands:
> performance decreases if the minimum amount of RAM isn't there.

Right.  The ~1k/file ratio Greg mentioned is also without any memory 
optimization work.  There is a huge amount of inode and dentry state 
associated with modifications that need not consume memory all of the 
time.  Fixing it up has been low on the priority list.
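
As a rough back-of-the-envelope (illustrative only, assuming the
unoptimized ~1k/file figure holds), the original 50-80 million file
question works out to:

    # sketch: MDS cache needed at ~1 KB of inode/dentry state per file
    BYTES_PER_FILE = 1024  # the unoptimized ~1k/file ratio mentioned above

    for files in (50e6, 80e6):
        gib = files * BYTES_PER_FILE / 2**30
        print(f"{files / 1e6:.0f}M files -> ~{gib:.0f} GiB of MDS cache")

    # 50M files -> ~48 GiB, 80M files -> ~76 GiB, before any of the
    # memory optimization work described above.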

sage


> 
> cheers
> 
> On Sun, Jun 12, 2011 at 20:41, djlee064 <djlee064@xxxxxxxxx> wrote:
> > Based on empirical measurements, starting with a set of (real) files
> > and focusing mainly on small files, I found a rough relationship.
> > All filesets had the same distribution, i.e. ~80% of files <4MB,
> > mostly small, all at 1x replication.
> > For MDS RAM:
> >
> > You need about:
> > 0.6GB of RAM to store 0.03 million files (fileset volume 1.2TB)
> > 1.2GB to store 0.065 million files (fileset volume 2.4TB)
> > 1.8GB to store 0.13 million files (fileset volume 4.8TB)
> >
> > The ratio of files per GB fortunately increases (i.e. 0.07 million
> > per GB at the 0.13 million-file point), so hopefully each GB will
> > support 0.1 million files as more files are stored. Then:
> > 18GB = 1.8 million files
> > 180GB = 18 million
> > 1800GB = 180 million (a fileset of about 6.64PB)
> >
> > So to support that amount, we'd need 100 MDS nodes with 18GB purely
> > for cmds; including memory for the OS etc., maybe 20-24GB per node.
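> >
> > (as a quick sanity check, the same extrapolation in python; the 0.1
> > million-files-per-GB ratio is the hoped-for value above, not a
> > measured one:)
> >
> >     # measured: ~0.072 million files per GB at the 0.13M-file point
> >     measured_ratio = 0.13 / 1.8
> >     # hoped-for ratio at larger scale (assumption, see above)
> >     assumed_ratio = 0.1
> >
> >     def cmds_ram_gb(million_files, ratio=assumed_ratio):
> >         """GB of RAM needed by cmds alone for the given file count."""
> >         return million_files / ratio
> >
> >     for m in (1.8, 18, 180):
> >         print(f"{m:>5}M files -> {cmds_ram_gb(m):.0f} GB cmds RAM")
> >
> >     # 180M files -> 1800 GB of cmds RAM; at 18 GB per node that is
> >     # 1800 / 18 = 100 MDS nodes (plus OS headroom, ~20-24 GB each).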
> >
> > Cheers
> >
> > On Sat, Jun 11, 2011 at 06:47, Fyodor Ustinov <ufm@xxxxxx> wrote:
> >>
> >> Hi!
> >>
> >> Which configuration would you recommend for a cluster with 50-80 million files?
> >>
> >> WBR,
> >>    Fyodor.
> >
