Re: The relationship between the number of cosd and memory cache

On Fri, May 13, 2011 at 1:00 AM, Sylar Shen <kimulaaa@xxxxxxxxx> wrote:
> Hi developers,
> I have a question about the relationship between the number of cosd
> processes and the memory cache on a server.
> Here is my environment:
> I have 15 servers as OSDs, each with an 8-core CPU, 16GB RAM, and
> 12 x 1TB disks.
> I also have 1 MDS and 3 MONs (4 servers in total), with the same
> specifications as the OSDs.
>
> I know that each cosd process consumes some memory, and so does
> data transmission.
> In my environment, I run 10 cosd processes on each server.
> My assumption is that, in my scenario, the buffer available to each
> cosd would be (Total RAM - N*cosd usage)/N (where N is the number of
> cosd processes).
> That is, if I assume each cosd uses 200MB RAM, then each cosd would
> have about (16GB - 10*200MB)/10 = 1.4GB of RAM to use as buffer.
> I don't know if my assumption is right or not. Please correct me if it's wrong.
> "If" my theory is right, would it be better to run fewer cosd
> processes?
> What I mean is: with fewer cosd processes, each one would have more
> memory to use as buffer, so wouldn't that be better?
>
> In other words, given my environment with 12 disks per server, how
> many cosd processes should I run to make the best use of memory?
> I know that more RAM means better performance, but is there some
> formula for calculating the right ratio of cosd processes to RAM?
> Thanks in advance!
Well, as you correctly note, each cosd takes up some RAM to run, and
whatever's left over can be used by the page cache (cosd doesn't
implement any caching of its own). In that sense, you only want one
daemon per machine, because that maximizes your ability to cache.
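To make the arithmetic from the question concrete, here is a rough
back-of-the-envelope sketch. The 200MB-per-cosd figure is the
questioner's assumption, not a measured value, and in reality the page
cache is shared machine-wide rather than partitioned per daemon; the
even split below is just the simplification used in the question.

```python
# Back-of-the-envelope memory math from the thread above.
# Assumptions (from the question, not measured): 16GB RAM per server,
# ~200MB resident memory per cosd process.

GB = 1024  # work in MB for simplicity
total_ram_mb = 16 * GB   # 16GB per server
cosd_rss_mb = 200        # assumed RAM used by each cosd process

def page_cache_per_cosd(n_daemons):
    """RAM left over for the page cache, split evenly per daemon."""
    free_mb = total_ram_mb - n_daemons * cosd_rss_mb
    return free_mb / n_daemons

# With 10 daemons per server, as in the question:
print(page_cache_per_cosd(10))  # 1438.4 MB, i.e. roughly the 1.4GB above
# With a single daemon, nearly all the RAM backs one daemon's cache:
print(page_cache_per_cosd(1))   # 16184.0 MB
```

This is why fewer daemons per machine means more cache per daemon; the
tradeoff discussed below is what you give up in failure handling.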

The reason you might want to run more than one daemon per machine is
to handle failure conditions better. If you run one daemon per disk,
then recovery becomes much faster when a daemon fails, and you don't
need to RAID your disks (losing space) or unify them with btrfs
(risking the entire node if one disk fails).
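For the one-daemon-per-disk layout, the ceph.conf on each server would
carry one [osd.N] section per disk. The fragment below is purely
illustrative; the hostnames, OSD IDs, and mount paths are invented for
the example and are not from the original thread:

```ini
; Illustrative fragment only -- hostnames and paths are invented.
; One cosd per disk means one [osd.N] section per disk on the server.
[osd.0]
        host = osd-server-01
        osd data = /mnt/disk0/osd.0
        osd journal = /mnt/disk0/osd.0/journal

[osd.1]
        host = osd-server-01
        osd data = /mnt/disk1/osd.1
        osd journal = /mnt/disk1/osd.1/journal

; ... and so on, up to osd.11 on a 12-disk server.
```

With this layout, losing one disk takes down only that one cosd, and
the cluster re-replicates just that disk's data.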

It's a tradeoff and you should evaluate it for yourself, basically --
unfortunately there's not enough experience yet to give you a rule of
thumb about which is "better".

I will say that you might want to turn your monitor machines into
OSD+mon machines -- the monitors don't use many resources and that's a
lot of memory going to waste! :)
-Greg
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

