Re: Estimating OSD memory requirements (was Re: stuff for v0.56.4)


Hi Bryan,

On 03/11/2013 09:10 AM, Bryan K. Wright wrote:
> sage@xxxxxxxxxxx said:
>> On Thu, 7 Mar 2013, Bryan K. Wright wrote:
>>
>> sage@xxxxxxxxxxx said:
>>> - pg log trimming (probably a conservative subset) to avoid memory bloat 
>>
>> Anything that reduces the size of OSD processes would be appreciated.
>> You can probably do this with just
>>  log max recent = 1000
>> By default it's keeping 100k lines of logs in memory, which can eat a lot of
>> RAM (but is great when debugging issues).
> 
> 	Thanks for the tip about "log max recent".  I've made this 
> change, but it doesn't seem to significantly reduce the size of the 
> OSD processes.
> 
> 	In general, are there some rules of thumb for estimating the
> memory requirements for OSDs?  I see processes blow up to 8gb of 
> resident memory sometimes.  If I need to allow for that much memory
> per OSD process, I may have to just walk away from ceph.
> 
> 	Does the memory usage scale with the size of the disks?
> I've been trying to run 12 OSDs with 12 2TB disks on a single box.
> Would I be better off (memory-usage-wise) if I RAIDed the disks
> together and used a single OSD process?
> 
> 	Thanks for any advice.

You might also try tuning "osd client message size cap"; its
current default is 500 MiB.

During periods when your aggregate applied write load exceeds
your aggregate OSD write bandwidth (taking replication into
account), each OSD will buffer up to this much client data.

Since the cap only applies to incoming client messages, I
believe you need to multiply it by the number of replicas
you're using to estimate total memory use.
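A rough back-of-envelope sketch of that estimate (the helper name and
the 12-OSD/2-replica layout are illustrative assumptions drawn from this
thread, not measured values):

```python
# Worst-case client-message buffering per host, following the reasoning
# above: each OSD can buffer up to the message size cap, and replication
# multiplies the total in-flight data. Figures are illustrative only.

MIB = 1024 * 1024

def buffered_bytes_per_host(msg_size_cap_mib, replicas, osds_per_host):
    """Worst-case bytes of buffered client data on one host."""
    return msg_size_cap_mib * MIB * replicas * osds_per_host

# Default cap (500 MiB), 2x replication, 12 OSDs on one box:
print(buffered_bytes_per_host(500, 2, 12) // MIB, "MiB")   # 12000 MiB

# Cap tuned down to 60 MiB, same layout:
print(buffered_bytes_per_host(60, 2, 12) // MIB, "MiB")    # 1440 MiB
```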

FWIW, for sequential writes from lots of clients, I can
maintain full write bandwidth with "osd client message size
cap" tuned to 60 MiB.

-- Jim

> 
> 					Bryan
> 
> 

