Re: journal size suggestions

On Wed, Jul 10, 2013 at 3:28 AM, Gandalf Corvotempesta
<gandalf.corvotempesta@xxxxxxxxx> wrote:
> Thank you for the response.
> You are talking about median expected writes, but should I consider the
> single disk write speed or the network speed? A single disk is 100 MB/s, so
> 100*30 = 3000 MB of journal for each OSD? Or should I consider the network
> speed, which is 1.25 GB/s?
> Why 30 seconds? The default flush frequency is 5 seconds.
> What do you mean by fine tuning the spinning storage media? Which tuning
> are you referring to?
>
Since the journal is created on a per-OSD basis, you should calculate its
size with only the disk speed in mind. As far as I remember, nobody was
referring directly to the flush interval when recommending tens of seconds
for this calculation, and neither am I - it is simply a safe road to have
some capacity beyond that value. By fine tuning I meant things such as
readahead values, the number of XFS allocation groups, the size of XFS
chunks, the hardware controller cache policy (if you have one), and so on.
To be honest, filesystem tuning does not affect performance much under
general workloads, but it can matter a great deal for specific things,
such as the numbers in a benchmark :) .
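The sizing rule can be sketched with the numbers from this thread (100 MB/s
per SATA disk, 12 OSDs sharing 2 journal SSDs; the 30-second window is the
safety margin discussed above, not a Ceph default):

```python
# Rough journal sizing sketch using the figures from this thread.
# Per-OSD journal = sustained disk write speed * safety window (seconds).

DISK_WRITE_MB_S = 100   # single SATA disk throughput, from the thread
SAFETY_WINDOW_S = 30    # "tens of seconds" margin, not the flush interval

journal_mb = DISK_WRITE_MB_S * SAFETY_WINDOW_S

# With 12 OSDs and 2 journal SSDs, each SSD hosts 6 journals,
# so each SSD needs at least this much space for journals alone:
OSDS, JOURNAL_SSDS = 12, 2
per_ssd_mb = journal_mb * (OSDS // JOURNAL_SSDS)

print(f"per-OSD journal: {journal_mb} MB")        # 3000 MB
print(f"per-SSD journal space: {per_ssd_mb} MB")  # 18000 MB
```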

> On 09 Jul 2013 at 23:45, "Andrey Korolyov" <andrey@xxxxxxx> wrote:
>
>> On Wed, Jul 10, 2013 at 1:16 AM, Gandalf Corvotempesta
>> <gandalf.corvotempesta@xxxxxxxxx> wrote:
>> > Hi,
>> > I'm planning a new cluster on a 10GbE network.
>> > Each storage node will have a maximum of 12 SATA disks and 2 SSDs as
>> > journals.
>> >
>> > What do you suggest as the journal size for each OSD? Is 5 GB enough?
>> > Should I consider only the SATA write speed when calculating journal
>> > size, or also the network speed?
>>
>> Hello,
>>
>> As many recommendations have suggested before, you can set the journal
>> size proportional to the median (or peak, if expected) write rate
>> multiplied by, say, thirty seconds - that is the safe area, and you
>> should not suffer because of journal size if you follow this
>> calculation. Twelve SATA disks may in theory deliver enough throughput
>> to saturate a 10G network, but you will almost certainly run out of
>> IOPS long before that, and the OSD daemons do not work very close to
>> the physical limits when transferring data to/from disk, so fine tuning
>> of the spinning storage media is still the primary target to play with
>> in such a configuration.
>>
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@xxxxxxxxxxxxxx
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



