Thanks for the information.

-Sreenath

-------------------------

Date: Wed, 25 Mar 2015 04:11:11 +0100
From: Francois Lafont <flafdivers@xxxxxxx>
To: ceph-users <ceph-users@xxxxxxxx>
Subject: Re:  PG calculator queries
Message-ID: <5512274F.1000003@xxxxxxx>
Content-Type: text/plain; charset=utf-8

Hi,

Sreenath BH wrote :

> consider following values for a pool:
>
> Size = 3
> OSDs = 400
> %Data = 100
> Target PGs per OSD = 200 (This is default)
>
> The PG calculator generates number of PGs for this pool as : 32768.
>
> Questions:
>
> 1. The Ceph documentation recommends around 100 PGs/OSD, whereas the
> calculator takes 200 as default value. Are there any changes in the
> recommended value of PGs/OSD?

Not really, I think. At http://ceph.com/pgcalc/, we can read:

    Target PGs per OSD
    This value should be populated based on the following guidance:
    - 100 If the cluster OSD count is not expected to increase in
      the foreseeable future.
    - 200 If the cluster OSD count is expected to increase (up to
      double the size) in the foreseeable future.
    - 300 If the cluster OSD count is expected to increase between
      2x and 3x in the foreseeable future.

So it seems cautious to me to recommend 100 in the official documentation,
because you can increase pg_num but it's impossible to decrease it.
So, if I had to recommend just one value, it would be 100.

> 2. Under "notes" it says:
> "Total PG Count" below table will be the count of Primary PG copies.
> However, when calculating total PGs per OSD average, you must include
> all copies.
>
> However, the number of 200 PGs/OSD already seems to include the
> primary as well as replica PGs in an OSD. Is the note a typo or
> am I missing something?

To my mind, on the site, the "Total PG Count" doesn't include all copies.
So, for me, there is no typo. Here are two basic examples from
http://ceph.com/pgcalc/
with just *one* pool.

1.
Pool-Name  Size  OSD#  %Data    Target-PGs-per-OSD  Suggested-PG-count
rbd        2     10    100.00%  100                 512

2.
Pool-Name  Size  OSD#  %Data    Target-PGs-per-OSD  Suggested-PG-count
rbd        2     10    100.00%  200                 1024

In the first example, I have:   512/10 =  51.2  but (Size x  512)/10 = 102.4
In the second example, I have: 1024/10 = 102.4  but (Size x 1024)/10 = 204.8
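The arithmetic above can be sketched in Python. This is a reconstruction of
the formula the calculator *appears* to use, inferred from these examples
and the 32768 result for the original pool (Size=3, 400 OSDs, target 200);
it is not taken from the site's actual source:

```python
import math

def suggested_pg_count(osds, target_pgs_per_osd, size, data_percent=100.0):
    """Approximate the pgcalc suggestion (assumption, not the site's code)."""
    # Spread the target PGs over all OSDs, divide by the replica count
    # (each PG occupies `size` OSDs), and weight by the pool's data share.
    raw = osds * target_pgs_per_osd * (data_percent / 100.0) / size
    # The calculator rounds up to the next power of two.
    return 2 ** math.ceil(math.log2(raw))

# Example 1: 10 * 100 / 2 = 500  -> 512
print(suggested_pg_count(10, 100, 2))        # 512
# Example 2: 10 * 200 / 2 = 1000 -> 1024
print(suggested_pg_count(10, 200, 2))        # 1024
# Original question: 400 * 200 / 3 ~ 26667 -> 32768
print(suggested_pg_count(400, 200, 3))       # 32768

# Per-OSD averages, primary copies only vs. all copies:
pgs = suggested_pg_count(10, 100, 2)
print(pgs / 10)          # 51.2  (primaries only)
print(2 * pgs / 10)      # 102.4 (all copies)
```

So the "Target PGs per OSD" you enter is only reached once all copies are
counted, which matches the note on the site.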

HTH.

--
François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
