Re: CephFS: number of PGs for metadata pool

On Wed, Dec 9, 2015 at 1:25 PM, Mykola Dvornik <mykola.dvornik@xxxxxxxxx> wrote:
> Hi Jan,
>
> Thanks for the reply. I see your point about replicas. However, my
> motivation was a bit different.
>
> Consider a given number of objects stored in the metadata pool. If I
> understood Ceph's data placement approach correctly, the number of objects
> per PG should decrease as the number of PGs per pool increases.
>
> So my concern is that in the catastrophic event of some PG(s) being lost, I
> will lose more objects if the number of PGs per pool is small. At the same
> time, I don't want too few objects per PG, so that things stay disk-IO
> bound rather than CPU bound.
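
A rough back-of-the-envelope sketch of the trade-off described above; the
object count of 1,000,000 is purely hypothetical:

    # Objects per PG shrink as pg_num grows, so a single lost PG
    # takes fewer objects with it. Assume ~1,000,000 metadata objects.
    $ echo $((1000000 / 64))     # pg_num=64  -> 15625 objects per PG
    15625
    $ echo $((1000000 / 512))    # pg_num=512 -> 1953 objects per PG
    1953

Actual per-pool object counts can be checked with 'rados df'.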

If you are especially concerned about triple failures (i.e. permanent
PG loss), I would suggest a size=4 pool for your metadata (maybe on
SSDs).
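
A minimal sketch of what that could look like; 'cephfs_metadata' is an
assumed pool name, so substitute your own:

    # Bump the metadata pool to four replicas; min_size 2 keeps
    # writes flowing even through a double failure.
    $ ceph osd pool set cephfs_metadata size 4
    $ ceph osd pool set cephfs_metadata min_size 2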

You could also look at simply segregating your size=3 metadata onto
separate spinning drives. These comparatively lightly loaded OSDs will
be able to recover faster after a failure than an ordinary data drive
full of terabytes of data, which also lowers the probability of a
triple failure.
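
One hedged sketch of that segregation using a dedicated CRUSH root; the
bucket, rule, host, and pool names here are all hypothetical, and the
ruleset id should be taken from 'ceph osd crush rule dump':

    # Group the dedicated metadata hosts under their own CRUSH root.
    $ ceph osd crush add-bucket metadata-root root
    $ ceph osd crush move metadata-host1 root=metadata-root
    $ ceph osd crush move metadata-host2 root=metadata-root
    $ ceph osd crush move metadata-host3 root=metadata-root

    # Create a rule that places one replica per host under that root,
    # then point the metadata pool at it.
    $ ceph osd crush rule create-simple metadata-rule metadata-root host
    $ ceph osd pool set cephfs_metadata crush_ruleset 1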

John


