Re: cephfs change metadata pool?

It's a 5-node cluster; each node has 3 OSDs. I set pg_num = 512 for both cephfs_data and cephfs_metadata. I experienced some slow/blocked request issues when I was running Hammer 0.94.x and earlier, so I was wondering whether the pg_num is too large for the metadata pool. I just upgraded the cluster to Jewel today and will watch whether the problem remains.
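For reference, here is roughly how to check the current values (pool names as above; note that on Hammer and Jewel pg_num can only be increased, never decreased, which is why shrinking it means copying to a new pool):

    ceph osd pool get cephfs_metadata pg_num    # current pg_num for the pool
    ceph df detail                              # per-pool object counts and usage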

Thank you.

On Tue, Jul 12, 2016 at 6:45 PM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
I'm not at all sure that rados cppool actually captures everything (it
might). Doug has been working on some similar stuff for disaster
recovery testing and can probably walk you through moving over.
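(A minimal sketch of the copy step being discussed, with illustrative pool names; per the caveat above, it is not certain that cppool captures everything, so verify the result before relying on it:)

    ceph osd pool create cephfs_metadata_new 64         # destination pool with the smaller pg_num
    rados cppool cephfs_metadata cephfs_metadata_new    # copy all objects across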

But just how large *is* your metadata pool in relation to the others?
A too-large pg_num doesn't cost much unless it's grossly inflated, and
having a nice distribution of your folders across PGs is definitely
better than not.
-Greg
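(To size the pools up against each other, something like the following works; ceph df reports per-pool usage and ceph osd dump lists pg_num for every pool:)

    ceph df                     # compare cephfs_metadata usage against cephfs_data
    ceph osd dump | grep pool   # pg_num / pgp_num for each pool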

On Tue, Jul 12, 2016 at 4:14 PM, Di Zhang <zhangdibio@xxxxxxxxx> wrote:
> Hi,
>
>     Is there any way to change the metadata pool for a CephFS filesystem
> without losing any existing data? I know how to clone the metadata pool using
> rados cppool, but the filesystem still links to the original metadata pool no
> matter what you name the clone.
>
>     The motivation here is to decrease the pg_num of the metadata pool. I
> created this CephFS cluster some time ago, not realizing that I shouldn't
> assign such a large pg_num to such a small pool.
>
>     I'm not sure whether I can delete the filesystem and re-create it using
> the existing data pool and the cloned metadata pool.
>
>     Thank you.
>
>
> Zhang Di
>
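(For completeness, the delete-and-recreate path asked about above would look roughly like the following. This is an untested sketch with illustrative names: all MDS daemons must be stopped first, the pools survive the filesystem removal, and --force is likely needed because the pools are non-empty. Take backups before attempting it:)

    ceph mds fail 0                              # fail the active MDS (rank 0 assumed)
    ceph fs rm cephfs --yes-i-really-mean-it     # removes the fs definition, not the pools
    ceph fs new cephfs cephfs_metadata_new cephfs_data --force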

