Re: Scrub while cluster re-balancing

What's the output of ceph osd dump | grep ^pool ?
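For reference, the output usually looks something like the lines below (these are illustrative only; the size, pg_num and other parameters will be whatever your cluster actually has):

$ ceph osd dump | grep ^pool
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 1 'testPool' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 20 flags hashpspool stripe_width 0
pool 2 'testPool2' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 25 flags hashpspool stripe_width 0

The pool number at the start of each line is what the PG ids (0.x, 1.x, 2.x) refer to.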

On Tue, Dec 2, 2014 at 10:44 PM, Mallikarjun Biradar <mallikarjuna.biradar@xxxxxxxxx> wrote:
Hi Craig,


But my concern is why ceph status is not reporting scrubbing for pool 2 (testPool2 in this case). Is it not performing the scrub, or is this a ceph status reporting issue?

Though I have plenty of objects in testPool2, scrub is not showing "active+clean+scrubbing" in "ceph -s".

ems@rack6-ramp-4:~$ sudo ceph osd lspools
0 rbd,1 testPool,2 testPool2,
ems@rack6-ramp-4:~$

ems@rack6-ramp-4:~$ sudo rados df
pool name       category                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
rbd             -                          0            0            0            0           0            0            0            0            0
testPool        -                 5948025217      1452174            0            0           0    141056332  22948324301    141070117  22950524809
testPool2       -                   45039617        10999            0            0           0     11238999     44955958     11259655     45038593
  total used     18004641796      1463173
  total avail    32330689516
  total space    50335331312
ems@rack6-ramp-4:~$

-Thanks & regards,
Mallikarjun Biradar

On Wed, Dec 3, 2014 at 1:20 AM, Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx> wrote:
ceph osd dump | grep ^pool will map pool names to numbers.  PGs are named after the pool; PG 2.xx belongs to pool 2.

rados df will tell you how many items and how much data are in a pool.
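If it's useful, here is a quick (hedged) way to look at pool 2's PGs directly, since the part of the PG id before the dot is the pool number; this just filters the per-PG lines of `ceph pg dump`:

$ sudo ceph pg dump 2>/dev/null | awk '$1 ~ /^2\./' | wc -l    # how many PGs pool 2 has
$ sudo ceph pg dump 2>/dev/null | awk '$1 ~ /^2\./' | head     # a few of those PG lines, state included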

On Tue, Dec 2, 2014 at 10:53 AM, Mallikarjun Biradar <mallikarjuna.biradar@xxxxxxxxx> wrote:

Hi Craig,

ceph -s is not showing any PGs in pool 2.
I have 3 pools: rbd and two pools that I created, testPool and testPool2.

I have more than 10 TB of data in testPool and a good amount of data in testPool2 as well.
I am not using the rbd pool.

-Thanks & regards,
Mallikarjun Biradar

On 3 Dec 2014 00:15, "Craig Lewis" <clewis@xxxxxxxxxxxxxxxxxx> wrote:
You mean `ceph -w` and `ceph -s` didn't show any PGs in the active+clean+scrubbing state while pool 2's PGs were being scrubbed?

I see that happen with my really small pools.  I have a bunch of RadosGW pools that contain <5 objects, and ~1kB of data.  When I scrub the PGs in those pools, they complete so fast that they never show up in `ceph -w`.
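One hedged way to confirm those quick scrubs really ran is to check the scrub timestamps on one of pool 2's PGs, for example 2.4 from your log (exact key names can vary a little between releases):

$ sudo ceph pg 2.4 query | grep -i scrub_stamp

If the last_scrub_stamp there matches the "2.4 scrub ok" time in the OSD log, the scrub did happen and only the `ceph -s` view missed it.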


Since you have pools 0, 1, and 2, I assume those are the default 'data', 'metadata', and 'rbd'.  If you're not using RBD, then the rbd pool will be very small.



On Tue, Dec 2, 2014 at 5:32 AM, Mallikarjun Biradar <mallikarjuna.biradar@xxxxxxxxx> wrote:
Hi all,

I was running a scrub while the cluster was in a re-balancing state.

From the OSD logs:

2014-12-02 18:50:26.934802 7fcc6b614700  0 log_channel(default) log [INF] : 0.3 scrub ok
2014-12-02 18:50:27.890785 7fcc6b614700  0 log_channel(default) log [INF] : 0.24 scrub ok
2014-12-02 18:50:31.902978 7fcc6b614700  0 log_channel(default) log [INF] : 0.25 scrub ok
2014-12-02 18:50:33.088060 7fcc6b614700  0 log_channel(default) log [INF] : 0.33 scrub ok
2014-12-02 18:50:50.828893 7fcc6b614700  0 log_channel(default) log [INF] : 1.61 scrub ok
2014-12-02 18:51:06.774648 7fcc6b614700  0 log_channel(default) log [INF] : 1.68 scrub ok
2014-12-02 18:51:20.463283 7fcc6b614700  0 log_channel(default) log [INF] : 1.80 scrub ok
2014-12-02 18:51:39.883295 7fcc6b614700  0 log_channel(default) log [INF] : 1.89 scrub ok
2014-12-02 18:52:00.568808 7fcc6b614700  0 log_channel(default) log [INF] : 1.9f scrub ok
2014-12-02 18:52:15.897191 7fcc6b614700  0 log_channel(default) log [INF] : 1.a3 scrub ok
2014-12-02 18:52:34.681874 7fcc6b614700  0 log_channel(default) log [INF] : 1.aa scrub ok
2014-12-02 18:52:47.833630 7fcc6b614700  0 log_channel(default) log [INF] : 1.b1 scrub ok
2014-12-02 18:53:09.312792 7fcc6b614700  0 log_channel(default) log [INF] : 1.b3 scrub ok
2014-12-02 18:53:25.324635 7fcc6b614700  0 log_channel(default) log [INF] : 1.bd scrub ok
2014-12-02 18:53:48.638475 7fcc6b614700  0 log_channel(default) log [INF] : 1.c3 scrub ok
2014-12-02 18:54:02.996972 7fcc6b614700  0 log_channel(default) log [INF] : 1.d7 scrub ok
2014-12-02 18:54:19.660038 7fcc6b614700  0 log_channel(default) log [INF] : 1.d8 scrub ok
2014-12-02 18:54:32.780646 7fcc6b614700  0 log_channel(default) log [INF] : 1.fa scrub ok
2014-12-02 18:54:36.772931 7fcc6b614700  0 log_channel(default) log [INF] : 2.4 scrub ok
2014-12-02 18:54:41.758487 7fcc6b614700  0 log_channel(default) log [INF] : 2.9 scrub ok
2014-12-02 18:54:46.910043 7fcc6b614700  0 log_channel(default) log [INF] : 2.a scrub ok
2014-12-02 18:54:51.908335 7fcc6b614700  0 log_channel(default) log [INF] : 2.16 scrub ok
2014-12-02 18:54:54.940807 7fcc6b614700  0 log_channel(default) log [INF] : 2.19 scrub ok
2014-12-02 18:55:00.956170 7fcc6b614700  0 log_channel(default) log [INF] : 2.44 scrub ok
2014-12-02 18:55:01.948455 7fcc6b614700  0 log_channel(default) log [INF] : 2.4f scrub ok
2014-12-02 18:55:07.273587 7fcc6b614700  0 log_channel(default) log [INF] : 2.76 scrub ok
2014-12-02 18:55:10.641274 7fcc6b614700  0 log_channel(default) log [INF] : 2.9e scrub ok
2014-12-02 18:55:11.621669 7fcc6b614700  0 log_channel(default) log [INF] : 2.ab scrub ok
2014-12-02 18:55:18.261900 7fcc6b614700  0 log_channel(default) log [INF] : 2.b0 scrub ok
2014-12-02 18:55:19.560766 7fcc6b614700  0 log_channel(default) log [INF] : 2.b1 scrub ok
2014-12-02 18:55:20.501591 7fcc6b614700  0 log_channel(default) log [INF] : 2.bb scrub ok
2014-12-02 18:55:21.523936 7fcc6b614700  0 log_channel(default) log [INF] : 2.cd scrub ok
 
Interestingly, for the 2.x PGs in the log above (2.4, 2.9, etc.), the cluster status was not reporting scrubbing, whereas for the 0.x and 1.x PGs it did report scrubbing in the cluster status.

For the scrub operations on the 2.x PGs, was scrubbing really performed, or is the cluster status simply failing to report it?

 -Thanks & Regards,
Mallikarjun Biradar

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
