Re: Ceph error: active+clean+scrubbing+deep


 



Hi Kakito,

You definitely _want_ scrubbing to happen!

http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing

If you feel it is killing your system, you can tweak some of the values, like:
osd scrub load threshold
osd scrub max interval
osd deep scrub interval

I have no experience in changing those values, so I can't say how it
will influence your system.
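For reference, these settings live under the [osd] section of ceph.conf. A minimal sketch (the values below are purely illustrative, not recommendations):

```ini
[osd]
# Only begin a (regular) scrub when system load is below this value
osd scrub load threshold = 0.5
# Scrub each PG at least once per this many seconds, regardless of load
osd scrub max interval = 604800
# Deep-scrub (full data read, the expensive one) each PG once per this many seconds
osd deep scrub interval = 604800
```

I believe these can also be changed on a running cluster with injectargs, e.g. `ceph tell osd.* injectargs '--osd-scrub-load-threshold 0.5'`, though I have not tried it myself.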

Also, not that it is any of my business, but since your used space (14409 GB)
is roughly equal to your data size (14384 GB), it seems you're running with
replication set to 1.
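If you want to check and change that, something along these lines should work on bobtail (the pool name "data" is just an assumption; raising the size will trigger backfill traffic):

```shell
# Show the replication size of each pool ("rep size" in the bobtail-era output)
ceph osd dump | grep 'rep size'

# Raise the pool to 2 replicas; Ceph will start creating the extra copies
ceph osd pool set data size 2
```

Note that going from size 1 to size 2 roughly doubles your used space, so check your free capacity first.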

Cheers,
Martin

On Tue, Apr 16, 2013 at 3:11 AM, kakito <tientienminh080590@xxxxxxxxx> wrote:
> Dear all,
>
> I use Ceph Storage,
>
> Recently, I often get an error:
>
> mon.0 [INF] pgmap v277690: 640 pgs: 639 active+clean, 1
> active+clean+scrubbing+deep; 14384 GB data, 14409 GB used, 90007 GB / 107 TB
> avail.
>
> It seems that something is not correct.
>
> I tried restarting, but that did not help.
>
> It slows down my system.
>
> I am using Ceph 0.56.4, kernel 3.8.6-1.el6.elrepo.x86_64.
>
> How can I fix it?
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com