Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

Hi,

I assume it's the balancer module. If you write lots of data into the cluster quickly, the distribution across the OSDs can vary, and the balancer will try to even out the placement. You can check its status with

ceph balancer status

and disable it if necessary:

ceph balancer mode none
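If you only want to pause it during the benchmark, the balancer can also be switched off and back on again afterwards. Just as a sketch of that workflow (pick whatever mode you normally run, e.g. upmap):

ceph balancer off
ceph balancer on
ceph balancer mode upmap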

Regards,
Eugen


Quoting Kai Stian Olstad <ceph+list@xxxxxxxxxx>:

Hi

I'm testing a Ceph cluster with "rados bench". It's an empty Cephadm install that only has the one pool, device_health_metrics.

I create a pool with 1024 PGs on the HDD devices (15 servers have HDDs and 13 have SSDs):
    ceph osd pool create pool-ec32-isa-reed_sol_van-hdd 1024 1024 erasure ec32-isa-reed_sol_van-hdd --autoscale-mode=off
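The erasure code profile ec32-isa-reed_sol_van-hdd was created beforehand; judging by its name it should be roughly equivalent to this (k=3/m=2 and the hdd device class are assumed here):
    ceph osd erasure-code-profile set ec32-isa-reed_sol_van-hdd k=3 m=2 plugin=isa technique=reed_sol_van crush-device-class=hdd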

I then run "rados bench" from the 13 SSD hosts at the same time:
    rados bench -p pool-ec32-isa-reed_sol_van-hdd 600 write --no-cleanup
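Since --no-cleanup is used, the benchmark objects stay in the pool afterwards; they can be removed later with the usual cleanup, e.g.:
    rados -p pool-ec32-isa-reed_sol_van-hdd cleanup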

After just a few seconds, "ceph -s" starts to report degraded data redundancy.

Here are some examples from the 10-minute testing period:
    Degraded data redundancy: 260/11856050 objects degraded (0.014%), 1 pg degraded
    Degraded data redundancy: 260/1856050 objects degraded (0.014%), 1 pg degraded
    Degraded data redundancy: 1 pg undersized
    Degraded data redundancy: 1688/3316225 objects degraded (0.051%), 3 pgs degraded
    Degraded data redundancy: 5457/7005845 objects degraded (0.078%), 3 pgs degraded, 9 pgs undersized
    Degraded data redundancy: 1 pg undersized
    Degraded data redundancy: 4161/7005845 objects degraded (0.059%), 3 pgs degraded
    Degraded data redundancy: 4315/7005845 objects degraded (0.062%), 2 pgs degraded, 4 pgs undersized
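The affected PGs can be seen in the detailed health output, e.g.:
    ceph health detail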


So my question is: is it normal that Ceph reports degraded data redundancy under normal use, or do I have a problem somewhere that I need to investigate?


--
Kai Stian Olstad



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


