Questions since updating to 18.0.2

Hello,

We recently upgraded our cluster to version 18 and I've noticed a few things
I'd like feedback on before I go down a rabbit hole over non-issues. cephadm
was used for the upgrade and the upgrade itself went fine. The cluster has
56 OSDs, all spinners, and for now it is only used for RBD images.

The first thing I've noticed is the number of active scrubs/deep scrubs. I
don't remember seeing this many before: it was usually around 20-30 scrubs
and maybe 15 deep scrubs, and now I'll have 70 scrubs and 70 deep scrubs
running at once. I thought scrubs were limited to 1 per OSD, or am I
misunderstanding osd_max_scrubs? Everything on the cluster is currently at
default values.

The other thing I've noticed is that since the upgrade, client I/O drops
whenever backfill happens, and neither is high to begin with: roughly
30 MiB/s of read/write client I/O drops to 10-15 MiB/s while backfill runs
at about 200 MiB/s. Before the upgrade, backfill would hit 500-600 MiB/s
with the same ~30 MiB/s of client I/O. I realize lots of things could
affect this and it could be unrelated to the cluster; I'm still
investigating, but I wanted to mention it in case someone can recommend a
check or knows of a change in Reef that could cause this. The mclock
profile is client_io.
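For what it's worth, these are the mclock-related settings I'm planning to
look at; again osd.0 is just an example id, and I'm not claiming these are
the right knobs, just what I've found in the docs so far:

    # active mclock profile on a running OSD
    ceph config show osd.0 osd_mclock_profile

    # per-OSD IOPS capacity mclock is using for an HDD OSD
    ceph config show osd.0 osd_mclock_max_capacity_iops_hdd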

Thanks,
Curt