Re: Scrubbing?

Hello Peter,

your irony is spot on, and worth noting.

The point of my previous post was that the Ceph cluster didn't meet
my needs: although I had set the mClock profile to
"high_client_ops" (because I have plenty of time for rebalancing
and scrubbing), my clients ran into problems.
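For reference, this is roughly how that profile gets set and
checked (a minimal sketch; "osd.0" below is just an example
daemon name):

    # select the mClock profile that prioritizes client I/O
    # over recovery/scrub traffic
    ceph config set osd osd_mclock_profile high_client_ops

    # verify what one OSD is actually running with
    # (osd.0 is only an example)
    ceph config show osd.0 osd_mclock_profile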

My question was whether the scheduler manages the cluster's
background (and client) operations in such a way that the cluster
remains usable for clients.

I was trying to give feedback to the developers.

Thanks for understanding.

Sincerely
Jan Marek

Dne St, led 24, 2024 at 11:18:20 CET napsal(a) Peter Grandi:
> > [...] After a few days, I have on our OSD nodes around 90MB/s
> > read and 70MB/s write while 'ceph -s' have client io as
> > 2,5MB/s read and 50MB/s write. [...]
> 
> This is one of my pet-peeves: that a storage system must have
> capacity (principally IOPS) to handle both a maintenance
> workload and a user workload, and since the former often
> involves whole-storage or whole-metadata operations it can be
> quite heavy, especially in the case of Ceph where rebalancing
> and scrubbing and checking should be fairly frequent to detect
> and correct inconsistencies.
> 
> > Is this activity OK? [...]
> 
> Indeed. Some "clever" people "save money" by "rightsizing" their
> storage so it cannot run at the same time the maintenance and
> the user workload, and so turn off the maintenance workload,
> because they "feel lucky" I guess, but I do not recommend that.
> :-). I have seen more than one Ceph cluster that did not have
> the capacity even to run *just* the maintenance workload.

-- 
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
