Re: CEPH zero iops after upgrade to Reef and manual read balancer



Hello Josh,
Thank you for your reply.

After running the command on the cluster, I got the following error. We are
concerned about user data. Could you kindly confirm that this command will
not affect any user data?

root@ceph-node1:/# ceph osd rm-pg-upmap-primary
Traceback (most recent call last):
  File "/usr/bin/ceph", line 1327, in <module>
    retval = main()
  File "/usr/bin/ceph", line 1036, in main
    retargs = run_in_thread(cluster_handle.conf_parse_argv, childargs)
  File "/usr/lib/python3.6/site-packages/", line 1538, in run_in_thread
    raise t.exception
  File "/usr/lib/python3.6/site-packages/", line 1504, in run
    self.retval = self.func(*self.args, **self.kwargs)
  File "rados.pyx", line 551, in rados.Rados.conf_parse_argv
  File "rados.pyx", line 314, in rados.cstr_list
  File "rados.pyx", line 308, in rados.cstr
UnicodeEncodeError: 'utf-8' codec can't encode characters in position 3-4:
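(Editor's note: the traceback above comes from argument parsing, which suggests the command was run without a PG ID, possibly with stray non-ASCII characters pasted into the shell. The undo Josh describes, removing the pg-upmap-primary entry for every PG in the pool, could be sketched roughly as below. The function name and the pool name "mypool" are placeholders, and the JSON shape assumes a recent `ceph pg ls-by-pool` output with a `pg_stats` array.)

```shell
# Remove the pg-upmap-primary entry for every PG in a pool, undoing the
# read balancer's primary mappings. The pool name is passed as an argument.
rm_pool_upmap_primaries() {
    local pool="$1"
    # List the pool's PG IDs from the JSON output of "ceph pg ls-by-pool",
    # then remove each PG's primary mapping one at a time.
    ceph pg ls-by-pool "$pool" -f json |
        python3 -c 'import json,sys; [print(p["pgid"]) for p in json.load(sys.stdin)["pg_stats"]]' |
        while read -r pg; do
            ceph osd rm-pg-upmap-primary "$pg"
        done
}

# Usage: rm_pool_upmap_primaries mypool
```

Note that `rm-pg-upmap-primary` only removes the primary-selection hint for a PG; it does not move or rewrite any object data.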

Apart from that, do you need any other information?

Mosharaf Hossain
Manager, Product Development
IT Division
Bangladesh Export Import Company Ltd.

On Thu, Sep 14, 2023 at 1:52 PM Josh Salomon <jsalomon@xxxxxxxxxx> wrote:

> Hi Mosharaf,
> If you undo the read balancing commands (by running 'ceph
> osd rm-pg-upmap-primary' on all PGs in the pool), do you see an
> improvement in performance?
> Regards,
> Josh
> On Thu, Sep 14, 2023 at 12:35 AM Laura Flores <lflores@xxxxxxxxxx> wrote:
>> Hi Mosharaf,
>> Can you please create a tracker issue and attach a copy of your osdmap?
>> Also, please include any other output that characterizes the slowdown in
>> client I/O operations you're noticing in your cluster. I can take a look
>> once I have that information.
>> Thanks,
>> Laura
>> On Wed, Sep 13, 2023 at 5:23 AM Mosharaf Hossain <
>> mosharaf.hossain@xxxxxxxxxxxxxx> wrote:
>>> Hello Folks
>>> We've recently performed an upgrade on our Cephadm cluster, transitioning
>>> from Ceph Quincy to Reef. However, following the manual implementation of
>>> a read balancer in the Reef cluster, we've experienced a significant
>>> slowdown in client I/O operations within the Ceph cluster, affecting both
>>> client bandwidth and overall cluster performance.
>>> This slowdown has made all virtual machines in the cluster unresponsive,
>>> despite the fact that the cluster exclusively uses SSD storage.
>>> Kindly guide us to move forward.
>>> Regards
>>> Mosharaf Hossain
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users@xxxxxxx
>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>> --
>> Laura Flores
>> She/Her/Hers
>> Software Engineer, Ceph Storage
>> Chicago, IL
>> lflores@xxxxxxx | lflores@xxxxxxxxxx
>> M: +17087388804
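(Editor's note: the osdmap copy Laura asks for can be exported with the standard `ceph osd getmap` and `osdmaptool` commands. A minimal sketch; the function name and output path are placeholders.)

```shell
# Export the cluster's current osdmap to a file (suitable for attaching to
# a tracker issue) and print a human-readable summary to verify the export.
export_osdmap() {
    local out="$1"
    ceph osd getmap -o "$out"     # binary osdmap dump
    osdmaptool --print "$out"     # human-readable summary
}

# Usage: export_osdmap /tmp/osdmap
```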
