Re: Watcher Issue

Hello Eugen 

Thanks for your reply. 
I have the image available and it's not in the trash.

When the StatefulSet reschedules the pod to a different node, the pod fails with a mount error.

I was looking for a command to evict or kill a client ID (client.12541259 in this case) from the Ceph side; Ceph must have a way to force such clients off.
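
Something like the commands below is what I had in mind (just a rough sketch; the watcher address and image name are the ones from my rbd status output further down, so please correct me if the syntax is off):

# show the current watcher on the image
rbd status k8s-rgnl-disks/csi-vol-945c6a66-9129

# blocklist that watcher's address (address:port/nonce, as shown in the watcher line)
ceph osd blocklist add 10.160.0.245:0/2076588905

# confirm the blocklist entry
ceph osd blocklist ls
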
I don't understand why the pod keeps complaining that the same volume is still in use by a k8s host when it isn't mapped or mounted anywhere. Not sure what to do in this situation.
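
On the node that owns the watcher IP I have already looked for leftover mappings, roughly like this (assuming the volume would be mapped via krbd):

# check for mapped rbd devices on the node with IP 10.160.0.245
rbd device list

# and for any leftover rbd mounts
mount | grep rbd
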
We tried upgrading the CSI driver and the k8s cluster, renamed the image, blocklisted the host, and then renamed the image back to its original name, but rbd status still shows the same client host as the watcher.
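
For reference, a rough RADOS-level way to double-check whether the watch is really gone (the <id> placeholder is the internal image id, i.e. the suffix of block_name_prefix, which rbd info reports as rbd_data.<id>) would be:

# find the internal image id from block_name_prefix (rbd_data.<id>)
rbd info k8s-rgnl-disks/csi-vol-945c6a66-9129

# list watchers directly on the image header object
rados -p k8s-rgnl-disks listwatchers rbd_header.<id>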


Regards
Dev

> On Jan 21, 2025, at 12:16 PM, Eugen Block <eblock@xxxxxx> wrote:
> 
> Hi,
> 
> have you checked if the image is in the trash?
> 
> rbd -p {pool} trash ls
> 
> You can try to restore the image if there is one, then blocklist the client to release the watcher, then delete the image again.
> 
> I have to do that from time to time on a customer’s openstack cluster.
> 
> Zitat von Devender Singh <devender@xxxxxxxxxx>:
> 
>> Hello
>> 
>> Seeking some help: can I clean up the client that is mounting my volume?
>> 
>> rbd status pool/image
>> 
>> Watchers:
>> 	watcher=10.160.0.245:0/2076588905 client.12541259 cookie=140446370329088
>> 
>> Issue: the pod is failing in the Init state.
>> Events:
>>  Type     Reason       Age                  From     Message
>>  ----     ------       ----                 ----     -------
>>  Warning  FailedMount  96s (x508 over 24h)  kubelet  MountVolume.MountDevice failed for volume "pvc-3a2048f1" : rpc error: code = Internal desc = rbd image k8s-rgnl-disks/csi-vol-945c6a66-9129 is still being used
>> 
>> It shows the above client, but there is no such volume…
>> 
>> Another similar issue shows up on the dashboard…
>> 
>> CephNodeDiskspaceWarning
>> Mountpoint /mnt/dst-volume on sea-prod-host01 will be full in less than 5 days based on the 48 hour trailing fill rate.
>> 
>> But nothing is mounted there: I mapped one image yesterday using rbd map, then unmapped and unmounted everything; it has been more than 12 hours now and the warning is still showing.
>> 
>> 
>> Ceph version: 18.2.4
>> 
>> Regards
>> Dev
>> 
>> 
>> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



