Hi everybody,

We have a problem with NFS-Ganesha behind a load balancer. When we use rsync to copy files from another share into the Ceph NFS share path, e.g.

`rsync -rav /mnt/elasticsearch/newLogCluster/acr-202* /archive/Elastic-v7-archive`

we get this error:

rsync: close failed on "/archive/Elastic-v7-archive/....": Input/output error (5)
rsync error: error in file IO (code 11) at receiver.c(586) [Receiver=3.1.3]

We use the ingress service for load balancing the NFS service, and no other problems are observed in the cluster. Below is information about the pool, volume path and quota:

------------
10.20.32.161:/volumes/arch-1/arch   30T  5.0T   26T  17%  /archive

# ceph osd pool get-quota arch-bigdata-data
quotas for pool 'arch-bigdata-data':
  max objects: N/A
  max bytes  : 30 TiB  (current num bytes: 5488192308978 bytes)
---------------
# ceph fs subvolume info arch-bigdata arch arch-1
{
    "atime": "2023-06-11 13:32:22",
    "bytes_pcent": "16.64",
    "bytes_quota": 32985348833280,
    "bytes_used": 5488566602388,
    "created_at": "2023-06-11 13:32:22",
    "ctime": "2023-06-25 10:45:35",
    "data_pool": "arch-bigdata-data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.20.32.153:6789",
        "10.20.32.155:6789",
        "10.20.32.154:6789"
    ],
    "mtime": "2023-06-25 10:38:48",
    "path": "/volumes/arch-1/arch/f246a31b-7103-41b9-8005-63d00efe88e4",
    "pool_namespace": "",
    "state": "complete",
    "type": "subvolume",
    "uid": 0
}
---------------

As far as we can tell the subvolume is nowhere near its limit: about 5 TiB of the 30 TiB quota is used (~17%), so the quota itself does not seem to be the problem.

Has anyone experienced this error before? What would you suggest to solve it?
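If more detail would help, we can gather the export and ingress configuration and the ganesha daemon logs. A sketch of what we would run is below; the NFS cluster id and daemon name are placeholders to be filled in from the first two commands:

# list the NFS clusters, then inspect the one behind the ingress VIP
ceph nfs cluster ls
ceph nfs cluster info <nfs-cluster>
# dump the exports (pseudo path, FSAL, squash, clients)
ceph nfs export ls <nfs-cluster> --detailed
# check the ganesha and ingress (haproxy/keepalived) daemons
ceph orch ls nfs
ceph orch ls ingress
ceph orch ps | grep -E 'nfs|haproxy|keepalived'
# on the host running the ganesha daemon, pull its log
cephadm logs --name <nfs-daemon-name-from-orch-ps>
# on the NFS client, confirm the mount options (NFS version, rsize/wsize)
nfsstat -m

We can post the output of any of these if it would help narrow down whether the EIO comes from the ingress frontend or from ganesha itself.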