Thanks for the reply, Anton.
RAM - 250GB
We have a single active MDS, Ceph version Luminous 12.2.4.
The PG number is left at the default of 64; we are not changing the PG count when creating pools (see the commands below).
We have 8 servers in total, each with 60 OSDs of 6TB each.
The 8 servers are split into 2 per region, and the CRUSH map is designed to use 2 servers for each pool.
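With 2 servers per pool (120 OSDs of 6TB), 64 PGs may be far too few; the usual rule of thumb is roughly 100 PGs per OSD divided by the replica count. A minimal sketch of checking and sizing this, assuming a replicated pool named "mypool" with 3 replicas (both the name and the replica count are assumptions, not taken from your setup):

    # inspect the current PG count and CRUSH rules
    ceph osd pool get mypool pg_num
    ceph osd crush rule dump

    # rough sizing: 120 OSDs * 100 PGs-per-OSD / 3 replicas ~= 4000,
    # rounded up to the next power of two
    ceph osd pool create mypool 4096 4096

On Luminous, pg_num of an existing pool can be increased with "ceph osd pool set mypool pg_num <n>" but cannot be decreased.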
Regards
Surya Balan
On Tue, Jul 17, 2018 at 1:48 PM, Anton Aleksandrov <anton@xxxxxxxxxxxxxx> wrote:
You need to give us more details about your OSD setup and the hardware specification of the nodes (CPU core count, RAM amount).
On 2018.07.17. 10:25, Surya Bala wrote:
Hi folks,
We have a production cluster with 8 nodes, and each node has 60 disks of 6TB each. We are using CephFS and the FUSE client with a global mount point. We are doing rsync from our old server to this cluster, and rsync is slow compared to a normal server.
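Since rsync is single-threaded and every file it creates on CephFS is a metadata round trip to the single active MDS, running several rsyncs over disjoint subtrees can help. A minimal sketch, with hypothetical source and destination paths:

    # one rsync per top-level subdirectory, 4 at a time
    # (assumes subdirectory names without whitespace)
    ls /data/src | xargs -P 4 -I{} rsync -a /data/src/{}/ /mnt/cephfs/dst/{}/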
When we do 'ls' inside a folder that contains a very large number of files, on the order of 1-2 lakh (100,000-200,000), the response is very slow.
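Much of that cost comes from ls stat()-ing every entry (when -l or --color is in effect), which forces the client to fetch attributes for each file from the MDS. A quick way to see the difference, assuming a hypothetical directory /mnt/cephfs/bigdir:

    # plain listing: readdir only, no stat() and no sorting
    time ls -f /mnt/cephfs/bigdir > /dev/null
    # full listing: one stat() per entry
    time ls -l /mnt/cephfs/bigdir > /dev/null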
Any suggestions, please?
Regards,
Surya
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com