On 08.10.2018 at 09:21, Yan, Zheng wrote:
On Mon, Oct 8, 2018 at 1:54 PM Tomasz Płaza <tomasz.plaza@xxxxxxxxxx> wrote:
Hi,
Can someone please help me figure out how to improve the performance of our CephFS
cluster?
The system in use is CentOS 7.5 with Ceph 12.2.7.
The hardware in use is as follows:
3xMON/MGR:
1xIntel(R) Xeon(R) Bronze 3106
16GB RAM
2xSSD for system
1GbE NIC
2xMDS:
2xIntel(R) Xeon(R) Bronze 3106
64GB RAM
2xSSD for system
10GbE NIC
6xOSD:
1xIntel(R) Xeon(R) Silver 4108
2xSSD for system
6xHGST HUS726060ALE610 SATA HDDs
1xINTEL SSDSC2BB150G7 for the OSD DBs (10G partitions); the rest is used as an
OSD that holds cephfs_metadata (see the deployment sketch below)
10GbE NIC
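For reference, OSDs laid out like this (HDD data with a 10G DB partition on the shared SSD, plus an SSD OSD on the remaining space) would typically be created along these lines; the device names below are purely hypothetical:

# one HDD OSD per disk, with its BlueStore DB on a 10G SSD partition (hypothetical devices)
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdh1
# ...repeated for the remaining five HDDs...
# the rest of the SSD becomes its own OSD, available to the replicated_ssd rule
ceph-volume lvm create --bluestore --data /dev/sdh7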
Pools (the CRUSH rules are device-class aware):
rbd with 1024 PGs, CRUSH rule replicated_hdd
cephfs_data with 256 PGs, CRUSH rule replicated_hdd
cephfs_metadata with 32 PGs, CRUSH rule replicated_ssd
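A sketch of how device-class rules and pools like the ones above are typically set up; only the pool names, PG counts and rule names come from the list above, everything else is an assumption about how this cluster was built:

# replicated rules restricted to a device class (Luminous syntax)
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd
# pools bound to those rules
ceph osd pool create rbd 1024 1024 replicated replicated_hdd
ceph osd pool create cephfs_data 256 256 replicated replicated_hdd
ceph osd pool create cephfs_metadata 32 32 replicated replicated_ssd
# the filesystem name "cephfs" is an assumption
ceph fs new cephfs cephfs_metadata cephfs_data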
A test done with fio:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75
shows the following write/read IOPS:
rbd: 3663/1223
cephfs (fuse): 205/68 (which is a little lower than the raw performance of one
HDD used in the cluster)
Everything is connected to one Cisco 10GbE switch.
Please help.
Kernel version? Maybe the cephfs driver in your kernel does not support
AIO (--iodepth is effectively 1).
Yan, Zheng
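One way to see whether AIO is actually taking effect is to rerun the same job with a synchronous engine and compare; if the libaio numbers are no better than this, the queue depth is effectively 1 (a sketch reusing the parameters from the original test):

fio --randrepeat=1 --ioengine=psync --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=1 --size=1G --readwrite=randrw --rwmixread=75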
The kernel is 3.10.0-862.9.1.el7.x86_64 (I can update it to
3.10.0-862.14.4.el7), but I do not know how to check AIO support in the kernel
driver, or whether it is even relevant, because I mounted it with:
ceph-fuse -n client.cephfs -k /etc/ceph/ceph.client.cephfs.keyring -m 192.168.10.1:6789 /mnt/cephfs
Tom Płaza
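For comparison, the same filesystem could be mounted with the kernel client instead of ceph-fuse and the fio job rerun there; the secret file path below is an assumption:

# put the key of client.cephfs (the key only, no section header) into the secret file first
mount -t ceph 192.168.10.1:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/cephfs.secret

Running fio against a kernel mount gives a data point that is independent of the FUSE overhead.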
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com