Re: Slow write speed on 3-node cluster with 6* SATA Harddisks (~ 3.5 MB/s)

Good that you're getting 124MB/s via Gbit; I have only been able to get 
110MB/s. If you are interested: I also have 4TB SATA HDDs without 
DB/WAL on SSD, on 4 nodes, but with 10Gbit networking.

[@]# dd if=/dev/zero of=zero.file bs=32M oflag=direct status=progress
3758096384 bytes (3.8 GB) copied, 36.364817 s, 103 MB/s^C
113+0 records in
113+0 records out
3791650816 bytes (3.8 GB) copied, 36.6824 s, 103 MB/s


[@]# dd if=zero.file of=/dev/null bs=32M iflag=direct status=progress
3657433088 bytes (3.7 GB) copied, 15.346591 s, 238 MB/s
113+0 records in
113+0 records out
3791650816 bytes (3.8 GB) copied, 15.8171 s, 240 MB/s
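For reference, the MB/s figure dd prints is just total bytes divided by 
elapsed seconds, in decimal megabytes; you can check it against the 
totals above:

# bytes / seconds / 10^6 = decimal MB/s
echo "3791650816 / 36.6824 / 1000000" | bc -l   # ~103.36, matching dd's 103 MB/s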





-----Original Message-----
From: Simon Ironside [mailto:sironside@xxxxxxxxxxxxx] 
Sent: Tuesday, 5 November 2019 13:08
To: ceph-users@xxxxxxx
Subject:  Re: Slow write speed on 3-node cluster with 6* 
SATA Harddisks (~ 3.5 MB/s)

Hi,

My three-node lab cluster is similar to yours but with 3x bluestore OSDs 
per node (4TB SATA spinning disks) and 1x shared DB/WAL (240GB SATA SSD) 
device per node. I'm only using gigabit networking (one interface 
public, one interface cluster), running Ceph 14.2.4 with 3x replication.

I would have expected your dd commands to use the cache; try these 
instead inside your VM:

# Write test
dd if=/dev/zero of=/zero.file bs=32M oflag=direct status=progress

# Read test
dd if=/zero.file of=/dev/null bs=32M iflag=direct status=progress
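A count can cap the total size so dd finishes on its own instead of 
running until you interrupt it (count=32 is just an example, giving 
32 x 32M = 1GB):

# Write test, capped so it finishes on its own
dd if=/dev/zero of=/zero.file bs=32M count=32 oflag=direct status=progress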

You can obviously delete /zero.file when you're finished.

- bs=32M tells dd to read/write 32MB at a time; I think the default is 
something like 512 bytes, which slows things down significantly without 
a cache (see the comparison after this list).
- oflag/iflag=direct will use direct I/O, bypassing the cache.
- status=progress just replaces your use of pv to show the transfer 
rate.
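To see the block size effect for yourself (the count values are only 
examples, chosen to keep the runs short):

# Tiny blocks with direct I/O: expect very low MB/s
dd if=/dev/zero of=/zero.file bs=512 count=20000 oflag=direct status=progress

# 32M blocks with direct I/O: expect tens of MB/s
dd if=/dev/zero of=/zero.file bs=32M count=32 oflag=direct status=progress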

On my cluster I get 124MB/sec read (maxing out the network) and 74MB/sec 
write. Without bs=32M I get more like 1MB/sec read and write. The VM I'm 
using for this test has cache=writeback and a virtio-scsi controller 
(i.e. sda rather than vda).
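If you want to replicate that, the two settings look roughly like this 
in a Proxmox VM config (the VM ID, storage name and disk size below are 
made up):

# /etc/pve/qemu-server/100.conf (excerpt)
scsihw: virtio-scsi-pci
scsi0: ceph-pool:vm-100-disk-0,cache=writeback,size=32G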

Simon

On 05/11/2019 11:31, Hermann Himmelbauer wrote:
> Hi,
> Thank you for your quick reply. Proxmox offers me "writeback"
> (cache=writeback) and "writeback unsafe" (cache=unsafe); however, for 
> my "dd" test this makes no difference at all.
> 
> I still have write speeds of ~4.5 MB/s.
> 
> Perhaps "dd" disables the write cache?
> 
> Would it perhaps help to put the journal or something else on an SSD?
> 
> Best Regards,
> Hermann
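Re the journal question: with BlueStore the journal's role is taken by 
the DB/WAL, and moving those onto an SSD is the usual way to help 
spinning-disk OSDs. A minimal sketch with ceph-volume (device paths 
below are placeholders):

# BlueStore OSD with data on the HDD and the DB (WAL follows it) on an SSD partition
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1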

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


