Do not use consumer SSDs for OSDs, especially not for the journal disk.
If you do use consumer SSDs, please consider adding some dedicated enterprise SSDs for the journal. The ratio should be 1:2 or 1:4 (one enterprise SSD per two or four consumer SSDs).
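For example, with ceph-volume on Luminous a filestore OSD can be created with a dedicated journal device roughly like this (the device names are only placeholders for your own layout):

# ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/nvme0n1p1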
Best Regards,
On Fri, Jan 5, 2018 at 3:20 PM, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
Maybe because of this: the 850 EVO / 850 PRO are listed here at 1.9 MB/s and 1.5 MB/s
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
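If I remember correctly, the test from that post is a single-threaded O_DSYNC write along these lines (/dev/sdX being the SSD under test):

# fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test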
-----Original Message-----
From: Rafał Wądołowski [mailto:rwadolowski@cloudferro.com]
Sent: Thursday, 4 January 2018 16:56
To: ceph@xxxxxxxxxx; ceph-users@xxxxxxxxxxxxxx
Subject: Re: Performance issues on Luminous
I have a size of 2.
We know about this risk and we accept it, but we still don't know why the performance is so bad.
Cheers,
Rafał Wądołowski
On 04.01.2018 16:51, ceph@xxxxxxxxxx wrote:
I assume you have a size of 3; divide your expected 400 by 3 and you are not far away from what you get...
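Rough numbers, assuming every replica write shares the same client path: 400 MB/s / 3 ≈ 133 MB/s with size 3, or 400 MB/s / 2 = 200 MB/s with size 2, which is in the same ballpark as the ~160 MB/s measured below.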
In addition, you should never use consumer-grade SSDs for Ceph, as they will reach their DWPD limit very soon...
On 4 January 2018 at 09:54:55 CET, "Rafał Wądołowski" <rwadolowski@cloudferro.com> wrote:
Hi folks,
I am currently benchmarking my cluster for a performance issue and I have no idea what is going on. I am using these devices in QEMU.
Ceph version 12.2.2
Infrastructure:
3 x Ceph-mon
11 x Ceph-osd
Each ceph-osd node has 22x Samsung SSD 850 EVO 1TB
96GB RAM
2x E5-2650 v4
4x10G network (2 separate bonds for cluster and public) with MTU 9000
I tested it with rados bench:
# rados bench -p rbdbench 30 write -t 1
Total time run: 30.055677
Total writes made: 1199
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 159.571
Stddev Bandwidth: 6.83601
Max bandwidth (MB/sec): 168
Min bandwidth (MB/sec): 140
Average IOPS: 39
Stddev IOPS: 1
Max IOPS: 42
Min IOPS: 35
Average Latency(s): 0.0250656
Stddev Latency(s): 0.00321545
Max latency(s): 0.0471699
Min latency(s): 0.0206325
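Note that -t sets the number of concurrent operations (the default is 16), so with -t 1 this is essentially a single-stream latency test; a higher-concurrency run for comparison would be, for example:

# rados bench -p rbdbench 30 write -t 16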
# ceph tell osd.0 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "bytes_per_sec": 414199397
}
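To rule out a single slow OSD, the same test can be repeated across all OSDs, e.g.:

# for i in $(ceph osd ls); do ceph tell osd.$i bench; done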
Testing the OSD disk directly:
# dd if=/dev/zero of=/dev/sdc bs=4M oflag=direct count=100
100+0 records in
100+0 records out
419430400 bytes (419 MB, 400 MiB) copied, 1.0066 s, 417 MB/s
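For comparison, small synchronous writes (closer to the filestore journal pattern, where consumer SSDs tend to collapse) can be tested on the same device with something like:

# dd if=/dev/zero of=/dev/sdc bs=4k count=10000 oflag=direct,dsync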
When I do dd inside the VM (bs=4M with oflag=direct), I get a result similar to the rados bench one.
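A deeper-queue test inside the VM, something like the sketch below (the block device name is a placeholder), would show whether this is a queue-depth limit or a cluster limit:

# fio --name=vmtest --filename=/dev/vdb --direct=1 --rw=write --bs=4M --ioengine=libaio --iodepth=16 --runtime=30 --time_based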
I think that the speed should be around ~400 MB/s.
Are there any new parameters for RBD in Luminous? Maybe I forgot about some performance tricks? If more information is needed, feel free to ask.
==============
Nghia Than
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com