They are configured with bluestore.
The network, CPU and disk are doing nothing. I was observing with atop,
iostat and top.
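For example, invocations along these lines (illustrative, not an exact
transcript of what I ran):
# iostat -x 1
# atop 1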
I have a similar hardware configuration on jewel (with filestore), and
there it performs well.
Cheers,
Rafał Wądołowski
On 04.01.2018 17:05, Luis Periquito wrote:
You never said whether it is bluestore or filestore.
Can you look in the server to see which component is being stressed
(network, CPU, disk)? Utilities like atop are very handy for this.
Regarding those specific SSDs: they are particularly bad when running
for some time without trimming - performance nosedives by at least an
order of magnitude. If you really want to take that risk, at least look
at the PROs. And some workloads will always be slow on them.
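If in doubt, a quick sanity check (not a fix in itself) is whether
discard is even exposed on the data devices, e.g.:
# lsblk --discard /dev/sdc
Non-zero DISC-GRAN/DISC-MAX values mean the device accepts TRIM/discard.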
You never said what your target environment is: do you value
IOPS/latency? Those CPUs won't be great, and I've read a few things
recommending avoiding NUMA (2 CPUs in there). And (higher) frequency
matters more than the number of cores for a high-IOPS cluster.
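A couple of quick checks along those lines (example commands, assuming a
cpufreq-capable kernel):
# lscpu | grep -i 'numa\|mhz'
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
Running the 'performance' governor instead of 'powersave' usually helps
latency-sensitive clusters.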
On Thu, Jan 4, 2018 at 3:56 PM, Rafał Wądołowski
<rwadolowski@xxxxxxxxxxxxxx> wrote:
I have a size of 2.
We know about this risk and we accept it, but we still don't know why
performance is so bad.
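For completeness, the replica count can be confirmed against the pool
used in the benchmark below:
# ceph osd pool get rbdbench size
size: 2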
Cheers,
Rafał Wądołowski
On 04.01.2018 16:51, ceph@xxxxxxxxxx wrote:
I assume you have a size of 3; then divide your expected 400 by 3 and
you are not far away from what you get...
In addition, you should never use consumer-grade SSDs for Ceph, as they
will reach their DWPD limit very soon...
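(Roughly: 400 MB/s / 3 ≈ 133 MB/s, which is in the same ballpark as the
~160 MB/s measured below.)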
On 4 January 2018 at 09:54:55 CET, "Rafał Wądołowski"
<rwadolowski@xxxxxxxxxxxxxx> wrote:
Hi folks,
I am currently benchmarking my cluster for a performance issue and I
have no idea what is going on. I am using these devices in qemu.
Ceph version 12.2.2
Infrastructure:
3 x Ceph-mon
11 x Ceph-osd
Each ceph-osd node has 22x Samsung SSD 850 EVO 1TB
96GB RAM
2x E5-2650 v4
4x10G network (2 separate bonds for cluster and public) with MTU 9000
I tested it with rados bench:
# rados bench -p rbdbench 30 write -t 1
Total time run: 30.055677
Total writes made: 1199
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 159.571
Stddev Bandwidth: 6.83601
Max bandwidth (MB/sec): 168
Min bandwidth (MB/sec): 140
Average IOPS: 39
Stddev IOPS: 1
Max IOPS: 42
Min IOPS: 35
Average Latency(s): 0.0250656
Stddev Latency(s): 0.00321545
Max latency(s): 0.0471699
Min latency(s): 0.0206325
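(A rough sanity check: with -t 1 only one 4 MiB write is in flight, so
throughput is bounded by per-op latency, about 4 MiB / 0.025 s ≈ 167 MB/s,
close to the measured bandwidth.)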
# ceph tell osd.0 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"bytes_per_sec": 414199397
}
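(414199397 bytes/sec is roughly 395 MiB/s, i.e. ~414 MB/s, which is
where my ~400 MB/s expectation below comes from.)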
Testing the OSD disk directly:
# dd if=/dev/zero of=/dev/sdc bs=4M oflag=direct count=100
100+0 records in
100+0 records out
419430400 bytes (419 MB, 400 MiB) copied, 1.0066 s, 417 MB/s
When I do dd inside the VM (bs=4M with direct), I get results like in
rados bench.
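Inside the VM I run something along these lines (the target path here
is only an example):
# dd if=/dev/zero of=/mnt/testfile bs=4M count=100 oflag=direct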
I think that the speed should be around ~400MB/s.
Are there any new parameters for rbd in luminous? Maybe I forgot about
some performance tricks? If more information is needed, feel free to ask.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com