Re: Disk/Pool Layout


On Thu, Aug 27, 2015 at 1:59 PM, Jan Schermer  wrote:

> The S3500 is faster than the S3700? I can compare the 3700 vs. 3510 vs. 3610 tomorrow, but I'd be very surprised if the S3500 had better _sustained_ throughput than a 36xx or 37xx. Were you comparing them on the same HBA and in the same way? (No offense, just curious.)

None taken. I used the same box and swapped out the drives. The only difference was that the S3500 had been heavily used while the S3700 was fresh from the package (if anything, that should have helped the S3700).

What HBA was this?
With my LSI 2308, some drives have issues that manifest as an "IOPS amplification" of about 5x (unfortunately btrace doesn't work too well on my kernel, so I'm not 100% sure what is happening; still investigating).
To get the "true" speed of the SSDs I either have to test them on AHCI or not use --sync=1 (--direct=1 should be sufficient, giving a 1:1 ratio of submitted to device writes). And of course test on a block device, just as you do. I also usually disable the write cache so that I get a bottom-line number; sometimes it actually speeds the SSDs up.
But what I see is pretty wild; I'm still not sure what's happening.
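(A sketch of that methodology, with /dev/sdX as a placeholder; hdparm toggles the write cache on SATA drives, while SAS drives behind an HBA may need sdparm instead:)

# Disable the drive's volatile write cache for a worst-case baseline
# (as noted above, this sometimes makes an SSD faster):
hdparm -W0 /dev/sdX        # SATA; for SAS: sdparm --clear WCE /dev/sdX

# O_DIRECT writes without --sync=1, so each 4k write goes straight to the
# device with no flush/FUA traffic after it:
fio --filename=/dev/sdX --direct=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=nosync-test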

I tested on a Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02). I didn't do any tuning, just the defaults.
 
I only got the 3610 today; it did about 15K IOPS (same benchmark as yours) when I started it, and it was up to 17.5K IOPS when I was leaving home. Let's see what it shows in the morning; I left it running overnight. If I remember correctly, the S3700 did ~40K?
Anyway, this is still only an artificial benchmark relevant to a journal-like workload, but mix that with some queued reads and varying block sizes and I bet the S3700 beats the lower models into the ground. I'm curious, so I'll try to pin down the different performance characteristics when I get to it; something like the mixed run below.
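(Such a mixed run could look like this; the 70/30 read/write mix, block-size range, and queue depth are illustrative guesses, not a test either of us has run:)

fio --filename=/dev/sdX --direct=1 --ioengine=libaio --rw=randrw --rwmixread=70 --bsrange=4k-64k --iodepth=8 --numjobs=4 --runtime=60 --time_based --group_reporting --name=mixed-test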

We are trying to get some 3610s in to test. I'm interested in your results.

# 4K O_DIRECT+O_SYNC writes at queue depth 1, scaling parallel jobs from 1 to 8:
for i in {1..8}; do fio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4k --numjobs=$i --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test; done

Intel S3500 (SSDSC2BB240G4), rated max 4K random write: 7,500 IOPS

# jobs    IOPS   Bandwidth (KB/s)
 1       5,617   22,468.0
 2       8,326   33,305.0
 3      11,575   46,301.0
 4      13,882   55,529.0
 5      16,254   65,020.0
 6      17,890   71,562.0
 7      19,438   77,752.0
 8      20,894   83,576.0

Intel S3700 (SSDSC2BA200G3), rated max 4K random write: 32,000 IOPS

# jobs    IOPS   Bandwidth (KB/s)
 1       4,417   17,670.0
 2       5,544   22,178.0
 3       7,337   29,352.0
 4       9,243   36,975.0
 5      11,189   44,759.0
 6      13,218   52,874.0
 7      14,801   59,207.0
 8      16,604   66,419.0
 9      17,671   70,685.0
10      18,715   74,861.0
11      20,079   80,318.0
12      20,832   83,330.0
13      20,571   82,288.0
14      23,033   92,135.0
15      22,169   88,679.0
16      22,875   91,502.0

>
> Mons can use some space. I've experienced logging havoc and leveldb bloating havoc (I have to compact manually or it just grows and grows), and my mons write quite a lot at times. I guesstimate my mons can write 200GB a day, often less but often more. Maybe that's not normal. I can confirm those numbers tomorrow.

True. I haven't had the compaction issues, so I can't comment on that. He has a small cluster, so I don't think he will get to the level you have.
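(For anyone hitting the same bloat, a sketch of the manual workaround Jan mentions; the mon id here is assumed to match the short hostname:)

# Compact a monitor's leveldb store on demand:
ceph tell mon.$(hostname -s) compact

# Or compact the store on every mon start (ceph.conf, [mon] section):
#   mon compact on start = true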

I only have about 2x more OSDs than he does. A lot more space, yes, but the number of OSDs is comparable.
I also have a lot more PGs, but that only seems to improve things so far.

As long as you have the CPU for more PGs. I've found that too many PGs perform worse because you saturate your CPUs. We are running ~50 PGs per OSD at the moment (we are still feeling our way around the optimal performance settings).
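(For reference, the usual rule of thumb is on the order of 100 PGs per OSD counting replicas; a sketch of the arithmetic with placeholder values, rounded up to a power of two as pg_num should be:)

# Rough pg_num sizing: (OSDs * target PGs per OSD) / replicas,
# rounded up to the next power of two. All values are placeholders.
osds=40; per_osd=100; replicas=3
raw=$(( osds * per_osd / replicas ))
pg_num=1; while [ $pg_num -lt $raw ]; do pg_num=$(( pg_num * 2 )); done
echo "pg_num ~ $pg_num"    # prints 2048 for these inputs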
 
>>>
>>>   256GB RAM
>>>
>>>
>>> Again - I think too much if that's the only role for those nodes, 64GB
>>> should be plenty.
>>
>> Agree, if you can afford more RAM, it just means more page cache.
>
> But too much page cache = bad.

>> I think /proc/sys/vm/min_free_kbytes helps.
>
> Nope. Had that set all the way up to 10G with no effect.
> One scenario (I think I described it here already) is when I start a new OSD. The new OSD needs to allocate ~2GB of memory, and if that memory isn't truly "free" then it causes all sorts of problems (peering stuck, slow ops...). Lowering min_free_kbytes or dropping caches helps because it makes the memory actually available to the OSD and it starts right up, but that's not a nice solution.
> This is CentOS6/RHEL6 with a 2.6.32 Red Hat frankenkernel with backports and a lot of patches that interact in mysterious ways...

This is good info. We are on CentOS 7.1 with a 4.0.x kernel. Is starting OSDs the issue you had? I'm surprised that min_free_kbytes wouldn't help in this situation. Is there something else you found with too much page cache?
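(For concreteness, the two knobs discussed above; 10G is the value Jan reports trying, and dropping caches is the blunt instrument that did work:)

# Keep ~10G truly free (value in kB):
sysctl -w vm.min_free_kbytes=10485760

# Drop the page cache just before starting an OSD so its ~2GB
# allocation doesn't stall on reclaim:
sync
echo 1 > /proc/sys/vm/drop_caches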
 
> True for the 120GB drives. You only really need something like 1-10GB at most.
> I'd still get a smaller higher-class drive and just not touch provisioning, if only for the sake of warranty. But I think it's easier to just skip dedicated journal drives in this case.

I think I remember someone saying that journals on separate SSDs gave them better performance than journals co-located on the data SSDs, but I don't remember the details. If warranty replacement is your primary concern, then go with the 3700. If they already have the 3500, they can get it to perform/endure like the 3700, with the only cost being disk space.
Yeah. It's true the 3500s will likely survive a few years, and by then the cost of something like a 37xx will be much lower.
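(One way to trade that disk space for endurance: discard the whole device so the flash returns to the controller's free pool, then partition only part of it and never touch the rest. The device and the 70% split are placeholders:)

# Return all flash to the free pool first:
blkdiscard /dev/sdX

# Partition ~70% for journals and leave the rest unprovisioned,
# effectively enlarging the spare area:
parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX mkpart journal 0% 70%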

The issue with journals on the same _filesystem_ is that an fsync of the journal causes all the dirty data to be flushed out; you should have a separate partition so that they don't interact (except in the drive and its cache, a non-issue with Intels).
On the other hand, if you have the journal as a file on a filesystem, you can disable barriers and get much higher throughput, while disabling flushes on a block device is hard or impossible (there's a very obscure option of echoing "temporary write through" to the scsi_disk/cache_type sysfs node, but that's not available on Ubuntu, for example).
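(Concretely, the two options contrasted above might look like this; the mount point and SCSI address are placeholders, and running without barriers on hardware that lacks power-loss protection risks data loss:)

# Journal as a file: disable barriers on that filesystem (XFS/ext4 mount option):
mount -o remount,nobarrier /var/lib/ceph/osd/ceph-0

# Journal on a raw block device: tell the kernel the disk cache is
# write-through so it stops issuing flushes. The "temporary" prefix keeps
# the kernel from writing the mode page to the drive itself:
echo "temporary write through" > /sys/class/scsi_disk/0:0:0:0/cache_type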

I agree about the separate partition; maybe it was a problem with the SSD cache, I don't remember the specifics. Your suggestion on disabling barriers piqued my interest. Initially we had barriers disabled, but since we don't have battery-backed controllers we backed that setting out. Are you suggesting disabling barriers in all cases? I'd like to discuss the pros/cons of this option.

----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1