Re: slow performance even when using SSDs

I was getting roughly the same results as your tmpfs test using
spinning disks for OSDs with a 160GB Intel 320 SSD being used for the
journal.  Theoretically the 520 SSD should give better performance
than my 320s.

Keep in mind that even with balance-alb, multiple GigE connections
will only be used if there are multiple TCP sessions being used by
Ceph.
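
If you want to double-check how the bond is actually behaving, the
bonding driver exposes its state under /proc and /sys (assuming the
bond interface is named bond0, as in your setup):

  cat /proc/net/bonding/bond0
  cat /sys/class/net/bond0/bonding/mode

The first shows the current mode and the state of each slave; the
second just prints the mode name and number (balance-alb is mode 6).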

You don't mention it in your email, but if you're using kernel 3.4+
you'll want to make sure you create your btrfs filesystem with the
large node & leaf size (Big Metadata - I've heard recommendations of
32k instead of the default 4k) so your performance doesn't degrade over
time.
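
Something along these lines at mkfs time should do it (the device path
is just a placeholder; -l sets the leaf size and -n the node size):

  mkfs.btrfs -l 32768 -n 32768 /dev/sdX

Note that the node/leaf size can only be chosen when the filesystem is
created, so an existing filesystem would need to be recreated.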

I'm curious what speed you're getting from dd in a streaming write.
You might try running a "dd if=/dev/zero of=<intel ssd partition>
bs=128k count=something" to see what the SSD will spit out without
Ceph in the picture.
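
For example, something like this (the partition path is a placeholder;
oflag=direct bypasses the page cache so you measure the device rather
than RAM, and bs=128k count=8192 writes 1GB):

  dd if=/dev/zero of=/dev/sdX2 bs=128k count=8192 oflag=direct

Running it a couple of times and averaging should give a reasonable
streaming-write baseline for the SSD.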

Calvin

On Thu, May 10, 2012 at 7:09 AM, Stefan Priebe - Profihost AG
<s.priebe@xxxxxxxxxxxx> wrote:
> OK, here are some retests. I had the SSDs connected to an old RAID
> controller, even though I used them as JBODs (oops).
>
> Here are two new tests (using kernel 3.4-rc6); it would be great if
> someone could tell me whether they're fine or bad.
>
> New tests with all 3 SSDs connected to the mainboard.
>
> #~ rados -p rbd bench 60 write
> Total time run:        60.342419
> Total writes made:     2021
> Write size:            4194304
> Bandwidth (MB/sec):    133.969
>
> Average Latency:       0.477476
> Max latency:           0.942029
> Min latency:           0.109467
>
> #~ rados -p rbd bench 60 write -b 4096
> Total time run:        60.726326
> Total writes made:     59026
> Write size:            4096
> Bandwidth (MB/sec):    3.797
>
> Average Latency:       0.016459
> Max latency:           0.874841
> Min latency:           0.002392
>
> Another test with only the OSD on the disk and the journal in memory / tmpfs:
> #~ rados -p rbd bench 60 write
> Total time run:        60.513240
> Total writes made:     2555
> Write size:            4194304
> Bandwidth (MB/sec):    168.889
>
> Average Latency:       0.378775
> Max latency:           4.59233
> Min latency:           0.055179
>
> #~ rados -p rbd bench 60 write -b 4096
> Total time run:        60.116260
> Total writes made:     281903
> Write size:            4096
> Bandwidth (MB/sec):    18.318
>
> Average Latency:       0.00341067
> Max latency:           0.720486
> Min latency:           0.000602
>
> Another problem I have is that I'm always getting:
> "2012-05-10 15:05:22.140027 mon.0 192.168.0.100:6789/0 19 : [WRN]
> message from mon.2 was stamped 0.109244s in the future, clocks not
> synchronized"
>
> even though NTP is running fine on all systems.
>
> Stefan
>
> On 10.05.2012 14:09, Stefan Priebe - Profihost AG wrote:
>> Dear List,
>>
>> I'm doing a test setup with Ceph v0.46 and wanted to know how fast Ceph is.
>>
>> My test setup:
>> 3 servers with an Intel Xeon X3440, a 180GB Intel 520 Series SSD, 4GB RAM,
>> and 2x 1Gbit/s LAN each
>>
>> All 3 are running as mon.a-c and osd.0-2. Two of them are also running
>> as mds.2 and mds.3 (one of them has 8GB RAM instead of 4GB).
>>
>> All machines run Ceph v0.46 and a vanilla Linux kernel v3.0.30, and all of
>> them use btrfs on the SSD, which serves /srv/{osd,mon}.X. All of them use
>> eth0+eth1 as bond0 (mode 6, balance-alb).
>>
>> This gives me:
>> rados -p rbd bench 60 write
>>
>> ...
>> Total time run:        61.465323
>> Total writes made:     776
>> Write size:            4194304
>> Bandwidth (MB/sec):    50.500
>>
>> Average Latency:       1.2654
>> Max latency:           2.77124
>> Min latency:           0.170936
>>
>> Shouldn't it be at least 100MB/s? (1Gbit/s / 8)
>>
>> And rados -p rbd bench 60 write -b 4096 gives pretty bad results:
>> Total time run:        60.221130
>> Total writes made:     6401
>> Write size:            4096
>> Bandwidth (MB/sec):    0.415
>>
>> Average Latency:       0.150525
>> Max latency:           1.12647
>> Min latency:           0.026599
>>
>> All btrfs SSDs are also mounted with noatime.
>>
>> Thanks for your help!
>>
>> Greets Stefan

