Are the journals on the same devices as the data? If so, it might be better to use the SSDs for journaling, since you are not getting better performance out of the SSD pool as it is.
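A quick way to check (assuming the default data paths) is to look at where each OSD's journal points on every node:

    ls -l /var/lib/ceph/osd/ceph-*/journal

If the journals sit on the same disks as the data, the usual procedure is roughly: stop the OSD, flush the journal (ceph-osd -i <id> --flush-journal), point "osd journal" in ceph.conf at an SSD partition, recreate it with ceph-osd -i <id> --mkjournal, and start the OSD again. The partition path below is only a placeholder, adjust for your own layout:

    [osd.0]
    osd journal = /dev/disk/by-partlabel/osd0-journal

Just a sketch of the general approach, not something tuned to your setup.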
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Marek Dohojda
Sent: Monday, November 23, 2015 10:24 PM
To: Haomai Wang
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Performance question
Sorry, I should have specified: the SAS pool is the one doing 100 MB/s :) but to be honest the SSD pool isn't much faster.
On Mon, Nov 23, 2015 at 7:38 PM, Haomai Wang <haomaiwang@xxxxxxxxx> wrote:
On Tue, Nov 24, 2015 at 10:35 AM, Marek Dohojda
<mdohojda@xxxxxxxxxxxxxxxxxxx> wrote:
> No, the SSD and SAS drives are in two separate pools.
>
> On Mon, Nov 23, 2015 at 7:30 PM, Haomai Wang <haomaiwang@xxxxxxxxx> wrote:
>>
>> On Tue, Nov 24, 2015 at 10:23 AM, Marek Dohojda
>> <mdohojda@xxxxxxxxxxxxxxxxxxx> wrote:
>> > I have a Hammer Ceph cluster on 7 nodes with a total of 14 OSDs, 7 of
>> > which are SSDs and 7 of which are 10K SAS drives. I typically get about
>> > 100 MB/s IO rates on this cluster.
So which pool is giving you the 100 MB/s?
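If you haven't already, running rados bench against each pool separately would make the comparison clearer, something like (pool names are placeholders for whatever yours are called):

    rados bench -p <ssd-pool> 30 write
    rados bench -p <sas-pool> 30 write

You can also confirm which OSDs actually back each pool with "ceph osd tree" and "ceph osd crush rule dump", to rule out a crush rule mixing the two device types.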
>>
>> Did you mix the SAS and SSD drives in one pool?
>>
>> >
>> > I have a simple question: is 100 MB/s what I should expect with my
>> > configuration, or should it be higher? I am not sure whether I should be
>> > looking for issues or just accept what I have.
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@xxxxxxxxxxxxxx
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>>
>>
>> --
>> Best Regards,
>>
>> Wheat
>
>
--
Best Regards,
Wheat
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com