Re: xfs/nobarrier


 



On Sun, Dec 28, 2014 at 1:25 AM, Lindsay Mathieson
<lindsay.mathieson@xxxxxxxxx> wrote:
> On Sat, 27 Dec 2014 06:02:32 PM you wrote:
>> Are you able to separate the log from the data in your setup and check the
>> difference?
>
> Do you mean putting the OSD journal on a separate disk? I have the journals on
> SSD partitions, which has helped a lot; previously I was getting 13 MB/s.
>

No, I meant the XFS journal, since we are talking about filestore filesystem performance.
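For reference, XFS supports placing its log on an external device at mkfs and mount time. A minimal sketch; the device names, log size, and mount point below are purely illustrative, not taken from Lindsay's setup:

```shell
# Hypothetical layout: /dev/sdb1 holds the OSD data, /dev/sdc1 (e.g. an
# SSD partition) holds the external XFS log. Names/sizes are illustrative.
mkfs.xfs -l logdev=/dev/sdc1,size=128m /dev/sdb1

# The external log device must also be specified at mount time:
mount -o logdev=/dev/sdc1,noatime /dev/sdb1 /var/lib/ceph/osd/ceph-0
```

Note that losing the log device makes the filesystem unmountable, which is the "increased failure domain" trade-off mentioned below.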

> It's not a good SSD - Samsung 840 EVO :( one of my plans for the new year is to
> get SSDs with better seq write speed and IOPS
>
> I've been trying to figure out whether adding more OSDs will improve my
> performance; I only have 2 OSDs (one per node)

Erm, yes. Two OSDs are not enough even for a performance-measurement
testbed, and neither are three or any other small number. That explains
the numbers you are getting and the impact of the nobarrier option.

>
>> So, depending on the type of your benchmark
>> (sync/async/IOPS-/bandwidth-hungry) you may gain something just by
>> crossing journal and data between disks (at the cost of increasing the
>> failure domain of a single disk).
>
> One does tend to focus on raw seq read/writes for benchmarking, but my actual
> usage is solely for hosting KVM images, so random R/W is probably more
> important.

Ok, then my suggestion may not help as much as it could.
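For a KVM-hosting workload, a small random-I/O benchmark is more representative than sequential reads/writes. A hedged sketch using fio; the job name, test-file path, mix ratio, and sizes are all illustrative assumptions, so adjust them to your environment:

```shell
# Hypothetical fio run: 4k random read/write mix, roughly resembling a
# VM-image workload. The filename and all parameters are illustrative.
fio --name=vm-randrw \
    --filename=/var/lib/ceph/osd/ceph-0/fio-test \
    --ioengine=libaio --direct=1 \
    --rw=randrw --rwmixread=70 \
    --bs=4k --iodepth=16 --size=1g \
    --runtime=60 --time_based
```

Running this against an RBD image from inside a guest (rather than against the OSD's backing filesystem) would measure the path your VMs actually see.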

>
> --
> Lindsay
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


