Re: write speed, leaves a little to be desired?


 



Hi, thanks for the replies, but I was under the impression that the journal is the same as the cache pool, so that there is no extra journal write? 
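
If Jan is right that the cache pool is just an ordinary pool on the SSD OSDs, each with its own filestore journal on the same drive, then every client write hits the SSD twice. A rough back-of-the-envelope in Python, using only figures from this thread (the number of cache-tier SSDs and the cache pool's replication size are guesses):

    # Rough cache-tier write estimate, ASSUMING journal and filestore share each SSD.
    per_ssd_seq_write = 500    # MB/s, the hdparm/dd figure quoted below
    journal_penalty   = 2      # each write lands in the journal, then in the filestore
    cache_ssds        = 3      # ASSUMPTION: three 1-SSD servers in the cache tier
    cache_pool_size   = 3      # ASSUMPTION: replicated cache pool, size 3

    per_ssd_effective = per_ssd_seq_write / journal_penalty           # ~250 MB/s
    aggregate = cache_ssds * per_ssd_effective / cache_pool_size      # ~250 MB/s
    print(f"~{per_ssd_effective:.0f} MB/s per SSD, ~{aggregate:.0f} MB/s aggregate")
    # -> in the same ballpark as the 200-300MB/s we are seeing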
About the EVOs: as this is a test cluster, we would like to see how far we can push commodity hardware.
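
One measurement that might settle the EVO question: as far as I know the OSD journal issues synchronous writes, and consumer drives often deliver far less there than their hdparm/dd sequential numbers suggest. A rough O_DSYNC micro-benchmark, sketched in Python; the scratch path is hypothetical and should point at the SSD under test:

    import os, time

    # Rough stand-in for: dd if=/dev/zero of=<file> bs=4M count=256 oflag=dsync
    path  = "/mnt/ssd/dsync-test.bin"   # ASSUMPTION: a scratch file on the EVO
    bs    = 4 * 1024 * 1024             # 4 MiB per write
    count = 256                         # 1 GiB total

    fd  = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    buf = b"\0" * bs
    t0  = time.time()
    for _ in range(count):
        os.write(fd, buf)               # each write is forced to stable storage
    os.close(fd)
    print(f"{bs * count / (time.time() - t0) / 1e6:.0f} MB/s with O_DSYNC")
    os.unlink(path)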
The servers are all Dell 1U rack mounts with 6 Gb/s SATA controllers and dual 10 Gbit NICs.
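
Also worth noting: a single 10 Gbit link tops out around 1.25GB/s raw, so if the 1.2GB/s read figure was measured from a single client it is already close to network line rate; the 200-300MB/s writes are clearly not network-bound.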


Thanks,
//Florian


> On 11 Dec 2015, at 15:01, Jan Schermer <jan@xxxxxxxxxxx> wrote:
> 
> The drive will actually be writing 500MB/s in this case, if the journal is on the same drive.
> All writes go to the journal first and then to the filestore, so 200MB/s is actually a sane figure.
> 
> Jan
> 
> 
>> On 11 Dec 2015, at 13:55, Zoltan Arnold Nagy <zoltan@xxxxxxxxxxxxxxxxxx> wrote:
>> 
>> It’s very unfortunate that you guys are using the EVO drives. As we’ve discussed numerous times on the ML, they are not very suitable for this task.
>> I think that 200-300MB/s is actually not bad (without knowing anything about the hardware setup, as you didn’t give details…) coming from those drives, but expect to replace them soon.
>> 
>>> On 11 Dec 2015, at 13:44, Florian Rommel <florian.rommel@xxxxxxxxxxxxxxx> wrote:
>>> 
>>> Hi, we are just testing our new Ceph cluster, and to optimise our spinning disks we created an erasure-coded pool and an SSD cache pool.
>>> 
>>> We modified the CRUSH map to make an SSD pool, as each server contains 1 SSD drive and 5 spinning drives.
>>> 
>>> Stress testing the cluster, read performance is very nice, pushing a little over 1.2GB/s;
>>> however, the write speed is only 200-300MB/s.
>>> 
>>> All the SSDs are Samsung 500GB EVO 850 Pros and can push 500MB/s write speed, as tested with hdparm and dd.
>>> 
>>> What can we tweak so that the write speed over the network increases as well?
>>> 
>>> We run everything over 10GbE.
>>> 
>>> The cache mode is set to write-back
>>> 
>>> Any help would be greatly appreciated.
>>> 
>>> Thank you and best regards
>>> //Florian
>>> 
>>> 
>>> 
>> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





