Re: Has maximum performance been reached?

As I understand it now, in this case (30 disks) the 10Gbit network is not the bottleneck!

With a different HW config (+5 OSD nodes = +50 disks) I'd get ~3400 MB/s,
and 3 clients could work at full bandwidth, yes?
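(A rough sketch of that extrapolation in Python, using John's formula quoted below; the ~900 MB/s per node and linear scaling are assumptions from this thread, not measurements:

    # Back-of-the-envelope estimate using John's formula (quoted below):
    #   expected client bandwidth ~= (per-node throughput * node count) / replicas
    # Assumes ~900 MB/s per OSD node, size=2 replication, and linear scaling.
    def expected_bw_mb_s(nodes, per_node_mb_s=900, replicas=2):
        return nodes * per_node_mb_s / replicas

    print(expected_bw_mb_s(3))  # current 3 nodes / 30 disks -> 1350.0
    print(expected_bw_mb_s(8))  # with +5 nodes / +50 disks  -> 3600.0

By that formula 8 nodes come out at ~3600 MB/s, so 3400 MB/s would be on the conservative side.)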

OK, let's try it!

Perhaps somebody has more suggestions for increasing performance:
1. NVMe journals (see the ceph.conf sketch below)
2. btrfs on the OSDs
3. SSD-based OSDs
4. 15K RPM HDDs
5. RAID 10 on each OSD node
...
Everybody, brainstorm!!!
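For item 1, a minimal ceph.conf sketch of what NVMe journals could look like with FileStore OSDs; the partition paths are hypothetical, and each partition would have to be prepared for its OSD first:

    [osd]
    # Journal size in MB; a dedicated NVMe partition can afford a large journal.
    osd journal size = 10240

    [osd.0]
    # Point this OSD's journal at its own NVMe partition (hypothetical path).
    osd journal = /dev/nvme0n1p1

    [osd.1]
    osd journal = /dev/nvme0n1p2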

>John:
>Your expected bandwidth (with size=2 replicas) will be (900MB/s * 3)/2 =
>1350MB/s -- so I think you're actually doing pretty well with your
>1367MB/s number.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com