Re: infiniband implementation

Thanks a lot Adam, that was a typo: the S3700s are for the journals and the S3500s for the OS. Is there any special CRUSH configuration for IB, or for the mix of SSD and SATA OSD daemons?
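
For reference, CRUSH itself has no IB-specific settings (IB mainly affects the network/messenger layer), but for the SSD/SATA mix a common pattern is to give each media type its own CRUSH root and rule so that pools can be pinned to one tier. The following is only a rough sketch, with made-up bucket ids, names and weights, and it assumes the per-host ssd/sata host buckets are defined elsewhere in the map:

    # Hypothetical CRUSH map fragment: one root per media type.
    # The host buckets (cephosd01-ssd, cephosd01-sata, ...) holding the
    # actual OSDs are assumed to be defined earlier in the map.
    root ssd {
        id -10                      # illustrative bucket id
        alg straw
        hash 0                      # rjenkins1
        item cephosd01-ssd weight 3.200
        item cephosd02-ssd weight 3.200
    }
    root sata {
        id -20
        alg straw
        hash 0
        item cephosd01-sata weight 12.000
        item cephosd02-sata weight 12.000
    }
    rule ssd_rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
    }
    rule sata_rule {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take sata
        step chooseleaf firstn 0 type host
        step emit
    }

A pool is then tied to one tier with, for example, "ceph osd pool set ssdpool crush_ruleset 1" (ssdpool being a hypothetical pool name).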

Thanks in advance,


German

2015-06-29 14:05 GMT-03:00 Adam Boyhan <adamb@xxxxxxxxxx>:
One thing that jumps out at me is using the S3700 for the OS but the S3500 for journals. I would use the S3700 for journals and the S3500 for the OS. Looks pretty good other than that!
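
For what it's worth, journal placement is decided when the OSD is created; with ceph-deploy it is the optional third field of the HOST:DATA:JOURNAL triplet. A minimal sketch with made-up hostnames and device paths (an 800G S3500 as the data disk, a partition on a 200G S3700 as its journal):

    # Hypothetical example -- adjust host and device names to the real layout.
    ceph-deploy osd create cephosd01:/dev/sdd:/dev/sdb1

    # The journal partition size (in MB) can be set cluster-wide in ceph.conf:
    [osd]
    osd journal size = 10240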




From: "German Anders" <ganders@xxxxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Monday, June 29, 2015 12:24:41 PM
Subject: infiniband implementation

hi cephers,

   I'd like to know if there is any best practice or recommended procedure for deploying Ceph with InfiniBand FDR 56 Gb/s for both front-end and back-end connectivity. Any CRUSH tuning parameters, etc.?

The Ceph cluster has:

- 8 OSD servers
    - 2x Intel Xeon E5 8C with HT
    - 128G RAM
    - 2x 200G Intel DC S3700 (RAID-1) OS
    - 3x 200G Intel DC S3500 - Journals
    - 4x 800G Intel DC S3500 - OSD SSD & Journal on same disks
    - 4x 3TB - OSD SATA
    - 1x IB FDR dual-port adapter

- 3 MON servers
    - 2x Intel Xeon E5 6C with HT
    - 128G RAM
    - 2x 200G Intel SSD (RAID-1) OS
    - 1x IB FDR dual-port adapter

All running Ubuntu 14.04.1 LTS with kernel 4.0.6
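
For the front/back-end split over the FDR links, assuming IPoIB interfaces on the HCAs, the usual approach is to separate the two networks in ceph.conf along these lines (the subnets below are purely illustrative):

    [global]
    # client / MON traffic (front end) on one IPoIB subnet
    public network  = 10.10.10.0/24
    # OSD replication, recovery and backfill (back end) on another
    cluster network = 10.10.20.0/24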


Thanks in advance,

German

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

