Thanks! I think you should try installing from the Ceph mainline; some bug fixes went in after Hammer (I am not sure whether they were backported). I would suggest trying 1 drive -> 1 OSD first, since we have presently seen some stability issues (mainly due to resource constraints) with more OSDs in a box. The other point is that the installation itself is not straightforward: you will probably need to build all the components yourself (a rough sketch of what that build might look like is below). I am not sure whether it is added as a git submodule or not; Vu, could you please confirm? Since we are working to make this solution work at scale, could you please give us some idea of the scale you are looking at for future deployment?

Regards,
Somnath
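A minimal, untested sketch of that build, assuming the autotools tree of that era and an --enable-xio configure switch for the Accelio messenger (verify both the flag and install-deps.sh against the branch you actually check out):

    # rough sketch only; the configure flag below is an assumption to verify on your branch
    git clone --recursive https://github.com/ceph/ceph.git   # --recursive pulls the git submodules
    cd ceph
    ./install-deps.sh                 # install build dependencies for your distro
    ./autogen.sh
    ./configure --enable-xio          # requires Accelio (libxio) headers and libraries installed
    make -j"$(nproc)"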
From: German Anders [mailto:ganders@xxxxxxxxxxxx]

Hi Roy,

I understand. We are looking at using Accelio with a small starting cluster of 3 MON and 8 OSD servers:

3x MON servers
- 2x Intel Xeon E5-2630v3 @ 2.40GHz (32C with HT)
- 24x 16GB DIMM DDR3 1333MHz (384GB)
- 2x 120GB Intel SSD DC S3500 (RAID-1 for OS)
- 1x ConnectX-3 VPI FDR 56Gb/s ADPT DP

4x OSD servers
- 2x Intel Xeon E5-2609v2 @ 2.50GHz (8C)
- 8x 16GB DIMM DDR3 1333MHz (128GB)
- 2x 120GB Intel SSD DC S3500 (RAID-1 for OS)
- 3x 120GB Intel SSD DC S3500 (Journals)
- 4x 800GB Intel SSD DC S3510 (OSD-SSD-POOL)
- 5x 3TB SAS (OSD-SAS-POOL)
- 1x ConnectX-3 VPI FDR 56Gb/s ADPT DP

4x OSD servers
- 2x Intel Xeon E5-2650v2 @ 2.60GHz (32C with HT)
- 8x 16GB DIMM DDR3 1866MHz (128GB)
- 2x 200GB Intel SSD DC S3700 (RAID-1 for OS)
- 3x 200GB Intel SSD DC S3700 (Journals)
- 4x 800GB Intel SSD DC S3510 (OSD-SSD-POOL)
- 5x 3TB SAS (OSD-SAS-POOL)
- 1x ConnectX-3 VPI FDR 56Gb/s ADPT DP

We are thinking of using either the Infernalis v9.0.0 or the Hammer release. Comments? Recommendations?

German
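Whichever release is chosen, switching the transport to Accelio should then mainly be a ceph.conf change on top of that build. A minimal, untested sketch, assuming the Hammer-era ms_type option selects the xio messenger and using placeholder subnets for the ConnectX-3 ports:

    # sketch only; ms_type availability on your release and the subnets below are assumptions
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [global]
    # select the Accelio (xio) messenger instead of the default transport
    ms_type = xio
    # placeholder networks; use the addresses configured on the ConnectX-3 ports
    public network  = 10.10.0.0/24
    cluster network = 10.10.1.0/24
    EOF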
2015-09-01 14:46 GMT-03:00 Somnath Roy <Somnath.Roy@xxxxxxxxxxx>:

Hi German,

We are working to make it production ready ASAP. As you know, RDMA is very resource constrained, but at the same time it will outperform TCP, so there will be a definite trade-off between cost and performance. We do not yet have a good picture of how big RDMA deployments could be, so it would be really helpful if you could give us some idea of how you are planning to deploy it (i.e. how many nodes/OSDs, SSDs or HDDs, EC or replication, etc.).

Thanks & Regards,
Somnath

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx]
On Behalf Of German Anders

Thanks a lot for the quick response, Robert. Any idea when it is going to be ready for production? Is there any alternative solution with similar performance?

Best regards,
German
2015-09-01 13:42 GMT-03:00 Robert LeBlanc <robert@xxxxxxxxxxxxx>:

Accelio and Ceph are still in heavy development and not ready for production.
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

On Tue, Sep 1, 2015 at 10:31 AM, German Anders wrote:

Hi cephers,

I would like to know the production-readiness status of Accelio & Ceph. Does anyone have a home-made procedure implemented on Ubuntu? Recommendations, comments?

Thanks in advance,
Best regards,
German
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com