Re: Accelio & Ceph


 



Thanks a lot, guys. I'll configure the cluster and send you some feedback once we test it.

Best regards,

German

2015-09-01 15:38 GMT-03:00 Somnath Roy <Somnath.Roy@xxxxxxxxxxx>:

Thanks!

6 OSD daemons per server should be good.

 

Vu,

Could you please send out the doc you are maintaining?

 

Regards

Somnath

 

From: German Anders [mailto:ganders@xxxxxxxxxxxx]
Sent: Tuesday, September 01, 2015 11:36 AM


To: Somnath Roy
Cc: Robert LeBlanc; ceph-users
Subject: Re: Accelio & Ceph

 

Thanks, Roy. We're planning to grow this cluster if we can get the performance we need. The idea is to run non-relational databases here, so it will be highly I/O-intensive. In terms of growth, we are talking about 40-50 OSD servers with no more than 6 OSD daemons per server. If you have any hints or docs on how to compile Ceph with Accelio, that would be awesome.


German

 

2015-09-01 15:31 GMT-03:00 Somnath Roy <Somnath.Roy@xxxxxxxxxxx>:

Thanks!

I think you should try installing from the Ceph mainline. Some bug fixes went in after Hammer (not sure if they were backported).

I would say try 1 drive -> 1 OSD first, since we have presently seen some stability issues (mainly due to resource constraints) with more OSDs in a box.

The other point is that the installation itself is not straightforward. You probably need to build all the components yourself; I'm not sure whether Accelio is added as a git submodule or not. Vu, could you please confirm?
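For what it's worth, the rough shape of a source build at the time looked something like the outline below. This is only a sketch, not a tested procedure: it assumes the autotools `--enable-xio` switch that the XIO messenger work added to Ceph master, and the repository URLs, branch state, and install prefix are illustrative.

```shell
# Build and install Accelio (libxio) first -- illustrative prefix
git clone https://github.com/accelio/accelio.git
cd accelio
./autogen.sh && ./configure --prefix=/usr && make && sudo make install
cd ..

# Then build Ceph from master with the XIO messenger enabled
git clone https://github.com/ceph/ceph.git
cd ceph
./autogen.sh
./configure --enable-xio   # assumes the XIO build switch is present in this branch
make -j"$(nproc)"
```

The exact flags may well differ between branches, so treat this as a starting point until Vu's doc confirms the actual steps.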

 

Since we are working to make this solution work at scale, could you please give us some idea of the scale you are looking at for future deployments?

 

Regards

Somnath

 

From: German Anders [mailto:ganders@xxxxxxxxxxxx]
Sent: Tuesday, September 01, 2015 11:19 AM
To: Somnath Roy
Cc: Robert LeBlanc; ceph-users


Subject: Re: Accelio & Ceph

 

Hi Roy,

   I understand. We are looking at using Accelio with a small starting cluster of 3 MON and 8 OSD servers:

3x MON servers

   2x Intel Xeon E5-2630v3 @2.40Ghz (32C with HT)

   24x 16GB DIMM DDR3 1333Mhz (384GB)

   2x 120GB Intel SSD DC S3500 (RAID-1 for OS)

   1x ConnectX-3 VPI FDR 56Gb/s ADPT DP

4x OSD servers

   2x Intel Xeon E5-2609v2 @2.50Ghz (8C)

   8x 16GB DIMM DDR3 1333Mhz (128GB)

   2x 120GB Intel SSD DC S3500 (RAID-1 for OS)

   3x 120GB Intel SSD DC S3500 (Journals)

   4x 800GB Intel SSD DC S3510 (OSD-SSD-POOL)

   5x 3TB SAS (OSD-SAS-POOL)

   1x ConnectX-3 VPI FDR 56Gb/s ADPT DP

4x OSD servers

   2x Intel Xeon E5-2650v2 @2.60Ghz (32C with HT)

   8x 16GB DIMM DDR3 1866Mhz (128GB)

   2x 200GB Intel SSD DC S3700 (RAID-1 for OS)

   3x 200GB Intel SSD DC S3700 (Journals)

   4x 800GB Intel SSD DC S3510 (OSD-SSD-POOL)

   5x 3TB SAS (OSD-SAS-POOL)

   1x ConnectX-3 VPI FDR 56Gb/s ADPT DP

We are thinking of using the Infernalis v9.0.0 development release or the Hammer release. Any comments or recommendations?
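Once a build with XIO support is in place, my understanding is that selecting the Accelio/RDMA messenger was done in ceph.conf. A minimal sketch follows; it assumes the `ms_type` option used by the XIO messenger work at the time, so please verify the option name against the build you end up with:

```ini
[global]
# Select the Accelio (XIO) messenger instead of the default TCP-based
# simple/async messenger. Requires a build with XIO support and RDMA-capable
# NICs (e.g. the ConnectX-3 adapters listed above) on both sides.
ms_type = xio
```

Any further XIO tuning options would come from Vu's doc rather than this sketch.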


German

 

2015-09-01 14:46 GMT-03:00 Somnath Roy <Somnath.Roy@xxxxxxxxxxx>:

Hi German,

We are working to make it production-ready ASAP. As you know, RDMA is very resource-constrained, but at the same time it will outperform TCP. There will be a definite trade-off between cost and performance.

We are lacking ideas on how big an RDMA deployment could be, and it would be really helpful if you could give us some idea of how you are planning to deploy it (i.e., how many nodes/OSDs, SSDs or HDDs, EC or replication, etc.).

 

Thanks & Regards

Somnath

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of German Anders
Sent: Tuesday, September 01, 2015 10:39 AM
To: Robert LeBlanc
Cc: ceph-users
Subject: Re: Accelio & Ceph

 

Thanks a lot for the quick response, Robert. Any idea when it's going to be ready for production? Any alternative solution with similar performance?

Best regards,


German

 

2015-09-01 13:42 GMT-03:00 Robert LeBlanc <robert@xxxxxxxxxxxxx>:

 
Accelio and Ceph are still in heavy development and not ready for production.
 
- ----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
 
On Tue, Sep 1, 2015 at 10:31 AM, German Anders  wrote:
Hi cephers,
 
 I would like to know the production-readiness status of Accelio & Ceph. Does anyone have a home-made procedure implemented on Ubuntu?
 
Recommendations, comments?
 
Thanks in advance,
 
Best regards,
 
German
 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 

 

 




 

 




