Re: Fwd: [Ceph-community] Wasting the Storage capacity when using Ceph based On high-end storage systems

Also, if for political reasons you need a "vendor" solution, ask Dell about their DSS 7000 servers: 90 x 8 TB disks and two compute nodes in 4RU would go a long way toward making up a multi-PB Ceph solution.

 

Supermicro also offers similar solutions, with 36-, 60- and 90-disk models in 4RU.

 

Cisco has the C3260, which holds around 60 disks depending on configuration.
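
As a back-of-the-envelope sketch (my own numbers, not from any of the vendors above: 90 x 8 TB disks per 4RU chassis, straight 3x Ceph replication, no allowance for spares or near-full ratios):

# Rough sizing for a multi-PB cluster built from dense 4RU chassis.
# All figures are assumptions for illustration only.
import math

disks_per_chassis = 90
disk_tb = 8
replication = 3          # assumed Ceph pool size
target_usable_pb = 5     # hypothetical target

raw_per_chassis_tb = disks_per_chassis * disk_tb          # 720 TB raw per 4RU
usable_per_chassis_tb = raw_per_chassis_tb / replication  # 240 TB usable per 4RU

chassis = math.ceil(target_usable_pb * 1000 / usable_per_chassis_tb)
print(f"{chassis} chassis, ~{chassis * 4} RU, for ~{target_usable_pb} PB usable")
# -> 21 chassis, ~84 RU, for ~5 PB usable

So a 5 PB usable target fits in roughly two racks of this class of hardware, which is the point about density.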

 

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Jack Makenz
Sent: Monday, 30 May 2016 3:56 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: [ceph-users] Fwd: [Ceph-community] Wasting the Storage capacity when using Ceph based On high-end storage systems

 

 

Forwarded conversation
Subject: Wasting the Storage capacity when using Ceph based On high-end storage systems
------------------------

From: Jack Makenz <jack.makenz@xxxxxxxxx>
Date: Sun, May 29, 2016 at 6:52 PM
To: ceph-community@xxxxxxxxxxxxxx

Hello All,

There is a serious problem with Ceph that may waste storage capacity when using high-end storage systems (Hitachi, IBM, EMC, HP, ...) as the back-end for OSD hosts.

 

Imagine that in a real cloud we need n petabytes of storage capacity, an amount that commodity hardware or the OSD servers' own hard disks cannot provide. Thus we have to use storage systems as the back-end for the OSD hosts (to run the OSD daemons).

 

But because almost all of these storage systems (regardless of brand) use RAID technology, and Ceph also replicates at least two copies of each object, a large amount of storage capacity is wasted.
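
To put rough numbers on that compounding (illustrative only; the 8+2 RAID-6 layout and the pool sizes are my assumptions, not specific to any of those arrays):

# Fraction of raw array capacity left after RAID on the array *and* Ceph replication.
raid6_efficiency = 8 / (8 + 2)   # assumed 8+2 RAID-6 groups: 80% of raw survives
for copies in (2, 3):            # Ceph pool size (number of replicas)
    usable = raid6_efficiency / copies
    print(f"size={copies}: {usable:.0%} of raw capacity usable")
# size=2: 40% of raw capacity usable
# size=3: 27% of raw capacity usable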

 

So is there any solution to this problem, or am I misunderstanding something?

 

Regards 

Jack Makenz


----------
From: Nate Curry <curry@xxxxxxxxxxxxx>
Date: Mon, May 30, 2016 at 5:50 AM
To: Jack Makenz <jack.makenz@xxxxxxxxx>
Cc: Unknown <ceph-community@xxxxxxxxxxxxxx>

I think the purpose of Ceph is to get away from having to rely on high-end storage systems and to provide the capacity to utilize multiple less-expensive servers as the storage system.

That being said, you should still be able to use the high-end storage systems with or without RAID enabled.  You could do away with RAID altogether and let Ceph handle the redundancy, or you could have LUNs assigned to hosts and put into use as OSDs.  You could make it work either way, but to get the most out of your storage with Ceph I think a non-RAID configuration would be best.
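
To illustrate that last point with assumed numbers (a 90 x 8 TB shelf and 8+2 RAID-6, neither taken from this thread), keeping the Ceph pool at size=3 in both cases:

# Same shelf, same Ceph pool (size=3), with and without RAID underneath the OSDs.
raw_tb = 720                              # assumed: one 90 x 8 TB shelf
size = 3                                  # Ceph replica count

jbod_usable = raw_tb / size               # pass-through disks, Ceph-only redundancy
raid6_usable = raw_tb * (8 / 10) / size   # assumed 8+2 RAID-6 LUNs used as OSDs

print(f"JBOD OSDs:       {jbod_usable:.0f} TB usable")   # 240 TB
print(f"RAID-6 LUN OSDs: {raid6_usable:.0f} TB usable")  # 192 TB

Dropping the array-side RAID and letting Ceph own the redundancy recovers the difference, which is why the non-RAID layout usually wins on usable capacity.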

Nate Curry

_______________________________________________
Ceph-community mailing list
Ceph-community@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-community-ceph.com


----------
From: Doug Dressler <darbymorrison@xxxxxxxxx>
Date: Mon, May 30, 2016 at 6:02 AM
To: Nate Curry <curry@xxxxxxxxxxxxx>
Cc: Jack Makenz <jack.makenz@xxxxxxxxx>, Unknown <ceph-community@xxxxxxxxxxxxxx>

For non-technical reasons I had to run ceph initially using SAN disks.

 

Lesson learned:

 

Make sure deduplication is disabled on the SAN :-)

 

 


----------
From: Jack Makenz <jack.makenz@xxxxxxxxx>
Date: Mon, May 30, 2016 at 9:05 AM
To: Nate Curry <curry@xxxxxxxxxxxxx>, ceph-community@xxxxxxxxxxxxxx

Thanks Nate, 

But as I mentioned before, providing petabytes of storage capacity on commodity hardware or enterprise servers is almost impossible. Of course, it is possible by installing hundreds of servers with 3-terabyte hard disks, but that solution wastes data center raised-floor space, power, and also money :)

 

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
