Fwd: [Ceph-community] Wasting the Storage capacity when using Ceph based On high-end storage systems

Forwarded conversation
Subject: Wasting the Storage capacity when using Ceph based On high-end storage systems
------------------------

From: Jack Makenz <jack.makenz@xxxxxxxxx>
Date: Sun, May 29, 2016 at 6:52 PM
To: ceph-community@xxxxxxxxxxxxxx


Hello All,
There is a serious problem with Ceph that may waste storage capacity when using high-end storage systems (Hitachi, IBM, EMC, HP, ...) as the back-end for OSD hosts.

Imagine that in a real cloud we need n petabytes of storage capacity, and that the hard disks in commodity hardware or in the OSD servers themselves cannot provide that amount. We therefore have to use storage systems as the back-end for the OSD hosts (to run the OSD daemons on).

But because almost all of these storage systems (regardless of brand) use RAID technology, and Ceph additionally replicates at least two copies of each object, a large amount of storage capacity is wasted.
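As a rough back-of-the-envelope sketch of how the two layers compound (the RAID level, replica count, and capacity figures below are illustrative assumptions, not vendor numbers):

    # Illustrative only: stacking RAID overhead under Ceph replication.
    def usable_capacity(raw_tb, raid_efficiency, ceph_replicas):
        # Usable TB = raw capacity, minus RAID parity, divided by replica count.
        return raw_tb * raid_efficiency / ceph_replicas

    raw = 1000.0  # assume 1 PB of raw disk behind the arrays

    # RAID-6 (e.g. 8+2) keeps ~80% of raw; Ceph then keeps 3 full copies.
    print(usable_capacity(raw, 0.8, 3))  # ~267 TB usable out of 1000 TB

    # The same disks as JBOD (no RAID), with Ceph replication alone.
    print(usable_capacity(raw, 1.0, 3))  # ~333 TB usable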

So is there any solution to this problem, or am I misunderstanding something?

Regards 
Jack Makenz

----------
From: Nate Curry <curry@xxxxxxxxxxxxx>
Date: Mon, May 30, 2016 at 5:50 AM
To: Jack Makenz <jack.makenz@xxxxxxxxx>
Cc: Unknown <ceph-community@xxxxxxxxxxxxxx>


I think the purpose of Ceph is to get away from having to rely on high-end storage systems and to provide the capacity by utilizing multiple less expensive servers as the storage system.

That being said, you should still be able to use the high-end storage systems with or without RAID enabled. You could do away with RAID altogether and let Ceph handle the redundancy, or you could have LUNs assigned to hosts and put them into use as OSDs. You could make it work either way, but to get the most out of your storage with Ceph I think a non-RAID configuration would be best.
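If you let Ceph own the redundancy, the replication level is just pool configuration. A minimal ceph.conf sketch (these two options are standard; the values shown are the usual defaults, and they can be tuned per pool):

    [global]
    osd pool default size = 3       # replicas Ceph keeps of each object
    osd pool default min size = 2   # fewest replicas at which I/O is still served

With that in place the arrays underneath can stay JBOD, and every terabyte of parity you are no longer spending on RAID shows up as usable Ceph capacity.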

Nate Curry

_______________________________________________
Ceph-community mailing list
Ceph-community@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-community-ceph.com


----------
From: Doug Dressler <darbymorrison@xxxxxxxxx>
Date: Mon, May 30, 2016 at 6:02 AM
To: Nate Curry <curry@xxxxxxxxxxxxx>
Cc: Jack Makenz <jack.makenz@xxxxxxxxx>, Unknown <ceph-community@xxxxxxxxxxxxxx>


For non-technical reasons, I initially had to run Ceph using SAN disks.

Lesson learned:

Make sure deduplication is disabled on the SAN :-) (otherwise the array can quietly collapse Ceph's replicas back into a single physical copy, defeating the redundancy Ceph thinks it has)



----------
From: Jack Makenz <jack.makenz@xxxxxxxxx>
Date: Mon, May 30, 2016 at 9:05 AM
To: Nate Curry <curry@xxxxxxxxxxxxx>, ceph-community@xxxxxxxxxxxxxx


Thanks Nate, 
But as I mentioned before, providing petabytes of storage capacity on commodity hardware or enterprise servers is almost impossible. Of course it is possible by installing hundreds of servers with 3-terabyte hard disks, but that solution wastes data center raised-floor space, power, and also money :)
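To put rough numbers on that objection (the disks-per-server, drive size, and wattage below are assumptions for illustration only):

    # Hypothetical footprint estimate for the scenario above.
    def servers_needed(usable_pb, disks_per_server=12, disk_tb=3, replicas=3):
        raw_tb = usable_pb * 1000 * replicas               # raw capacity required
        return -(-raw_tb // (disks_per_server * disk_tb))  # ceiling division

    n = servers_needed(5)   # 5 PB usable, 3x replication, 12 x 3 TB nodes
    print(n)                # 417 servers
    print(n * 0.4, "kW")    # at an assumed ~400 W per server: ~167 kW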


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
