Re: Fwd: [Ceph-community] Wasting the Storage capacity when using Ceph based On high-end storage systems

Hi Jack,

Any RAID controller supports JBOD mode.

So you won't build a RAID, even though you could.

Instead you leave this to Ceph, which builds the redundancy in software.

Or, if you have high availability needs, you can let the RAID
controller build arrays with RAID levels where the raw loss of capacity
is small, and in addition to that, let Ceph add extra redundancy.
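
For a rough feel of the trade-off, here is a back-of-the-envelope
sketch in Python; the 12 x 8 TB host, the RAID-6 layout and the
replica counts are illustrative assumptions, not a recommendation:

    # Usable capacity per host under two layouts (all numbers assumed).
    raw_tb = 12 * 8.0                      # 12 disks of 8 TB each

    # JBOD: Ceph triple replication (size=3) provides all redundancy.
    jbod_usable = raw_tb / 3               # -> 32.0 TB usable

    # RAID-6 (2 parity disks) on the controller, plus Ceph size=2 on top.
    raid6_usable = raw_tb * (10 / 12) / 2  # -> 40.0 TB usable

    print(jbod_usable, raid6_usable)

The overheads multiply, so you can trade controller-level redundancy
against the number of Ceph replicas, depending on the failure domains
you need.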

In any case, what you are describing is an advantage of Ceph, not the
opposite.

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402, Hanau local court (Amtsgericht Hanau)
Managing director: Oliver Dzombic

Tax no.: 35 236 3622 1
VAT ID: DE274086107


On 30.05.2016 at 07:55, Jack Makenz wrote:
> 
> Forwarded conversation
> Subject: *Wasting the Storage capacity when using Ceph based On high-end
> storage systems*
> ------------------------
> 
> From: *Jack Makenz* <jack.makenz@xxxxxxxxx <mailto:jack.makenz@xxxxxxxxx>>
> Date: Sun, May 29, 2016 at 6:52 PM
> To: ceph-community@xxxxxxxxxxxxxx <mailto:ceph-community@xxxxxxxxxxxxxx>
> 
> 
> Hello All,
> There is a serious problem with Ceph that may waste storage
> capacity when using high-end storage systems (Hitachi, IBM, EMC, HP, ...)
> as the back-end for OSD hosts.
> 
> Imagine that in a real cloud we need *_n petabytes_* of storage capacity,
> an amount that the local hard disks of commodity hardware or OSD servers
> cannot provide. Thus we have to use storage systems as the back-end for
> OSD hosts (to run the OSD daemons).
> 
> But because almost all of these storage systems (regardless of their
> brand) use RAID technology, and Ceph also replicates at least two copies
> of each object, a large amount of storage capacity is wasted.
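> 
> (To make the compounding concrete, a quick sketch in Python; the RAID-6
> geometry and the two-copy replication are assumptions for illustration:
> 
>     raw_tb = 1000.0                     # 1 PB of raw disk in the array
>     after_raid6 = raw_tb * (10 / 12)    # RAID-6 in 10+2 groups -> ~833 TB
>     after_ceph = after_raid6 / 2        # Ceph size=2 on top -> ~417 TB
>     print(after_ceph / raw_tb)          # ~0.42 of raw capacity usable
> 
> i.e. well under half of the raw capacity remains usable after both layers.)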
> 
> So is there any solution to this problem/misunderstanding?
> 
> Regards 
> Jack Makenz
> 
> ----------
> From: *Nate Curry* <curry@xxxxxxxxxxxxx <mailto:curry@xxxxxxxxxxxxx>>
> Date: Mon, May 30, 2016 at 5:50 AM
> To: Jack Makenz <jack.makenz@xxxxxxxxx <mailto:jack.makenz@xxxxxxxxx>>
> Cc: Unknown <ceph-community@xxxxxxxxxxxxxx
> <mailto:ceph-community@xxxxxxxxxxxxxx>>
> 
> 
> I think the purpose of Ceph is to get away from having to rely on high-end
> storage systems and to provide the capability to use multiple
> less expensive servers as the storage system.
> 
> That being said, you should still be able to use the high-end storage
> systems with or without RAID enabled.  You could do away with RAID
> altogether and let Ceph handle the redundancy, or you could have LUNs
> assigned to hosts and put into use as OSDs.  You could make it work
> either way, but to get the most out of your storage with Ceph I think a
> non-RAID configuration would be best.
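> 
> (If you do keep RAID underneath, one way to avoid doubling up on
> redundancy is to lower the pool's replica count; a minimal sketch,
> assuming a pool named "rbd" and the standard ceph CLI on the path:
> 
>     import subprocess
> 
>     # Rely on RAID beneath the OSDs and keep only two Ceph replicas.
>     subprocess.check_call(["ceph", "osd", "pool", "set", "rbd", "size", "2"])
>     subprocess.check_call(["ceph", "osd", "pool", "set", "rbd", "min_size", "1"])
> 
> Whether size=2 is safe enough depends on your failure domains.)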
> 
> Nate Curry
> 
> 
> 
> ----------
> From: *Doug Dressler* <darbymorrison@xxxxxxxxx
> <mailto:darbymorrison@xxxxxxxxx>>
> Date: Mon, May 30, 2016 at 6:02 AM
> To: Nate Curry <curry@xxxxxxxxxxxxx <mailto:curry@xxxxxxxxxxxxx>>
> Cc: Jack Makenz <jack.makenz@xxxxxxxxx <mailto:jack.makenz@xxxxxxxxx>>,
> Unknown <ceph-community@xxxxxxxxxxxxxx
> <mailto:ceph-community@xxxxxxxxxxxxxx>>
> 
> 
> For non-technical reasons I had to run Ceph initially using SAN disks.
> 
> Lesson learned:
> 
> Make sure deduplication is disabled on the SAN :-)
> 
> 
> 
> ----------
> From: *Jack Makenz* <jack.makenz@xxxxxxxxx <mailto:jack.makenz@xxxxxxxxx>>
> Date: Mon, May 30, 2016 at 9:05 AM
> To: Nate Curry <curry@xxxxxxxxxxxxx <mailto:curry@xxxxxxxxxxxxx>>,
> ceph-community@xxxxxxxxxxxxxx <mailto:ceph-community@xxxxxxxxxxxxxx>
> 
> 
> Thanks Nate,
> But as I mentioned before, providing petabytes of storage capacity on
> commodity hardware or enterprise servers is almost impossible. Of course
> it is possible by installing hundreds of servers with 3-terabyte hard
> disks, but this solution wastes data center raised-floor space, power,
> and also *_money_* :)
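> 
> (For scale, a quick sanity check in Python, using the 3 TB disks
> mentioned above; the replica count and disks-per-server figures are
> assumptions:
> 
>     usable_pb = 1.0                  # target usable capacity
>     raw_tb = usable_pb * 3 * 1000    # 3x replication -> 3000 TB raw
>     disks = raw_tb / 3               # 3 TB disks -> 1000 disks
>     print(disks / 12, disks / 36)    # ~84 mid-density or ~28 dense servers
> 
> so the rack and power footprint depends heavily on chassis density.)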
> 
> 
> 
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



