How to calculate the necessary disk amount

Hi Irek,

Got it, Thanks :)

--
idzzy

On August 22, 2014 at 6:17:52 PM, Irek Fasikhov (malmyzh at gmail.com) wrote:

node1: 4[TB], node2: 4[TB], node3: 4[TB] :)
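
As a rough check with these numbers (a sketch only, using the ~11.8 TB figure from later in the thread; the variable names are illustrative, not Ceph tooling):

node_capacities_tb = [4, 4, 4]     # node1, node2, node3
required_raw_tb = 10 / 0.85        # ~11.8 TB: no replication, nearfull_ratio 0.85
print(sum(node_capacities_tb) >= required_raw_tb)  # True: 12 TB >= ~11.8 TB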

On 22 Aug 2014 at 12:53, "idzzy" <idezebi at gmail.com> wrote:
Hi Irek,

Understood.

Let me ask about just this one point.

> No, it's for the entire cluster.

Does this mean that the total disk capacity of all nodes combined must be more than 11.8 TB?
e.g. node1: 4[TB], node2: 4[TB], node3: 4[TB]

and not 11.8 TB on each node,
e.g. node1: 11.8[TB], node2: 11.8[TB], node3: 11.8[TB]

Thank you.


On August 22, 2014 at 5:06:02 PM, Irek Fasikhov (malmyzh at gmail.com) wrote:

I recommend you use replication, because radosgw uses asynchronous replication.

Yes, divided by the nearfull ratio.
No, it's for the entire cluster.


2014-08-22 11:51 GMT+04:00 idzzy <idezebi at gmail.com>:
Hi,

If replication is not used, is it enough to just divide by nearfull_ratio?
(Does only radosgw support replication?)

10 TB / 0.85 = 11.8 TB on each node?

# ceph pg dump | egrep "full_ratio|nearfull_ratio"
full_ratio 0.95
nearfull_ratio 0.85

Sorry, I'm not familiar with the Ceph architecture.
Thanks for the reply.

--
idzzy

On August 22, 2014 at 3:53:21 PM, Irek Fasikhov (malmyzh at gmail.com) wrote:

Hi.

10 TB * 2 / 0.85 ~= 24 TB with two replicas, the total volume needed for the raw data.
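
As a worked example of this rule of thumb (a minimal sketch using the numbers from this thread; the function name is illustrative, not part of Ceph):

def required_raw_capacity(data_tb, replicas, nearfull_ratio):
    """Raw capacity the whole cluster needs, in TB."""
    return data_tb * replicas / nearfull_ratio

print(required_raw_capacity(10, 2, 0.85))  # ~23.5 TB, i.e. roughly 24 TB
print(required_raw_capacity(10, 1, 0.85))  # ~11.8 TB with no replication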






--
With respect, Irek Fasikhov
Tel.: +79229045757