Re: Issue with replication level.

Just making sure I'm interpreting this right.
At steady state (data_size >> journal_size), total space usage for 2x
replication should be 2 * data_size + journal_size?
If data_size < journal_size and the journal is a file rather than a
separate partition, then btrfs will report roughly 4x the data size.
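
As a back-of-the-envelope check (a rough sketch in Python; the numbers are
made-up assumptions based on the 512 MB journal and three OSDs in the
ceph.conf quoted below, not measurements):

    # Raw space at steady state: replicated data plus one journal per OSD.
    data_size_mb = 750       # hypothetical logical data written by clients
    replication = 2          # pg_size of the 'data' pool
    journal_size_mb = 512    # 'osd journal size' from the ceph.conf below
    num_osds = 3             # one journal file per OSD

    raw_used_mb = replication * data_size_mb + num_osds * journal_size_mb
    print(f"raw used: {raw_used_mb} MB "
          f"({raw_used_mb / data_size_mb:.1f}x the data size)")
    # -> raw used: 3036 MB (4.0x the data size)

So with a small data set, the journals alone can push the reported usage
well past 2x.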

On Tue, Mar 8, 2011 at 8:51 AM, Gregory Farnum
<gregory.farnum@xxxxxxxxxxxxx> wrote:
> On Tuesday, March 8, 2011 at 12:52 AM, Upendra Moturi wrote:
>> Hi
>>
>> I am having an issue with replication levels.
>> Even though I have set data at 2x and metadata (mds) at 1x replication,
>> the data copied is occupying 4x the space.
>
> How are you measuring the data use as being at 4x? Keep in mind that data will be journaled on the OSD and that the MDS will journal any metadata changes. Also, depending on configuration the "used" data might include the host's used space -- this generally applies when the OSD's backing store is a directory; I'm not sure what happens if it's using a journal file but a separate device for data. :)
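
(To illustrate the point about the backing store: a minimal sketch, assuming
the OSD's "used" figure is essentially the used space of the filesystem its
data directory sits on; the /data/osd0 path is taken from the ceph.conf
quoted below.)

    import os

    def fs_used_mb(path):
        """Used space, in MB, of the filesystem holding `path`."""
        st = os.statvfs(path)
        return (st.f_blocks - st.f_bfree) * st.f_frsize // (1024 * 1024)

    # The 512 MB journal file lives inside /data/osd0, so it is counted
    # in this filesystem-level figure along with anything else stored there.
    print(fs_used_mb("/data/osd0"))
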
>
> Also, you probably don't want to leave metadata at 1x -- if you do then any one node failure can cause metadata loss and a bad filesystem!
> -Greg
>
>>
>>
>> I have used the default crush map rules.
>>
>> 2) How can I see how much space the mon, mds, data, casdata and rbd
>> each occupy individually?
>>
>> My ceph.conf
>>
>>  [global]
>>  pid file = /var/run/ceph/$name.pid
>>  debug ms = 1
>> [mon]
>>  mon data = /data/mon$id
>> [mon.0]
>>  host = ceph1
>>  mon addr = 192.168.155.5:6789
>> [mon.1]
>>  host = ceph2
>>  mon addr = 192.168.155.6:6789
>> [mon.2]
>>  host = ceph3
>>  mon addr = 192.168.155.7:6789
>> [mds]
>>
>> [mds0]
>>  host = ceph1
>> [mds1]
>>  host = ceph2
>>
>> [osd]
>>  sudo = true
>>  osd data = /data/osd$id
>>  osd journal = /data/osd$id/journal
>>  osd journal size = 512
>>  osd use stale snap = true
>> [osd0]
>>  host = ceph1
>>  btrfs devs = /dev/sdb
>> [osd1]
>>  host = ceph2
>>  btrfs devs = /dev/sdb
>> [osd2]
>>  host = ceph3
>>  btrfs devs = /dev/sdb
>>
>>
>>
>> My pool settings:
>>
>> pg_pool 0 'data' pg_pool(rep pg_size 2 crush_ruleset 0 object_hash
>> rjenkins pg_num 192 pgp_num 192 lpg_num 2 lpgp_num 2 last_change 1
>> owner 0)
>> pg_pool 1 'metadata' pg_pool(rep pg_size 1 crush_ruleset 1 object_hash
>> rjenkins pg_num 192 pgp_num 192 lpg_num 2 lpgp_num 2 last_change 10
>> owner 0)
>> pg_pool 2 'casdata' pg_pool(rep pg_size 1 crush_ruleset 2 object_hash
>> rjenkins pg_num 192 pgp_num 192 lpg_num 2 lpgp_num 2 last_change 12
>> owner 0)
>> pg_pool 3 'rbd' pg_pool(rep pg_size 1 crush_ruleset 3 object_hash
>> rjenkins pg_num 192 pgp_num 192 lpg_num 2 lpgp_num 2 last_change 15
>> owner 0)
>>
>>
>>
>>
>>
>> --
>> Thanks and Regards,
>> Upendra.M
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

