Problems during first install

On 06.08.2014 09:25, Christian Balzer wrote:
> On Wed, 06 Aug 2014 09:18:13 +0200 Tijn Buijs wrote:
> 
>> Hello Pratik,
>>
>> Thanks for this tip. It was the golden one :). I just deleted all my VMs 
>> again and started over with (again) CentOS 6.5 and 1 OSD disk per data 
>> VM of 20 GB dynamically allocated. And this time everything worked 
>> correctly like they mentioned in the documentation :). I went on my way 
>> and added a second OSD disk to each of the data nodes (also 20 GB 
>> dynamically) and added that to my Ceph cluster. And this also worked:
>> [ceph@ceph-admin testcluster]$ ceph health
>> HEALTH_OK
>> [ceph@ceph-admin testcluster]$ ceph -s
>>      cluster 4125efe2-caa1-4bf8-8c6d-f10b2c71bf27
>>       health HEALTH_OK
>>       monmap e1: 1 mons at {ceph-mon1=10.28.28.71:6789/0}, election 
>> epoch 1, quorum 0 ceph-mon1
>>       osdmap e54: 6 osds: 6 up, 6 in
>>        pgmap v104: 192 pgs, 3 pools, 0 bytes data, 0 objects
>>              210 MB used, 91883 MB / 92093 MB avail
>>                   192 active+clean
>>
>> This is what I want to see :). All that is left to do now is increase 
>> the number of monitors from 1 to 3 and I have a nice test environment 
>> which resembles our production environment closely enough :). I have 
>> already started on this, and it hasn't worked yet, but I will play 
>> around with it some more. If I can't get it to work I will start a new 
>> thread :). Also I would like to understand why 10 GB per OSD isn't 
>> enough to store anything, but 20 GB per OSD is :).
>>
> My guess would be that the journal (default of 5GB and definitely not
> "nothing" ^o^) and all the other bits initially created are too much for
> comfort in a 10GB disk.

My guess is that with 10G OSDs you run into this bug:
http://tracker.ceph.com/issues/8551

Ceph calculates the weights based on the OSD size by dividing the size
in bytes by 1T, so that a 1T disk results in a weight of 1.0. A 10G disk
would result in a weight of roughly 0.01, but in your case, with 10G
assigned, filesystem overhead probably pushes the weight closer to
0.009. The problem is that the code calculating the weight truncates the
number after the second decimal digit, so you end up with a weight of
0.00.
The result is that Ceph will not put any data on these OSDs, which means
all PGs will stay in the incomplete state until the weight is fixed.
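
To make the arithmetic concrete, here is a rough sketch of the
truncation (an illustration only, not the actual Ceph code; the sizes
are the ones from this thread):

  $ awk 'BEGIN {
      size   = 10 * 1024^3              # ~10G OSD in bytes
      weight = size / 1024^4            # divide by 1T -> ~0.0098
      printf "%.2f\n", int(weight * 100) / 100   # cut after two decimals
    }'
  0.00

The same calculation with 20G still comes out at 0.01, which is why
your second attempt with 20G OSDs got a non-zero weight and worked.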

You can verify this by dumping the crush map. If all the OSDs show a
weight of 0, you know this is your problem, and you can fix it by
adjusting the weights to something more reasonable.
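
For example (standard ceph/crushtool commands; the OSD id and the
target weight below are just placeholders):

  # show the crush weights directly
  ceph osd tree

  # or decompile the full crush map
  ceph osd getcrushmap -o /tmp/crushmap
  crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

  # if every OSD shows weight 0, set something sensible per OSD, e.g.
  ceph osd crush reweight osd.0 0.02

After the reweight the PGs should start peering and eventually go
active+clean.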

Regards,
  Dennis


