Re: ceph reports 10x actual available space

Turns out I misunderstood the playbook: in "scenario 4" the variable
osd_directories refers to pre-mounted partitions, not to the directory
holding the ceph journals. So the playbook ended up creating 18
directory-backed OSDs on the 40G root filesystems, and each of those
reports the root filesystem's free space, which is presumably where the
inflated 689G total came from.
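
For anyone hitting the same confusion, a minimal sketch of what
"scenario 4" actually expects (assuming each entry is the mount point of
a partition you formatted and mounted beforehand; the paths below are
illustrative, not from my setup):

osd_directory: true
osd_directories:
  - /srv/ceph/osd0   # mount point of an already-formatted data partition
  - /srv/ceph/osd1   # one mount point per OSD, each on its own partition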

Also, unrelated, but it might help someone on Google: when you use virsh
attach-disk, remember to add --persistent, like this (--persistent may
imply --live, but the help text doesn't make that clear, so passing both
is more likely to keep working across minor version changes):

virsh attach-disk $instance $disk vd$d --subdriver qcow2 --live --persistent
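
For example, a loop along these lines could attach a whole set of qcow2
OSD disks to one guest (the image path, naming scheme, and target
letters are illustrative assumptions, not my exact setup):

instance=ceph1
targets=(b c d e f g)                          # guest devices vdb..vdg
i=0
for disk in /var/lib/libvirt/images/${instance}-osd*.qcow2; do
    virsh attach-disk "$instance" "$disk" "vd${targets[$i]}" \
        --subdriver qcow2 --live --persistent
    i=$((i + 1))
done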

On Mon, Feb 2, 2015 at 8:05 PM, pixelfairy <pixelfairy@xxxxxxxxx> wrote:
> ceph 0.87
>
> On Mon, Feb 2, 2015 at 7:53 PM, pixelfairy <pixelfairy@xxxxxxxxx> wrote:
>> Tried ceph on 3 KVM instances, each with a 40G root drive and 6
>> virtio disks of 4G each. When I look at the available space, instead of
>> some number under 72G I get 689G total, with 154G used. The journals
>> are in a folder on the root drive. The images were made with
>> virt-builder using ubuntu-14.04, and virsh was used to attach the OSD disks.
>>
>> Setup was done with this set of playbooks,
>> https://github.com/ceph/ceph-ansible, with these settings:
>>
>> monitor_secret: FFFFFFFFFFFFFFFFFFFFFFFFF==
>> monitor_interface: eth0
>> mon_osd_min_down_reporters: 7
>> mon_osd_full_ratio: .611
>> mon_osd_nearfull_ratio: .60
>>
>> disable_swap: false
>>
>> cluster_network: 192.168.16.0/24
>> public_network:  192.168.16.0/24
>>
>> devices:
>>   - /dev/vdb
>>   - /dev/vdc
>>   - /dev/vdd
>>   - /dev/vde
>>   - /dev/vdf
>>   - /dev/vdg
>>
>> journal_size: 1000
>> journal_collocation: false
>> osd_directory: true
>> osd_directories:
>>   - /var/lib/ceph/osd/j0
>>   - /var/lib/ceph/osd/j1
>>   - /var/lib/ceph/osd/j2
>>   - /var/lib/ceph/osd/j3
>>   - /var/lib/ceph/osd/j4
>>   - /var/lib/ceph/osd/j5
>>
>> root@ceph1:~# ceph -s
>>     cluster 2198abdb-2669-438a-8673-fc4f226a226c
>>      health HEALTH_OK
>>      monmap e1: 3 mons at
>> {ceph1=192.168.16.31:6789/0,ceph2=192.168.16.32:6789/0,ceph3=192.168.16.33:6789/0},
>> election epoch 12, quorum 0,1,2 ceph1,ceph2,ceph3
>>      osdmap e48: 18 osds: 18 up, 18 in
>>       pgmap v14445: 664 pgs, 2 pools, 1280 MB data, 337 objects
>>             154 GB used, 499 GB / 689 GB avail
>>                  664 active+clean
>>
>> host:~/vstore$ for i in ceph*; do qemu-img info $i | grep "virtual size"; done
>> # the 40G disks are the root and journal volumes
>> virtual size: 40G (42949672960 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 40G (42949672960 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 40G (42949672960 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
>> virtual size: 4.0G (4294967296 bytes)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



