Re: Fwd: Can't get full partition space

On 18/08/17 06:10, Maiko de Andrade wrote:
> Hi,
> 
> I want to install Ceph on 3 machines: CEPH, CEPH-OSD-1 and CEPH-OSD-2.
> Each machine has 2 disks in RAID 0, 930 GiB in total.
> 
> CEPH is a mon and an OSD.
> CEPH-OSD-1: OSD
> CEPH-OSD-2: OSD
> 
> I have installed and reinstalled Ceph many times. In every
> installation the OSD doesn't get the full partition space; it only
> takes about 1 GB. How do I change this?
> 
> On the first machine I have this:
> 
> 
> CEPH$ df -Ph
> Filesystem      Size  Used Avail Use% Mounted on
> udev            3.9G     0  3.9G   0% /dev
> tmpfs           796M  8.8M  787M   2% /run
> /dev/sda1       182G  2.2G  171G   2% /
> tmpfs           3.9G     0  3.9G   0% /dev/shm
> tmpfs           5.0M     0  5.0M   0% /run/lock
> tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
> tmpfs           796M     0  796M   0% /run/user/1000
> /dev/sda3       738G   33M  738G   1% /var/lib/ceph/osd/ceph-0
> 
> CEPH$ ceph osd tree
> ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
> -1       0.00980 root default
> -3       0.00980     host ceph
>  0   hdd 0.00980         osd.0     up  1.00000 1.00000
> 
> CEPH$ ceph -s
>   cluster:
>     id:     6f3f162b-17ab-49b7-9e4b-904539cfce10
>     health: HEALTH_OK
> 
>   services:
>     mon: 1 daemons, quorum ceph
>     mgr: ceph(active)
>     osd: 1 osds: 1 up, 1 in
> 
>   data:
>     pools:   0 pools, 0 pgs
>     objects: 0 objects, 0 bytes
>     usage:   1053 MB used, 9186 MB / 10240 MB avail
>     pgs:
> 
> 
> I try use:
> CEPH$ ceph osd crush reweight osd.0 .72
> reweighted item id 0 name 'osd.0' to 0.72 in crush map
> 
> $ ceph osd tree
> ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
> -1       0.71999 root default
> -3       0.71999     host ceph
>  0   hdd 0.71999         osd.0     up  1.00000 1.00000
> 
> 
> $ ceph -s
>   cluster:
>     id:     6f3f162b-17ab-49b7-9e4b-904539cfce10
>     health: HEALTH_OK
> 
>   services:
>     mon: 1 daemons, quorum ceph
>     mgr: ceph(active)
>     osd: 1 osds: 1 up, 1 in
> 
>   data:
>     pools:   0 pools, 0 pgs
>     objects: 0 objects, 0 bytes
>     usage:   1054 MB used, 9185 MB / 10240 MB avail
>     pgs:

I had similar problems when installing to disks with existing non-Ceph
partitions on them, and ended up setting 'bluestore_block_size' to the
size (in bytes) that I wanted the OSD to be.
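
For reference, something along these lines in ceph.conf (under [osd])
before creating the OSD did the trick for me.  The value is in bytes
and the number below is just an example for roughly 730 GiB; adjust it
to your partition.  If I remember right the default is 10 GiB
(10737418240 bytes), which would explain the 10240 MB your OSD is
reporting:

[osd]
# BlueStore 'block' file size in bytes; example value is
# ~730 GiB = 730 * 1024^3 = 783831531520
bluestore_block_size = 783831531520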

That is very probably not the correct solution, and I'd strongly
recommend passing Ceph full, unused devices instead of the same disks
the OS is installed on.  This was just a proof-of-concept cluster and I
didn't have any spare disks, so I didn't look any further.
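
If you do have a spare device, the usual approach (just a sketch; the
device name and hostname below are examples, and this will wipe the
disk) is to hand the whole, unpartitioned disk to ceph-disk or
ceph-deploy and let it partition it itself:

# directly on the OSD host:
ceph-disk prepare --bluestore /dev/sdb

# or from the admin node with ceph-deploy:
ceph-deploy osd create CEPH-OSD-1:/dev/sdb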

It ended up creating a regular file called 'block' in
/var/lib/ceph/osd/ceph-${osd}/ instead of using a separate partition as
it should.
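
A quick way to check which case you have ended up in (osd.0 below is
just an example):

# A symlink pointing at a partition means the OSD owns real disk
# space; a plain file means it is the same situation as mine:
ls -l /var/lib/ceph/osd/ceph-0/block

# The bluestore_block_size the OSD is running with, in bytes:
ceph daemon osd.0 config get bluestore_block_size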

From 'ceph-disk list':

Correct:

/dev/sda :
 /dev/sda1 ceph data, active, cluster ceph, osd.0, block /dev/sda2
 /dev/sda2 ceph block, for /dev/sda1

Shared OS disk:

/dev/sdc :
 /dev/sdc1 other, linux_raid_member
 /dev/sdc2 other, linux_raid_member
 /dev/sdc3 other, linux_raid_member
 /dev/sdc4 other, xfs, mounted on /var/lib/ceph/osd/ceph-2


# ls -lh /var/lib/ceph/osd/ceph-2/block
-rw-r--r-- 1 ceph ceph 932G Aug 18 11:19 /var/lib/ceph/osd/ceph-2/block


-- 
David Clarke
Systems Architect
Catalyst IT


