Re: Ceph (Luminous) shows total_space wrong

Update!

Yeah, that was the problem. I zapped the disks (purge) and re-created them according to the official documentation. Now everything is OK.
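
(For reference, the zap step from the ceph-deploy docs looks roughly like this, using the hostnames from the mails below and /dev/sdb as an example:

$ ceph-deploy disk zap sr-09-01-18:/dev/sdb sr-10-01-18:/dev/sdb

followed by preparing the OSDs again on the whole disks, as Wido suggested.)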

I can see all disks and the total sizes properly.

Let's see if this brings any performance improvements compared to the previous standard schema (using Jewel).

Thanks!
Gencer.

-----Original Message-----
From: Wido den Hollander [mailto:wido@xxxxxxxx] 
Sent: Monday, July 17, 2017 6:17 PM
To: ceph-users@xxxxxxxxxxxxxx; gencer@xxxxxxxxxxxxx
Subject: RE:  Ceph (Luminous) shows total_space wrong


> On 17 July 2017 at 17:03, gencer@xxxxxxxxxxxxx wrote:
> 
> 
> I used this methods:
> 
> $ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 
> .... (one from the 09th server, one from the 10th server...)
> 
> and then;
> 
> $ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 ...
> 

You should use a whole disk, not a partition. So /dev/sdb without the '1' at the end.
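
For example, with the hostnames from your earlier commands (one disk per node shown, a sketch only; repeat for the other disks):

$ ceph-deploy osd prepare sr-09-01-18:/dev/sdb sr-10-01-18:/dev/sdb

ceph-disk then partitions the disk itself; on most systems the OSD is activated automatically via udev afterwards, and 'ceph-deploy osd create' can be used to do prepare and activate in one step.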

> This is my second attempt at creating the Ceph cluster. At first I used BlueStore. This time I did not use BlueStore (I also removed it from the conf file). It is still seen as 200GB.
> 
> How can I make sure BlueStore is disabled (even though I did not pass any option)?
> 

Just use BlueStore with Luminous as all testing is welcome! But in this case you invoked the command with the wrong parameters.
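
To verify which backend an OSD ended up with, something along these lines should show it (taking osd.0 as an example, run from a node with admin access):

$ ceph osd metadata 0 | grep osd_objectstore

It reports either "bluestore" or "filestore" for that OSD.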

Wido

> -Gencer.
> 
> -----Original Message-----
> From: Wido den Hollander [mailto:wido@xxxxxxxx]
> Sent: Monday, July 17, 2017 5:57 PM
> To: ceph-users@xxxxxxxxxxxxxx; gencer@xxxxxxxxxxxxx
> Subject: RE:  Ceph (Luminous) shows total_space wrong
> 
> 
> > On 17 July 2017 at 16:41, gencer@xxxxxxxxxxxxx wrote:
> > 
> > 
> > Hi Wido,
> > 
> > Each disk is 3TB SATA (2.8TB seen) but what I got is this:
> > 
> > First, let me give you the df -h output:
> > 
> > /dev/sdb1           2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
> > /dev/sdc1           2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
> > /dev/sdd1           2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
> > /dev/sde1           2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
> > /dev/sdf1           2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
> > /dev/sdg1           2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
> > /dev/sdh1           2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
> > /dev/sdi1           2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
> > /dev/sdj1           2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
> > /dev/sdk1           2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18
> > 
> > 
> > Then here are my results from the ceph df commands:
> > 
> > ceph df
> > 
> > GLOBAL:
> >     SIZE     AVAIL     RAW USED     %RAW USED
> >     200G      179G       21381M         10.44
> > POOLS:
> >     NAME                ID     USED     %USED     MAX AVAIL     OBJECTS
> >     rbd                 0         0         0        86579M           0
> >     cephfs_data         1         0         0        86579M           0
> >     cephfs_metadata     2      2488         0        86579M          21
> > 
> 
> Ok, that's odd. But I think these disks are using BlueStore since that's what Luminous defaults to.
> 
> The partitions seem to be mixed up, so can you check how you created the OSDs? Was that with ceph-disk? If so, what additional arguments did you use?
> 
> Wido
> 
> > ceph osd df
> > ID WEIGHT  REWEIGHT SIZE   USE    AVAIL %USE  VAR  PGS
> >  0 0.00980  1.00000 10240M  1070M 9170M 10.45 1.00 173
> >  2 0.00980  1.00000 10240M  1069M 9170M 10.45 1.00 150
> >  4 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 148
> >  6 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 167
> >  8 0.00980  1.00000 10240M  1069M 9171M 10.44 1.00 166
> > 10 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 171
> > 12 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 160
> > 14 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 179
> > 16 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 182
> > 18 0.00980  1.00000 10240M  1069M 9170M 10.44 1.00 168
> >  1 0.00980  1.00000 10240M  1069M 9170M 10.45 1.00 167
> >  3 0.00980  1.00000 10240M  1069M 9170M 10.45 1.00 156
> >  5 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 152
> >  7 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 158
> >  9 0.00980  1.00000 10240M  1069M 9170M 10.44 1.00 174
> > 11 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 153
> > 13 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 179
> > 15 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 186
> > 17 0.00980  1.00000 10240M  1068M 9171M 10.44 1.00 185
> > 19 0.00980  1.00000 10240M  1067M 9172M 10.43 1.00 154
> >               TOTAL   200G 21381M  179G 10.44
> > MIN/MAX VAR: 1.00/1.00  STDDEV: 0.00
> > 
> > 
> > -Gencer.
> > 
> > -----Original Message-----
> > From: Wido den Hollander [mailto:wido@xxxxxxxx]
> > Sent: Monday, July 17, 2017 4:57 PM
> > To: ceph-users@xxxxxxxxxxxxxx; gencer@xxxxxxxxxxxxx
> > Subject: Re:  Ceph (Luminous) shows total_space wrong
> > 
> > 
> > > On 17 July 2017 at 15:49, gencer@xxxxxxxxxxxxx wrote:
> > > 
> > > 
> > > Hi,
> > > 
> > >  
> > > 
> > > I successfully managed to work with Ceph Jewel. Now I want to try Luminous.
> > > 
> > >  
> > > 
> > > I also set experimental BlueStore while creating the OSDs. The problem is,
> > > I have 20x 3TB HDDs across two nodes and I would expect 55TB usable on
> > > Luminous (as on Jewel), but I see 200GB. Ceph thinks I have only 200GB of
> > > space available in total. I see all OSDs are up and in.
> > > 
> > >  
> > > 
> > > 20 OSDs up; 20 OSDs in; 0 down.
> > > 
> > >  
> > > 
> > > ceph -s shows HEALTH_OK. I have only one monitor and one MDS
> > > (1/1/1), and it is up:active.
> > > 
> > >  
> > > 
> > > ceph osd tree shows that all OSDs on both nodes are up and the results
> > > are 1.0000... I checked via df -h and all disks show 2.7TB. Basically something is wrong.
> > > The same settings and schema worked fine on Jewel, but not on Luminous.
> > > 
> > 
> > What do these commands show:
> > 
> > - ceph df
> > - ceph osd df
> > 
> > Might be that you are looking at the wrong numbers.
> > 
> > Wido
> > 
> > >  
> > > 
> > > What might it be?
> > > 
> > >  
> > > 
> > > What do you need to know to solve this problem? Why does Ceph think I
> > > have only 200GB of space?
> > > 
> > >  
> > > 
> > > Thanks,
> > > 
> > > Gencer.
> > > 
> >
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


