Re: Fwd: Can't get full partition space

Have you already tried zapping the device to remove any and all partitions and leftover signatures, e.g. sgdisk -Z /dev/sda3?
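
A rough sketch of a full zap-and-redeploy sequence, assuming ceph-deploy
1.5.x and that /dev/sda3 can be handed entirely to the OSD (host and
device names are the ones from this thread; zapping a single partition
rather than a whole spare disk is unusual, so treat this as illustrative):

  $ sudo sgdisk -Z /dev/sda3              # wipe GPT/MBR structures and old signatures
  $ ceph-deploy disk zap ceph:/dev/sda3   # let ceph-deploy clear any leftover Ceph data
  $ ceph-deploy osd prepare ceph:/dev/sda3
  $ ceph-deploy osd activate ceph:/dev/sda3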

On Fri, Aug 18, 2017 at 12:48 PM Maiko de Andrade <maikovisky@xxxxxxxxx> wrote:
Hi,

I tried using bluestore_block_size, but I get this error (I tried values in bytes, KB, MB, GB, and 1):

[ceph][WARNIN] /build/ceph-12.1.4/src/os/bluestore/BlueFS.cc: 172: FAILED assert(bdev[id]->get_size() >= offset + length)
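
For context: this assert fires when BlueFS is handed an extent that runs
past the end of its block device or file, so any bluestore_block_size
value has to fit inside the space actually available to the OSD. As a
sanity check, the partition size in bytes can be read with, for example:

  $ sudo blockdev --getsize64 /dev/sda3
  $ lsblk -b -o NAME,SIZE /dev/sda3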



Full log:
$ ceph-deploy osd activate ceph:/dev/sda3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy osd activate ceph:/dev/sda3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f0da923fd88>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f0da96905f0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('ceph', '/dev/sda3', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph:/dev/sda3:
[ceph][DEBUG ] connection detected need for sudo
[ceph][DEBUG ] connected to host: ceph 
[ceph][DEBUG ] detect platform information from remote host
[ceph][DEBUG ] detect machine type
[ceph][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.osd][DEBUG ] activating host ceph disk /dev/sda3
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[ceph][DEBUG ] find the location of an executable
[ceph][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sda3
[ceph][WARNIN] main_activate: path = /dev/sda3
[ceph][WARNIN] get_dm_uuid: get_dm_uuid /dev/sda3 uuid path is /sys/dev/block/8:3/dm/uuid
[ceph][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/sda3
[ceph][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sda3
[ceph][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph][WARNIN] mount: Mounting /dev/sda3 on /var/lib/ceph/tmp/mnt.VA1j0C with options noatime,inode64
[ceph][WARNIN] command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sda3 /var/lib/ceph/tmp/mnt.VA1j0C
[ceph][WARNIN] activate: Cluster uuid is 2d3a3e20-84e9-499d-a604-ab6fa8643387
[ceph][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph][WARNIN] activate: Cluster name is ceph
[ceph][WARNIN] activate: OSD uuid is f48afefb-e146-4fdd-8e8f-1d7dba08ec75
[ceph][WARNIN] activate: OSD id is 0
[ceph][WARNIN] activate: Initializing OSD...
[ceph][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.VA1j0C/activate.monmap
[ceph][WARNIN] got monmap epoch 1
[ceph][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs -i 0 --monmap /var/lib/ceph/tmp/mnt.VA1j0C/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.VA1j0C --osd-uuid f48afefb-e146-4fdd-8e8f-1d7dba08ec75 --setuser ceph --setgroup ceph
[ceph][WARNIN] /build/ceph-12.1.4/src/os/bluestore/BlueFS.cc: In function 'void BlueFS::add_block_extent(unsigned int, uint64_t, uint64_t)' thread 7fe2474f0e00 time 2017-08-18 13:36:45.941833
[ceph][WARNIN] /build/ceph-12.1.4/src/os/bluestore/BlueFS.cc: 172: FAILED assert(bdev[id]->get_size() >= offset + length)
[ceph][WARNIN]  ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
[ceph][WARNIN]  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x563318c61042]
[ceph][WARNIN]  2: (BlueFS::add_block_extent(unsigned int, unsigned long, unsigned long)+0x4da) [0x563318be753a]
[ceph][WARNIN]  3: (BlueStore::_open_db(bool)+0x964) [0x563318af1de4]
[ceph][WARNIN]  4: (BlueStore::mkfs()+0xcc5) [0x563318b20f65]
[ceph][WARNIN]  5: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x563318661164]
[ceph][WARNIN]  6: (main()+0xe3c) [0x5633185ae0bc]
[ceph][WARNIN]  7: (__libc_start_main()+0xf0) [0x7fe244959830]
[ceph][WARNIN]  8: (_start()+0x29) [0x56331863ba69]
[ceph][WARNIN]  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[ceph][WARNIN] 2017-08-18 13:36:45.943817 7fe2474f0e00 -1 /build/ceph-12.1.4/src/os/bluestore/BlueFS.cc: In function 'void BlueFS::add_block_extent(unsigned int, uint64_t, uint64_t)' thread 7fe2474f0e00 time 2017-08-18 13:36:45.941833
[ceph][WARNIN] /build/ceph-12.1.4/src/os/bluestore/BlueFS.cc: 172: FAILED assert(bdev[id]->get_size() >= offset + length)
[ceph][WARNIN] 
[ceph][WARNIN]  ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
[ceph][WARNIN]  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x563318c61042]
[ceph][WARNIN]  2: (BlueFS::add_block_extent(unsigned int, unsigned long, unsigned long)+0x4da) [0x563318be753a]
[ceph][WARNIN]  3: (BlueStore::_open_db(bool)+0x964) [0x563318af1de4]
[ceph][WARNIN]  4: (BlueStore::mkfs()+0xcc5) [0x563318b20f65]
[ceph][WARNIN]  5: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x563318661164]
[ceph][WARNIN]  6: (main()+0xe3c) [0x5633185ae0bc]
[ceph][WARNIN]  7: (__libc_start_main()+0xf0) [0x7fe244959830]
[ceph][WARNIN]  8: (_start()+0x29) [0x56331863ba69]
[ceph][WARNIN]  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[ceph][WARNIN] 
[ceph][WARNIN]      0> 2017-08-18 13:36:45.943817 7fe2474f0e00 -1 /build/ceph-12.1.4/src/os/bluestore/BlueFS.cc: In function 'void BlueFS::add_block_extent(unsigned int, uint64_t, uint64_t)' thread 7fe2474f0e00 time 2017-08-18 13:36:45.941833
[ceph][WARNIN] /build/ceph-12.1.4/src/os/bluestore/BlueFS.cc: 172: FAILED assert(bdev[id]->get_size() >= offset + length)
[ceph][WARNIN] 
[ceph][WARNIN]  ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
[ceph][WARNIN]  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x563318c61042]
[ceph][WARNIN]  2: (BlueFS::add_block_extent(unsigned int, unsigned long, unsigned long)+0x4da) [0x563318be753a]
[ceph][WARNIN]  3: (BlueStore::_open_db(bool)+0x964) [0x563318af1de4]
[ceph][WARNIN]  4: (BlueStore::mkfs()+0xcc5) [0x563318b20f65]
[ceph][WARNIN]  5: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x563318661164]
[ceph][WARNIN]  6: (main()+0xe3c) [0x5633185ae0bc]
[ceph][WARNIN]  7: (__libc_start_main()+0xf0) [0x7fe244959830]
[ceph][WARNIN]  8: (_start()+0x29) [0x56331863ba69]
[ceph][WARNIN]  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[ceph][WARNIN] 
[ceph][WARNIN] *** Caught signal (Aborted) **
[ceph][WARNIN]  in thread 7fe2474f0e00 thread_name:ceph-osd
[ceph][WARNIN]  ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
[ceph][WARNIN]  1: (()+0xa542b4) [0x563318c1e2b4]
[ceph][WARNIN]  2: (()+0x11390) [0x7fe2459d3390]
[ceph][WARNIN]  3: (gsignal()+0x38) [0x7fe24496e428]
[ceph][WARNIN]  4: (abort()+0x16a) [0x7fe24497002a]
[ceph][WARNIN]  5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x28e) [0x563318c611ce]
[ceph][WARNIN]  6: (BlueFS::add_block_extent(unsigned int, unsigned long, unsigned long)+0x4da) [0x563318be753a]
[ceph][WARNIN]  7: (BlueStore::_open_db(bool)+0x964) [0x563318af1de4]
[ceph][WARNIN]  8: (BlueStore::mkfs()+0xcc5) [0x563318b20f65]
[ceph][WARNIN]  9: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x563318661164]
[ceph][WARNIN]  10: (main()+0xe3c) [0x5633185ae0bc]
[ceph][WARNIN]  11: (__libc_start_main()+0xf0) [0x7fe244959830]
[ceph][WARNIN]  12: (_start()+0x29) [0x56331863ba69]
[ceph][WARNIN] 2017-08-18 13:36:45.946494 7fe2474f0e00 -1 *** Caught signal (Aborted) **
[ceph][WARNIN]  in thread 7fe2474f0e00 thread_name:ceph-osd
[ceph][WARNIN] 
[ceph][WARNIN]  ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
[ceph][WARNIN]  1: (()+0xa542b4) [0x563318c1e2b4]
[ceph][WARNIN]  2: (()+0x11390) [0x7fe2459d3390]
[ceph][WARNIN]  3: (gsignal()+0x38) [0x7fe24496e428]
[ceph][WARNIN]  4: (abort()+0x16a) [0x7fe24497002a]
[ceph][WARNIN]  5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x28e) [0x563318c611ce]
[ceph][WARNIN]  6: (BlueFS::add_block_extent(unsigned int, unsigned long, unsigned long)+0x4da) [0x563318be753a]
[ceph][WARNIN]  7: (BlueStore::_open_db(bool)+0x964) [0x563318af1de4]
[ceph][WARNIN]  8: (BlueStore::mkfs()+0xcc5) [0x563318b20f65]
[ceph][WARNIN]  9: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x563318661164]
[ceph][WARNIN]  10: (main()+0xe3c) [0x5633185ae0bc]
[ceph][WARNIN]  11: (__libc_start_main()+0xf0) [0x7fe244959830]
[ceph][WARNIN]  12: (_start()+0x29) [0x56331863ba69]
[ceph][WARNIN]  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[ceph][WARNIN] 
[ceph][WARNIN]      0> 2017-08-18 13:36:45.946494 7fe2474f0e00 -1 *** Caught signal (Aborted) **
[ceph][WARNIN]  in thread 7fe2474f0e00 thread_name:ceph-osd
[ceph][WARNIN] 
[ceph][WARNIN]  ceph version 12.1.4 (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc)
[ceph][WARNIN]  1: (()+0xa542b4) [0x563318c1e2b4]
[ceph][WARNIN]  2: (()+0x11390) [0x7fe2459d3390]
[ceph][WARNIN]  3: (gsignal()+0x38) [0x7fe24496e428]
[ceph][WARNIN]  4: (abort()+0x16a) [0x7fe24497002a]
[ceph][WARNIN]  5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x28e) [0x563318c611ce]
[ceph][WARNIN]  6: (BlueFS::add_block_extent(unsigned int, unsigned long, unsigned long)+0x4da) [0x563318be753a]
[ceph][WARNIN]  7: (BlueStore::_open_db(bool)+0x964) [0x563318af1de4]
[ceph][WARNIN]  8: (BlueStore::mkfs()+0xcc5) [0x563318b20f65]
[ceph][WARNIN]  9: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x563318661164]
[ceph][WARNIN]  10: (main()+0xe3c) [0x5633185ae0bc]
[ceph][WARNIN]  11: (__libc_start_main()+0xf0) [0x7fe244959830]
[ceph][WARNIN]  12: (_start()+0x29) [0x56331863ba69]
[ceph][WARNIN]  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[ceph][WARNIN] 
[ceph][WARNIN] mount_activate: Failed to activate
[ceph][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.VA1j0C
[ceph][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.VA1j0C
[ceph][WARNIN] Traceback (most recent call last):
[ceph][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[ceph][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[ceph][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5704, in run
[ceph][WARNIN]     main(sys.argv[1:])
[ceph][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5655, in main
[ceph][WARNIN]     args.func(args)
[ceph][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3759, in main_activate
[ceph][WARNIN]     reactivate=args.reactivate,
[ceph][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3522, in mount_activate
[ceph][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[ceph][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3699, in activate
[ceph][WARNIN]     keyring=keyring,
[ceph][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3151, in mkfs
[ceph][WARNIN]     '--setgroup', get_ceph_group(),
[ceph][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 558, in command_check_call
[ceph][WARNIN]     return subprocess.check_call(arguments)
[ceph][WARNIN]   File "/usr/lib/python2.7/subprocess.py", line 541, in check_call
[ceph][WARNIN]     raise CalledProcessError(retcode, cmd)
[ceph][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '-i', u'0', '--monmap', '/var/lib/ceph/tmp/mnt.VA1j0C/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.VA1j0C', '--osd-uuid', u'f48afefb-e146-4fdd-8e8f-1d7dba08ec75', '--setuser', 'ceph', '--setgroup', 'ceph']' returned non-zero exit status -6
[ceph][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sda3



[]´s
Maiko de Andrade
MAX Brasil
Desenvolvedor de Sistemas
+55 51 91251756
http://about.me/maiko

2017-08-17 20:49 GMT-03:00 David Clarke <davidc@xxxxxxxxxxxxxxx>:
On 18/08/17 06:10, Maiko de Andrade wrote:
> Hi,
>
> I want to install Ceph on 3 machines: CEPH, CEPH-OSD-1, and CEPH-OSD-2.
> Each machine has 2 disks in RAID 0, 930 GiB in total.
>
> CEPH is mon and OSD too.
> CEPH-OSD-1: OSD
> CEPH-OSD-2: OSD
>
> I have installed and reinstalled Ceph many times. In every installation the
> OSD doesn't get the full partition space; it only takes 1 GB. How do I change this?
>
> On the first machine I have this:
>
>
> CEPH$ df -Ph
> Filesystem      Size  Used Avail Use% Mounted on
> udev            3.9G     0  3.9G   0% /dev
> tmpfs           796M  8.8M  787M   2% /run
> /dev/sda1       182G  2.2G  171G   2% /
> tmpfs           3.9G     0  3.9G   0% /dev/shm
> tmpfs           5.0M     0  5.0M   0% /run/lock
> tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
> tmpfs           796M     0  796M   0% /run/user/1000
> /dev/sda3       738G   33M  738G   1% /var/lib/ceph/osd/ceph-0
>
> CEPH$ ceph osd tree
> ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
> -1       0.00980 root default
> -3       0.00980     host ceph
>  0   hdd 0.00980         osd.0     up  1.00000 1.00000
>
> CEPH$ ceph -s
>   cluster:
>     id:     6f3f162b-17ab-49b7-9e4b-904539cfce10
>     health: HEALTH_OK
>
>   services:
>     mon: 1 daemons, quorum ceph
>     mgr: ceph(active)
>     osd: 1 osds: 1 up, 1 in
>
>   data:
>     pools:   0 pools, 0 pgs
>     objects: 0 objects, 0 bytes
>     usage:   1053 MB used, 9186 MB / 10240 MB avail
>     pgs:
>
>
> I tried:
> CEPH$ ceph osd crush reweight osd.0 .72
> reweighted item id 0 name 'osd.0' to 0.72 in crush map
>
> $ ceph osd tree
> ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
> -1       0.71999 root default
> -3       0.71999     host ceph
>  0   hdd 0.71999         osd.0     up  1.00000 1.00000
>
>
> $ ceph -s
>   cluster:
>     id:     6f3f162b-17ab-49b7-9e4b-904539cfce10
>     health: HEALTH_OK
>
>   services:
>     mon: 1 daemons, quorum ceph
>     mgr: ceph(active)
>     osd: 1 osds: 1 up, 1 in
>
>   data:
>     pools:   0 pools, 0 pgs
>     objects: 0 objects, 0 bytes
>     usage:   1054 MB used, 9185 MB / 10240 MB avail
>     pgs:

I had similar problems when installing to disks with existing non-Ceph
partitions on them, and ended up setting 'bluestore_block_size' to the
size (in bytes) that I wanted the OSD to be.
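
A minimal sketch of what that can look like in ceph.conf (the value below
is purely illustrative and must be no larger than the space actually
available to the OSD; plain bytes are the safest form to use):

  [osd]
  # BlueStore block device/file size in bytes (~700 GB, illustrative)
  bluestore_block_size = 700000000000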

That is very probably not the correct solution, and I'd strongly
recommend passing Ceph full, unused devices instead of using the same
disks the OS is installed on.  This was just a cluster for a proof of
concept, and I didn't have any spare disks, so I didn't look any further.

It ended up creating a file 'block' in /var/lib/ceph/osd/ceph-${osd}/,
instead of using a separate partition like it should.

From 'ceph-disk list':

Correct:

/dev/sda :
 /dev/sda1 ceph data, active, cluster ceph, osd.0, block /dev/sda2
 /dev/sda2 ceph block, for /dev/sda1

Shared OS disk:

/dev/sdc :
 /dev/sdc1 other, linux_raid_member
 /dev/sdc2 other, linux_raid_member
 /dev/sdc3 other, linux_raid_member
 /dev/sdc4 other, xfs, mounted on /var/lib/ceph/osd/ceph-2


# ls -lh /var/lib/ceph/osd/ceph-2/block
-rw-r--r-- 1 ceph ceph 932G Aug 18 11:19 /var/lib/ceph/osd/ceph-2/block


--
David Clarke
Systems Architect
Catalyst IT


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
