RE: Anybody else hitting this panic in latest master with bluestore?

Mark/Sage,
That problem seems to be gone. BTW, the rocksdb folder is not cleaned by 'make clean'; I took the latest master and manually cleaned the rocksdb folder as you suggested.
But now I am hitting the following crash on some of my drives. It seems to be related to block alignment.

     0> 2016-06-07 11:50:12.353375 7f5c0fe938c0 -1 os/bluestore/BitmapFreelistManager.cc: In function 'void BitmapFreelistManager::_xor(uint64_t, uint64_t, KeyValueDB::Transaction)' thread 7f5c0fe938c0 time 2016-06-07 11:50:12.349722
os/bluestore/BitmapFreelistManager.cc: 477: FAILED assert((offset & block_mask) == offset)

 ceph version 10.2.0-2021-g55cb608 (55cb608f63787f7969514ad0d7222da68ab84d88)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x80) [0x5652219dd0a0]
 2: (BitmapFreelistManager::_xor(unsigned long, unsigned long, std::shared_ptr<KeyValueDB::TransactionImpl>)+0x12ed) [0x5652216af96d]
 3: (BitmapFreelistManager::create(unsigned long, std::shared_ptr<KeyValueDB::TransactionImpl>)+0x33f) [0x5652216b034f]
 4: (BlueStore::_open_fm(bool)+0xcd3) [0x565221596683]
 5: (BlueStore::mkfs()+0x8b9) [0x5652215d89b9]
 6: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x117) [0x5652212776c7]
 7: (main()+0x1003) [0x565221209533]
 8: (__libc_start_main()+0xf0) [0x7f5c0c8f7830]
 9: (_start()+0x29) [0x5652212588b9]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
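For context on the assert itself: line 477 is a power-of-two alignment check, i.e. it requires the offset handed to _xor() to be an exact multiple of the freelist's block size. A minimal sketch of that style of check (the 16384-byte block size below is only a guess based on this drive's physical sector size; the real value and mask come from BlueStore's configuration):

#include <cstdint>
#include <cstdio>

// Illustrative only: the real block size/mask are taken from BlueStore's
// configuration, not hard-coded.
static bool block_aligned(uint64_t v, uint64_t bytes_per_block) {
  uint64_t block_mask = ~(bytes_per_block - 1);  // clears the sub-block bits
  return (v & block_mask) == v;                  // the asserted condition
}

int main() {
  const uint64_t bytes_per_block = 16384;        // hypothetical block size
  std::printf("%d\n", block_aligned(10 * 16384, bytes_per_block));       // prints 1
  std::printf("%d\n", block_aligned(10 * 16384 + 512, bytes_per_block)); // prints 0
  return 0;
}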

Here are my disk partitions:

osd.15 on /dev/sdi crashed.


sdi       8:128  0     7T  0 disk
├─sdi1    8:129  0    10G  0 part /var/lib/ceph/osd/ceph-15
└─sdi2    8:130  0     7T  0 part
nvme0n1 259:0    0  15.4G  0 disk
root@emsnode11:~/ceph-master/src# fdisk /dev/sdi

Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdi: 7 TiB, 7681501126656 bytes, 15002931888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 16384 bytes
Disklabel type: gpt
Disk identifier: 4A3182B9-23EA-441A-A113-FE904E81BF3E

Device        Start         End     Sectors Size Type
/dev/sdi1      2048    20973567    20971520  10G Linux filesystem
/dev/sdi2  20973568 15002931854 14981958287   7T Linux filesystem

The partitions seem to be aligned properly; what alignment is the bitmap allocator looking for (Ramesh?)?
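One thing worth double-checking: against a 16 KiB granularity (again just a guess from the reported physical sector size), the start of /dev/sdi2 is aligned (20973568 sectors * 512 = 10,738,466,816 bytes, a multiple of 16384), but the partition length is not, since 14981958287 is an odd sector count and 512 * an odd number cannot be a multiple of 16384. Whether the freelist manager actually cares about the device length here is something I still need to confirm; a quick check of the arithmetic:

#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t sector       = 512;
  const uint64_t block        = 16384;                    // hypothetical block size
  const uint64_t start_bytes  = 20973568ull    * sector;  // /dev/sdi2 start
  const uint64_t length_bytes = 14981958287ull * sector;  // /dev/sdi2 length in sectors

  // 0 means aligned; anything else means the value is not a multiple of block.
  std::printf("start  %% 16K = %llu\n", (unsigned long long)(start_bytes % block));
  std::printf("length %% 16K = %llu\n", (unsigned long long)(length_bytes % block));
  return 0;
}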
I will debug further and update.

Thanks & Regards
Somnath

-----Original Message-----
From: Somnath Roy 
Sent: Tuesday, June 07, 2016 11:06 AM
To: 'Mark Nelson'; Sage Weil
Cc: Ramesh Chander; ceph-devel
Subject: RE: Anybody else hitting this panic in latest master with bluestore?

I will try now and let you know.

Thanks & Regards
Somnath

-----Original Message-----
From: Mark Nelson [mailto:mnelson@xxxxxxxxxx] 
Sent: Tuesday, June 07, 2016 10:57 AM
To: Somnath Roy; Sage Weil
Cc: Ramesh Chander; ceph-devel
Subject: Re: Anybody else hitting this panic in latest master with bluestore?

Hi Somnath,

Did Sage's suggestion fix it for you?  In my tests rocksdb wasn't building properly after an upstream commit to detect when jemalloc isn't
present:

https://github.com/facebook/rocksdb/commit/0850bc514737a64dc8ca13de8510fcad4756616a

I've submitted a fix that is now in master.  If you clean the rocksdb folder and try again with current master I believe it should work for you.

Thanks,
Mark

On 06/07/2016 09:23 AM, Somnath Roy wrote:
> Sage,
> I did a global 'make clean' before the build; isn't that sufficient? Do I still need to go into the rocksdb folder and clean it?
>
>
> -----Original Message-----
> From: Sage Weil [mailto:sage@xxxxxxxxxxxx]
> Sent: Tuesday, June 07, 2016 6:06 AM
> To: Mark Nelson
> Cc: Somnath Roy; Ramesh Chander; ceph-devel
> Subject: Re: Anybody else hitting this panic in latest master with bluestore?
>
> On Tue, 7 Jun 2016, Mark Nelson wrote:
>> I believe this is due to the rocksdb submodule update in PR #9466.
>> I'm working on tracking down the commit in rocksdb that's causing it.
>
> Is it possible that the problem is that your build *didn't* update rocksdb?
>
> The ceph makefile isn't smart enough to notice changes in the rocksdb/ dir and rebuild.  You have to 'cd rocksdb ; make clean ; cd ..' after the submodule updates to get a fresh build.
>
> Maybe you didn't do that, and some of the ceph code is built using the new headers and data structures that don't match the previously compiled rocksdb code?
>
> sage