Re: Test crimson-osd and coredump

Yep, my tests were with bluestore.  Probably best to just create a tracker ticket for this and attach the logs for Sam.

Mark


On 5/5/22 08:17, Radoslaw Zarzynski wrote:
Hello!

I guess Mark is testing with BlueStore while you went with SeaStore:

  #5  0x00007fe543b6adb5 in abort () from /lib64/libc.so.6
  #6  0x0000000000cd6085 in crimson::os::seastore::SeaStore::on_error (t=...) at /home/ceph/src/crimson/os/seastore/seastore.cc:970
  #7  0x0000000001b52707 in crimson::os::seastore::SeaStore::<lambda(auto:114&, auto:115&)>::<lambda()>::<lambda(auto:116)>::operator()<std::error_code> (
      this=<optimized out>, e=...) at /home/ceph/src/crimson/os/seastore/seastore.h:226

How about giving the former (BlueStore) a try and retesting with it?
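
Not verified here, but assuming vstart.sh in your checkout still has the
--crimson switch and the -o config override, and that crimson-osd picks
its backend from "osd objectstore", the retest could look roughly like:

  # hypothetical invocation; adjust counts and paths to your build tree
  MON=1 MGR=1 OSD=4 MDS=0 ../src/vstart.sh -n --crimson \
      -o 'osd objectstore = bluestore'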

Regards,
Radek

On Thu, May 5, 2022 at 3:03 PM 韩峰哲 <hanfengzhe@xxxxxxxxxx> wrote:

    Hello!
    When I create 7 crimson-osds, the cluster is normal.
    Next, I create an 8th crimson-osd.
    pg_1.0 of the .mgr pool is created on osds (1, 0, 2). When osd.7
    joins, pg_1.0{1,0,2} changes to pg_1.0{7,0,2}, and then osd.7 crashes.

    I am confused about why pg_1.0 is remapped when osd.7 joins the
    cluster, and why crimson-osd crashes.
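
    (For context, the up/acting set change itself can be seen with the
    standard ceph CLI; the commands below are only illustrative.)

      ceph pg map 1.0          # up/acting OSD sets for pg 1.0
      ceph pg dump pgs_brief   # all PGs with their up/acting sets
      ceph osd tree            # shows when osd.7 becomes in/up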

    I pulled the master branch today. The attached files are the log and
    the backtrace.

        ------------------------------------------------------------------
        From: Mark Nelson <mnelson@xxxxxxxxxx>
        Send Time: Monday, April 25, 2022, 21:56
        To: dev <dev@xxxxxxx>
        Subject: Re: Test crimson-osd and coredump

        Also, fwiw I am testing crimson-osd right now with 60 OSDs and
        1024 PGs. There are plenty of other problems, but not this one. :)
        Logs, version, and core dump would all potentially be useful!
        Thank you.
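
        If helpful, something along these lines is usually enough for
        the version and a backtrace (paths assume a vstart build tree
        and are only illustrative):

          ./bin/ceph versions
          gdb ./bin/crimson-osd /path/to/core
          (gdb) thread apply all bt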


        Mark


        On 4/25/22 02:39, Radoslaw Zarzynski wrote:
        > Hello!
        >
        > First of all, thanks for your testing!
        > There shouldn't be a hard limit. Could you please provide
        > backtraces / logs from the crashes you're seeing?
        >
        > Regards,
        > Radek
        >
        > On Mon, Apr 25, 2022 at 8:20 AM 韩峰哲 <hanfengzhe@xxxxxxxxxx> wrote:
        >>
        >> I am testing crimson-osd via vstart; the code is the master
        >> branch, cloned last week.
        >>
        >> When I start more than 3 crimson-osds, if the number of PGs per
        >> crimson-osd is equal to or greater than 8, the crimson process
        >> coredumps every time when running "rados bench ********",
        >> i.e., 4 crimson-osds and 32 PGs in the RBS pool.
        >>
        >> Are there any PG limits or other restrictions when using
        >> crimson-osd?
        >>


_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
