I don't think it's a commit from yesterday; I have had this issue since last week. The command "ceph features" shows me that my clients report the luminous feature level, but I don't know how to upgrade the client version ("ceph osd set-require-min-compat-client" does not upgrade the client version). A sketch of the commands I mean is at the end of this message.

---
Nguetchouang Ngongang Kevin
ENS de Lyon
https://perso.ens-lyon.fr/kevin.nguetchouang/

On 2023-04-26 15:58, Gregory Farnum wrote:
> Looks like you've somehow managed to enable the upmap balancer while
> allowing a client that's too old to understand it to mount.
>
> Radek, this is a commit from yesterday; is it a known issue?
>
> On Wed, Apr 26, 2023 at 7:49 AM Nguetchouang Ngongang Kevin
> <kevin.nguetchouang@xxxxxxxxxxx> wrote:
>
> Good morning, I found a bug on Ceph Reef.
>
> After installing Ceph and deploying 9 OSDs with a CephFS layer, I got
> this error after many write and read operations on the CephFS I deployed.
>
> ```
> {
>     "assert_condition": "pg_upmap_primaries.empty()",
>     "assert_file": "/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/18.0.0-3593-g1e73409b/rpm/el8/BUILD/ceph-18.0.0-3593-g1e73409b/src/osd/OSDMap.cc",
>     "assert_func": "void OSDMap::encode(ceph::buffer::v15_2_0::list&, uint64_t) const",
>     "assert_line": 3239,
>     "assert_msg": "/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/18.0.0-3593-g1e73409b/rpm/el8/BUILD/ceph-18.0.0-3593-g1e73409b/src/osd/OSDMap.cc: In function 'void OSDMap::encode(ceph::buffer::v15_2_0::list&, uint64_t) const' thread 7f86cb8e5700 time 2023-04-26T12:25:12.278025+0000\n/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/18.0.0-3593-g1e73409b/rpm/el8/BUILD/ceph-18.0.0-3593-g1e73409b/src/osd/OSDMap.cc: 3239: FAILED ceph_assert(pg_upmap_primaries.empty())\n",
>     "assert_thread_name": "msgr-worker-0",
>     "backtrace": [
>         "/lib64/libpthread.so.0(+0x12cf0) [0x7f86d0d21cf0]",
>         "gsignal()",
>         "abort()",
>         "(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x18f) [0x55ce1794774b]",
>         "/usr/bin/ceph-osd(+0x6368b7) [0x55ce179478b7]",
>         "(OSDMap::encode(ceph::buffer::v15_2_0::list&, unsigned long) const+0x1229) [0x55ce183e0449]",
>         "(MOSDMap::encode_payload(unsigned long)+0x396) [0x55ce17ae2576]",
>         "(Message::encode(unsigned long, int, bool)+0x2e) [0x55ce1825dbee]",
>         "(ProtocolV1::prepare_send_message(unsigned long, Message*, ceph::buffer::v15_2_0::list&)+0x54) [0x55ce184e5914]",
>         "(ProtocolV1::write_event()+0x511) [0x55ce184f4ce1]",
>         "(EventCenter::process_events(unsigned int, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*)+0xa64) [0x55ce182eb484]",
>         "/usr/bin/ceph-osd(+0xfdf276) [0x55ce182f0276]",
>         "/lib64/libstdc++.so.6(+0xc2b13) [0x7f86d0369b13]",
>         "/lib64/libpthread.so.0(+0x81ca) [0x7f86d0d171ca]",
>         "clone()"
>     ],
>     "ceph_version": "18.0.0-3593-g1e73409b",
>     "crash_id": "2023-04-26T12:25:12.286947Z_55675d7c-7833-4e91-b0eb-6df705104c2e",
>     "entity_name": "osd.0",
>     "os_id": "centos",
>     "os_name": "CentOS Stream",
>     "os_version": "8",
>     "os_version_id": "8",
>     "process_name": "ceph-osd",
>     "stack_sig": "0ffad2c4bc07caf68ff1e124d3911823bc6fa6f5772444754b7f0a998774c8fe",
>     "timestamp": "2023-04-26T12:25:12.286947Z",
>     "utsname_hostname": "node1-link-1",
>     "utsname_machine": "x86_64",
"utsname_release": "5.4.0-100-generic", > "utsname_sysname": "Linux", > "utsname_version": "#113-Ubuntu SMP Thu Feb 3 18:43:29 UTC 2022" > } > > ``` > > I really don't know what is this error for, Will appreciate any help. > > Cordially, > > -- > Nguetchouang Ngongang Kevin > ENS de Lyon > https://perso.ens-lyon.fr/kevin.nguetchouang/ > _______________________________________________ > ceph-users mailing list -- ceph-users@xxxxxxx > To unsubscribe send an email to ceph-users-leave@xxxxxxx _______________________________________________ ceph-users mailing list -- ceph-users@xxxxxxx To unsubscribe send an email to ceph-users-leave@xxxxxxx