Re: ceph-ansible question

I think if the device is detected as non-rotational, it's treated the same
as NVMe, but I don't have any to test with. I did all the provisioning
ahead of time because I couldn't get Ansible to both create a regular OSD
on the NVMe and use it for DBs. I provided it as an example to show the
worst case.
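
If you want to check how a given drive will be detected before running the
playbook, the kernel's rotational flag is easy to look at (the device name
below is just an example):

    # ROTA 1 = rotational (HDD), 0 = non-rotational (SSD/NVMe)
    lsblk -d -o NAME,ROTA
    cat /sys/block/sde/queue/rotational
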
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Wed, Apr 29, 2020 at 8:17 PM Szabo, Istvan (Agoda) <
Istvan.Szabo@xxxxxxxxx> wrote:

> Ok, so this is how it works with LVM.
>
>
>
> I was playing around a bit with the following config:
>
> ---
>
> dummy:
>
> osd_scenario: lvm
>
> crush_device_class: "nvme"
>
> osds_per_device: 4
>
> devices:
>
>   - /dev/sde
>
> lvm_volumes:
>
>   - data: /dev/sdc
>
>     db: db_osd1
>
>     db_vg: journal
>
>     crush_device_class: "hdd"
>
>   - data: /dev/sdd
>
>     db: db_osd2
>
>     db_vg: journal
>
>     crush_device_class: "ssd"
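>
> (As far as I understand, the journal VG and the db_osd1/db_osd2 LVs
> referenced above have to exist before the playbook runs; something along
> these lines would create them -- the device name and sizes here are just
> placeholders:)
>
>     pvcreate /dev/sdb
>     vgcreate journal /dev/sdb
>     lvcreate -L 20G -n db_osd1 journal
>     lvcreate -L 20G -n db_osd2 journal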
>
>
>
> Another thing came to mind: it looks like using the devices option with a
> separate, faster drive for db+wal is only possible with an NVMe drive,
> based on this scenario from the ceph-ansible doc:
>
>
>
> *“Other provisioning strategies are possible, by mixing spinning and solid
> state devices, for example:*
>
> *osd_scenario: lvm*
>
> *devices:*
>
> *  - /dev/sda*
>
> *  - /dev/sdb*
>
> *  - /dev/nvme0n1*
>
> *Similar to the initial example, this would end up producing 2 OSDs, but
> data would be placed on the slower spinning drives (/dev/sda, and /dev/sdb)
> and journals would be placed on the faster solid state device
> /dev/nvme0n1.”*
>
>
>
> I haven't really found anything in ceph-ansible about how it determines
> which drive is faster. So if I don't have NVMe but I do have an SSD, can
> it automatically recognize that sda, sdb and sdc are HDDs and sde is an
> SSD, and put the DB on the SSD?
>
>
>
> What I'd like to achieve: let's say I have this hardware:
>
>
>
> OS: 480GB SSD SFF x 2 RAID 1
>
> --------------------------------------------
>
> Journal: 3.84TB LFF SSD
>
> -------------------------------------------
>
> SSD OSD: 3.84TB LFF SSD x 3
>
> -------------------------------------------
>
> HDD OSD: 6TB HDD LFF x 6 + 10TB HDD LFF x 2
>
>
>
> I'd like to deploy it with ceph-ansible using the devices option, putting
> the DB for the HDDs on the 3.84TB journal drive.
>
>
>
> ---
>
> dummy:
>
> osd_scenario: lvm
>
> crush_device_class: "hdd"
>
> osds_per_device: 1
>
> devices:
>
>   - /dev/sdc
>
>   - /dev/sdd
>
>   - /dev/sde
>
>   - /dev/sdf
>
>   - /dev/sdg
>
>   - /dev/sdh
>
>   - /dev/sdi
>
>   - /dev/sdj
>
>   - /dev/sdk   # this is the 3.84TB ssd
>
>
>
> lvm_volumes:   # these would be the 3x3.84TB SSD OSDs
>
>   - data: /dev/sdl
>
>   - data: /dev/sdm
>
>   - data: /dev/sdn
>
>
>
> I don't have the hardware yet to test with, so I'm curious: could this
> work without NVMe?
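>
> (I suppose once the drives arrive I could preview how ceph-volume would
> split them, without creating anything, with a batch dry run -- assuming
> the --report flag is available in this release; device names are from the
> layout above:)
>
>     ceph-volume lvm batch --report /dev/sdc /dev/sdd /dev/sde /dev/sdf \
>         /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk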
>
>
>
> Istvan Szabo
> Senior Infrastructure Engineer
> ---------------------------------------------------
> Agoda Services Co., Ltd.
> e: istvan.szabo@xxxxxxxxx
> ---------------------------------------------------
>
>
>
> *From:* Robert LeBlanc <robert@xxxxxxxxxxxxx>
> *Sent:* Thursday, April 30, 2020 7:03 AM
> *To:* Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
> *Cc:* ceph-users@xxxxxxx
> *Subject:* Re:  ceph-ansible question
>
>
>
>
> Yes, but they are just LVs, so you can create or delete them easily so
> that the space is returned to the VG for something else.
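>
> (For example -- the VG/LV names here are just the ones from my inventory
> further down -- reclaiming an unused DB LV is a single lvremove, and the
> VG gets the space back immediately:)
>
>     lvremove -y ceph-db-00/db-05            # space goes back to the VG
>     lvcreate -L 30G -n db-05 ceph-db-00     # recreate it later if needed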
>
> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>
>
>
>
>
> On Tue, Apr 28, 2020 at 6:55 PM Szabo, Istvan (Agoda) <
> Istvan.Szabo@xxxxxxxxx> wrote:
>
> Hi,
>
>
>
> So you've actually created many LVs in the DB VG for the OSDs? That's
> what I want to avoid, because if some of the OSDs are not in use, they
> are still holding the space, aren't they?
>
>
>
> Istvan Szabo
> Senior Infrastructure Engineer
> ---------------------------------------------------
> Agoda Services Co., Ltd.
> e: istvan.szabo@xxxxxxxxx
> ---------------------------------------------------
>
>
>
> *From:* Robert LeBlanc <robert@xxxxxxxxxxxxx>
> *Sent:* Wednesday, April 29, 2020 3:55 AM
> *To:* Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
> *Cc:* ceph-users@xxxxxxx
> *Subject:* Re:  ceph-ansible question
>
>
>
>
> I'm sure there is a simpler way, but I wanted DBs of a certain size and a
> data OSD on the NVMe as well. I wrote a script to create all the VGs and
> LVs of the sizes that I wanted, then added this to my Ansible inventory (I
> prefer to have as much config in the inventory as possible rather than
> scattered throughout a bunch of files). This is the osd group:
>
>
>
> osds:
>   hosts:
>     osd[01:43]:
>     osd[45:60]:
>   vars:
>     block_db_size: 32212254720
>     lvm_volumes:
>       - data: 'blk-00'
>         data_vg: 'ceph-blk-00'
>         db: 'db-00'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-01'
>         data_vg: 'ceph-blk-01'
>         db: 'db-01'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-02'
>         data_vg: 'ceph-blk-02'
>         db: 'db-02'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-03'
>         data_vg: 'ceph-blk-03'
>         db: 'db-03'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-04'
>         data_vg: 'ceph-blk-04'
>         db: 'db-04'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-05'
>         data_vg: 'ceph-blk-05'
>         db: 'db-05'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-06'
>         data_vg: 'ceph-blk-06'
>         db: 'db-06'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-07'
>         data_vg: 'ceph-blk-07'
>         db: 'db-07'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-08'
>         data_vg: 'ceph-blk-08'
>         db: 'db-08'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-09'
>         data_vg: 'ceph-blk-09'
>         db: 'db-09'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-10'
>         data_vg: 'ceph-blk-10'
>         db: 'db-10'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-11'
>         data_vg: 'ceph-blk-11'
>         db: 'db-11'
>         db_vg: 'ceph-db-00'
>         crush_device_class: 'hdd'
>       - data: 'blk-12'      # Use the rest for metadata
>         data_vg: 'ceph-db-00'
>         crush_device_class: 'nvme'
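>
> (The pre-provisioning script I mentioned boils down to a handful of
> vgcreate/lvcreate calls along these lines -- a rough sketch rather than
> the exact script; the 30G matches block_db_size above, and the device
> names are illustrative:)
>
>     # DB volume group on the NVMe, one 30GiB DB LV per spinning OSD
>     vgcreate ceph-db-00 /dev/nvme0n1
>     for i in $(seq -w 0 11); do lvcreate -L 30G -n db-$i ceph-db-00; done
>     # whatever is left on the NVMe becomes a small data OSD of its own
>     lvcreate -l 100%FREE -n blk-12 ceph-db-00
>     # one data VG/LV per spinning disk, e.g. for the first one:
>     vgcreate ceph-blk-00 /dev/sda
>     lvcreate -l 100%FREE -n blk-00 ceph-blk-00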
>
> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>
>
>
>
>
> On Tue, Apr 28, 2020 at 3:14 AM Szabo, Istvan (Agoda) <
> Istvan.Szabo@xxxxxxxxx> wrote:
>
> Hi,
>
> I've tried to create a Ceph Luminous cluster for testing purposes with
> ceph-ansible on my 3 Hyper-V VMs, but I got the error below with the
> following OSD configuration:
>
> ---
> dummy:
> osd_scenario: lvm
> lvm_volumes:
>   - data: osd1lv
>     data_vg: osd1
>     db: journal_lv
>     db_vg: journal
>     wal: journal_lv
>     wal_vg: journal
>   - data: osd2lv
>     data_vg: osd2
>     db: journal_lv
>     db_vg: journal
>     wal: journal_lv
>     wal_vg: journal
>
> I have 3 disks, 2 for OSDs and 1 for the journal; I'd like to use the
> journal drive (sdb) for both devices (sdc, sdd).
>
> I've also tried to simply use:
> lvm_volumes:
>   - data: /dev/sdc
>     db: /dev/sdb
>     wal: /dev/sdb
>   - data: /dev/sdd
>     db: /dev/sdb
>     wal: /dev/sdb
>
> But this one complains that no partuuid can be found for sdb.
> I also tried creating a GPT partition with parted so that it would have a
> partuuid, but then it complained about the GPT flag.
>
> My setup is CentOS 7 with Ansible 2.6 and the stable-3.2 branch of
> ceph-ansible checked out.
>
> The all.yml is like this:
> ---
> dummy:
> ceph_origin: repository
> ceph_repository: community
> ceph_stable_release: luminous
> monitor_interface: eth0
> public_network: 172.19.15.0/24
> cluster_network: 192.168.1.0/24
>
> Can anybody show me a working ceph-volume configuration with WAL and DB?
>
> Thank you in advance.
>
> The ansible dump is here:
> failed: [ceph3] (item={'data': 'osd2lv', 'data_vg': 'osd2', 'db':
> 'journal_lv', 'db_vg': 'journal', 'wal': 'journal_lv', 'wal_vg':
> 'journal'}) => {"changed": true, "cmd": ["ceph-volume", "--cluster",
> "ceph", "lvm", "create", "--bluestore", "--data", "osd2/osd2lv",
> "--block.db", "journal/journal_lv", "--block.wal", "journal/journal_lv"],
> "delta": "0:00:06.029411", "end": "2020-04-28 16:48:13.252098", "item":
> {"data": "osd2lv", "data_vg": "osd2", "db": "journal_lv", "db_vg":
> "journal", "wal": "journal_lv", "wal_vg": "journal"}, "msg": "non-zero
> return code", "rc": 1, "start": "2020-04-28 16:48:07.222687", "stderr":
> "Running command: /bin/ceph-authtool --gen-print-key\nRunning command:
> /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> 842cbbe5-3b34-4ea4-95d4-b0c7cab5fce3\nRunning command: /bin/ceph-authtool
> --gen-print-key\nRunning command: mount -t tmpfs tmpfs
> /var/lib/ceph/osd/ceph-2\nRunning command: restorecon /var/l
>  ib/ceph/osd/ceph-2\nRunning command: chown -h ceph:ceph
> /dev/osd2/osd2lv\nRunning command: chown -R ceph:ceph /dev/dm-4\nRunning
> command: ln -s /dev/osd2/osd2lv /var/lib/ceph/osd/ceph-2/block\nRunning
> command: ceph --cluster ceph --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
> /var/lib/ceph/osd/ceph-2/activate.monmap\n stderr: got monmap epoch
> 1\nRunning command: ceph-authtool /var/lib/ceph/osd/ceph-2/keyring
> --create-keyring --name osd.2 --add-key
> AQDX+6de1vEIOhAA8Fy22wWp7Z0d/lnYWpgenA==\n stdout: creating
> /var/lib/ceph/osd/ceph-2/keyring\nadded entity osd.2 auth auth(auid =
> 18446744073709551615 key=AQDX+6de1vEIOhAA8Fy22wWp7Z0d/lnYWpgenA== with 0
> caps)\nRunning command: chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-2/keyring\nRunning command: chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-2/\nRunning command: chown -h ceph:ceph
> /dev/journal/journal_lv\nRunning command: chown -R ceph:ceph
> /dev/dm-5\nRunning command: chown -h ceph:ceph /dev/journal
>  /journal_lv\nRunning command: chown -R ceph:ceph /dev/dm-5\nRunning
> command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i
> 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile -
> --bluestore-block-wal-path /dev/journal/journal_lv
> --bluestore-block-db-path /dev/journal/journal_lv --osd-data
> /var/lib/ceph/osd/ceph-2/ --osd-uuid 842cbbe5-3b34-4ea4-95d4-b0c7cab5fce3
> --setuser ceph --setgroup ceph\n stderr: 2020-04-28 16:48:11.895597
> 7fb89cb53ec0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable
> uuid\n stderr:
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> In function 'void AllocatorLevel02<T>::_mark_allocated(uint64_t, uint64_t)
> [with L1 = AllocatorLevel01Loose; uint64_t = long unsigned int]' thread
> 7fb89cb53ec0 time 2020-04-28 16:48:12.908859\n stderr: /home/jenkins-build/b
>  uild/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> 757: FAILED assert(available >= allocated)\n stderr: ceph version 12.2.13
> (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)\n stderr: 1:
> (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x110) [0x55f13e067ae0]\n stderr: 2:
> (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x674)
> [0x55f13e01ae54]\n stderr: 3: (BlueFS::mount()+0x271) [0x55f13dff3411]\n
> stderr: 4: (BlueStore::_open_db(bool)+0x1857) [0x55f13df01827]\n stderr: 5:
> (BlueStore::_fsck(bool, bool)+0x3c7) [0x55f13df37707]\n stderr: 6:
> (BlueStore::mkfs()+0xe5a) [0x55f13df0d3ba]\n stderr: 7:
> (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d,
> int)+0x29b) [0x55f13dabc99b]\n stderr: 8: (main()+0x11a5)
> [0x55f13d9d4975]\n stderr: 9: (__libc_start_main()+0xf5) [0x7fb899005505]\n
>  stderr: 10: (()+0x4c4383) [0x55f13da76383]\n stderr: NOTE: a copy of the
> executable, or `objdump -rdS <executable>` is needed to interpret this.\n
> stderr: 2020-04-28 16:48:12.911181 7fb89cb53ec0 -1
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> In function 'void AllocatorLevel02<T>::_mark_allocated(uint64_t, uint64_t)
> [with L1 = AllocatorLevel01Loose; uint64_t = long unsigned int]' thread
> 7fb89cb53ec0 time 2020-04-28 16:48:12.908859\n stderr:
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> 757: FAILED assert(available >= allocated)\n stderr: ceph version 12.2.13
> (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)\n stderr: 1:
> (ceph
>  ::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110)
> [0x55f13e067ae0]\n stderr: 2: (BitmapAllocator::init_rm_free(unsigned long,
> unsigned long)+0x674) [0x55f13e01ae54]\n stderr: 3: (BlueFS::mount()+0x271)
> [0x55f13dff3411]\n stderr: 4: (BlueStore::_open_db(bool)+0x1857)
> [0x55f13df01827]\n stderr: 5: (BlueStore::_fsck(bool, bool)+0x3c7)
> [0x55f13df37707]\n stderr: 6: (BlueStore::mkfs()+0xe5a) [0x55f13df0d3ba]\n
> stderr: 7: (OSD::mkfs(CephContext*, ObjectStore*, std::string const&,
> uuid_d, int)+0x29b) [0x55f13dabc99b]\n stderr: 8: (main()+0x11a5)
> [0x55f13d9d4975]\n stderr: 9: (__libc_start_main()+0xf5) [0x7fb899005505]\n
> stderr: 10: (()+0x4c4383) [0x55f13da76383]\n stderr: NOTE: a copy of the
> executable, or `objdump -rdS <executable>` is needed to interpret this.\n
> stderr: -253> 2020-04-28 16:48:11.895597 7fb89cb53ec0 -1
> bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid\n stderr:
> 0> 2020-04-28 16:48:12.911181 7fb89cb53ec0 -1 /home/jenkins-build/build/work
>  space/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> In function 'void AllocatorLevel02<T>::_mark_allocated(uint64_t, uint64_t)
> [with L1 = AllocatorLevel01Loose; uint64_t = long unsigned int]' thread
> 7fb89cb53ec0 time 2020-04-28 16:48:12.908859\n stderr:
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> 757: FAILED assert(available >= allocated)\n stderr: ceph version 12.2.13
> (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)\n stderr: 1:
> (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x110) [0x55f13e067ae0]\n stderr: 2:
> (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x674)
> [0x55f13e01ae54]\n stderr: 3: (BlueFS::mount()+0x271) [0x5
>  5f13dff3411]\n stderr: 4: (BlueStore::_open_db(bool)+0x1857)
> [0x55f13df01827]\n stderr: 5: (BlueStore::_fsck(bool, bool)+0x3c7)
> [0x55f13df37707]\n stderr: 6: (BlueStore::mkfs()+0xe5a) [0x55f13df0d3ba]\n
> stderr: 7: (OSD::mkfs(CephContext*, ObjectStore*, std::string const&,
> uuid_d, int)+0x29b) [0x55f13dabc99b]\n stderr: 8: (main()+0x11a5)
> [0x55f13d9d4975]\n stderr: 9: (__libc_start_main()+0xf5) [0x7fb899005505]\n
> stderr: 10: (()+0x4c4383) [0x55f13da76383]\n stderr: NOTE: a copy of the
> executable, or `objdump -rdS <executable>` is needed to interpret this.\n
> stderr: *** Caught signal (Aborted) **\n stderr: in thread 7fb89cb53ec0
> thread_name:ceph-osd\n stderr: ceph version 12.2.13
> (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)\n stderr: 1:
> (()+0xa76e51) [0x55f13e028e51]\n stderr: 2: (()+0xf5f0) [0x7fb899ff95f0]\n
> stderr: 3: (gsignal()+0x37) [0x7fb899019337]\n stderr: 4: (abort()+0x148)
> [0x7fb89901aa28]\n stderr: 5: (ceph::__ceph_assert_fail(char const*, char
> const*, int, c
>  har const*)+0x284) [0x55f13e067c54]\n stderr: 6:
> (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x674)
> [0x55f13e01ae54]\n stderr: 7: (BlueFS::mount()+0x271) [0x55f13dff3411]\n
> stderr: 8: (BlueStore::_open_db(bool)+0x1857) [0x55f13df01827]\n stderr: 9:
> (BlueStore::_fsck(bool, bool)+0x3c7) [0x55f13df37707]\n stderr: 10:
> (BlueStore::mkfs()+0xe5a) [0x55f13df0d3ba]\n stderr: 11:
> (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d,
> int)+0x29b) [0x55f13dabc99b]\n stderr: 12: (main()+0x11a5)
> [0x55f13d9d4975]\n stderr: 13: (__libc_start_main()+0xf5)
> [0x7fb899005505]\n stderr: 14: (()+0x4c4383) [0x55f13da76383]\n stderr:
> 2020-04-28 16:48:12.914234 7fb89cb53ec0 -1 *** Caught signal (Aborted) **\n
> stderr: in thread 7fb89cb53ec0 thread_name:ceph-osd\n stderr: ceph version
> 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)\n
> stderr: 1: (()+0xa76e51) [0x55f13e028e51]\n stderr: 2: (()+0xf5f0)
> [0x7fb899ff95f0]\n stderr: 3: (gsignal()+0x37) [0x7fb89901933
>  7]\n stderr: 4: (abort()+0x148) [0x7fb89901aa28]\n stderr: 5:
> (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x284) [0x55f13e067c54]\n stderr: 6:
> (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x674)
> [0x55f13e01ae54]\n stderr: 7: (BlueFS::mount()+0x271) [0x55f13dff3411]\n
> stderr: 8: (BlueStore::_open_db(bool)+0x1857) [0x55f13df01827]\n stderr: 9:
> (BlueStore::_fsck(bool, bool)+0x3c7) [0x55f13df37707]\n stderr: 10:
> (BlueStore::mkfs()+0xe5a) [0x55f13df0d3ba]\n stderr: 11:
> (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d,
> int)+0x29b) [0x55f13dabc99b]\n stderr: 12: (main()+0x11a5)
> [0x55f13d9d4975]\n stderr: 13: (__libc_start_main()+0xf5)
> [0x7fb899005505]\n stderr: 14: (()+0x4c4383) [0x55f13da76383]\n stderr:
> NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to
> interpret this.\n stderr: 0> 2020-04-28 16:48:12.914234 7fb89cb53ec0 -1 ***
> Caught signal (Aborted) **\n stderr: in thread 7fb89cb53ec0
> thread_name:ceph-o
>  sd\n stderr: ceph version 12.2.13
> (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)\n stderr: 1:
> (()+0xa76e51) [0x55f13e028e51]\n stderr: 2: (()+0xf5f0) [0x7fb899ff95f0]\n
> stderr: 3: (gsignal()+0x37) [0x7fb899019337]\n stderr: 4: (abort()+0x148)
> [0x7fb89901aa28]\n stderr: 5: (ceph::__ceph_assert_fail(char const*, char
> const*, int, char const*)+0x284) [0x55f13e067c54]\n stderr: 6:
> (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x674)
> [0x55f13e01ae54]\n stderr: 7: (BlueFS::mount()+0x271) [0x55f13dff3411]\n
> stderr: 8: (BlueStore::_open_db(bool)+0x1857) [0x55f13df01827]\n stderr: 9:
> (BlueStore::_fsck(bool, bool)+0x3c7) [0x55f13df37707]\n stderr: 10:
> (BlueStore::mkfs()+0xe5a) [0x55f13df0d3ba]\n stderr: 11:
> (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d,
> int)+0x29b) [0x55f13dabc99b]\n stderr: 12: (main()+0x11a5)
> [0x55f13d9d4975]\n stderr: 13: (__libc_start_main()+0xf5)
> [0x7fb899005505]\n stderr: 14: (()+0x4c4383) [0x55f13da76383]\n stderr:
> NOTE:
>   a copy of the executable, or `objdump -rdS <executable>` is needed to
> interpret this.\n--> Was unable to complete a new OSD, will rollback
> changes\n--> OSD will be fully purged from the cluster, because the ID was
> generated\nRunning command: ceph osd purge osd.2 --yes-i-really-mean-it\n
> stderr: purged osd.2\nTraceback (most recent call last):\n  File
> \"/sbin/ceph-volume\", line 9, in <module>\n
> load_entry_point('ceph-volume==1.0.0', 'console_scripts',
> 'ceph-volume')()\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/main.py\", line 38, in
> __init__\n    self.main(self.argv)\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/decorators.py\", line 59, in
> newfunc\n    return f(*a, **kw)\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/main.py\", line 149, in
> main\n    terminal.dispatch(self.mapper, subcommand_args)\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/terminal.py\", line 194, in
> dispatch\n    instance.main()\n  File \"/usr/lib/python2.7/site-package
>  s/ceph_volume/devices/lvm/main.py\", line 40, in main\n
> terminal.dispatch(self.mapper, self.argv)\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/terminal.py\", line 194, in
> dispatch\n    instance.main()\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/create.py\",
> line 69, in main\n    self.create(args)\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/decorators.py\", line 16, in
> is_root\n    return func(*a, **kw)\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/create.py\",
> line 26, in create\n    prepare_step.safe_prepare(args)\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/prepare.py\",
> line 219, in safe_prepare\n    self.prepare()\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/decorators.py\", line 16, in
> is_root\n    return func(*a, **kw)\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/prepare.py\",
> line 320, in prepare\n    osd_fsid,\n  File
> \"/usr/lib/python2.7/site-packages/ce
>  ph_volume/devices/lvm/prepare.py\", line 119, in prepare_bluestore\n
> db=db\n  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/util/prepare.py\", line 397,
> in osd_mkfs_bluestore\n    raise RuntimeError('Command failed with exit
> code %s: %s' % (returncode, ' '.join(command)))\nRuntimeError: Command
> failed with exit code -6: /bin/ceph-osd --cluster ceph --osd-objectstore
> bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap
> --keyfile - --bluestore-block-wal-path /dev/journal/journal_lv
> --bluestore-block-db-path /dev/journal/journal_lv --osd-data
> /var/lib/ceph/osd/ceph-2/ --osd-uuid 842cbbe5-3b34-4ea4-95d4-b0c7cab5fce3
> --setuser ceph --setgroup ceph", "stderr_lines": ["Running command:
> /bin/ceph-authtool --gen-print-key", "Running command: /bin/ceph --cluster
> ceph --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> 842cbbe5-3b34-4ea4-95d4-b0c7cab5fce3", "Running command: /bin/ceph-authtool
> --gen-print-key", "Running comman
>  d: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2", "Running command:
> restorecon /var/lib/ceph/osd/ceph-2", "Running command: chown -h ceph:ceph
> /dev/osd2/osd2lv", "Running command: chown -R ceph:ceph /dev/dm-4",
> "Running command: ln -s /dev/osd2/osd2lv /var/lib/ceph/osd/ceph-2/block",
> "Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
> /var/lib/ceph/osd/ceph-2/activate.monmap", " stderr: got monmap epoch 1",
> "Running command: ceph-authtool /var/lib/ceph/osd/ceph-2/keyring
> --create-keyring --name osd.2 --add-key
> AQDX+6de1vEIOhAA8Fy22wWp7Z0d/lnYWpgenA==", " stdout: creating
> /var/lib/ceph/osd/ceph-2/keyring", "added entity osd.2 auth auth(auid =
> 18446744073709551615 key=AQDX+6de1vEIOhAA8Fy22wWp7Z0d/lnYWpgenA== with 0
> caps)", "Running command: chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-2/keyring", "Running command: chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-2/", "Running command: chown -h ceph:ceph
> /dev/journal
>  /journal_lv", "Running command: chown -R ceph:ceph /dev/dm-5", "Running
> command: chown -h ceph:ceph /dev/journal/journal_lv", "Running command:
> chown -R ceph:ceph /dev/dm-5", "Running command: /bin/ceph-osd --cluster
> ceph --osd-objectstore bluestore --mkfs -i 2 --monmap
> /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile -
> --bluestore-block-wal-path /dev/journal/journal_lv
> --bluestore-block-db-path /dev/journal/journal_lv --osd-data
> /var/lib/ceph/osd/ceph-2/ --osd-uuid 842cbbe5-3b34-4ea4-95d4-b0c7cab5fce3
> --setuser ceph --setgroup ceph", " stderr: 2020-04-28 16:48:11.895597
> 7fb89cb53ec0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable
> uuid", " stderr:
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> In function 'void AllocatorLevel02<T>::_mark_allocated(uint64_t, uint64_t)
> [with L1 = AllocatorLevel01L
>  oose; uint64_t = long unsigned int]' thread 7fb89cb53ec0 time 2020-04-28
> 16:48:12.908859", " stderr:
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> 757: FAILED assert(available >= allocated)", " stderr: ceph version 12.2.13
> (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)", " stderr: 1:
> (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x110) [0x55f13e067ae0]", " stderr: 2:
> (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x674)
> [0x55f13e01ae54]", " stderr: 3: (BlueFS::mount()+0x271) [0x55f13dff3411]",
> " stderr: 4: (BlueStore::_open_db(bool)+0x1857) [0x55f13df01827]", "
> stderr: 5: (BlueStore::_fsck(bool, bool)+0x3c7) [0x55f13df37707]", "
> stderr: 6: (BlueStore::mkfs()+0xe5a) [0x55f13df0d3ba]", " stderr: 7:
> (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid
>  _d, int)+0x29b) [0x55f13dabc99b]", " stderr: 8: (main()+0x11a5)
> [0x55f13d9d4975]", " stderr: 9: (__libc_start_main()+0xf5)
> [0x7fb899005505]", " stderr: 10: (()+0x4c4383) [0x55f13da76383]", " stderr:
> NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to
> interpret this.", " stderr: 2020-04-28 16:48:12.911181 7fb89cb53ec0 -1
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> In function 'void AllocatorLevel02<T>::_mark_allocated(uint64_t, uint64_t)
> [with L1 = AllocatorLevel01Loose; uint64_t = long unsigned int]' thread
> 7fb89cb53ec0 time 2020-04-28 16:48:12.908859", " stderr:
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> 757
>  : FAILED assert(available >= allocated)", " stderr: ceph version 12.2.13
> (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)", " stderr: 1:
> (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x110) [0x55f13e067ae0]", " stderr: 2:
> (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x674)
> [0x55f13e01ae54]", " stderr: 3: (BlueFS::mount()+0x271) [0x55f13dff3411]",
> " stderr: 4: (BlueStore::_open_db(bool)+0x1857) [0x55f13df01827]", "
> stderr: 5: (BlueStore::_fsck(bool, bool)+0x3c7) [0x55f13df37707]", "
> stderr: 6: (BlueStore::mkfs()+0xe5a) [0x55f13df0d3ba]", " stderr: 7:
> (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d,
> int)+0x29b) [0x55f13dabc99b]", " stderr: 8: (main()+0x11a5)
> [0x55f13d9d4975]", " stderr: 9: (__libc_start_main()+0xf5)
> [0x7fb899005505]", " stderr: 10: (()+0x4c4383) [0x55f13da76383]", " stderr:
> NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to
> interpret this.", " stderr: -253> 2020-04-28 16:48:
>  11.895597 7fb89cb53ec0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid
> unparsable uuid", " stderr: 0> 2020-04-28 16:48:12.911181 7fb89cb53ec0 -1
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> In function 'void AllocatorLevel02<T>::_mark_allocated(uint64_t, uint64_t)
> [with L1 = AllocatorLevel01Loose; uint64_t = long unsigned int]' thread
> 7fb89cb53ec0 time 2020-04-28 16:48:12.908859", " stderr:
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.13/rpm/el7/BUILD/ceph-12.2.13/src/os/bluestore/fastbmap_allocator_impl.h:
> 757: FAILED assert(available >= allocated)", " stderr: ceph version 12.2.13
> (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)", " stderr: 1:
> (ceph::__ceph_assert_fail(char const*, char const
>  *, int, char const*)+0x110) [0x55f13e067ae0]", " stderr: 2:
> (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x674)
> [0x55f13e01ae54]", " stderr: 3: (BlueFS::mount()+0x271) [0x55f13dff3411]",
> " stderr: 4: (BlueStore::_open_db(bool)+0x1857) [0x55f13df01827]", "
> stderr: 5: (BlueStore::_fsck(bool, bool)+0x3c7) [0x55f13df37707]", "
> stderr: 6: (BlueStore::mkfs()+0xe5a) [0x55f13df0d3ba]", " stderr: 7:
> (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d,
> int)+0x29b) [0x55f13dabc99b]", " stderr: 8: (main()+0x11a5)
> [0x55f13d9d4975]", " stderr: 9: (__libc_start_main()+0xf5)
> [0x7fb899005505]", " stderr: 10: (()+0x4c4383) [0x55f13da76383]", " stderr:
> NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to
> interpret this.", " stderr: *** Caught signal (Aborted) **", " stderr: in
> thread 7fb89cb53ec0 thread_name:ceph-osd", " stderr: ceph version 12.2.13
> (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)", " stderr: 1:
> (()+0xa76e51) [0x55f13e02
>  8e51]", " stderr: 2: (()+0xf5f0) [0x7fb899ff95f0]", " stderr: 3:
> (gsignal()+0x37) [0x7fb899019337]", " stderr: 4: (abort()+0x148)
> [0x7fb89901aa28]", " stderr: 5: (ceph::__ceph_assert_fail(char const*, char
> const*, int, char const*)+0x284) [0x55f13e067c54]", " stderr: 6:
> (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x674)
> [0x55f13e01ae54]", " stderr: 7: (BlueFS::mount()+0x271) [0x55f13dff3411]",
> " stderr: 8: (BlueStore::_open_db(bool)+0x1857) [0x55f13df01827]", "
> stderr: 9: (BlueStore::_fsck(bool, bool)+0x3c7) [0x55f13df37707]", "
> stderr: 10: (BlueStore::mkfs()+0xe5a) [0x55f13df0d3ba]", " stderr: 11:
> (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d,
> int)+0x29b) [0x55f13dabc99b]", " stderr: 12: (main()+0x11a5)
> [0x55f13d9d4975]", " stderr: 13: (__libc_start_main()+0xf5)
> [0x7fb899005505]", " stderr: 14: (()+0x4c4383) [0x55f13da76383]", " stderr:
> 2020-04-28 16:48:12.914234 7fb89cb53ec0 -1 *** Caught signal (Aborted) **",
> " stderr: in thread 7fb89cb53ec0 t
>  hread_name:ceph-osd", " stderr: ceph version 12.2.13
> (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)", " stderr: 1:
> (()+0xa76e51) [0x55f13e028e51]", " stderr: 2: (()+0xf5f0)
> [0x7fb899ff95f0]", " stderr: 3: (gsignal()+0x37) [0x7fb899019337]", "
> stderr: 4: (abort()+0x148) [0x7fb89901aa28]", " stderr: 5:
> (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x284) [0x55f13e067c54]", " stderr: 6:
> (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x674)
> [0x55f13e01ae54]", " stderr: 7: (BlueFS::mount()+0x271) [0x55f13dff3411]",
> " stderr: 8: (BlueStore::_open_db(bool)+0x1857) [0x55f13df01827]", "
> stderr: 9: (BlueStore::_fsck(bool, bool)+0x3c7) [0x55f13df37707]", "
> stderr: 10: (BlueStore::mkfs()+0xe5a) [0x55f13df0d3ba]", " stderr: 11:
> (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d,
> int)+0x29b) [0x55f13dabc99b]", " stderr: 12: (main()+0x11a5)
> [0x55f13d9d4975]", " stderr: 13: (__libc_start_main()+0xf5)
> [0x7fb899005505]", " stderr: 14:
>   (()+0x4c4383) [0x55f13da76383]", " stderr: NOTE: a copy of the
> executable, or `objdump -rdS <executable>` is needed to interpret this.", "
> stderr: 0> 2020-04-28 16:48:12.914234 7fb89cb53ec0 -1 *** Caught signal
> (Aborted) **", " stderr: in thread 7fb89cb53ec0 thread_name:ceph-osd", "
> stderr: ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e)
> luminous (stable)", " stderr: 1: (()+0xa76e51) [0x55f13e028e51]", " stderr:
> 2: (()+0xf5f0) [0x7fb899ff95f0]", " stderr: 3: (gsignal()+0x37)
> [0x7fb899019337]", " stderr: 4: (abort()+0x148) [0x7fb89901aa28]", "
> stderr: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x284) [0x55f13e067c54]", " stderr: 6:
> (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x674)
> [0x55f13e01ae54]", " stderr: 7: (BlueFS::mount()+0x271) [0x55f13dff3411]",
> " stderr: 8: (BlueStore::_open_db(bool)+0x1857) [0x55f13df01827]", "
> stderr: 9: (BlueStore::_fsck(bool, bool)+0x3c7) [0x55f13df37707]", "
> stderr: 10: (BlueStore::mkfs()
>  +0xe5a) [0x55f13df0d3ba]", " stderr: 11: (OSD::mkfs(CephContext*,
> ObjectStore*, std::string const&, uuid_d, int)+0x29b) [0x55f13dabc99b]", "
> stderr: 12: (main()+0x11a5) [0x55f13d9d4975]", " stderr: 13:
> (__libc_start_main()+0xf5) [0x7fb899005505]", " stderr: 14: (()+0x4c4383)
> [0x55f13da76383]", " stderr: NOTE: a copy of the executable, or `objdump
> -rdS <executable>` is needed to interpret this.", "--> Was unable to
> complete a new OSD, will rollback changes", "--> OSD will be fully purged
> from the cluster, because the ID was generated", "Running command: ceph osd
> purge osd.2 --yes-i-really-mean-it", " stderr: purged osd.2", "Traceback
> (most recent call last):", "  File \"/sbin/ceph-volume\", line 9, in
> <module>", "    load_entry_point('ceph-volume==1.0.0', 'console_scripts',
> 'ceph-volume')()", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/main.py\", line 38, in
> __init__", "    self.main(self.argv)", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/decorators.py\", line
>  59, in newfunc", "    return f(*a, **kw)", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/main.py\", line 149, in
> main", "    terminal.dispatch(self.mapper, subcommand_args)", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/terminal.py\", line 194, in
> dispatch", "    instance.main()", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/main.py\", line
> 40, in main", "    terminal.dispatch(self.mapper, self.argv)", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/terminal.py\", line 194, in
> dispatch", "    instance.main()", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/create.py\",
> line 69, in main", "    self.create(args)", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/decorators.py\", line 16, in
> is_root", "    return func(*a, **kw)", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/create.py\",
> line 26, in create", "    prepare_step.safe_prepare(args)", "  File
> \"/usr/lib/python2.7/site-packages/cep
>  h_volume/devices/lvm/prepare.py\", line 219, in safe_prepare", "
> self.prepare()", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/decorators.py\", line 16, in
> is_root", "    return func(*a, **kw)", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/prepare.py\",
> line 320, in prepare", "    osd_fsid,", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/prepare.py\",
> line 119, in prepare_bluestore", "    db=db", "  File
> \"/usr/lib/python2.7/site-packages/ceph_volume/util/prepare.py\", line 397,
> in osd_mkfs_bluestore", "    raise RuntimeError('Command failed with exit
> code %s: %s' % (returncode, ' '.join(command)))", "RuntimeError: Command
> failed with exit code -6: /bin/ceph-osd --cluster ceph --osd-objectstore
> bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap
> --keyfile - --bluestore-block-wal-path /dev/journal/journal_lv
> --bluestore-block-db-path /dev/journal/journal_lv --osd-data
> /var/lib/ceph/osd/ceph-2/ --osd-uuid 842c
>  bbe5-3b34-4ea4-95d4-b0c7cab5fce3 --setuser ceph --setgroup ceph"],
> "stdout": "", "stdout_lines": []}
>
> Istvan Szabo
> Senior Infrastructure Engineer
> ---------------------------------------------------
> Agoda Services Co., Ltd.
> e: istvan.szabo@xxxxxxxxx<mailto:istvan.szabo@xxxxxxxxx>
> ---------------------------------------------------
>
>
> ________________________________
> This message is confidential and is for the sole use of the intended
> recipient(s). It may also be privileged or otherwise protected by copyright
> or other legal rules. If you have received it by mistake please let us know
> by reply email and delete it from your system. It is prohibited to copy
> this message or disclose its content to anyone. Any confidentiality or
> privilege is not waived or lost by any mistaken delivery or unauthorized
> disclosure of the message. All messages sent to and from Agoda may be
> monitored to ensure compliance with company policies, to protect the
> company's interests and to remove potential malware. Electronic messages
> may be intercepted, amended, lost or deleted, or contain viruses.
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




