Exporting

Evening,

We are running into issues exporting a disk image from Ceph RBD: the export fails when we attempt to read an RBD image that lives in a cache-tiered erasure-coded pool on Luminous.

All of our other disk images export fine, but this one is acting up. We have important data on the other images, so we obviously want to make sure the same thing doesn't happen to them.

[root@ceph-p-mon1 home]# rbd export one/one-177-588-0 one-177-588-0
Exporting image: 8% complete...rbd: error reading from source image at offset 5456789504: (5) Input/output error
2020-03-23 20:11:29.210718 7f2f3effd700 -1 librbd::io::ObjectRequest: 0x7f2f2c128f90 handle_read_object: failed to read from object: (5) Input/output error
2020-03-23 20:11:29.565184 7f2f3e7fc700 -1 librbd::io::ObjectRequest: 0x7f2f280c84d0 handle_read_cache: failed to read from cache: (5) Input/output error
Exporting image: 8% complete...failed.
rbd: export error: (5) Input/output error
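
In case it helps, here is a diagnostic sketch for locating the RADOS object behind the failing read. It assumes the default layout shown in the rbd info output below (4MiB objects); the object name is derived from the error offset, so treat it as an assumption rather than something read off the cluster:

# The failing offset maps to a single 4MiB RBD data object:
#   5456789504 / 4194304 = 1301 = 0x515
# RBD data objects are named <block_name_prefix>.<object number as 16 hex digits>,
# so the suspect object should be:
OBJ=rbd_data.84a01279e2a9e3.0000000000000515

# Which PG and OSDs serve it (pool name "one" taken from the rbd info below):
ceph osd map one "$OBJ"

# Can rados read the object directly, bypassing librbd?
rados -p one stat "$OBJ"
rados -p one get "$OBJ" /tmp/suspect-object.bin

If "one" is the base pool behind the cache tier, the same checks against the backing erasure-coded pool may also be worthwhile.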


Any thoughts would be appreciated.


Some info:

[root@ceph-p-mon1 home]# rbd info one/one-177-588-0
rbd image 'one-177-588-0':
size 58.6GiB in 15000 objects
order 22 (4MiB objects)
block_name_prefix: rbd_data.84a01279e2a9e3
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
create_timestamp: Fri Apr 20 17:06:09 2018
parent: one/one-177@snap
overlap: 2.20GiB
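
Worth noting: the image is a clone of one/one-177@snap with a 2.20GiB overlap, and the failing offset (~5.08GiB) lies beyond that overlap, so the bad read should be hitting this image's own objects rather than the parent. If the immediate goal is to salvage whatever is readable, one possible approach is to map the image with the kernel client and copy around the bad region with dd. This is only a sketch: an el7 kernel client likely does not support all of the enabled features, so object-map, fast-diff, and deep-flatten would probably have to be disabled first (and deep-flatten cannot be re-enabled afterwards):

rbd feature disable one/one-177-588-0 object-map fast-diff deep-flatten
DEV=$(rbd map one/one-177-588-0)
# noerror keeps dd going past I/O errors; sync pads failed blocks with zeros
dd if="$DEV" of=one-177-588-0.img bs=4M conv=noerror,sync
rbd unmap "$DEV"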

[root@ceph-p-mon1 home]# ceph status
  cluster:
    id:     6a2e8f21-bca2-492b-8869-eecc995216cc
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-p-mon2,ceph-p-mon1,ceph-p-mon3
    mgr: ceph-p-mon2(active)
    mds: cephfsec-1/1/1 up  {0=ceph-p-mon2=up:active}, 6 up:standby
    osd: 155 osds: 154 up, 154 in

  data:
    pools:   6 pools, 5904 pgs
    objects: 145.53M objects, 192TiB
    usage:   253TiB used, 290TiB / 543TiB avail
    pgs:     5896 active+clean
             8    active+clean+scrubbing+deep

  io:
    client:   921KiB/s rd, 5.68MiB/s wr, 110op/s rd, 29op/s wr
    cache:    5.65MiB/s flush, 0op/s promote
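
Even with HEALTH_OK, it might be worth forcing a deep scrub of the specific PG that holds the suspect object and checking it for inconsistencies. A sketch, where the PG id is a placeholder to be replaced with the pg reported by 'ceph osd map' above:

PG=1.2ab   # placeholder; use the real pg id from 'ceph osd map one <object>'
ceph pg deep-scrub "$PG"
# after the scrub completes:
rados list-inconsistent-obj "$PG" --format=json-pretty

The OSD logs on that PG's acting set may also show the underlying read error.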


[root@ceph-p-mon1 home]# rpm -qa | grep ceph
ceph-common-12.2.9-0.el7.x86_64
ceph-mds-12.2.9-0.el7.x86_64
ceph-radosgw-12.2.9-0.el7.x86_64
ceph-mgr-12.2.9-0.el7.x86_64
ceph-12.2.9-0.el7.x86_64
collectd-ceph-5.8.1-1.el7.x86_64
ceph-deploy-2.0.1-0.noarch
libcephfs2-12.2.9-0.el7.x86_64
python-cephfs-12.2.9-0.el7.x86_64
ceph-selinux-12.2.9-0.el7.x86_64
ceph-osd-12.2.9-0.el7.x86_64
ceph-base-12.2.9-0.el7.x86_64
ceph-mon-12.2.9-0.el7.x86_64
ceph-release-1-1.el7.noarch


Rhian Resnick
Associate Director Research Computing
Enterprise Systems
Office of Information Technology
Florida Atlantic University
777 Glades Road, CM22, Rm 173B
Boca Raton, FL 33431
Phone 561.297.2647
Fax 561.297.0222
