Hello ceph-users!

My plan is to use RBD images as the replication target for a FreeNAS ZFS pool. I have a second FreeNAS instance (in a VM) acting as the backup target, and I mount the RBD image inside it. All of this (except the source FreeNAS server) runs on Proxmox.

Since I am only using RBD as a backup target, performance is not really critical, but I still don't want the backup to take months to complete. My source pool is on the order of ~30TB.

I've set up an EC RBD pool (plus the matching replicated pool) and created an image with no problems. However, with the stock 4MB object size, backup speed is quite slow. I tried creating an image with a 4K object size, but even for a relatively small image (1TB) I get:

# rbd -p rbd_backup create vm-118-disk-0 --size 1T --object-size 4K --data-pool rbd_ec
2020-01-09 07:40:27.120 7f3e4aa15f40 -1 librbd::image::CreateRequest: validate_layout: image size not compatible with object map
rbd: create error: (22) Invalid argument
#

Creating a smaller image (for example 1G) works fine, so I can only imagine that with a 4K object size there are far too many objects for the create to succeed. Given that I'd like to start with a 40TB image, that is a significant size gap.

The source pool holds mainly large files, but there are quite a few smaller (<4KB) files that I'm afraid will waste space if I create the destination zpool with ashift > 12 (i.e. blocks larger than 4K). I am not sure, though, whether ZFS will actually write large files in consecutive blocks during a send/receive, so maybe the limiting factor is not the file size but rather the ZFS block size. I am planning to use gzip-9 compression on the destination pool, if that matters.

Any thoughts from the community on the best way to approach this?

Thank you!
George
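
P.S. For concreteness, the workaround I'm considering for the create error (untested, and only my reading of it) is to create the image without the object-map/fast-diff features: as far as I can tell the object map caps an image at roughly 256 million objects, and 1T with 4K objects is just over that. Same pool/image names as in the failing command above:

  # untested sketch: listing features explicitly leaves object-map/fast-diff off;
  # exclusive-lock is kept, at the cost of slower 'rbd du'/diff operations
  rbd -p rbd_backup create vm-118-disk-0 --size 1T --object-size 4K \
      --data-pool rbd_ec --image-feature layering --image-feature exclusive-lock

  # sanity-check the resulting layout and feature set
  rbd -p rbd_backup info vm-118-disk-0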
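
P.P.S. On the ZFS side, this is roughly what I have in mind (also untested; the device, pool, host and snapshot names below are placeholders):

  # inside the backup FreeNAS VM, where the RBD image is attached as a virtual
  # disk (e.g. /dev/da1 -- whatever the disk actually shows up as); keep ashift
  # at 12 (4K blocks) so the <4KB files don't waste space
  sysctl vfs.zfs.min_auto_ashift=12
  zpool create backuppool /dev/da1
  zfs set compression=gzip-9 backuppool

  # on the source FreeNAS, send the pool recursively to the backup box
  # ('tank' and 'backup-freenas' are placeholder names)
  zfs snapshot -r tank@repl-2020-01-09
  zfs send -R tank@repl-2020-01-09 | ssh backup-freenas zfs receive -Fdu backuppool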