Re: Size of RBD images

Hi Nicolas,

just FYI: RBD format 2 is not yet supported by the Linux kernel rbd module;
it can only be used as a target for virtual machines via librbd.
See: man rbd --> --image-format
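
As a rough illustration (the image name below is just a placeholder), a format 1
image should be mappable with the kernel client:

$ rbd create --image-format 1 --size 32768 kernel_img   # size is in MB, so 32 GB
$ rbd map kernel_img
$ rbd showmapped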

Shrinking time: the same happened to me. An rbd (v1) device
took about a week to shrink from 1 PB to 10 TB.
The good news: I already had about 5 TB of data on it
and ongoing processes using the device, and there was
neither any data loss nor a significant performance issue.
(3 mons + 4 machines, each with a different number of OSDs.)
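
For reference, shrinking is done with "rbd resize"; the target size below (10 TB,
expressed in MB) and the image name are just examples, and newer rbd releases
additionally require --allow-shrink:

$ rbd resize --size 10485760 big_img
# on newer releases: rbd resize --size 10485760 --allow-shrink big_img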

Bernhard

EDIT: sorry about the "No such file" error

Now, it seems this is a separate issue: the system I was using was
apparently unable to map format 2 images to devices. I will investigate
that further before mentioning it again.

I would still appreciate answers about the 1PB image and the time to
shrink it.

Best regards,

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)


On 11/19/2013 03:20 PM, nicolasc wrote:
Hi every one,

In the course of playing with RBD, I noticed a few things:

* RBD images are so thin-provisioned that you can create arbitrarily
large ones.
On my freshly installed, empty 200 TB cluster running 0.72.1, I was able
to create a 1 PB image:

$ rbd create --image-format 2 --size 1073741824 test_img

This command is successful, and I can check the image status:

$ rbd info test_img
rbd image 'test_img':
size 1024 TB in 268435456 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.19f76b8b4567
format: 2
features: layering
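
For comparison, the raw cluster capacity can be checked right next to the image
size (a quick sketch; the ceph df output columns vary by release):

$ ceph df            # GLOBAL section: raw SIZE / AVAIL / RAW USED
$ rbd info test_img | grep size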

* Such an oversized image seems impossible to map on my 3.2.46 kernel,
and the error message is not very explicit:

$ rbd map test_img
rbd: add failed: (2) No such file or directory

There is no error or explanation to be seen anywhere in the logs:
dmesg reports the connection to the cluster through RBD as usual,
and that's it.
Using the exact same commands with an image size of 32 GB successfully
maps the device.
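
One way to separate a size problem from a format problem (purely illustrative;
the image name is a placeholder) would be to try the same size as a format 1 image:

$ rbd create --image-format 1 --size 1073741824 test_img_v1   # same 1 PB, format 1
$ rbd map test_img_v1
$ dmesg | tail        # see whether the kernel logs anything this time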

* Such an oversized image takes an awfully long time to shrink or remove,
even though it has just been created and is empty.
In RADOS, I only see the corresponding rbd_id and rbd_header objects, and
no data objects at all.
Still, removing the 1 PB image takes roughly 8 hours.
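
To confirm the image really is empty, the data objects can be listed directly in
RADOS (a sketch; the prefix comes from the rbd info output above):

$ rados -p rbd ls | grep '^rbd_data.19f76b8b4567' | wc -l   # expect 0 for an empty image
$ time rbd rm test_img    # removal seemingly still walks every possible object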

Cluster config:
3 mons, 8 nodes * 72 OSDs, about 4800 PGs (2400 PGs in pool "rbd");
cluster and public networks are 10 GbE; each node has 8 cores and 64 GB of memory.

So, my questions:
- why is it possible to create an image five times the size of the
cluster without any warning?
- where could this "No such file" error come from?
- why does it take so long to shrink or delete a
large but empty, thin-provisioned image?

I know that 1 PB is oversized ("No such file" when trying to map) and
32 GB is not, so I am currently looking for the oversize threshold.
More info coming soon.
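
A purely illustrative probe loop, assuming the limit really is size-related
(probe_img and the device path are placeholders):

$ low=32768; high=1073741824        # MB: 32 GB known to map, 1 PB known to fail
$ while [ $((high - low)) -gt 1024 ]; do
>   mid=$(( (low + high) / 2 ))
>   rbd create --image-format 2 --size "$mid" probe_img
>   if rbd map probe_img 2>/dev/null; then
>     rbd unmap /dev/rbd/rbd/probe_img   # path assumes the default udev rules
>     low=$mid
>   else
>     high=$mid
>   fi
>   rbd rm probe_img
> done
$ echo "threshold is between ${low} and ${high} MB"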

Best regards,

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)




--

Ecologic Institute
Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax: +49 (30) 86880 100
Skype: bernhard.glomm.ecologic
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
