Rados maximum object size issue since Luminous?

Hi!

 

Having had to interrupt my bluestore test, I have another issue since upgrading from Jewel to Luminous: my backup system (Bareos with the RadosFile backend) can no longer write volumes (objects) larger than around 128MB.

(Of course, I did not test that on my test cluster prior to upgrading the production one :/ )

 

At first, I suspected an incompatibility between the Bareos storage daemon and the newer Ceph version, but I could replicate it with the rados tool:

 

Create a large file (1GB)
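For example (any 1GB file will do; I created mine roughly like this):

dd if=/dev/urandom of=rados-testfile-1G bs=4M count=256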

 

Put it with rados

 

rados --pool backup put rados-testfile rados-testfile-1G

error putting backup-fra1/rados-testfile: (27) File too large

 

Read it back:

 

rados  --pool backup get rados-testfile rados-testfile-readback
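The size of the object in the pool can also be checked directly, e.g.:

rados --pool backup stat rados-testfile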

 

Indeed, it wrote just about 128MB:

-rw-r--r-- 1 root root 1073741824  3. Jul 18:47 rados-testfile-1G

-rw-r--r-- 1 root root  134217728  3. Jul 19:12 rados-testfile-readback

Adding the "--striper" option to both the put and get command lines, it works.
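The striper commands were along these lines:

rados --pool backup --striper put rados-testfile rados-testfile-1G

rados --pool backup --striper get rados-testfile rados-testfile-readback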

 

The error message I get from the backup system looks similar:

block.c:659-29028 === Write error. fd=0 size=64512 rtn=-1 dev_blk=134185235 blk_blk=10401 errno=28: ERR=Auf dem Gerät ist kein Speicherplatz mehr verfügbar

 

(German for "No space left on device")

 

The service worked fine with Ceph Jewel, happily writing 50GB objects. Did the API change somehow?

 

Thanks,

 

Martin

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
