RadosGW performance and disk space usage

Hi!

I have successfully prototyped read/write access to ceph from Windows
using the S3 API, thanks so much for the help.

Now I would like to do some prototypes targeting performance
evaluation. My scenario typically requires parallel storage of data
from tens of thousands of loggers, but scalability to hundreds of
thousands is the main reason for investigating ceph.

My tests on a single laptop running Ceph with 2 local OSDs and a
local radosgw allow writing, on average, 2.5 small objects per second
(100 objects in 40 seconds); the test loop is sketched below. Is this
the expected performance? It seems to be I/O bound, because the HDD
LED stays on during the PutObject requests. Any suggestions or
documentation pointers for profiling would be much appreciated.
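
For reference, the test loop looks roughly like this (a minimal
sketch using boto3; the endpoint URL, credentials, and bucket name
are placeholders, and the bucket is assumed to already exist):

    import time
    import boto3

    # Time 100 small PutObject requests against a local radosgw.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:7480",   # placeholder endpoint
        aws_access_key_id="ACCESS_KEY",         # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

    payload = b"x" * 1024  # one small (1 KiB) log record
    start = time.time()
    for i in range(100):
        s3.put_object(Bucket="loggers",
                      Key="logger-0/record-%05d" % i,
                      Body=payload)
    elapsed = time.time() - start
    print("100 objects in %.1f s (%.1f obj/s)" % (elapsed, 100 / elapsed))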

I am afraid the S3 API is not a good fit for my scenario, because
there is no way to append data to an existing object (so I cannot
model one object per data collector; the best I could do is the
read-modify-write emulation sketched below). If that is the case, I
would need to store billions of small objects, and I would like to
know how much disk space each object instance requires beyond its
content length.
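
The only way I see to emulate append over S3 is read-modify-write,
which clearly does not scale: every append re-reads and re-uploads
the whole object, and concurrent writers can lose updates. A sketch,
using the same placeholder client as above:

    def s3_append(s3, bucket, key, data):
        """Emulate append over S3 with GET + concatenate + PUT.
        Cost grows with object size on every call, and there is
        no atomicity across concurrent writers."""
        try:
            old = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except s3.exceptions.NoSuchKey:
            old = b""
        s3.put_object(Bucket=bucket, Key=key, Body=old + data)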

If the S3 API is not well suited to my scenario, my effort would be
better directed toward porting or writing a native Ceph client for
Windows. I just need an API to read and write/append blocks to files;
a sketch of what I have in mind follows. Any comments are really
appreciated.
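
For instance, something along these lines via the librados Python
bindings would be ideal, if the client side can be made to work from
Windows (a sketch; the conf path and pool name are placeholders, and
the pool is assumed to exist):

    import rados

    # Append records to one RADOS object per logger, bypassing
    # the S3 layer (and its no-append limitation) entirely.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("loggers")
        try:
            ioctx.append("logger-0", b"one more log record\n")
            data = ioctx.read("logger-0", length=8192, offset=0)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()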

Thank you very much for your attention!

Best regards
Mello

