Re: Workload that deletes 100M objects daily via lifecycle

>>>> [...] S3 workload, that will need to delete 100M file
>>>> daily [...]

>> [...] average (what about peaks?) around 1,200 committed
>> deletions per second (across the traditional 3 metadata
>> OSDs) sustained, that may not leave a lot of time for file
>> creation, writing or reading. :-) [...]

>>> [...] So many people seem to think that distributed (or
>>> even local) filesystems (and in particular their metadata
>>> servers) can sustain the same workload as high volume
>>> transactional DBMSes. [...]

> Index pool distributed over a large number of NVMe OSDs?
> Multiple, dedicated RGW instances that only run LC?

As long as that guarantees a total maximum network+write
latency of well below 800µs across all of them, that might
sustain a committed rate of one deletion every ~800µs
(assuming there are no peaks and the metadata server does only
deletions, with no creations, opens, or any "maintenance"
operations like checks and backups). :-)
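
For reference, a rough back-of-the-envelope sketch of where that
~800µs figure comes from (a minimal Python sketch; the 100M/day
number is from the original post, the even-spread and
serialized-commit assumptions are mine):

    # Rough per-deletion latency budget, assuming the 100M daily
    # deletions are spread evenly over the day and commits are
    # effectively serialized on the metadata/index path.
    deletions_per_day = 100_000_000
    seconds_per_day = 24 * 60 * 60               # 86,400 s

    deletions_per_second = deletions_per_day / seconds_per_day
    budget_us = 1_000_000 / deletions_per_second

    print(f"{deletions_per_second:,.0f} deletions/s")    # ~1,157/s
    print(f"{budget_us:.0f} us per committed deletion")  # ~864 us

Any peaks, replication round-trips, or concurrent creates have to
fit into that same budget, which is why the committed latency has
to stay well below 800µs.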

Sometimes I suggest, somewhat seriously, entirely RAM-based
metadata OSDs, which given a suitable environment may be
feasible. But I still wonder why "so many people seem to think
... can sustain the same workload as high volume transactional
DBMSes". :-)
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



