On Thu, Feb 4, 2016 at 5:07 PM, Stephen Lord <Steve.Lord@xxxxxxxxxxx> wrote:
>
>> On Feb 4, 2016, at 6:51 PM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
>>
>> I presume we're doing reads in order to gather some object metadata
>> from the cephfs-data pool, and the (small) newly-created objects in
>> cache-data are definitely whiteout objects indicating that the object
>> no longer exists logically.
>>
>> What kinds of reads are you actually seeing? Does it appear to be
>> transferring data, or merely doing a bunch of seeks? I thought we were
>> trying to avoid doing reads-to-delete, but perhaps the way we're
>> handling snapshots or something is invoking behavior that isn't
>> amenable to a full-FS delete.
>>
>> I presume you're trying to characterize the system's behavior, but of
>> course if you just want to empty it out entirely, you're better off
>> deleting the pools and the CephFS instance and starting over from
>> scratch.
>> -Greg
>
> I believe it is reading all the data, judging by the volume of traffic;
> the CPU load on the OSDs suggests it may even be doing more than just
> that.
>
> iostat is showing a lot of data moving, with roughly equal volumes of
> read and write activity. Because the OSDs underneath both pools are the
> same ones (not exactly optimal, I know), it is hard to tell which pool
> is responsible for which I/O. The large reads and small writes suggest
> it is reading back all the data from the objects; the write traffic is,
> I presume, all journal activity related to deleting objects and
> creating the empty ones.
>
> The 9:1 ratio between objects being deleted and objects being created
> seems odd, though.
>
> A previous run of this exercise with just a regular replicated data
> pool did not read anything back: there was just a lot of write
> activity, and eventually the content disappeared. So this is definitely
> related to the pool configuration here, and probably not to the
> filesystem layer.

Sam, does this make any sense to you in terms of how RADOS handles deletes?
-Greg
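
P.S. In case it's useful, here is a rough sketch of the "delete it all and
start over" route mentioned above. I'm assuming the pool names from this
thread (cephfs-data as the base pool with cache-data as its cache tier),
that the filesystem is named "cephfs" with a metadata pool named
"cephfs-metadata" (those two names are guesses), and that all MDS daemons
have already been stopped, so adjust to match your cluster:

  # remove the filesystem definition (the MDS daemons must be down/failed first)
  ceph fs rm cephfs --yes-i-really-mean-it

  # detach the cache tier from the base pool before deleting anything
  ceph osd tier remove-overlay cephfs-data
  ceph osd tier remove cephfs-data cache-data

  # delete the pools themselves
  ceph osd pool delete cache-data cache-data --yes-i-really-really-mean-it
  ceph osd pool delete cephfs-data cephfs-data --yes-i-really-really-mean-it
  ceph osd pool delete cephfs-metadata cephfs-metadata --yes-i-really-really-mean-it

After that you can recreate the pools, re-add the cache tier, and run
"ceph fs new" again to get a fresh filesystem without waiting for the
deletes to churn through the cache tier.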