Re: Cephfs snapshot work

Are there any existing fuzzing tools you'd recommend? I know about ceph osd thrash, which could be tested against, but what about on the client side? I could just use something pre-built for POSIX, but that wouldn't coordinate simulated failures on the storage side with actions against the filesystem. If there isn't any existing tooling for coordinating server- and client-side simulation, maybe that's where I should start.
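
Something like the sketch below is what I have in mind: a loop that takes snapshots and churns the tree on the client while marking OSDs out and back in on the server side. It's only a sketch, not existing tooling; the mount point, OSD ids, timings, and the use of fsstress as the POSIX workload are all assumptions on my part.

#!/usr/bin/env python3
"""Rough sketch (not existing tooling): overlap client-side snapshot/POSIX
activity with server-side failure injection. Mount point, OSD ids, timings,
and the fsstress workload generator are placeholders/assumptions."""

import os
import random
import subprocess
import time

MOUNT = "/mnt/cephfs/snaptest"   # assumed CephFS mount (kernel or FUSE)
OSD_IDS = [0, 1, 2]              # OSDs we are willing to disturb
ROUNDS = 10

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

for i in range(ROUNDS):
    # Client side: take a snapshot, then churn the tree underneath it.
    os.mkdir(f"{MOUNT}/.snap/fuzz-{i}")
    workload = subprocess.Popen(
        ["fsstress", "-d", MOUNT, "-n", "500", "-p", "4"])

    # Server side: mark a random OSD out while the workload runs, then
    # bring it back so recovery overlaps with client activity.
    osd = random.choice(OSD_IDS)
    ceph("osd", "out", str(osd))
    time.sleep(30)
    ceph("osd", "in", str(osd))

    workload.wait()
    # Remove an older snapshot now and then to exercise snap trimming.
    if i >= 2:
        os.rmdir(f"{MOUNT}/.snap/fuzz-{i - 2}")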

On Nov 7, 2017 5:57 AM, "John Spray" <jspray@xxxxxxxxxx> wrote:
On Sun, Nov 5, 2017 at 4:19 PM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
> My organization has a production cluster primarily used for cephfs, upgraded
> from jewel to luminous. We would very much like to have snapshots on that
> filesystem, but understand that there are risks.
>
> What kind of work could cephfs admins do to help the devs stabilize this
> feature?

If you have a disposable test system, then you could install the
latest master branch of Ceph (which has a stream of snapshot fixes in
it) and run a replica of your intended workload.  If you can find
snapshot bugs (especially crashes) on master then they will certainly
attract interest.

John
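
For anyone following along, here is roughly how I'd exercise snapshots on such a disposable system. It's a minimal sketch: the filesystem name "cephfs", the mount point, and the flag handling are my assumptions (some releases also want a --yes-i-really-mean-it confirmation on the enable step).

#!/usr/bin/env python3
"""Sketch of preparing a throwaway filesystem for snapshot testing.
Filesystem name, mount point, and flag handling are assumptions; some
releases may also require a --yes-i-really-mean-it confirmation."""

import os
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Snapshots are disabled by default on luminous-era clusters; turn them on
# for the disposable test filesystem only.
ceph("fs", "set", "cephfs", "allow_new_snaps", "true")

# Snapshots are then created (and removed) through the .snap
# pseudo-directory on any directory of the mounted filesystem.
os.mkdir("/mnt/cephfs/.snap/baseline")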

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
