Re: Cephfs snapshot work

On Tue, Nov 7, 2017 at 2:40 PM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
> Are there any existing fuzzing tools you'd recommend? I know about "ceph
> osd thrash", which could be tested against, but what about on the client
> side? I could just use something pre-built for POSIX, but that wouldn't
> coordinate simulated failures on the storage side with actions against
> the fs. If there isn't any existing tooling for coordinating server- and
> client-side simulation, maybe that's where I start.

We do have "thrasher" classes for randomly failing MDS daemons in the
main test suite, but those will only be useful to you if you're
working on automated tests that run inside that framework
(https://github.com/ceph/ceph/blob/master/qa/tasks/mds_thrash.py).

If you're working on a local cluster, then it's pretty simple (and
useful) to write a small shell script that, for example, sleeps a few
minutes and then runs "ceph mds fail" on a random rank.
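
A minimal sketch (untested; assumes a single filesystem, and that
MAX_RANK is set to your max_mds minus one):

    #!/usr/bin/env bash
    # Periodically fail a random MDS rank to exercise failover/recovery.
    MAX_RANK=1                              # highest active rank, i.e. max_mds - 1
    while true; do
        sleep $(( (RANDOM % 5 + 1) * 60 ))  # wait 1-5 minutes
        rank=$(( RANDOM % (MAX_RANK + 1) )) # pick a rank at random
        echo "$(date) failing mds rank ${rank}"
        ceph mds fail "${rank}"
    done

Watching "ceph status" while it runs will show the standbys taking over.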

Something more sophisticated that also covers client failures is a
long-standing wishlist item for the automated testing.

John

>
> On Nov 7, 2017 5:57 AM, "John Spray" <jspray@xxxxxxxxxx> wrote:
>>
>> On Sun, Nov 5, 2017 at 4:19 PM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
>> > My organization has a production cluster, primarily used for CephFS,
>> > that was upgraded from Jewel to Luminous. We would very much like to
>> > have snapshots on that filesystem, but understand that there are
>> > risks.
>> >
>> > What kind of work could CephFS admins do to help the devs stabilize
>> > this feature?
>>
>> If you have a disposable test system, then you could install the
>> latest master branch of Ceph (which has a stream of snapshot fixes in
>> it) and run a replica of your intended workload.  If you can find
>> snapshot bugs (especially crashes) on master, they will certainly
>> attract interest.
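>>
>> As a rough sketch (the mount point and iteration counts below are
>> placeholders, and on Luminous you may first need to enable snapshots
>> on the filesystem, e.g. via its allow_new_snaps flag), a
>> snapshot-heavy loop running alongside your workload could look like:
>>
>>     #!/usr/bin/env bash
>>     # Exercise CephFS snapshots: write, snapshot, mutate, verify, clean up.
>>     DIR=/mnt/cephfs/snaptest            # adjust to your CephFS mount
>>     mkdir -p "$DIR"
>>     for i in $(seq 1 100); do
>>         dd if=/dev/urandom of="$DIR/file$i" bs=1M count=4 2>/dev/null
>>         mkdir "$DIR/.snap/snap$i"       # take a snapshot
>>         rm -f "$DIR/file$i"             # mutate after snapshotting
>>         ls "$DIR/.snap/snap$i/file$i" >/dev/null || echo "snap$i lost data"
>>     done
>>     for i in $(seq 1 100); do
>>         rmdir "$DIR/.snap/snap$i"       # snapshot removal has been a bug source
>>     done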
>>
>> John
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


