Re: Crash and strange things on MDS

On Wed, Feb 27, 2013 at 5:58 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> On Tue, Feb 26, 2013 at 1:57 PM, Kevin Decherf <kevin@xxxxxxxxxxxx> wrote:
>> On Tue, Feb 26, 2013 at 12:26:17PM -0800, Gregory Farnum wrote:
>>> On Tue, Feb 26, 2013 at 11:58 AM, Kevin Decherf <kevin@xxxxxxxxxxxx> wrote:
>>> > We have one folder per application (php, java, ruby). Every application has
>>> > small (<1M) files. The folder is mounted by only one client by default.
>>> >
>>> > In case of overload, another clients spawn to mount the same folder and
>>> > access the same files.
>>> >
>>> > In the following test, only one client was used to serve the
>>> > application (a website using wordpress).
>>> >
>>> > I made the test with strace to see the time of each IO request (strace -T
>>> > -e trace=file) and I noticed the same pattern:
>>> >
>>> > ...
>>> > [pid  4378] stat("/data/wp-includes/user.php", {st_mode=S_IFREG|0750, st_size=28622, ...}) = 0 <0.033409>
>>> > [pid  4378] lstat("/data/wp-includes/user.php", {st_mode=S_IFREG|0750, st_size=28622, ...}) = 0 <0.081642>
>>> > [pid  4378] open("/data/wp-includes/user.php", O_RDONLY) = 5 <0.041138>
>>> > [pid  4378] stat("/data/wp-includes/meta.php", {st_mode=S_IFREG|0750, st_size=10896, ...}) = 0 <0.082303>
>>> > [pid  4378] lstat("/data/wp-includes/meta.php", {st_mode=S_IFREG|0750, st_size=10896, ...}) = 0 <0.004090>
>>> > [pid  4378] open("/data/wp-includes/meta.php", O_RDONLY) = 5 <0.081929>
>>> > ...
>>> >
>>> > ~250 files were accessed for a single request (thanks, WordPress).
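As an aside, per-syscall latencies like the ones quoted above can be aggregated quickly with a small script (a minimal sketch; it assumes `strace -T` output in exactly the line format shown in the trace):

```python
import re
from collections import defaultdict

# Match strace -T lines such as:
#   [pid  4378] stat("/data/wp-includes/user.php", {...}) = 0 <0.033409>
# capturing the syscall name and the elapsed time in the trailing <...>.
LINE_RE = re.compile(r'\]\s+(\w+)\(.*<([\d.]+)>$')

def summarize(lines):
    """Return {syscall: (call_count, total_seconds)} for a strace -T log."""
    totals = defaultdict(lambda: [0, 0.0])
    for line in lines:
        m = LINE_RE.search(line.strip())
        if m:
            name, secs = m.group(1), float(m.group(2))
            totals[name][0] += 1
            totals[name][1] += secs
    return {name: (count, total) for name, (count, total) in totals.items()}

trace = [
    '[pid  4378] stat("/data/wp-includes/user.php", {...}) = 0 <0.033409>',
    '[pid  4378] lstat("/data/wp-includes/user.php", {...}) = 0 <0.081642>',
    '[pid  4378] open("/data/wp-includes/user.php", O_RDONLY) = 5 <0.041138>',
]
for name, (count, total) in sorted(summarize(trace).items()):
    print(f"{name}: {count} calls, {total:.6f}s total")
```

(strace's own `-c` flag prints a similar per-syscall summary; the script above is only useful when you also want to keep the per-file timings.)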
>>>
>>> Okay, that is slower than I'd expect, even for an across-the-wire request...
>>>
>>> > The fs is mounted with these options: rw,noatime,name=<hidden>,secret=<hidden>,nodcache.
>>>
>>> What kernel and why are you using nodcache?
>>
>> We use kernel 3.7.0. nodcache is enabled by default (we only specify user
>> and secretfile as mount options), and it isn't mentioned in the mount.ceph
>> documentation.
>>
>>> Did you have problems
>>> without that mount option? That's forcing an MDS access for most
>>> operations, rather than using local data.
>>
>> Good question, I will try it (-o dcache?).
>
> Oh right — I forgot Sage had enabled that by default; I don't recall
> how necessary it is. (Sage?)
>
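For reference, an explicit remount with the dentry cache enabled might look like this (a sketch only; the monitor address, mount point, client name, and secret file path are placeholders):

```shell
# Mount CephFS with -o dcache to override the default nodcache behaviour,
# so lstat/open on recently listed directory entries can be served from the
# local dentry cache instead of forcing a round trip to the MDS.
# All addresses and paths below are placeholders.
sudo mount -t ceph 192.168.0.1:6789:/ /mnt/ceph \
    -o name=admin,secretfile=/etc/ceph/admin.secret,dcache
```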

That code is buggy: see ceph_dir_test_complete(), which always returns false.

Yan, Zheng


>>> > I have a debug (debug_mds=20) log of the active mds during this test if you want.
>>>
>>> Yeah, can you post it somewhere?
>>
>> Upload in progress :-)
>
> Looking forward to it. ;)
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

