[XFS updates] XFS development tree branch, master, updated. for-linus-v3.8-rc1-13587-g56431cd

This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "XFS development tree".

The branch, master has been updated
  d69043c xfs: drop buffer io reference when a bad bio is built
  3daed8b xfs: fix broken error handling in xfs_vm_writepage
  42e2976 xfs: fix attr tree double split corruption
  6ce377a xfs: fix reading of wrapped log data
  03b1293 xfs: fix buffer shutdown reference count mismatch
  4b62acf xfs: don't vmap inode cluster buffers during free
  ca250b1 xfs: invalidate allocbt blocks moved to the free list
  1e7acbb xfs: silence uninitialised f.file warning.
  eaef854 xfs: growfs: don't read garbage for new secondary superblocks
  1f3c785 xfs: move allocation stack switch up to xfs_bmapi_allocate
  326c035 xfs: introduce XFS_BMAPI_STACK_SWITCH
  408cc4e xfs: zero allocation_args on the kernel stack
  7e9620f xfs: only update the last_sync_lsn when a transaction completes
      from  ec47eb6b0b450a4e82340b6de674104de3f0dc0a (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.

- Log -----------------------------------------------------------------
commit d69043c42d8c6414fa28ad18d99973aa6c1c2e24
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Mon Nov 12 22:09:46 2012 +1100

    xfs: drop buffer io reference when a bad bio is built
    
    Error handling in xfs_buf_ioapply_map() does not handle IO reference
    counts correctly. We increment the b_io_remaining count before
    building the bio, but then fail to decrement it in the failure case.
    This leads to the buffer never running IO completion and releasing
    the reference that the IO holds, so at unmount we can leak the
    buffer. This leak is captured by this assert failure during unmount:
    
    XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 273
    
    This is not a new bug - the b_io_remaining accounting has had this
    problem for a long, long time - it's just very hard to get a
    zero length bio being built by this code...
    
    Further, the buffer IO error can be overwritten on a multi-segment
    buffer by subsequent bio completions for partial sections of the
    buffer. Hence we should only set the buffer error status if the
    buffer is not already carrying an error status. This ensures that a
    partial IO error on a multi-segment buffer will not be lost. This
    part of the problem is a regression, however.
    
    cc: <stable@xxxxxxxxxxxxxxx>
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Mark Tinguely <tinguely@xxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>
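
The reference-counting pattern described above can be sketched as a toy C model (the struct, field, and function names here are illustrative stand-ins, not the real XFS symbols):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the fixed xfs_buf_ioapply_map() error path. */
struct sbuf {
	int b_io_remaining;	/* outstanding I/O reference count */
	int b_error;		/* first error seen wins */
};

/* Stand-in bio builder: fails for a zero-length map, as described. */
static bool build_bio(int map_len)
{
	return map_len > 0;
}

static void ioapply_map(struct sbuf *bp, int map_len)
{
	bp->b_io_remaining++;		/* reference for the expected I/O */
	if (!build_bio(map_len)) {
		bp->b_io_remaining--;	/* the fix: drop it on failure */
		if (!bp->b_error)	/* don't overwrite a prior error */
			bp->b_error = -5;	/* -EIO */
	}
}

static int remaining_after(int map_len)
{
	struct sbuf b = { 0, 0 };

	ioapply_map(&b, map_len);
	return b.b_io_remaining;
}
```

With the decrement in place, a failed bio build leaves the count balanced, so unmount no longer sees a leaked reference.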

commit 3daed8bc3e49b9695ae931b9f472b5b90d1965b3
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Mon Nov 12 22:09:45 2012 +1100

    xfs: fix broken error handling in xfs_vm_writepage
    
    When we shut down the filesystem, it might first be detected in
    writeback when we are allocating an inode size transaction. This
    happens after we have moved all the pages into the writeback state
    and unlocked them. Unfortunately, if we fail to set up the
    transaction we then abort writeback and try to invalidate the
    current page. This then triggers a BUG() in block_invalidatepage()
    because we are trying to invalidate an unlocked page.
    
    Fixing this is a bit of a chicken and egg problem - we can't
    allocate the transaction until we've clustered all the pages into
    the IO and we know the size of it (i.e. whether the last block of
    the IO is beyond the current EOF or not). However, we don't want to
    hold pages locked for long periods of time, especially while we lock
    other pages to cluster them into the write.
    
    To fix this, we need to make a clear delineation in writeback where
    errors can only be handled by IO completion processing. That is,
    once we have marked a page for writeback and unlocked it, we have to
    report errors via IO completion because we've already started the
    IO. We may not have submitted any IO, but we've changed the page
    state to indicate that it is under IO so we must now use the IO
    completion path to report errors.
    
    To do this, add an error field to xfs_submit_ioend() to pass it the
    error that occurred during the building on the ioend chain. When
    this is non-zero, mark each ioend with the error and call
    xfs_finish_ioend() directly rather than building bios. This will
    immediately push the ioends through completion processing with the
    error that has occurred.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Mark Tinguely <tinguely@xxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>
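
The "errors go through IO completion" rule above can be sketched as a toy C model (the `ioend` chain and function names are illustrative, not the actual XFS code):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the reworked error path: once pages are marked for
 * writeback, a build error is stamped on every ioend in the chain and
 * completion processing is run directly instead of building bios. */
struct ioend {
	int io_error;
	int completed;
	struct ioend *next;
};

static void finish_ioend(struct ioend *io)
{
	io->completed = 1;	/* completion processing reports io_error */
}

static void submit_ioend(struct ioend *chain, int error)
{
	struct ioend *io;

	for (io = chain; io != NULL; io = io->next) {
		if (error) {
			io->io_error = error;	/* stamp the build error */
			finish_ioend(io);	/* complete immediately */
		}
		/* else: build and submit bios as usual (omitted) */
	}
}

static int chain_error_after(int error)
{
	struct ioend second = { 0, 0, NULL };
	struct ioend first = { 0, 0, &second };

	submit_ioend(&first, error);
	return first.io_error + second.io_error;
}
```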

commit 42e2976f131d65555d5c1d6c3d47facc63577814
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Mon Nov 12 22:09:44 2012 +1100

    xfs: fix attr tree double split corruption
    
    In certain circumstances, a double split of an attribute tree is
    needed to insert or replace an attribute. In rare situations, this
    can go wrong, leaving the attribute tree corrupted. In this case,
    the attr being replaced is the last attr in a leaf node, and the
    replacement is larger so doesn't fit in the same leaf node.
    The initial condition is a node format attribute btree with two
    leaves at indexes 1 and 2. Call them L1 and L2. The leaf L1 is
    completely full; there is not a single byte of free space in it.
    L2 is mostly empty. The attribute being replaced - call it X -
    is the last attribute in L1.
    
    The way an attribute replace is executed is that the replacement
    attribute - call it Y - is first inserted into the tree, but has an
    INCOMPLETE flag set on it so that list traversals ignore it. Once
    this transaction is committed, a second transaction is run to
    atomically mark Y as COMPLETE and X as INCOMPLETE, so that a
    traversal will now find Y and skip X. Once that transaction is
    committed, attribute X is then removed.
    
    So, the initial condition is:
    
         +--------+     +--------+
         |   L1   |     |   L2   |
         | fwd: 2 |---->| fwd: 0 |
         | bwd: 0 |<----| bwd: 1 |
         | fsp: 0 |     | fsp: N |
         |--------|     |--------|
         | attr A |     | attr 1 |
         |--------|     |--------|
         | attr B |     | attr 2 |
         |--------|     |--------|
         ..........     ..........
         |--------|     |--------|
         | attr X |     | attr n |
         +--------+     +--------+
    
    So now we go to replace X, and see that L1:fsp = 0 - it is full so
    we can't insert Y in the same leaf. So we record the location of
    attribute X so we can track it for later use, then we split L1 into
    L1 and L3 and rebalance across the two leaves. We end with:
    
         +--------+     +--------+     +--------+
         |   L1   |     |   L3   |     |   L2   |
         | fwd: 3 |---->| fwd: 2 |---->| fwd: 0 |
         | bwd: 0 |<----| bwd: 1 |<----| bwd: 3 |
         | fsp: M |     | fsp: J |     | fsp: N |
         |--------|     |--------|     |--------|
         | attr A |     | attr X |     | attr 1 |
         |--------|     +--------+     |--------|
         | attr B |                    | attr 2 |
         |--------|                    |--------|
         ..........                    ..........
         |--------|                    |--------|
         | attr W |                    | attr n |
         +--------+                    +--------+
    
    And we track that the original attribute is now at L3:0.
    
    We then try to insert Y into L1 again, and find that there isn't
    enough room because the new attribute is larger than the old one.
    Hence we have to split again to make room for Y. We end up with
    this:
    
         +--------+     +--------+     +--------+     +--------+
         |   L1   |     |   L4   |     |   L3   |     |   L2   |
         | fwd: 4 |---->| fwd: 3 |---->| fwd: 2 |---->| fwd: 0 |
         | bwd: 0 |<----| bwd: 1 |<----| bwd: 4 |<----| bwd: 3 |
         | fsp: M |     | fsp: J |     | fsp: J |     | fsp: N |
         |--------|     |--------|     |--------|     |--------|
         | attr A |     | attr Y |     | attr X |     | attr 1 |
         |--------|     + INCOMP +     +--------+     |--------|
         | attr B |     +--------+                    | attr 2 |
         |--------|                                   |--------|
         ..........                                   ..........
         |--------|                                   |--------|
         | attr W |                                   | attr n |
         +--------+                                   +--------+
    
    And now we have the new (incomplete) attribute @ L4:0, and the
    original attribute at L3:0. At this point, the first transaction is
    committed, and we move to the flipping of the flags.
    
    This is where we are supposed to end up with this:
    
         +--------+     +--------+     +--------+     +--------+
         |   L1   |     |   L4   |     |   L3   |     |   L2   |
         | fwd: 4 |---->| fwd: 3 |---->| fwd: 2 |---->| fwd: 0 |
         | bwd: 0 |<----| bwd: 1 |<----| bwd: 4 |<----| bwd: 3 |
         | fsp: M |     | fsp: J |     | fsp: J |     | fsp: N |
         |--------|     |--------|     |--------|     |--------|
         | attr A |     | attr Y |     | attr X |     | attr 1 |
         |--------|     +--------+     + INCOMP +     |--------|
         | attr B |                    +--------+     | attr 2 |
         |--------|                                   |--------|
         ..........                                   ..........
         |--------|                                   |--------|
         | attr W |                                   | attr n |
         +--------+                                   +--------+
    
    But that doesn't happen properly - the attribute tracking indexes
    are not pointing to the right locations. What we end up with is both
    the old attribute to be removed pointing at L4:0 and the new
    attribute at L4:1.  On a debug kernel, this assert fails like so:
    
    XFS: Assertion failed: args->index2 < be16_to_cpu(leaf2->hdr.count), file: fs/xfs/xfs_attr_leaf.c, line: 2725
    
    because the new attribute location does not exist. On a production
    kernel, this goes unnoticed and the code proceeds ahead merrily and
    removes L4 because it thinks that is the block that is no longer
    needed. This leaves the hash index node pointing to entries
    L1, L4 and L2, but only blocks L1, L3 and L2 exist. Further, the
    leaf level sibling list is L1 <-> L4 <-> L2, but L4 is now free
    space, and so everything is busted. This corruption is caused by the
    removal of the old attribute triggering a join - it joins everything
    correctly but then frees the wrong block.
    
    xfs_repair will report something like:
    
    bad sibling back pointer for block 4 in attribute fork for inode 131
    problem with attribute contents in inode 131
    would clear attr fork
    bad nblocks 8 for inode 131, would reset to 3
    bad anextents 4 for inode 131, would reset to 0
    
    The problem lies in the assignment of the old/new blocks for
    tracking purposes when the double leaf split occurs. The first split
    tries to place the new attribute inside the current leaf (i.e.
    "inleaf == true") and moves the old attribute (X) to the new block.
    This sets up the old block/index to L1:X, and newly allocated
    block to L3:0. It then moves attr X to the new block and tries to
    insert attr Y at the old index. That fails, so it splits again.
    
    With the second split, the rebalance ends up placing the new attr in
    the second new block - L4:0 - and this is where the code goes wrong.
    What it does is set both the new and old block indexes to the
    second new block. Hence it inserts attr Y at the right place (L4:0)
    but overwrites the current location of the attr to replace that is
    held in the new block index (currently L3:0). It overwrites it with
    L4:1 - the index we later assert-fail on.
    
    Hopefully this table will show this in a format that is a bit easier
    to understand:
    
    Split		old attr index		new attr index
    		vanilla	patched		vanilla	patched
    before 1st	L1:26	L1:26		N/A	N/A
    after 1st	L3:0	L3:0		L1:26	L1:26
    after 2nd	L4:0	L3:0		L4:1	L4:0
                    ^^^^			^^^^
    		wrong			wrong
    
    The fix is surprisingly simple, for all this analysis - just stop
    the rebalance in the out-of-leaf case from overwriting the new attr
    index - it's already correct for the double split case.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Mark Tinguely <tinguely@xxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>

commit 6ce377afd1755eae5c93410ca9a1121dfead7b87
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Nov 2 11:38:44 2012 +1100

    xfs: fix reading of wrapped log data
    
    Commit 4439647 ("xfs: reset buffer pointers before freeing them") in
    3.0-rc1 introduced a regression when recovering log buffers that
    wrapped around the end of log. The second part of the log buffer at
    the start of the physical log was being read into the header buffer
    rather than the data buffer, and hence recovery was seeing garbage
    in the data buffer when it got to the region of the log buffer that
    was incorrectly read.
    
    Cc: <stable@xxxxxxxxxxxxxxx> # 3.0.x, 3.2.x, 3.4.x, 3.6.x
    Reported-by: Torsten Kaiser <just.for.lkml@xxxxxxxxxxxxxx>
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Mark Tinguely <tinguely@xxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>
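
The wrap-around read described above can be sketched as a toy C model: the tail piece of a wrapped region must land in the same destination buffer, appended after the first piece. Names and sizes here are illustrative, not the log-recovery code itself:

```c
#include <assert.h>
#include <string.h>

#define LOG_SIZE 8	/* toy physical log size */

/* Read len bytes starting at start, wrapping past the physical end of
 * the log back to offset 0. The wrapped part is copied into the SAME
 * destination buffer at the right offset - not into a different
 * (e.g. header) buffer, which was the regression. */
static void read_wrapped(const char *log, int start, int len, char *dst)
{
	int first = LOG_SIZE - start;

	if (len <= first) {
		memcpy(dst, log + start, len);
		return;
	}
	memcpy(dst, log + start, first);	/* up to the physical end */
	memcpy(dst + first, log, len - first);	/* wrapped part appended */
}

static int wrapped_read_ok(void)
{
	char out[5] = { 0 };

	read_wrapped("ABCDEFGH", 6, 4, out);	/* wraps after "GH" */
	return strcmp(out, "GHAB") == 0;
}
```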

commit 03b1293edad462ad1ad62bcc5160c76758e450d5
Author: Dave Chinner <david@xxxxxxxxxxxxx>
Date:   Fri Nov 2 14:23:12 2012 +1100

    xfs: fix buffer shutdown reference count mismatch
    
    When we shut down the filesystem, we have to unpin and free all the
    buffers currently active in the CIL. To do this we unpin and remove
    them in one operation as a result of a failed iclogbuf write. For
    buffers, we do this removal via a simulated IO completion after
    marking the buffer stale.
    
    At the time we do this, we have two references to the buffer - the
    active LRU reference and the buf log item.  The LRU reference is
    removed by marking the buffer stale, and the active CIL reference
    is removed by the xfs_buf_iodone() callback that is run by
    xfs_buf_do_callbacks() during ioend processing (via the bp->b_iodone
    callback).
    
    However, ioend processing requires one more reference - that of the
    IO that it is completing. We don't have this reference, so we free
    the buffer prematurely and use it after it is freed. For buffers
    marked with XBF_ASYNC, this leads to assert failures in
    xfs_buf_rele() on debug kernels because the b_hold count is zero.
    
    Fix this by making sure we take the necessary IO reference before
    starting IO completion processing on the stale buffer, and set the
    XBF_ASYNC flag to ensure that IO completion processing removes all
    the active references from the buffer to ensure it is fully torn
    down.
    
    Cc: <stable@xxxxxxxxxxxxxxx>
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Mark Tinguely <tinguely@xxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>
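
The reference accounting above can be reduced to a toy C model: completion drops three references, so a third (the I/O's own) must be taken before running completion on the stale buffer. All names are illustrative stand-ins:

```c
#include <assert.h>

/* Toy refcount model of the fix. */
struct held_buf {
	int b_hold;
};

/* Completion drops the stale LRU reference, the buf log item
 * reference, and the I/O's own reference. */
static void stale_ioend(struct held_buf *bp)
{
	bp->b_hold -= 3;
}

static int holds_after_shutdown(int take_io_ref)
{
	struct held_buf b = { 2 };	/* LRU + log item references */

	if (take_io_ref)
		b.b_hold++;		/* the fix: take the I/O ref first */
	stale_ioend(&b);
	return b.b_hold;		/* negative means a premature free */
}
```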

commit 4b62acfe99e158fb7812982d1cf90a075710a92c
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Nov 2 11:38:42 2012 +1100

    xfs: don't vmap inode cluster buffers during free
    
    Inode buffers do not need to be mapped as inodes are read or written
    directly from/to the pages underlying the buffer. This fixes a
    regression introduced by commit 611c994 ("xfs: make XBF_MAPPED the
    default behaviour").
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Mark Tinguely <tinguely@xxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>

commit ca250b1b3d711936d7dae9e97871f2261347f82d
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Nov 2 11:38:41 2012 +1100

    xfs: invalidate allocbt blocks moved to the free list
    
    When we free a block from the alloc btree tree, we move it to the
    freelist held in the AGFL and mark it busy in the busy extent tree.
    This typically happens when we merge btree blocks.
    
    Once the transaction is committed and checkpointed, the block can
    remain on the free list for an indefinite amount of time.  Now, this
    isn't the end of the world at this point - if the free list is
    shortened, the buffer is invalidated in the transaction that moves
    it back to free space. If the buffer is allocated as metadata from
    the free list, then all the modifications get logged, and we have
    no issues, either. And if it gets allocated as userdata direct from
    the freelist, it gets invalidated and so will never get written.
    
    However, during the time it sits on the free list, pressure on the
    log can cause the AIL to be pushed and the buffer that covers the
    block gets pushed for write. IOWs, we end up writing a freed
    metadata block to disk. Again, this isn't the end of the world
    because we know from the above we are only writing to free space.
    
    The problem, however, is for validation callbacks. If the block was
    an old btree root block, then the level of the block is going to be
    higher than the current tree root, and so it will fail validation.
    There may be other inconsistencies in the block as well, and
    currently we don't care because the block is in free space. Shutting
    down the filesystem because a freed block doesn't pass write
    validation, OTOH, is rather unfriendly.
    
    So, make sure we always invalidate buffers as they move from the
    free space trees to the free list so that we guarantee they never
    get written to disk while on the free list.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Phil White <pwhite@xxxxxxx>
    Reviewed-by: Mark Tinguely <tinguely@xxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>

commit 1e7acbb7bc1ae7c1c62fd1310b3176a820225056
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Thu Oct 25 17:22:30 2012 +1100

    xfs: silence uninitialised f.file warning.
    
    Fix an uninitialised variable build warning introduced by 2903ff0
    ("switch simple cases of fget_light to fdget"). gcc is not smart
    enough to work out that the variable is not used uninitialised, and
    the commit removed the initialisation at declaration that the old
    variable had.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Mark Tinguely <tinguely@xxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>
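
The warning pattern, and the fix of initialising at declaration, can be sketched like this (the struct here is a stand-in, not the kernel's real `struct fd`):

```c
#include <assert.h>
#include <stddef.h>

/* gcc cannot always prove that f.file is only read on the path that
 * set it; initialising at declaration silences the warning without
 * changing behaviour. */
struct fd_sketch {
	void *file;
};

static int some_file;	/* stand-in for a real struct file */

static struct fd_sketch fdget_sketch(int ok)
{
	struct fd_sketch f = { NULL };	/* initialised at declaration */

	if (ok)
		f.file = &some_file;
	return f;
}

static int file_is_set(int ok)
{
	return fdget_sketch(ok).file != NULL;
}
```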

commit eaef854335ce09956e930fe4a193327417edc6c9
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Tue Oct 9 14:50:52 2012 +1100

    xfs: growfs: don't read garbage for new secondary superblocks
    
    When updating new secondary superblocks in a growfs operation, the
    superblock buffer is read from the newly grown region of the
    underlying device. This is not guaranteed to be zero, so violates
    the underlying assumption that the unused parts of superblocks are
    zero filled. Get a new buffer for these secondary superblocks to
    ensure that the unused regions are zero filled correctly.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Carlos Maiolino <cmaiolino@xxxxxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>
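
The idea of the fix - take a freshly zeroed buffer rather than reading uninitialised device contents - can be sketched as a toy C model (names and sizes are illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define SB_SIZE 512	/* toy superblock buffer size */

/* Rather than reading the new secondary superblock location from the
 * just-grown (uninitialised) region of the device, take a zeroed
 * buffer and set only the fields we mean to write. */
static unsigned char *get_zeroed_sb_buf(void)
{
	return calloc(1, SB_SIZE);	/* unused regions guaranteed zero */
}

static int unused_region_is_zero(void)
{
	unsigned char *sb = get_zeroed_sb_buf();
	int i, zeroed = 1;

	memcpy(sb, "XFSB", 4);		/* set only the magic number */
	for (i = 4; i < SB_SIZE; i++)
		if (sb[i])
			zeroed = 0;
	free(sb);
	return zeroed;
}
```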

commit 1f3c785c3adb7d2b109ec7c8f10081d1294b03d3
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Oct 5 11:06:59 2012 +1000

    xfs: move allocation stack switch up to xfs_bmapi_allocate
    
    Switching stacks at xfs_alloc_vextent can cause deadlocks when we
    run out of worker threads on the allocation workqueue. This can
    occur because xfs_bmap_btalloc can make multiple calls to
    xfs_alloc_vextent() and even if xfs_alloc_vextent() fails it can
    return with the AGF locked in the current allocation transaction.
    
    If we then need to make another allocation, and all the allocation
    worker contexts are exhausted because they are blocked waiting for
    the AGF lock, the holder of the AGF cannot get its xfs_alloc_vextent
    work completed to release the AGF.  Hence allocation effectively
    deadlocks.
    
    To avoid this, move the stack switch one layer up to
    xfs_bmapi_allocate() so that all of the allocation attempts in a
    single switched stack transaction occur in a single worker context.
    This avoids the problem of an allocation being blocked waiting for
    a worker thread whilst holding the AGF.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Mark Tinguely <tinguely@xxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>

commit 326c03555b914ff153ba5b40df87fd6e28e7e367
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Fri Oct 5 11:06:58 2012 +1000

    xfs: introduce XFS_BMAPI_STACK_SWITCH
    
    Certain allocation paths through xfs_bmapi_write() are in situations
    where we have limited stack available. These are almost always in
    the buffered IO writeback path when converting delayed allocation
    extents to real extents.
    
    The current stack switch occurs for userdata allocations, which
    means we also do stack switches for preallocation, direct IO and
    unwritten extent conversion, even though these call chains have never
    been implicated in a stack overrun.
    
    Hence, let's target just the single stack overrun offender for stack
    switches. To do that, introduce an XFS_BMAPI_STACK_SWITCH flag that
    the caller can pass xfs_bmapi_write() to indicate it should switch
    stacks if it needs to do allocation.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Mark Tinguely <tinguely@xxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>

commit 408cc4e97a3ccd172d2d676e4b585badf439271b
Author: Mark Tinguely <tinguely@xxxxxxx>
Date:   Thu Sep 20 13:16:45 2012 -0500

    xfs: zero allocation_args on the kernel stack
    
    Zero the kernel stack space that makes up the xfs_alloc_arg structures.
    
    Signed-off-by: Mark Tinguely <tinguely@xxxxxxx>
    Reviewed-by: Ben Myers <bpm@xxxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>
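
A toy version of the pattern: zeroing the whole on-stack argument struct before use means any field the caller never sets is a well-defined zero rather than stack garbage. Field names here are illustrative, not the real xfs_alloc_arg layout:

```c
#include <assert.h>
#include <string.h>

struct alloc_arg_sketch {
	long fsbno;
	long minlen;
	long maxlen;
	int userdata;
};

static long requested_span(long maxlen)
{
	struct alloc_arg_sketch args;

	memset(&args, 0, sizeof(args));	/* the fix: zero the whole struct */
	args.maxlen = maxlen;
	/* minlen was never set explicitly, but is safely zero */
	return args.minlen + args.maxlen;
}
```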

commit 7e9620f21d8c9e389fd6845487e07d5df898a2e4
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date:   Mon Oct 8 21:56:12 2012 +1100

    xfs: only update the last_sync_lsn when a transaction completes
    
    The log write code stamps each iclog with the current tail LSN in
    the iclog header so that recovery knows where to find the tail of
    the log once it has found the head. Normally this is taken from the
    first item on the AIL - the log item that corresponds to the oldest
    active item in the log.
    
    The problem is that when the AIL is empty, the tail lsn is derived
    from the l_last_sync_lsn, which is the LSN of the last iclog to
    be written to the log. In most cases this doesn't happen, because
    the AIL is rarely empty on an active filesystem. However, when it
    does, it opens up an interesting case when the transaction being
    committed to the iclog spans multiple iclogs.
    
    That is, the first iclog is stamped with the l_last_sync_lsn, and IO
    is issued. Then the next iclog is setup, the changes copied into the
    iclog (takes some time), and then the l_last_sync_lsn is stamped
    into the header and IO is issued. This is still the same
    transaction, so the tail lsn of both iclogs must be the same for log
    recovery to find the entire transaction to be able to replay it.
    
    The problem arises in that the iclog buffer IO completion updates
    the l_last_sync_lsn with its own LSN. Therefore, if the first iclog
    completes its IO before the second iclog is filled and has the tail
    lsn stamped in it, it will stamp the LSN of the first iclog into
    its tail lsn field. If the system fails at this point, log recovery
    will not see a complete transaction, so the transaction will not be
    replayed.
    
    The fix is simple - the l_last_sync_lsn is updated when an iclog
    buffer IO completes, and this is incorrect. The l_last_sync_lsn
    should be updated when a transaction is completed by an iclog buffer
    IO. That is, only iclog buffers that have transaction commit
    callbacks attached to them should update the l_last_sync_lsn. This
    means that the last_sync_lsn will only move forward when a commit
    record is written, not in the middle of a large transaction that is
    rolling through multiple iclog buffers.
    
    Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Reviewed-by: Mark Tinguely <tinguely@xxxxxxx>
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Signed-off-by: Ben Myers <bpm@xxxxxxx>
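
The gating rule above can be reduced to a toy C model: only iclog IO completions carrying transaction commit callbacks move l_last_sync_lsn forward. Names are illustrative, not the real log code:

```c
#include <assert.h>

struct log_sketch {
	long l_last_sync_lsn;
};

static void iclog_io_done(struct log_sketch *log, long lsn,
			  int has_commit_cb)
{
	if (has_commit_cb)		/* the fix: gate on the callbacks */
		log->l_last_sync_lsn = lsn;
}

/* A mid-transaction iclog (no commit callbacks) completing early must
 * leave the tail where it was. */
static long lsn_after(long start, long lsn, int has_commit_cb)
{
	struct log_sketch log = { start };

	iclog_io_done(&log, lsn, has_commit_cb);
	return log.l_last_sync_lsn;
}
```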

-----------------------------------------------------------------------

Summary of changes:


hooks/post-receive
-- 
XFS development tree

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

