v12.1.1 Luminous RC released

This is the second release candidate for Luminous, the next long-term
stable release.  Please note that this is still a *release candidate*
and not the final release, so it is not yet recommended for production
clusters.  Testing is welcome, and we would love feedback and bug
reports.

Ceph Luminous (v12.2.0) will be the foundation for the next long-term
stable release series.  There have been major changes since Kraken
(v11.2.z) and Jewel (v10.2.z), and the upgrade process is non-trivial.
Please read these release notes carefully. 

Major Changes from Kraken
-------------------------

- *General*:

  * Ceph now has a simple, built-in web-based dashboard for monitoring
    cluster status. 

- *RADOS*:

  * *BlueStore*:

    - The new *BlueStore* backend for *ceph-osd* is now stable and the new
      default for newly created OSDs.  BlueStore manages data stored by each OSD
      by directly managing the physical HDDs or SSDs without the use of an
      intervening file system like XFS.  This provides improved performance
      and enables new features.
    - BlueStore supports *full data and metadata checksums* of all
      data stored by Ceph.
    - BlueStore supports inline compression using zlib, snappy, or LZ4.  (Ceph
      also supports zstd for RGW compression but zstd is not recommended for
      BlueStore for performance reasons.) 
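
      A minimal sketch of enabling inline compression on a per-pool
      basis (the pool name `mypool` is hypothetical; see the BlueStore
      documentation for the full set of compression options)::

        ceph osd pool set mypool compression_algorithm snappy
        ceph osd pool set mypool compression_mode aggressive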

  * *Erasure coded* pools now have full support for *overwrites*,
    allowing them to be used with RBD and CephFS. 

  * The configuration option "osd pool erasure code stripe width" has
    been replaced by "osd pool erasure code stripe unit", and given the
    ability to be overridden by the erasure code profile setting
    "stripe_unit". For more details see "Erasure Code Profiles" in the
    documentation.

  * rbd and cephfs can use erasure coding with bluestore. This may be
    enabled by setting 'allow_ec_overwrites' to 'true' for a pool. Since
    this relies on bluestore's checksumming to do deep scrubbing,
    enabling this on a pool stored on filestore is not allowed.
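
    For example, a hedged sketch of enabling EC overwrites and using
    the pool to hold RBD data (pool and image names are hypothetical;
    the image metadata still lives in the replicated pool named in the
    image spec)::

      ceph osd pool create ecpool 64 64 erasure
      ceph osd pool set ecpool allow_ec_overwrites true
      rbd create --size 1G --data-pool ecpool replicatedpool/myimage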

  * The 'rados df' JSON output now prints numeric values as numbers instead of
    strings.

  * The `mon_osd_max_op_age` option has been renamed to
    `mon_osd_warn_op_age` (default: 32 seconds), to indicate we
    generate a warning at this age.  There is also a new
    `mon_osd_err_op_age_ratio` that is expressed as a multiple of
    `mon_osd_warn_op_age` (default: 128, for roughly 60 minutes) to
    control when an error is generated.

  * The default maximum size for a single RADOS object has been reduced from
    100GB to 128MB.  The 100GB limit was never practical to use, while the
    128MB limit is a bit high but not unreasonable.  If you have an
    application written directly to librados that is using objects larger than
    128MB you may need to adjust `osd_max_object_size`.
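
    A minimal ceph.conf sketch for raising the limit (the value shown
    is only an example)::

      [osd]
      # allow RADOS objects up to 1 GiB
      osd max object size = 1073741824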

  * The semantics of the 'rados ls' and librados object listing
    operations have always been a bit confusing in that "whiteout"
    objects (which logically don't exist and will return ENOENT if you
    try to access them) are included in the results.  Previously
    whiteouts only occurred in cache tier pools.  In luminous, logically
    deleted but snapshotted objects now result in a whiteout object, and
    as a result they will appear in 'rados ls' results, even though
    trying to read such an object will result in ENOENT.  The 'rados
    listsnaps' operation can be used in such a case to enumerate which
    snapshots are present.

    This may seem a bit strange, but is less strange than having a
    deleted-but-snapshotted object not appear at all and be completely
    hidden from librados's ability to enumerate objects.  Future
    versions of Ceph will likely include an alternative object
    enumeration interface that makes it more natural and efficient to
    enumerate all objects along with their snapshot and clone metadata.
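
    A short sketch of what this looks like from the command line (pool
    and object names are hypothetical)::

      rados -p mypool ls                    # whiteouts appear in the listing
      rados -p mypool get myobject /tmp/o   # fails with ENOENT for a whiteout
      rados -p mypool listsnaps myobject    # enumerates the surviving snapshots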


  * *ceph-mgr*:

    - There is a new daemon, *ceph-mgr*, which is a required part of any
      Ceph deployment.  Although IO can continue when *ceph-mgr* is
      down, metrics will not refresh and some metrics-related calls
      (e.g., `ceph df`) may block.  We recommend deploying several instances of
      *ceph-mgr* for reliability.  See the notes on `Upgrading`_ below.
    - The *ceph-mgr* daemon includes a REST-based management API.  The
      API is still experimental and somewhat limited but will form the basis
      for API-based management of Ceph going forward.  

    - The `status` ceph-mgr module is enabled by default, and initially provides two
      commands: `ceph tell mgr osd status` and `ceph tell mgr fs status`.  These
      are high level colorized views to complement the existing CLI.


  * The overall *scalability* of the cluster has improved. We have
    successfully tested clusters with up to 10,000 OSDs.
  * Each OSD can now have a *device class* associated with it (e.g., `hdd` or
    `ssd`), allowing CRUSH rules to trivially map data to a subset of devices
    in the system.  Manually writing CRUSH rules or editing the CRUSH map by
    hand is normally not required.
  * You can now *optimize CRUSH weights* to maintain a
    *near-perfect distribution of data* across OSDs.
  * There is also a new `upmap` exception mechanism that allows
    individual PGs to be moved around to achieve a *perfect
    distribution* (this requires luminous clients). 
  * Each OSD now adjusts its default configuration based on whether the
    backing device is an HDD or SSD.  Manual tuning is generally not required.
  * The prototype *mclock QoS queueing algorithm* is now available.  
  * There is now a *backoff* mechanism that prevents OSDs from being
    overloaded by requests to objects or PGs that are not currently able to
    process IO.
  * There is a *simplified OSD replacement process* that is more robust.  
  * You can query the supported features and (apparent) releases of
    all connected daemons and clients with `ceph features`. 
  * You can configure the oldest Ceph client version you wish to allow to
    connect to the cluster via `ceph osd set-require-min-compat-client` and
    Ceph will prevent you from enabling features that will break compatibility
    with those clients.  
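
    For example, to require clients that are at least jewel::

      ceph osd set-require-min-compat-client jewel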
  * Several `sleep` settings, including `osd_recovery_sleep`,
    `osd_snap_trim_sleep`, and `osd_scrub_sleep` have been
    reimplemented to work efficiently.  (These are used in some cases
    to work around issues throttling background work.)

  * The deprecated 'crush_ruleset' property has finally been removed; please use
    'crush_rule' instead for the 'osd pool get ...' and 'osd pool set ..' commands.

  * The 'osd pool default crush replicated ruleset' option has been
    removed and replaced by the 'osd pool default crush rule' option.
    By default it is -1, which means the mon will pick the first
    replicated rule in the CRUSH map for replicated pools.  Erasure
    coded pools have rules that are automatically created for them if they are
    not specified at pool creation time.

- *RGW*:

  * RGW *metadata search* backed by ElasticSearch now supports servicing
    end user requests via RGW itself, and also supports custom metadata
    fields.  A query language and a set of RESTful APIs were created so
    that users can search objects by their metadata.  New APIs that allow
    control of custom metadata fields were also added.
  * RGW now supports *dynamic bucket index sharding*.  As the number
    of objects in a bucket grows, RGW will automatically reshard the
    bucket index in response.  No user intervention or bucket size
    capacity planning is required.
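
    A hedged configuration sketch (the section name is hypothetical,
    and the option names and defaults should be verified against the
    documentation for your version)::

      [client.rgw.gateway1]
      rgw dynamic resharding = true
      rgw max objs per shard = 100000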
  * RGW introduces *server side encryption* of uploaded objects with
    three options for the management of encryption keys: automatic
    encryption (only recommended for test setups), customer provided
    keys similar to Amazon SSE-C specification, and through the use of
    an external key management service (OpenStack Barbican) similar
    to Amazon SSE-KMS specification.
  * RGW now has preliminary AWS-like bucket policy API support.  For
    now, policy is a means to express a range of new authorization
    concepts.  In the future it will be the foundation for additional
    auth capabilities such as STS and group policy.
  * RGW has consolidated several metadata index pools through the use of
    RADOS namespaces.

- *RBD*:

  * RBD now has full, stable support for *erasure coded pools* via the new
    `--data-pool` option to `rbd create`.
  * RBD mirroring's rbd-mirror daemon is now highly available. We
    recommend deploying several instances of rbd-mirror for
    reliability.
  * The default 'rbd' pool is no longer created automatically during
    cluster creation. Additionally, the name of the default pool used
    by the rbd CLI when no pool is specified can be overridden via a
    new `rbd default pool = <pool name>` configuration option.
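
    For example, a minimal ceph.conf sketch (the pool name is
    hypothetical)::

      [client]
      rbd default pool = rbdpool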
  * Initial support for deferred image deletion via new `rbd
    trash` CLI commands. Images, even ones actively in-use by
    clones, can be moved to the trash and deleted at a later time.
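
    A hedged sketch of the workflow (pool and image names are
    hypothetical; `rbd trash rm` takes the image id reported by
    `rbd trash ls`)::

      rbd trash mv rbdpool/myimage
      rbd trash ls rbdpool
      rbd trash rm rbdpool/<image-id>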
  * New pool-level `rbd mirror pool promote` and `rbd mirror pool
    demote` commands to batch promote/demote all mirrored images
    within a pool.
  * Mirroring now optionally supports a configurable replication delay
    via the `rbd mirroring replay delay = <seconds>` configuration
    option.
  * Improved discard handling when the object map feature is enabled.
  * rbd CLI `import` and `copy` commands now detect sparse and
    preserve sparse regions.
  * Images and snapshots will now include a creation timestamp.

- *CephFS*:

  * *Multiple active MDS daemons* is now considered stable.  The number
    of active MDS servers may be adjusted up or down on an active CephFS file
    system.
  * CephFS *directory fragmentation* is now stable and enabled by
    default on new filesystems.  To enable it on existing filesystems
    use "ceph fs set <fs_name> allow_dirfrags".  Large or very busy
    directories are sharded and (potentially) distributed across
    multiple MDS daemons automatically.
  * Directory subtrees can be explicitly pinned to specific MDS daemons in
    cases where the automatic load balancing is not desired or effective.
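
    A minimal sketch, assuming a CephFS mount at a hypothetical path and
    pinning a directory to MDS rank 1::

      setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/busy_project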

- *Miscellaneous*:

  * Release packages are now being built for *Debian Stretch*.  The
    distributions for which we build packages now include:

    - CentOS 7 (x86_64 and aarch64)
    - Debian 8 Jessie (x86_64)
    - Debian 9 Stretch (x86_64)
    - Ubuntu 16.04 Xenial (x86_64 and aarch64)
    - Ubuntu 14.04 Trusty (x86_64)

    Note that QA is limited to CentOS and Ubuntu (xenial and trusty).

  * *CLI changes*:

    - The `ceph -s` or `ceph status` command has a fresh look.
    - `ceph {osd,mds,mon} versions` summarizes versions of running daemons.
    - `ceph {osd,mds,mon} count-metadata <property>` similarly
      tabulates any other daemon metadata visible via the `ceph
      {osd,mds,mon} metadata` commands.
    - `ceph features` summarizes features and releases of connected
      clients and daemons.
    - `ceph osd require-osd-release <release>` replaces the old
      `require_RELEASE_osds` flags.
    - `ceph osd pg-upmap`, `ceph osd rm-pg-upmap`, `ceph osd
      pg-upmap-items`, `ceph osd rm-pg-upmap-items` can explicitly
      manage `upmap` items.
    - `ceph osd getcrushmap` returns a crush map version number on
      stderr, and `ceph osd setcrushmap [version]` will only inject
      an updated crush map if the version matches.  This allows crush
      maps to be updated offline and then reinjected into the cluster
      without fear of clobbering racing changes (e.g., by newly added
      osds or changes by other administrators).
    - `ceph osd create` has been replaced by `ceph osd new`.  This
      should be hidden from most users by user-facing tools like
      `ceph-disk`.
    - `ceph osd destroy` will mark an OSD destroyed and remove its
      cephx and lockbox keys.  However, the OSD id and CRUSH map entry
      will remain in place, allowing the id to be reused by a
      replacement device with minimal data rebalancing.
    - `ceph osd purge` will remove all traces of an OSD from the
      cluster, including its cephx encryption keys, dm-crypt lockbox
      keys, OSD id, and crush map entry.
    - `ceph osd ls-tree <name>` will output a list of OSD ids under
      the given CRUSH name (like a host or rack name).  This is useful
      for applying changes to entire subtrees.  For example, `ceph
      osd down `ceph osd ls-tree rack1``.
    - `ceph osd {add,rm}-{noout,noin,nodown,noup}` allow the
      `noout`, `nodown`, `noin`, and `noup` flags to be applied to
      specific OSDs.
    - `ceph log last [n]` will output the last *n* lines of the cluster
      log.
    - `ceph mgr dump` will dump the MgrMap, including the currently active
      ceph-mgr daemon and any standbys.
    - `ceph mgr module ls` will list active ceph-mgr modules.
    - `ceph mgr module {enable,disable} <name>` will enable or
      disable the named mgr module.  The module must be present in the
      configured `mgr_module_path` on the host(s) where `ceph-mgr` is
      running.
    - `ceph osd crush swap-bucket <src> <dest>` will swap the
      contents of two CRUSH buckets in the hierarchy while preserving
      the buckets' ids.  This allows an entire subtree of devices to
      be replaced (e.g., to replace an entire host of FileStore OSDs
      with newly-imaged BlueStore OSDs) without disrupting the
      distribution of data across neighboring devices.
    - `ceph osd set-require-min-compat-client <release>` configures
      the oldest client release the cluster is required to support.
      Other changes, like CRUSH tunables, will fail with an error if
      they would violate this setting.  Changing this setting also
      fails if clients older than the specified release are currently
      connected to the cluster.
    - `ceph config-key dump` dumps config-key entries and their
      contents.  (The existing `ceph config-key list` only dumps the key
      names, not the values.)
    - `ceph osd set-{full,nearfull,backfillfull}-ratio` sets the
      cluster-wide ratio for various full thresholds (when the cluster
      refuses IO, when the cluster warns about being close to full,
      when an OSD will defer rebalancing a PG to itself,
      respectively).
    - `ceph osd reweightn` will specify the `reweight` values for
      multiple OSDs in a single command.  This is equivalent to a series of
      `ceph osd reweight` commands.
    - `ceph osd crush class {create,rm,ls,rename}` manage the new
      CRUSH *device class* feature.  `ceph crush set-device-class
      <class> <osd> [<osd>...]` will set the class for particular devices.
    - `ceph osd crush rule create-replicated` replaces the old
      `ceph osd crush rule create-simple` command to create a CRUSH
      rule for a replicated pool.  Notably it takes a `class` argument
      for the *device class* the rule should target (e.g., `ssd` or
      `hdd`).
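
      A short sketch (rule and pool names are hypothetical)::

        ceph osd crush rule create-replicated fast-ssd default host ssd
        ceph osd pool set mypool crush_rule fast-ssd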
    - `ceph mon feature ls` will list monitor features recorded in the
      MonMap.  `ceph mon feature set` will set an optional feature (none of
      these exist yet).
    - `ceph tell <daemon> help` will now return a usage summary.

Major Changes from Jewel
------------------------

- *RADOS*:

  * We now default to the AsyncMessenger (`ms type = async`) instead
    of the legacy SimpleMessenger.  The most noticeable difference is
    that we now use a fixed sized thread pool for network connections
    (instead of two threads per socket with SimpleMessenger).
  * Some OSD failures are now detected almost immediately, whereas
    previously the heartbeat timeout (which defaults to 20 seconds)
    had to expire.  This prevents IO from blocking for an extended
    period for failures where the host remains up but the ceph-osd
    process is no longer running.
  * The size of encoded OSDMaps has been reduced.
  * The OSDs now quiesce scrubbing when recovery or rebalancing is in progress.

- *RGW*:

  * RGW now supports the S3 multipart object copy-part API.
  * It is possible now to reshard an existing bucket offline. Offline
    bucket resharding currently requires that all IO (especially
    writes) to the specific bucket is quiesced.  (For automatic online
    resharding, see the new feature in Luminous above.)
  * RGW now supports data compression for objects.
  * Civetweb has been upgraded to version 1.8.
  * The Swift static website API is now supported (S3 support was added
    previously).
  * S3 bucket lifecycle API has been added. Note that currently it only supports
    object expiration.
  * Support for custom search filters has been added to the LDAP auth
    implementation.
  * Support for NFS version 3 has been added to the RGW NFS gateway.
  * A Python binding has been created for librgw.

- *RBD*:

  * The rbd-mirror daemon now supports replicating dynamic image
    feature updates and image metadata key/value pairs from the
    primary image to the non-primary image.
  * The number of image snapshots can be optionally restricted to a
    configurable maximum.
  * The rbd Python API now supports asynchronous IO operations.

- *CephFS*:

  * libcephfs function definitions have been changed to enable proper
    uid/gid control.  The library version has been increased to reflect the
    interface change.
  * Standby replay MDS daemons now consume less memory on workloads
    doing deletions.
  * Scrub now repairs backtrace, and populates `damage ls` with
    discovered errors.
  * A new `pg_files` subcommand to `cephfs-data-scan` can identify
    files affected by a damaged or lost RADOS PG.
  * The false-positive "failing to respond to cache pressure" warnings have
    been fixed.


.. _Upgrading:

Upgrade from Jewel or Kraken
----------------------------

#. Ensure that the `sortbitwise` flag is enabled::

     # ceph osd set sortbitwise

#. Make sure your cluster is stable and healthy (no down or
   recovering OSDs).  (Optional, but recommended.)
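
   For example, a healthy cluster reports::

     # ceph health
     HEALTH_OK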

#. Do not create any new erasure-code pools while upgrading the monitors.

#. Set the `noout` flag for the duration of the upgrade. (Optional
   but recommended.)::

     # ceph osd set noout

#. Upgrade monitors by installing the new packages and restarting the
   monitor daemons.  Note that, unlike prior releases, the ceph-mon
   daemons *must* be upgraded first.::

     # systemctl restart ceph-mon.target

   Verify the monitor upgrade is complete once all monitors are up by
   looking for the `luminous` feature string in the mon map.  For
   example::

     # ceph mon feature ls

   should include `luminous` under persistent features::

     on current monmap (epoch NNN)
        persistent: [kraken,luminous]
        required: [kraken,luminous]

#. Add or restart `ceph-mgr` daemons.  If you are upgrading from
   kraken, you may already have ceph-mgr daemons deployed; upgrade
   their packages and restart them with::

     # systemctl restart ceph-mgr.target

   If not, or if you are upgrading from jewel, you can deploy new
   daemons with tools like ceph-deploy or ceph-ansible.  For
   example,::

     # ceph-deploy mgr create HOST

   Verify the ceph-mgr daemons are running by checking `ceph -s`::

     # ceph -s

     ...
       services:
        mon: 3 daemons, quorum foo,bar,baz
        mgr: foo(active), standbys: bar, baz
     ...

#. Upgrade all OSDs by installing the new packages and restarting the
   ceph-osd daemons on all hosts.::

     # systemctl restart ceph-osd.target

   You can monitor the progress of the OSD upgrades with the new
   `ceph osd versions` command.::

     # ceph osd versions
     {
        "ceph version 12.2.0 (...) luminous (stable)": 12,
        "ceph version 10.2.6 (...)": 3,
     }

#. Upgrade all CephFS daemons by upgrading packages and restarting
   daemons on all hosts.::

     # systemctl restart ceph-mds.target

#. Upgrade all radosgw daemons by upgrading packages and restarting
   daemons on all hosts.::

     # systemctl restart ceph-radosgw.target

#. Complete the upgrade by disallowing pre-luminous OSDs::

     # ceph osd require-osd-release luminous

   If you set `noout` at the beginning, be sure to clear it with::

     # ceph osd unset noout

#. Verify the cluster is healthy with `ceph health`.


Upgrading from pre-Jewel releases (like Hammer)
-----------------------------------------------

You *must* first upgrade to Jewel (10.2.z) before attempting an
upgrade to Luminous.


Upgrade compatibility notes, Kraken to Luminous
-----------------------------------------------

* We no longer test the FileStore ceph-osd backend in combination with
  `btrfs`.  We recommend against using btrfs.  If you are using
  btrfs-based OSDs and want to upgrade to luminous you will need to
  add the following to your ceph.conf::

    enable experimental unrecoverable data corrupting features = btrfs

  The code is mature and unlikely to change, but we are only
  continuing to test the Jewel stable branch against btrfs.  We
  recommend moving these OSDs to FileStore with XFS or BlueStore.
* The `ruleset-*` properties for the erasure code profiles have been
  renamed to `crush-*`, both to move away from the obsolete 'ruleset'
  term and to be clearer about their purpose.  There is also a new
  optional `crush-device-class` property to specify a CRUSH device
  class to use for the erasure coded pool.  Existing erasure code
  profiles will be converted automatically when the upgrade completes
  (when the `ceph osd require-osd-release luminous` command is run),
  but any provisioning tools that create erasure coded pools may need
  to be updated.
* If a public network is specified but no cluster network is, the
  public network specification will now also be used for the cluster
  network.  In older versions this would lead to cluster services
  being bound to 0.0.0.0:<port>, thus making the cluster services even
  more publicly available than the public services.  When only a
  cluster network is specified, the public services will still bind to
  0.0.0.0.

* In previous versions, if a client sent an op to the wrong OSD, the OSD
  would reply with ENXIO.  The rationale here is that the client or OSD is
  clearly buggy and we want to surface the error as clearly as possible.
  We now only send the ENXIO reply if the osd_enxio_on_misdirected_op option
  is enabled (it's off by default).  This means that a VM using librbd that
  previously would have gotten an EIO and gone read-only will now see a
  blocked/hung IO instead.

* The "journaler allow split entries" config setting has been removed.

- *librados*:

  * Some variants of the omap_get_keys and omap_get_vals librados
    functions have been deprecated in favor of omap_get_vals2 and
    omap_get_keys2.  The new methods include an output argument
    indicating whether there are additional keys left to fetch.
    Previously this had to be inferred from the requested key count vs
    the number of keys returned, but this breaks with new OSD-side
    limits on the number of keys or bytes that can be returned by a
    single omap request.  These limits were introduced by kraken but
    are effectively disabled by default (by setting a very large limit
    of 1 GB) because users of the newly deprecated interface cannot
    tell whether they should fetch more keys or not.  In the case of
    the standalone calls in the C++ interface
    (IoCtx::get_omap_{keys,vals}), librados has been updated to loop on
    the client side to provide a correct result via multiple calls to
    the OSD.  In the case of the methods used for building
    multi-operation transactions, however, client-side looping is not
    practical, and the methods have been deprecated.  Note that use of
    either the IoCtx methods on older librados versions or the
    deprecated methods on any version of librados will lead to
    incomplete results if/when the new OSD limits are enabled.

  * The original librados rados_objects_list_open (C) and objects_begin
    (C++) object listing API, deprecated in Hammer, has finally been
    removed.  Users of this interface must update their software to use
    either the rados_nobjects_list_open (C) and nobjects_begin (C++) API or
    the new rados_object_list_begin (C) and object_list_begin (C++) API
    before updating the client-side librados library to Luminous.
    Object enumeration (via any API) with the latest librados version
    and pre-Hammer OSDs is no longer supported.  Note that no in-tree
    Ceph services rely on object enumeration via the deprecated APIs, so
    only external librados users might be affected.

    The newest (and recommended) rados_object_list_begin (C) and
    object_list_begin (C++) API is only usable on clusters with the
    SORTBITWISE flag enabled (Jewel and later).  (Note that this flag is
    required to be set before upgrading beyond Jewel.)

- *CephFS*:

  * When configuring ceph-fuse mounts in /etc/fstab, a new syntax is
    available that uses "ceph.<arg>=<val>" in the options column, instead
    of putting configuration in the device column.  The old style syntax
    still works.  See the documentation page "Mount CephFS in your
    file systems table" for details.
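
    A hedged sketch of an fstab entry using the new syntax (the mount
    point and client id are hypothetical)::

      none  /mnt/cephfs  fuse.ceph  ceph.id=admin,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults  0 0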

  * CephFS clients without the 'p' flag in their authentication capability
    string will no longer be able to set quotas or any layout fields.  This
    flag previously only restricted modification of the pool and namespace
    fields in layouts.
  * CephFS will generate a health warning if you have fewer standby daemons
    than it thinks you wanted.  By default this will be 1 if you ever had
    a standby, and 0 if you did not.  You can customize this using
    `ceph fs set <fs> standby_count_wanted <number>`.  Setting it
    to zero will effectively disable the health check.
  * The "ceph mds tell ..." command has been removed.  It is superceded
    by "ceph tell mds.<id> ..."


Notable Changes since v12.1.0 (RC1)
-----------------------------------

* choose_args encoding has been changed to make it architecture-independent.
  If you deployed Luminous dev releases or 12.1.0 rc release and made use of
  the CRUSH choose_args feature, you need to remove all choose_args mappings
  from your CRUSH map before starting the upgrade.
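
  A hedged sketch of one way to do this by editing a decompiled copy of
  the CRUSH map offline (file names are arbitrary)::

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # delete the choose_args section(s) from crush.txt in an editor
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new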

* The 'ceph health' structured output (JSON or XML) no longer contains
  a 'timechecks' section describing the time sync status.  This
  information is now available via the 'ceph time-sync-status'
  command.

* Certain extra fields in the 'ceph health' structured output that
  used to appear if the mons were low on disk space (which duplicated
  the information in the normal health warning messages) are now gone.

* The "ceph -w" output no longer contains audit log entries by default.
  Add a "--watch-channel=audit" or "--watch-channel=*" to see them.

* The 'apply' mode of cephfs-journal-tool has been removed.

* Added a new configuration option "public bind addr" to support dynamic
  environments like Kubernetes.  When set, the Ceph MON daemon can bind
  locally to one IP address while advertising a different address
  ("public addr") on the network.
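
  A minimal ceph.conf sketch (addresses and the mon name are
  hypothetical)::

    [mon.a]
    # address the daemon binds to locally
    public bind addr = 10.1.2.3:6789
    # address advertised to the rest of the cluster
    public addr = 192.168.0.10:6789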


For a detailed changelog refer to the blog post entry at
http://ceph.com/releases/v12-1-1-luminous-rc-released/

Getting Ceph
------------

* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.1.1.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release sha1: f3e663a190bf2ed12c7e3cda288b9a159572c800

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)