Re: db_devices doesn't show up in exported osd service spec

According to your "pvs" output you still have a VG on your sdb device. As long as that is on there, it will not be available to Ceph. I have had to do an lvremove, like this:
lvremove ceph-78c78efb-af86-427c-8be1-886fa1d54f8a osd-db-72784b7a-b5c0-46e6-8566-74758c297adc

Run the lvs command to see the right parameters.
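
For example, with the ceph-block-dbs VG from your pvs output (the LV name below is a placeholder; take the real one from lvs):

lvs -o lv_name,vg_name,lv_size
lvremove ceph-block-dbs-f8d28f1f-2dd3-47d0-9110-959e88405112/<osd-db-lv-name>
pvs

After the lvremove, pvs should show that space as free again.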

Regards

Jens

-----Original Message-----
From: Tony Liu <tonyliu0592@xxxxxxxxxxx> 
Sent: 10 February 2021 22:59
To: David Orman <ormandj@xxxxxxxxxxxx>
Cc: Jens Hyllegaard (Soft Design A/S) <jens.hyllegaard@xxxxxxxxxxxxx>; ceph-users@xxxxxxx
Subject: Re:  Re: db_devices doesn't show up in exported osd service spec

Hi David,

===============================
# pvs
  PV         VG                                                  Fmt  Attr PSize    PFree
  /dev/sda3  vg0                                                 lvm2 a--     1.09t      0
  /dev/sdb   ceph-block-dbs-f8d28f1f-2dd3-47d0-9110-959e88405112 lvm2 a--  <447.13g 127.75g
  /dev/sdc   ceph-block-8f85121e-98bf-4466-aaf3-d888bcc938f6     lvm2 a--     2.18t      0
  /dev/sde   ceph-block-0b47f685-a60b-42fb-b679-931ef763b3c8     lvm2 a--     2.18t      0
  /dev/sdf   ceph-block-c526140d-c75f-4b0d-8c63-fbb2a8abfaa2     lvm2 a--     2.18t      0
  /dev/sdg   ceph-block-52b422f7-900a-45ff-a809-69fadabe12fa     lvm2 a--     2.18t      0
  /dev/sdh   ceph-block-da269f0d-ae11-4178-bf1e-6441b8800336     lvm2 a--     2.18t      0
===============================
After "orch osd rm", which doesn't clean up DB LV on OSD node, I manually clean it up by running "ceph-volume lvm zap --osd-id 12", which does the cleanup.
Is "orch device ls" supposed to show SSD device available if there is free space?
That could be another issue.
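
For reference, the sequence was (osd id 12 as above; the zap runs on the OSD node, and lvs/pvs afterwards just confirm the DB LV is really gone):

ceph orch osd rm 12
ceph-volume lvm zap --osd-id 12
lvs
pvs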

Thanks!
Tony
________________________________________
From: David Orman <ormandj@xxxxxxxxxxxx>
Sent: February 10, 2021 01:19 PM
To: Tony Liu
Cc: Jens Hyllegaard (Soft Design A/S); ceph-users@xxxxxxx
Subject: Re:  Re: db_devices doesn't show up in exported osd service spec

It's displaying sdb (which I assume you want to be used as a DB device) as unavailable. What does "pvs" output look like on that "ceph-osd-1" host? Perhaps it is full.

I see the other email you sent regarding replacement; I suspect the pre-existing LV from your previous OSD is not being re-used. You may need to delete it, then the service specification should re-create it along with the OSD.

If I remember correctly, I stopped the automatic application of the service spec (ceph orch rm osd.servicespec) when I had to replace a failed OSD, removed the OSD, nuked the LV on the db device in question, put in the new drive, then re-enabled the service spec (ceph orch apply osd -i) and the OSD + DB/WAL were created appropriately. I don't remember the exact sequence, and it may depend on the Ceph version. I'm also not sure whether "orch osd rm <svc_id(s)> --replace [--force]" will preserve the db/wal mapping; it might be worth looking at in the future.
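
Roughly, it looked like this (a sketch from memory; the service id, spec file name, and VG/LV names are placeholders for whatever your setup uses):

ceph orch rm osd.<service_id>            # stop cephadm from re-applying the spec
ceph orch osd rm <osd_id>                # remove the failed OSD
lvremove <db_vg>/<db_lv>                 # on the OSD host, drop the stale DB LV
# swap in the new drive, then:
ceph orch apply osd -i <spec_file>.yaml  # re-enable the spec; OSD + DB/WAL get recreated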

On Wed, Feb 10, 2021 at 2:22 PM Tony Liu <tonyliu0592@xxxxxxxxxxx> wrote:
Hi David,

Request info is below.

# ceph orch device ls ceph-osd-1
HOST        PATH      TYPE   SIZE  DEVICE_ID                           MODEL            VENDOR   ROTATIONAL  AVAIL  REJECT REASONS
ceph-osd-1  /dev/sdd  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VL2G       DL2400MM0159     SEAGATE  1           True
ceph-osd-1  /dev/sda  hdd   1117G  SEAGATE_ST1200MM0099_WFK4NNDY       ST1200MM0099     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdb  ssd    447G  ATA_MZ7KH480HAHQ0D3_S5CNNA0N305738  MZ7KH480HAHQ0D3  ATA      0           False  LVM detected, locked
ceph-osd-1  /dev/sdc  hdd   2235G  SEAGATE_DL2400MM0159_WBM2WNSE       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sde  hdd   2235G  SEAGATE_DL2400MM0159_WBM2WP2S       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdf  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VK99       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdg  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VJBT       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph-osd-1  /dev/sdh  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VMFK       DL2400MM0159     SEAGATE  1           False  LVM detected, Insufficient space (<5GB) on vgs, locked
# cat osd-spec.yaml
service_type: osd
service_id: osd-spec
placement:
 hosts:
 - ceph-osd-1
spec:
  objectstore: bluestore
  #block_db_size: 32212254720
  block_db_size: 64424509440
  data_devices:
    #rotational: 1
    paths:
    - /dev/sdd
  db_devices:
    #rotational: 0
    size: ":1T"
#unmanaged: true

# ceph orch apply osd -i osd-spec.yaml --dry-run
+---------+----------+------------+----------+----+-----+
|SERVICE  |NAME      |HOST        |DATA      |DB  |WAL  |
+---------+----------+------------+----------+----+-----+
|osd      |osd-spec  |ceph-osd-1  |/dev/sdd  |-   |-    |
+---------+----------+------------+----------+----+-----+

Thanks!
Tony
________________________________________
From: David Orman <ormandj@xxxxxxxxxxxx>
Sent: February 10, 2021 11:02 AM
To: Tony Liu
Cc: Jens Hyllegaard (Soft Design A/S); ceph-users@xxxxxxx
Subject: Re:  Re: db_devices doesn't show up in exported osd service spec

What's "ceph orch device ls" look like, and please show us your specification that you've used.

Jens was correct; his example is how we worked around this problem, pending the patch/new release.

On Wed, Feb 10, 2021 at 12:05 AM Tony Liu <tonyliu0592@xxxxxxxxxxx> wrote:
With db_devices.size, db_devices shows up in "orch ls --export", but no DB device/LV is created for the OSD. Any clues?
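
On the OSD host, ceph-volume lvm list shows the [block] and, when present, [db] devices per OSD, which is an easy way to check this:

ceph-volume lvm list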

Thanks!
Tony
________________________________________
From: Jens Hyllegaard (Soft Design A/S) <jens.hyllegaard@xxxxxxxxxxxxx>
Sent: February 9, 2021 01:16 AM
To: ceph-users@xxxxxxx
Subject:  Re: db_devices doesn't show up in exported osd service spec

Hi Tony.

I assume they used a size constraint instead of rotational. So if all your SSDs are 1 TB or less, and all HDDs are larger than that, you could use:

spec:
  objectstore: bluestore
  data_devices:
    rotational: true
  filter_logic: AND
  db_devices:
    size: ':1TB'

It was usable in my test environment, and seems to work.
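
A dry run is a cheap way to see what it will match before applying it, e.g. with the spec saved as osd-spec.yaml:

ceph orch apply osd -i osd-spec.yaml --dry-run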

Regards

Jens


-----Original Message-----
From: Tony Liu <tonyliu0592@xxxxxxxxxxx>
Sent: 9 February 2021 02:09
To: David Orman <ormandj@xxxxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject:  Re: db_devices doesn't show up in exported osd service spec

Hi David,

Could you show me an example of an OSD service spec YAML that works around this by specifying size?

Thanks!
Tony
________________________________________
From: David Orman <ormandj@xxxxxxxxxxxx>
Sent: February 8, 2021 04:06 PM
To: Tony Liu
Cc: ceph-users@xxxxxxx
Subject: Re:  Re: db_devices doesn't show up in exported osd service spec

Adding ceph-users:

We ran into this same issue, and we used a size specification as a workaround for now.

Bug and patch:

https://tracker.ceph.com/issues/49014
https://github.com/ceph/ceph/pull/39083

Backport to Octopus:

https://github.com/ceph/ceph/pull/39171

On Sat, Feb 6, 2021 at 7:05 PM Tony Liu <tonyliu0592@xxxxxxxxxxx> wrote:
Add dev to comment.

With 15.2.8, when applying the OSD service spec, db_devices is gone.
Here is the service spec file.
==========================================
service_type: osd
service_id: osd-spec
placement:
  hosts:
  - ceph-osd-1
spec:
  objectstore: bluestore
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
==========================================

Here is the logging from the mon. The message with "Tony" was added by me in the mgr to confirm. The audit from the mon shows db_devices is gone.
Is there anything in the mon that filters it out based on host info?
How can I trace it?
==========================================
audit 2021-02-07T00:45:38.106171+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4020 : audit [DBG] from='client.24184218 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "target": ["mon-mgr", ""]}]: dispatch
cephadm 2021-02-07T00:45:38.108546+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4021 : cephadm [INF] Marking host: ceph-osd-1 for OSDSpec preview refresh.
cephadm 2021-02-07T00:45:38.108798+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4022 : cephadm [INF] Saving service osd.osd-spec spec with placement ceph-osd-1
cephadm 2021-02-07T00:45:38.108893+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4023 : cephadm [INF] Tony: spec: <bound method ServiceSpec.to_json of DriveGroupSpec(name=osd-spec->placement=PlacementSpec(hosts=[HostPlacementSpec(hostname='ceph-osd-1', network='', name='')]), service_id='osd-spec', service_type='osd', data_devices=DeviceSelection(rotational=1, all=False), db_devices=DeviceSelection(rotational=0, all=False), osd_id_claims={}, unmanaged=False, filter_logic='AND', preview_only=False)>
audit 2021-02-07T00:45:38.109782+0000 mon.ceph-control-3 (mon.2) 25 : audit [INF] from='mgr.24142551 10.6.50.30:0/2838166251' entity='mgr.ceph-control-1.nxjnzz' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\": \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}}}"}]: dispatch
audit 2021-02-07T00:45:38.110133+0000 mon.ceph-control-1 (mon.0) 107 : audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\": \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}}}"}]: dispatch
audit 2021-02-07T00:45:38.152756+0000 mon.ceph-control-1 (mon.0) 108 : audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\": \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}}}"}]': finished
==========================================

Thanks!
Tony
> -----Original Message-----
> From: Jens Hyllegaard (Soft Design A/S) <jens.hyllegaard@xxxxxxxxxxxxx>
> Sent: Thursday, February 4, 2021 6:31 AM
> To: ceph-users@xxxxxxx
> Subject:  Re: db_devices doesn't show up in exported osd service spec
>
> Hi.
>
> I have the same situation. Running 15.2.8, I created a specification
> that looked just like it, with rotational in the data and
> non-rotational in the db.
>
> The first use applied fine. Afterwards it only uses the HDD, and not the SSD.
> Also, is there a way to remove an unused osd service?
> I managed to create osd.all-available-devices when I tried to stop
> the autocreation of OSDs, using: ceph orch apply osd
> --all-available-devices --unmanaged=true
>
> I created the original OSD using the web interface.
>
> Regards
>
> Jens
> -----Original Message-----
> From: Eugen Block <eblock@xxxxxx>
> Sent: 3 February 2021 11:40
> To: Tony Liu <tonyliu0592@xxxxxxxxxxx>
> Cc: ceph-users@xxxxxxx
> Subject:  Re: db_devices doesn't show up in exported osd service spec
>
> How do you manage the db_sizes of your SSDs? Is that managed 
> automatically by ceph-volume? You could try to add another config and 
> see what it does, maybe try to add block_db_size?
>
>
> Quoting Tony Liu <tonyliu0592@xxxxxxxxxxx>:
>
> > All mon, mgr, crash and osd are upgraded to 15.2.8. It actually 
> > fixed another issue (no device listed after adding host).
> > But this issue remains.
> > ```
> > # cat osd-spec.yaml
> > service_type: osd
> > service_id: osd-spec
> > placement:
> >   host_pattern: ceph-osd-[1-3]
> > data_devices:
> >   rotational: 1
> > db_devices:
> >   rotational: 0
> >
> > # ceph orch apply osd -i osd-spec.yaml
> > Scheduled osd.osd-spec update...
> >
> > # ceph orch ls --service_name osd.osd-spec --export
> > service_type: osd
> > service_id: osd-spec
> > service_name: osd.osd-spec
> > placement:
> >   host_pattern: ceph-osd-[1-3]
> > spec:
> >   data_devices:
> >     rotational: 1
> >   filter_logic: AND
> >   objectstore: bluestore
> > ```
> > db_devices still doesn't show up.
> > Keep scratching my head...
> >
> >
> > Thanks!
> > Tony
> >> -----Original Message-----
> >> From: Eugen Block <eblock@xxxxxx>
> >> Sent: Tuesday, February 2, 2021 2:20 AM
> >> To: ceph-users@xxxxxxx
> >> Subject:  Re: db_devices doesn't show up in exported osd service spec
> >>
> >> Hi,
> >>
> >> I would recommend updating (again); here's my output from a 15.2.8
> >> test cluster:
> >>
> >>
> >> host1:~ # ceph orch ls --service_name osd.default --export
> >> service_type: osd
> >> service_id: default
> >> service_name: osd.default
> >> placement:
> >>    hosts:
> >>    - host4
> >>    - host3
> >>    - host1
> >>    - host2
> >> spec:
> >>    block_db_size: 4G
> >>    data_devices:
> >>      rotational: 1
> >>      size: '20G:'
> >>    db_devices:
> >>      size: '10G:'
> >>    filter_logic: AND
> >>    objectstore: bluestore
> >>
> >>
> >> Regards,
> >> Eugen
> >>
> >>
> >> Quoting Tony Liu <tonyliu0592@xxxxxxxxxxx>:
> >>
> >> > Hi,
> >> >
> >> > When building the cluster with Octopus 15.2.5 initially, here is
> >> > the OSD service spec file that was applied.
> >> > ```
> >> > service_type: osd
> >> > service_id: osd-spec
> >> > placement:
> >> >   host_pattern: ceph-osd-[1-3]
> >> > data_devices:
> >> >   rotational: 1
> >> > db_devices:
> >> >   rotational: 0
> >> > ```
> >> > After applying it, all HDDs were added and the DB of each HDD was
> >> > created on the SSD.
> >> >
> >> > Here is the export of OSD service spec.
> >> > ```
> >> > # ceph orch ls --service_name osd.osd-spec --export
> >> > service_type: osd
> >> > service_id: osd-spec
> >> > service_name: osd.osd-spec
> >> > placement:
> >> >   host_pattern: ceph-osd-[1-3]
> >> > spec:
> >> >   data_devices:
> >> >     rotational: 1
> >> >   filter_logic: AND
> >> >   objectstore: bluestore
> >> > ```
> >> > Why db_devices doesn't show up there?
> >> >
> >> > When I replaced a disk recently, after the new disk was installed
> >> > and zapped, the OSD was automatically re-created, but the DB was
> >> > created on the HDD, not the SSD. I assume this is because of that
> >> > missing db_devices?
> >> >
> >> > I tried to update the service spec, with the same result: db_devices
> >> > doesn't show up when exporting it.
> >> >
> >> > Is this some known issue or something I am missing?
> >> >
> >> >
> >> > Thanks!
> >> > Tony
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


