Re: [Ceph-Devel] NO pg created for erasure-coded pool

Hi Ghislain,

On 17/10/2014 02:58, ghislain.chevalier@xxxxxxxxxx wrote:
> Hi,
> 
> I think that Bug #8599 is more relevant (the one we originally reported).

Yes, but that fix is already in firefly, so it cannot be the source of your problem.

> Otherwise, managing rules and rulesets in Ceph is confusing.

Right, and the plan is to remove the distinction entirely from the user's point of view. A few patches already went into giant to make that happen.

> First of all, it's curious to create an erasure rule by giving its name and to get a confirmation that names a ruleset and gives a ruleset_id:
> root@p-sbceph11:~# ceph osd crush rule create-erasure ecruleset
> created ruleset ecruleset at 52
> A rule has a name, not a ruleset.
> And two or more rules can share the same ruleset_id.

The ruleset number is provided because legacy commands such as ceph osd pool set crush_ruleset do not support names, only ruleset_ids. Although it is possible in theory to have multiple rules with the same ruleset, it serves no useful purpose and giant will make sure the ruleset_id always matches the rule_id.
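
For reference, a quick way to spot a mismatch between the two (a minimal sketch; the grep pattern is only illustrative):

ceph osd crush rule dump | grep -E '"rule_name"|"rule_id"|"ruleset"'

Any rule whose rule_id differs from its ruleset is affected by the mismatch discussed in this thread.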

> Note that the rule_id was set to 8 (i.e. last rule_id + 1), associated with ruleset_id 52 (last ruleset_id + 1).
> 
> Secondly, it's also confusing to create an erasure-coded pool with a rule name if we consider that setting a ruleset_id is more relevant:
> root@p-sbceph11:~# ceph osd pool create ecpool2 12 12 erasure default ecruleset
> pool 'ecpool2' created

This command was created more recently, and names were preferred over numerical ruleset_ids.

> If we change to another erasure rule (erasure-code rule_id:7 ruleset_id:7), we use the ruleset_id.
> 
> root@p-sbceph11:~# ceph osd pool set ecpool2 crush_ruleset 7
> set pool 115 crush_ruleset to 7
> 
> Finally, I think that as long as the sequence respects rule_id = ruleset_id, everything is OK.
> But when the crushmap is adapted to fulfill specific requirements, i.e. the sequence is broken, it becomes difficult to manage the crushmap correctly.

I could not agree more.

Cheers

> Best regards 
> 
> -----Original Message-----
> From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
> Sent: Thursday, 16 October 2014 18:11
> To: CHEVALIER Ghislain IMT/OLPS; ceph-devel@xxxxxxxxxxxxxxx
> Subject: Re: [Ceph-Devel] NO pg created for erasure-coded pool
> 
> Ok. That's enough information for me to look into this. I think you're hitting the same problem as http://tracker.ceph.com/issues/9675
> 
> On 16/10/2014 09:07, ghislain.chevalier@xxxxxxxxxx wrote:
>> Hi Loic,
>>
>> Eureka...
>>
>> Remember the bug related to the rule_id and ruleset_id that we (Alain and I) detected some weeks ago?
>>
>> It always exists for erasure-coded pool creation.
>>
>> We altered the crushmap by updating ruleset_id 52 (set by the system, i.e. last ruleset_id + 1) to 7, in order to make it equal to rule_id 7.
>>
>> And then ceph created the pgs and we could put objects in this pool.
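>>
>> For reference, the alteration was done along these lines (a sketch; file names are arbitrary):
>> ceph osd getcrushmap -o crush.bin      # extract the compiled crushmap
>> crushtool -d crush.bin -o crush.txt    # decompile to text
>> # edit crush.txt: change "ruleset 52" to "ruleset 7" in the erasure-code rule
>> crushtool -c crush.txt -o crush.new    # recompile
>> ceph osd setcrushmap -i crush.new      # inject the updated map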
>>
>> Best regards
>>
>> -----Original Message-----
>> From: CHEVALIER Ghislain IMT/OLPS
>> Sent: Thursday, 16 October 2014 17:40
>> To: Loic Dachary; ceph-devel@xxxxxxxxxxxxxxx
>> Subject: RE: [Ceph-Devel] NO pg created for erasure-coded pool
>>
>> Hi Loic,
>>
>> Excuse me for replying late
>>
>> First of all, I upgraded the platform to 0.80.7.
>>
>> I turned the osds and mons to debug mode as mentioned.
>>
>> I re-created the erasure-coded pool ecpool.
>>
>> At pool creation: no "_create_lock_pg" in the osd logs, no message in the mon log.
>> At object creation (rados put) I got:
>> 2014-10-16 16:29:29.700916 7f060accc700  7 mon.monitor03@2(peon).log v891323 update_from_paxos applying incremental log 891323 2014-10-16 16:29:28.369129 osd.5 10.192.134.123:6801/369 141 : [WRN] slow request 480.926547 seconds old, received at 2014-10-16 16:21:27.442543: osd_op(client.1238183.0:1 chat.wmv [writefull 0~3189321] 112.952dd230 ondisk+write e11938) v4 currently waiting for pg to exist locally
>>
>> Without pgs, what could I expect...
>>
>> The pool is listed by rados lspools, and I can get some information via ceph osd pool stats ecpool (id=113).
>>
>> I created a replicated pool (poupool:114) and I got a lot of messages like the following on the osds targeted by ruleset 0 (5,6,7,8,9):
>> 2014-10-16 16:49:53.268083 7f1c31bb8700 20 osd.8 11942 _create_lock_pg pgid 114.6d
>> 2014-10-16 16:49:53.268254 7f1c31bb8700  7 osd.8 11942 _create_lock_pg pg[114.6d( empty local-les=0 n=0 ec=11941 les/c 0/11941 11941/11941/11941) [9,8,5] r=1 lpr=0 crt=0'0 inactive]
>>
>> I checked the crushmap again and nothing seems incorrect, so I can't understand where the problem is.
>>
>> Best regards
>> NB: How can I switch back to a normal log level?
>>
>>
>> -----Original Message-----
>> From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
>> Sent: Wednesday, 15 October 2014 19:09
>> To: CHEVALIER Ghislain IMT/OLPS; ceph-devel@xxxxxxxxxxxxxxx
>> Subject: Re: [Ceph-Devel] NO pg created for erasure-coded pool
>>
>> Hi,
>>
>> And nothing in any of the OSDs? Since there are no errors in the MON, there must be something wrong in the OSDs.
>>
>> When the OSD is creating the PG you should see
>>
>>    _create_lock_pg pgid
>>
>> from
>>
>>     https://github.com/ceph/ceph/blob/firefly/src/osd/OSD.cc#L1995
>>
>> if you temporarily set the debug level to 20 with
>>
>> ceph tell osd.* injectargs -- --debug-osd 20
>>
>> If you still don't get anything, at least this will narrow down the search ;-)
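>>
>> To go back to the normal log level afterwards, re-inject the default; a sketch, assuming the stock default of 0/5:
>>
>> ceph tell osd.* injectargs -- --debug-osd 0/5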
>>
>> Cheers
>>
>> On 15/10/2014 08:52, ghislain.chevalier@xxxxxxxxxx wrote:
>>> Hi,
>>>
>>> oops...
>>>
>>> nothing relevant in mon logs.
>>>
>>> This message appears in some osd logs:
>>> 2014-10-15 17:03:45.303295 7fb296a21700  0 -- 10.192.134.122:6804/16878 >> 10.192.134.123:6809/21505 pipe(0x2219c80 sd=36 :41933 s=2 pgs=626 cs=355 l=0 c=0x398a580).fault with nothing to send, going to standby
>>>
>>> FYI, I can store in another pool (e.g. data).
>>>
>>>
>>> ________________________________________
>>> From: Loic Dachary [loic@xxxxxxxxxxx]
>>> Sent: Wednesday, 15 October 2014 17:32
>>> To: CHEVALIER Ghislain IMT/OLPS; ceph-devel@xxxxxxxxxxxxxxx
>>> Subject: Re: [Ceph-Devel] NO pg created for erasure-coded pool
>>>
>>> Hi Ghislain,
>>>
>>> Any error messages in the mon / osd logs?
>>>
>>> Cheers
>>>
>>> On 15/10/2014 07:01, ghislain.chevalier@xxxxxxxxxx wrote:
>>>> Hi...
>>>>
>>>> Strange, you said strange...
>>>>
>>>> I created a replicated pool (if that is what you asked for) as follows:
>>>> root@p-sbceph11:~# ceph osd pool create strangepool 128 128 replicated
>>>> pool 'strangepool' created
>>>> root@p-sbceph11:~# ceph osd pool set strangepool crush_ruleset 53
>>>> set pool 108 crush_ruleset to 53
>>>> root@p-sbceph11:~# ceph osd pool get strangepool size
>>>> size: 3
>>>> root@p-sbceph11:~# rados lspools | grep strangepool
>>>> strangepool
>>>> root@p-sbceph11:~# ceph df
>>>> GLOBAL:
>>>>     SIZE       AVAIL      RAW USED     %RAW USED
>>>>     97289M     69667M       27622M         28.39
>>>> POOLS:
>>>>     NAME                   ID      USED       %USED     MAX AVAIL     OBJECTS
>>>>     data                   0       12241M     12.58        11090M         186
>>>>     metadata               1            0         0        11090M           0
>>>>     rbd                    2            0         0        13548M           0
>>>>     .rgw.root              3         1223         0        11090M           4
>>>>     .rgw.control           4            0         0        11090M           8
>>>>     .rgw                   5        13036         0        11090M          87
>>>>     .rgw.gc                6            0         0        11090M          32
>>>>     .log                   7            0         0        11090M           0
>>>>     .intent-log            8            0         0        11090M           0
>>>>     .usage                 9            0         0        11090M           0
>>>>     .users                 10         139         0        11090M          13
>>>>     .users.email           11         100         0        11090M           9
>>>>     .users.swift           12          43         0        11090M           4
>>>>     .users.uid             13        3509         0        11090M          22
>>>>     .rgw.buckets.index     15           0         0        11090M          31
>>>>     .rgw.buckets           16       1216M      1.25        11090M        2015
>>>>     atelier01              87           0         0         7393M           0
>>>>     atelier02              94      28264k      0.03        11090M           4
>>>>     atelier02cache         98       6522k         0        20322M           2
>>>>     strangepool            108          0         0            5E           0
>>>>
>>>> The pool is created and it doesn't work...
>>>> rados -p strangepool put remains inactive...
>>>>
>>>> If there are active pgs for strangepool, it's surely because they were created with the default ruleset = 0.
>>>>
>>>> The problem seems to be in the handling of rule 53; note that, for debugging, the ruleset-failure-domain was previously set to osd instead of host. I don't think that's relevant.
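>>>>
>>>> For reference, the failure domain comes from the erasure-code profile; a sketch of how it can be set (profile name hypothetical):
>>>> ceph osd erasure-code-profile set myprofile ruleset-failure-domain=osd
>>>> ceph osd erasure-code-profile get myprofile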
>>>>
>>>> Finally, I don't know if you wanted me to create a replicated pool using an erasure ruleset or simply a new erasure-coded pool.
>>>>
>>>> Creating a new erasure-coded pool also fails.
>>>>
>>>> We also tried to create an erasure-coded pool on another platform using a standard crushmap, and it fails too.
>>>>
>>>> Best regards
>>>>
>>>> -----Original Message-----
>>>> From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
>>>> Sent: Wednesday, 15 October 2014 13:55
>>>> To: CHEVALIER Ghislain IMT/OLPS; ceph-devel@xxxxxxxxxxxxxxx
>>>> Subject: Re: [Ceph-Devel] NO pg created for erasure-coded pool
>>>>
>>>> Hi Ghislain,
>>>>
>>>> This is indeed strange; the pool exists:
>>>>
>>>> pool 100 'ecpool' erasure size 3 min_size 2 crush_ruleset 52 object_hash rjenkins pg_num 128 pgp_num 128 last_change 11849 flags hashpspool stripe_width 4096
>>>>
>>>> but ceph pg dump shows no sign of the expected PGs (i.e. starting with 100. in the output, if I'm not mistaken).
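>>>>
>>>> Something along these lines should list them if they exist (a sketch; the pattern assumes pool id 100):
>>>>
>>>> ceph pg dump | grep '^100\.'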
>>>>
>>>> Could you create another pool using the same ruleset and check whether you see errors in the mon / osd logs when you do so?
>>>>
>>>> Cheers
>>>>
>>>> On 15/10/2014 01:00, ghislain.chevalier@xxxxxxxxxx wrote:
>>>>> Hi,
>>>>>
>>>>> Since erasure code is at the top of your mind...
>>>>>
>>>>> Here are the files
>>>>>
>>>>> Best regards
>>>>>
>>>>> -----Original Message-----
>>>>> From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
>>>>> Sent: Tuesday, 14 October 2014 18:01
>>>>> To: CHEVALIER Ghislain IMT/OLPS; ceph-devel@xxxxxxxxxxxxxxx
>>>>> Subject: Re: [Ceph-Devel] NO pg created for erasure-coded pool
>>>>>
>>>>> Ah, my bad, I did not go to the end of the list ;-)
>>>>>
>>>>> Could you share the output of ceph pg dump and ceph osd dump?
>>>>>
>>>>> On 14/10/2014 08:14, ghislain.chevalier@xxxxxxxxxx wrote:
>>>>>> Hi,
>>>>>>
>>>>>> Here is the list of the types; host is type 1:
>>>>>>   "types": [
>>>>>>         { "type_id": 0,
>>>>>>           "name": "osd"},
>>>>>>         { "type_id": 1,
>>>>>>           "name": "host"},
>>>>>>         { "type_id": 2,
>>>>>>           "name": "platform"},
>>>>>>         { "type_id": 3,
>>>>>>           "name": "datacenter"},
>>>>>>         { "type_id": 4,
>>>>>>           "name": "root"},
>>>>>>         { "type_id": 5,
>>>>>>           "name": "appclient"},
>>>>>>         { "type_id": 10,
>>>>>>           "name": "diskclass"},
>>>>>>         { "type_id": 50,
>>>>>>           "name": "appclass"}],
>>>>>>
>>>>>> And there are 5 hosts with 2 osds each at the end of the tree.
>>>>>>
>>>>>> Best regards
>>>>>> -----Original Message-----
>>>>>> From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
>>>>>> Sent: Tuesday, 14 October 2014 16:44
>>>>>> To: CHEVALIER Ghislain IMT/OLPS; ceph-devel@xxxxxxxxxxxxxxx
>>>>>> Subject: Re: [Ceph-Devel] NO pg created for erasure-coded pool
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> The ruleset has
>>>>>>
>>>>>> { "op": "chooseleaf_indep",
>>>>>>           "num": 0,
>>>>>>           "type": "host"},
>>>>>>
>>>>>> but it does not look like your tree has a bucket of type host in it.
>>>>>>
>>>>>> Cheers
>>>>>>
>>>>>> On 14/10/2014 06:20, ghislain.chevalier@xxxxxxxxxx wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> Thanks Loïc for your quick reply.
>>>>>>>
>>>>>>> Here is the result of ceph osd tree
>>>>>>>
>>>>>>> As shown at the last Ceph Day in Paris, we have multiple roots, but ruleset 52 enters the crushmap at root default.
>>>>>>>
>>>>>>> # id    weight  type name       up/down reweight
>>>>>>> -100    0.09998 root diskroot
>>>>>>> -110    0.04999         diskclass fastsata
>>>>>>> 0       0.009995                        osd.0   up      1
>>>>>>> 1       0.009995                        osd.1   up      1
>>>>>>> 2       0.009995                        osd.2   up      1
>>>>>>> 3       0.009995                        osd.3   up      1
>>>>>>> -120    0.04999         diskclass slowsata
>>>>>>> 4       0.009995                        osd.4   up      1
>>>>>>> 5       0.009995                        osd.5   up      1
>>>>>>> 6       0.009995                        osd.6   up      1
>>>>>>> 7       0.009995                        osd.7   up      1
>>>>>>> 8       0.009995                        osd.8   up      1
>>>>>>> 9       0.009995                        osd.9   up      1
>>>>>>> -5      0.2     root approot
>>>>>>> -50     0.09999         appclient apprgw
>>>>>>> -501    0.04999                 appclass fastrgw
>>>>>>> 0       0.009995                                osd.0   up      1
>>>>>>> 1       0.009995                                osd.1   up      1
>>>>>>> 2       0.009995                                osd.2   up      1
>>>>>>> 3       0.009995                                osd.3   up      1
>>>>>>> -502    0.04999                 appclass slowrgw
>>>>>>> 4       0.009995                                osd.4   up      1
>>>>>>> 5       0.009995                                osd.5   up      1
>>>>>>> 6       0.009995                                osd.6   up      1
>>>>>>> 7       0.009995                                osd.7   up      1
>>>>>>> 8       0.009995                                osd.8   up      1
>>>>>>> 9       0.009995                                osd.9   up      1
>>>>>>> -51     0.09999         appclient appstd
>>>>>>> -511    0.04999                 appclass faststd
>>>>>>> 0       0.009995                                osd.0   up      1
>>>>>>> 1       0.009995                                osd.1   up      1
>>>>>>> 2       0.009995                                osd.2   up      1
>>>>>>> 3       0.009995                                osd.3   up      1
>>>>>>> -512    0.04999                 appclass slowstd
>>>>>>> 4       0.009995                                osd.4   up      1
>>>>>>> 5       0.009995                                osd.5   up      1
>>>>>>> 6       0.009995                                osd.6   up      1
>>>>>>> 7       0.009995                                osd.7   up      1
>>>>>>> 8       0.009995                                osd.8   up      1
>>>>>>> 9       0.009995                                osd.9   up      1
>>>>>>> -1      0.09999 root default
>>>>>>> -2      0.09999         datacenter nanterre
>>>>>>> -3      0.09999                 platform sandbox
>>>>>>> -13     0.01999                         host p-sbceph13
>>>>>>> 0       0.009995                                        osd.0   up      1
>>>>>>> 5       0.009995                                        osd.5   up      1
>>>>>>> -14     0.01999                         host p-sbceph14
>>>>>>> 1       0.009995                                        osd.1   up      1
>>>>>>> 6       0.009995                                        osd.6   up      1
>>>>>>> -15     0.01999                         host p-sbceph15
>>>>>>> 2       0.009995                                        osd.2   up      1
>>>>>>> 7       0.009995                                        osd.7   up      1
>>>>>>> -12     0.01999                         host p-sbceph12
>>>>>>> 3       0.009995                                        osd.3   up      1
>>>>>>> 8       0.009995                                        osd.8   up      1
>>>>>>> -11     0.01999                         host p-sbceph11
>>>>>>> 4       0.009995                                        osd.4   up      1
>>>>>>> 9       0.009995                                        osd.9   up      1
>>>>>>>
>>>>>>> Best regards
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
>>>>>>> Sent: Tuesday, 14 October 2014 12:12
>>>>>>> To: CHEVALIER Ghislain IMT/OLPS; ceph-devel@xxxxxxxxxxxxxxx
>>>>>>> Subject: Re: [Ceph-Devel] NO pg created for erasure-coded pool
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 14/10/2014 02:07, ghislain.chevalier@xxxxxxxxxx wrote:
>>>>>>>> Hi all,
>>>>>>>>
>>>>>>>> Context :
>>>>>>>> Ceph: Firefly 0.80.6
>>>>>>>> Sandbox platform: Ubuntu 12.04 LTS, 5 VMs (VMware), 3 mons, 10 osds
>>>>>>>>
>>>>>>>>
>>>>>>>> Issue:
>>>>>>>> I created an erasure-coded pool using the default profile:
>>>>>>>> --> ceph osd pool create ecpool 128 128 erasure default
>>>>>>>> The erasure-code rule was dynamically created and associated with the pool.
>>>>>>>> root@p-sbceph14:/etc/ceph# ceph osd crush rule dump erasure-code
>>>>>>>> { "rule_id": 7,
>>>>>>>>   "rule_name": "erasure-code",
>>>>>>>>   "ruleset": 52,
>>>>>>>>   "type": 3,
>>>>>>>>   "min_size": 3,
>>>>>>>>   "max_size": 20,
>>>>>>>>   "steps": [
>>>>>>>>         { "op": "set_chooseleaf_tries",
>>>>>>>>           "num": 5},
>>>>>>>>         { "op": "take",
>>>>>>>>           "item": -1,
>>>>>>>>           "item_name": "default"},
>>>>>>>>         { "op": "chooseleaf_indep",
>>>>>>>>           "num": 0,
>>>>>>>>           "type": "host"},
>>>>>>>>         { "op": "emit"}]}
>>>>>>>> root@p-sbceph14:/var/log/ceph# ceph osd pool get ecpool crush_ruleset
>>>>>>>> crush_ruleset: 52
>>>>>>>
>>>>>>>> No error message was displayed at pool creation, but no pgs were created.
>>>>>>>> --> rados lspools confirms the pool is created, but rados/ceph df shows no pgs for this pool.
>>>>>>>>
>>>>>>>> The command "rados -p ecpool put services /etc/services" is inactive
>>>>>>>> (stalled) and the following message appears in ceph.log:
>>>>>>>> 2014-10-14 10:36:50.189432 osd.5 10.192.134.123:6804/21505 799 : [WRN] slow request 960.230073 seconds old, received at 2014-10-14 10:20:49.959255: osd_op(client.1192643.0:1 services [writefull 0~19281] 100.5a48a9c2 ondisk+write e11869) v4 currently waiting for pg to exist locally
>>>>>>>>
>>>>>>>> I don't know if I missed something or if the problem is somewhere else...
>>>>>>>
>>>>>>> The erasure-code rule displayed will need at least three hosts. If there are not enough hosts with OSDs, the mapping will fail, and put will hang until an OSD becomes available to complete the mapping of OSDs to the PGs. What does your ceph osd tree show?
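>>>>>>>
>>>>>>> You can also test the mapping offline with crushtool; a sketch, assuming the crushmap has been extracted to a file:
>>>>>>>
>>>>>>> ceph osd getcrushmap -o crush.bin
>>>>>>> crushtool -i crush.bin --test --rule 52 --num-rep 3 --show-bad-mappings
>>>>>>>
>>>>>>> If bad mappings are printed, the rule cannot find enough hosts.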
>>>>>>>
>>>>>>> Cheers
>>>>>>>
>>>>>>>>
>>>>>>>> Best regards
>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Loïc Dachary, Artisan Logiciel Libre
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Loïc Dachary, Artisan Logiciel Libre
>>>>>
>>>>> --
>>>>> Loïc Dachary, Artisan Logiciel Libre
>>>>
>>>> --
>>>> Loïc Dachary, Artisan Logiciel Libre
>>>
>>> --
>>> Loïc Dachary, Artisan Logiciel Libre
>>
>> --
>> Loïc Dachary, Artisan Logiciel Libre
> 
> --
> Loïc Dachary, Artisan Logiciel Libre

-- 
Loïc Dachary, Artisan Logiciel Libre
