Re: EC configuration questions...

Hi Don,

On 03/03/2015 01:18, Don Doerner wrote:
> Hello,
> 
>  
> 
> I am trying to set up to measure erasure coding performance and overhead.  My Ceph “cluster-of-one” has 27 disks, hence 27 OSDs, all empty.  I have lots of memory, and I am using “osd crush chooseleaf type = 0” in my config file, so my OSDs should be able to peer with others on the same host, right?
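
For context, that setting would normally live in ceph.conf along these lines (a minimal sketch; the section placement is an assumption):

    [global]
    # type 0 ("osd") lets CRUSH place copies on OSDs that share a
    # host, which a single-node cluster needs for replicated pools
    osd crush chooseleaf type = 0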
> 
>  
> 
> I look at the EC profiles defined, and see only “default” which has k=2,m=1.  Wanting to set up a more realistic test, I defined a new profile “k8m3”, similar to default, but with k=8,m=3. 
> 
>  
> 
> Checked with “ceph osd erasure-code-profile get k8m3”, all looks good.

When you create the erasure-code profile you also need to set the failure domain (see ruleset-failure-domain in http://ceph.com/docs/master/rados/operations/erasure-code-jerasure/); it will not use the "osd crush chooseleaf type = 0" setting from your configuration file. You can verify the details of the ruleset used by the erasure-coded pool with the command "ceph osd crush rule dump".
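
For example, a minimal sketch of creating the profile with the failure domain set (assuming the jerasure plugin and the profile name from the thread; on a single host the failure domain must be "osd", since the default "host" would need k+m = 11 distinct hosts):

    # define the profile with an explicit failure domain of "osd"
    ceph osd erasure-code-profile set k8m3 k=8 m=3 ruleset-failure-domain=osd

    # verify the resulting profile
    ceph osd erasure-code-profile get k8m3

    # after the pool is created, check its rule: the choose step
    # should select items of type "osd", not "host"
    ceph osd crush rule dump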

Cheers

> 
>  
> 
> I then go to define my pool: “ceph osd pool create ecpool 256 256 erasure k8m3” apparently succeeds.
> 
> * Sidebar: my math on the pg_num stuff was (27 OSDs * 100)/11 = ~245, round up to 256.
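
(For reference, the usual rule of thumb is pg_num ≈ (number of OSDs × 100) / pool size, with size = k+m for an erasure-coded pool: 27 × 100 / 11 ≈ 245, and rounding up to the next power of two gives 256. The "9" in the health warning below looks like the raw ratio 256 / 27 ≈ 9, counted without the k+m multiplier.)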
> 
>  
> 
> Now I ask “ceph health”, and get:
> 
> HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs stuck unclean; too few pgs per osd (9 < min 20)
> 
>  
> 
> Digging into this a bit (“ceph health detail”), I see the magic OSD number (2147483647) that says there weren’t enough OSDs to assign to a placement group, /for all placement groups/.  And at the same time, it is warning me that I have too few PGs per OSD.
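
That magic number is 2^31 - 1, the value CRUSH reports for a slot in a PG's acting set when it could not find an OSD to fill it: with a failure domain of "host" and only one host, every slot after the first comes back empty. A quick way to see the affected PGs and their partial acting sets, using the stock CLI:

    # list PGs stuck inactive, with their up/acting sets
    ceph pg dump_stuck inactive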
> 
>  
> 
> At the moment, I am defining a traditional replicated pool (3X) to see if that will work…  Anyone have a guess as to what I may be doing incorrectly with my erasure-coded pool?  Or what I should do next to get a clue?
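
Since a profile's failure domain cannot safely be changed out from under an existing pool, the simplest recovery is probably to recreate both; a sketch using the names from the thread (the pool-delete safety flag is spelled as in the stock CLI):

    # drop the pool whose PGs can never go active
    ceph osd pool delete ecpool ecpool --yes-i-really-really-mean-it

    # recreate the profile with the failure domain set to "osd"
    ceph osd erasure-code-profile rm k8m3
    ceph osd erasure-code-profile set k8m3 k=8 m=3 ruleset-failure-domain=osd

    # recreate the pool and watch the PGs peer
    ceph osd pool create ecpool 256 256 erasure k8m3
    ceph health detail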
> 
>  
> 
> Regards,
> 
>  
> 
> -don-
> 
>  
> 

-- 
Loïc Dachary, Artisan Logiciel Libre

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
