Update: the attempt to define a traditional replicated pool was
successful; it’s online and ready to go.
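For the record, that pool was defined with something along these lines (“reppool” is just a name I made up, and size 3 may already be the default in your release):

    ceph osd pool create reppool 256 256 replicated
    ceph osd pool set reppool size 3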
So the cluster basics appear sound…

-don-

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Don Doerner

Hello, I am trying to set up to measure erasure coding performance and overhead.
My Ceph “cluster-of-one” has 27 disks, hence 27 OSDs, all empty.
I have lots of memory, and I am using “osd crush chooseleaf type = 0” in my config file, so my OSDs should be able to peer with others on the same host, right? I look at the EC profiles defined and see only “default”, which has k=2,m=1.
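For reference, the relevant fragment of my ceph.conf looks roughly like this (the [global] placement is just where I happened to put it; everything else is omitted):

    [global]
    # type 0 = osd, so chunks/replicas may land on multiple OSDs within the one host
    osd crush chooseleaf type = 0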
Wanting to set up a more realistic test, I defined a new profile, “k8m3”, similar to default but with k=8,m=3.
I checked it with “ceph osd erasure-code-profile get k8m3”, and all looks good. I then went to define my pool: “ceph osd pool create ecpool 256 256 erasure k8m3” apparently succeeded.
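For reference, the sequence was roughly the following (every profile parameter other than k and m was left at its default):

    # define the profile, changing only k and m
    ceph osd erasure-code-profile set k8m3 k=8 m=3
    # confirm the profile
    ceph osd erasure-code-profile get k8m3
    # create the erasure-coded pool with 256 placement groups
    ceph osd pool create ecpool 256 256 erasure k8m3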
Sidebar: my math on the pgnum stuff was (27 OSDs * 100) / 11 = ~246, rounded up to 256 (the next power of two).

Now I ask “ceph health”, and get:
HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs stuck unclean; too few pgs per osd (9 < min 20)

Digging into this a bit (“ceph health detail”), I see the magic OSD number (2147483647) that says there weren’t enough OSDs to assign to a placement group, and I see it for every one of the 256 placement groups. At the same time, it is warning me that I have too few PGs per OSD.

At the moment, I am defining a traditional replicated pool (3X) to see if that will work…
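The things I plan to poke at next, in case anyone can spot the problem from here (these are guesses on my part, not a known fix, and the PG id in the last command is just an example):

    # does the CRUSH rule created for the EC pool choose leaves of type "osd" or "host"?
    ceph osd crush rule dump
    # does the profile carry a failure-domain setting that overrides my chooseleaf config?
    ceph osd erasure-code-profile get k8m3
    # list the stuck PGs, then query one to see which OSDs it wanted
    ceph pg dump_stuck inactive
    ceph pg 1.0 query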
Anyone have any guess as to what I may be doing incorrectly with my erasure-coded pool?
Or what I should do next to get a clue?

Regards,
-don-