Re: Filestore OSD on CephFS?

Marc:
To clarify, there will be no direct client workload (which is what I mean by "active production workload"); the only data will be RBD images brought in from a remote cluster, either via rbd export/import or with this cluster acting as an rbd-mirror destination.  Obviously the best solution is dedicated hardware, but I don't have that.  The single OSD is simply because the underlying cluster is already erasure coded or replicated.
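
For reference, the two import paths look roughly like this; cluster, pool, and image names are placeholders, and the rbd-mirror peer/journaling setup is omitted:

One-shot copy via export/import:
# rbd --cluster remote export rbd/test-image - | rbd --cluster cephfs import - rbd/test-image

Continuous replication, with this cluster as the rbd-mirror destination:
# rbd --cluster cephfs mirror pool enable rbd image
# rbd --cluster remote mirror image enable rbd/test-image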

Thanks,

--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Veteran-Owned Business
1775 Wiehle Avenue Suite 101 | Reston, VA 20190
c: 228-547-8045 f: 571-266-3106
www.knightpoint.com 
DHS EAGLE II Prime Contractor: FC1 SDVOSB Track
GSA Schedule 70 SDVOSB: GS-35F-0646S
GSA MOBIS Schedule: GS-10F-0404Y
ISO 9001 / ISO 20000 / ISO 27001 / CMMI Level 3

Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, copy, use, disclosure, or distribution is STRICTLY prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.

On Jan 16, 2019, at 8:14 AM, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:


How can there be a "catastrophic reason" if you have "no active,
production workload"...? Do as you please. I also run 1x replication
for temp and test purposes. But if you have only one OSD, why use
Ceph? Choose the correct 'tool' for the job.





-----Original Message-----
From: Kenneth Van Alstyne [mailto:kvanalstyne@xxxxxxxxxxxxxxx]
Sent: 16 January 2019 15:04
To: ceph-users
Subject: [ceph-users] Filestore OSD on CephFS?

Disclaimer:  Even I will admit that I know this is going to sound like a
silly/crazy/insane question, but I have a reason for wanting to do this
and asking the question.  It's also worth noting that no active,
production workload will be placed on this cluster, so I'm worried
more about data integrity than performance or availability.

Can anyone think of any catastrophic reason why I cannot use an existing
cluster's CephFS filesystem as the backing store for a single filestore
OSD in a small cluster?  I've tested it and it seems to work with the
following caveats:
- 50% performance degradation (due to double write penalty since journal
and OSD data both are on the same backing cluster)
- Max object name and namespace length limits, which can be overcome
with the following OSD parameters (a ceph.conf sketch follows this list):
- osd max object name len = 256
- osd max object namespace len = 64
- Due to above name/namespace length limits, cluster should be limited
to RBD (which is exactly what I want to do)
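
For reference, a minimal sketch of what this looks like on the OSD host, assuming the remote CephFS is mounted at /var/lib/ceph as in the df output further down (the client name and secret file path are placeholders):

# mount -t ceph 10.0.0.1:/ceph-remote /var/lib/ceph -o name=admin,secretfile=/etc/ceph/remote.secret

And the relevant bits of ceph.conf for the CephFS-backed filestore OSD:

[osd.0]
    osd objectstore = filestore
    osd max object name len = 256
    osd max object namespace len = 64
    # data dir lives on the CephFS mount above
    osd data = /var/lib/ceph/osd/cephfs-0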

Some details of my cluster are below if anyone cares.  I'm getting a
consistent, solid roughly 50% of the underlying cluster's rados bench
numbers (a sample invocation is at the end of this message):
# ceph --cluster cephfs status
 cluster:
   id:     0f8904ce-754b-48d4-aa58-7ee6fe9e2cca
   health: HEALTH_OK

 services:
   mon:        1 daemons, quorum storage
   mgr:        storage(active)
   osd:        1 osds: 1 up, 1 in
   rbd-mirror: 1 daemon active

 data:
   pools:   1 pools, 32 pgs
   objects: 10  objects, 133 B
   usage:   12 MiB used, 87 GiB / 87 GiB avail
   pgs:     32 active+clean

 io:
   client:   85 B/s wr, 0 op/s rd, 0 op/s wr

# ceph --cluster cephfs versions
{
   "mon": {
       "ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)": 1
   },
   "mgr": {
       "ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)": 1
   },
   "osd": {
       "ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)": 1
   },
   "mds": {},
   "rbd-mirror": {
       "ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)": 1
   },
   "overall": {
       "ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e)
mimic (stable)": 4
   }
}


# ceph --cluster cephfs osd df
ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE VAR  PGS
0   hdd 0.08510  1.00000 87 GiB 16 MiB 87 GiB 0.02 1.00  32
                   TOTAL 87 GiB 16 MiB 87 GiB 0.02          
MIN/MAX VAR: 1.00/1.00  STDDEV: 0


# ceph --cluster cephfs df
GLOBAL:
   SIZE       AVAIL      RAW USED     %RAW USED
   87 GiB     87 GiB       16 MiB          0.02
POOLS:
   NAME     ID     USED      %USED     MAX AVAIL     OBJECTS
   rbd      1      133 B         0        83 GiB          10


# df -h /var/lib/ceph/osd/cephfs-0/
Filesystem             Size  Used Avail Use% Mounted on
10.0.0.1:/ceph-remote   87G   12M   87G   1% /var/lib/ceph
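
For completeness, the benchmark runs were along these lines (pool name, duration, and default thread/block sizes are illustrative), with the equivalent runs pointed at the underlying cluster for comparison:

# rados --cluster cephfs -p rbd bench 60 write --no-cleanup
# rados --cluster cephfs -p rbd bench 60 seq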

Thanks,

--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
