Re: osd memory target not working


Hi Farhad,


I wrote the underlying osd memory target code.  OSDs won't always use all of the memory if nothing is driving a need for it.  The primary driver of memory usage is the meta and data caches needing more memory to keep their hit rates high.  If you perform reads/writes across a large dataset, you should see the OSDs start using more memory and then oscillate near the target.
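A rough sketch of how to exercise this with standard Ceph tooling (the pool name "testpool" and the bench durations are illustrative assumptions, not from the cluster above):

```shell
# Drive reads/writes across a dataset so the OSD caches warm up.
# ("testpool" is an assumed throwaway pool; --no-cleanup keeps the
# written objects so the seq pass has something to read back.)
rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq

# Then watch the MEM USE column creep toward the configured target:
ceph orch ps | grep osd

# Per-daemon allocator detail, if you want to see where memory goes:
ceph tell osd.12 heap stats
```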


Mark


On 6/20/23 05:16, farhad kh wrote:
  When I set osd_memory_target to limit memory usage for an OSD, I
expected this value to be applied as a limit on the OSD container. But this
value does not show up as the limit in `docker stats` output. Is my
understanding of this process wrong?
-----------

[root@opcsdfpsbpp0201 ~]# ceph orch ps | grep osd.12
osd.12    opcsdfpsbpp0201   running (9d)   5m ago   9d   1205M   1953M   17.2.6   c9a1062f7289   bf27cfe16046
[root@opcsdfpsbpp0201 ~]# docker stats | grep osd
1253766d6a78   ceph-79a2627c-0821-11ee-a494-00505695c58c-osd-48   0.05%   2.237GiB / 7.732GiB   28.93%   0B / 0B   86.8GB / 562GB    63
2bc012e5c604   ceph-79a2627c-0821-11ee-a494-00505695c58c-osd-67   0.20%   727MiB / 7.732GiB     9.18%    0B / 0B   37.5GB / 1.29TB   63
dc0bf068050b   ceph-79a2627c-0821-11ee-a494-00505695c58c-osd-62   0.11%   360.5MiB / 7.732GiB   4.55%    0B / 0B   125MB / 1.85GB    63
c5f119a37652   ceph-79a2627c-0821-11ee-a494-00505695c58c-osd-55   0.12%   312.5MiB / 7.732GiB   3.95%    0B / 0B   86.6MB / 1.66GB   63
7f0b7b61807d   ceph-79a2627c-0821-11ee-a494-00505695c58c-osd-5    0.11%   299.4MiB / 7.732GiB   3.78%    0B / 0B   119MB / 1.6GB     63
dadffc77f7b6   ceph-79a2627c-0821-11ee-a494-00505695c58c-osd-40   0.11%   274MiB / 7.732GiB     3.46%    0B / 0B   110MB / 1.5GB     63
e439e58d907e   ceph-79a2627c-0821-11ee-a494-00505695c58c-osd-34   0.12%   355.9MiB / 7.732GiB   4.49%    0B / 0B   125MB / 1.78GB    63
5e500e2197d6   ceph-79a2627c-0821-11ee-a494-00505695c58c-osd-25   0.11%   273.3MiB / 7.732GiB   3.45%    0B / 0B   128MB / 1.55GB    63
a63709567669   ceph-79a2627c-0821-11ee-a494-00505695c58c-osd-19   0.11%   714.6MiB / 7.732GiB   9.03%    0B / 0B   89.8MB / 167GB    63
bf27cfe16046   ceph-79a2627c-0821-11ee-a494-00505695c58c-osd-12   0.16%   1.177GiB / 7.732GiB   15.23%   0B / 0B   40.8GB / 644GB    63
-----------
# ceph orch ps | grep osd | grep opcsdfpsbpp0201
osd.5     opcsdfpsbpp0201   running (9d)   6m ago   9d    298M   1953M   17.2.6   c9a1062f7289   7f0b7b61807d
osd.12    opcsdfpsbpp0201   running (9d)   6m ago   9d   1205M   1953M   17.2.6   c9a1062f7289   bf27cfe16046
osd.19    opcsdfpsbpp0201   running (9d)   6m ago   9d    704M   1953M   17.2.6   c9a1062f7289   a63709567669
osd.25    opcsdfpsbpp0201   running (9d)   6m ago   9d    273M   1953M   17.2.6   c9a1062f7289   5e500e2197d6
osd.34    opcsdfpsbpp0201   running (9d)   6m ago   9d    355M   1953M   17.2.6   c9a1062f7289   e439e58d907e
osd.40    opcsdfpsbpp0201   running (9d)   6m ago   9d    273M   1953M   17.2.6   c9a1062f7289   dadffc77f7b6
osd.48    opcsdfpsbpp0201   running (4h)   6m ago   9d   2290M   1953M   17.2.6   c9a1062f7289   1253766d6a78
osd.55    opcsdfpsbpp0201   running (9d)   6m ago   9d    312M   1953M   17.2.6   c9a1062f7289   c5f119a37652
osd.62    opcsdfpsbpp0201   running (9d)   6m ago   9d    359M   1953M   17.2.6   c9a1062f7289   dc0bf068050b
osd.67    opcsdfpsbpp0201   running (9d)   6m ago   9d    727M   1953M   17.2.6   c9a1062f7289   2bc012e5c604
----------------------------------

  # ceph config get mgr osd_memory_target
2048000000
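As a quick sanity check (shell arithmetic, not from the original message): the 2048000000-byte value above converts to exactly the 1953M shown in the limit column of `ceph orch ps`, so the configured target is what cephadm reports per daemon, while `docker stats` only shows the host's 7.732GiB.

```shell
# Convert the configured osd_memory_target from bytes to MiB.
# 2048000000 / 1024 / 1024 = 1953 (integer division), which matches
# the 1953M limit column in the ceph orch ps output above.
echo $((2048000000 / 1024 / 1024))   # prints 1953
```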
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

--
Best Regards,
Mark Nelson
Head of R&D (USA)

Clyso GmbH
p: +49 89 21552391 12
a: Loristraße 8 | 80335 München | Germany
w: https://clyso.com | e: mark.nelson@xxxxxxxxx

We are hiring: https://www.clyso.com/jobs/



