I did some investigation and tracked the high usage down to librados. I don't think Python has anything to do with it.

I also noticed that the memory usage is quite unpredictable. Sometimes I could do a whole 'ceph -s' with only 256M; most of the time I couldn't, and the program crashed at various points along the way.

I was going to instrument librados and try to track it further, but I found that Ceph is too complex and resource-consuming for me to build. I wonder if there is a way to build just librados without downloading and building 3 GiB of source code.

I hadn't thought before about starting the 'ceph' shell and looking at the process as it waits for a command, but I just did, and I see that the virtual memory size does vary a lot from one invocation to the next. Strange. It makes one think there's some kind of race or use of an unset variable.

So I looked at the memory map (/proc/PID/maps) and see in one run (where I got lucky and it fit in my 256M limit) 165 vmareas occupying 226 MiB (compared with 49 vmareas and 25 MiB for a Python shell). I'll look closer and see whether there are some particularly large ones and what varies from one invocation to the next.

>Is there a reason you're worried about the address space but not the
>actual RAM used?

Yes. The way I prevent programs from destroying my system with excessive real memory usage or paging, either by accident or through my ignorance, is by running with address-space rlimits. It's the best I can do; there is no rlimit on real memory or on paging rate. As it stands, every normal shell on my systems has an address-space limit of 256M, which has never been a problem before, but is majorly inconvenient now.

-- 
Bryan Henderson
San Jose, California

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
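P.S. For anyone repeating the /proc/PID/maps inspection above, a small Python sketch (Linux only; parse_maps and summarize are hypothetical helpers, not part of any Ceph tooling) that counts the vmareas and lists the largest mappings, which usually account for most of the address space:

```python
# Summarize /proc/<pid>/maps: count regions and show the biggest ones.
def parse_maps(lines):
    """Turn maps lines ('start-end perms ...') into (size_bytes, description)."""
    regions = []
    for line in lines:
        addrs, _, rest = line.partition(" ")
        start, end = (int(x, 16) for x in addrs.split("-"))
        regions.append((end - start, rest.strip()))
    return regions

def summarize(pid="self"):
    with open(f"/proc/{pid}/maps") as f:
        regions = parse_maps(f)
    total = sum(size for size, _ in regions)
    print(f"{len(regions)} vmareas, {total / 2**20:.1f} MiB")
    # The handful of largest mappings usually explain the total.
    for size, what in sorted(regions, reverse=True)[:5]:
        print(f"{size / 2**20:8.1f} MiB  {what}")
```

Running summarize(PID) against a waiting 'ceph' shell on two invocations should show which mappings differ between runs.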