On 11.01.22 г. 16:00 ч., Steven Rostedt wrote:
>>>> +PyObject *PyFtrace_set_affinity(PyObject *self, PyObject *args,
>>>> +				 PyObject *kwargs)
>>>> +{
>>>> +	struct tracefs_instance *instance;
>>>> +	static char *kwlist[] = {"cpus", "instance", NULL};
>>>> +	PyObject *py_cpus;
>>>> +	PyObject *py_inst = NULL;
>>>> +	const char *cpu_str;
>>>> +	struct trace_seq seq;
>>>> +	int ret;
>>>> +
>>>> +	if (!PyArg_ParseTupleAndKeywords(args,
>>>> +					 kwargs,
>>>> +					 "O|O",
>>>> +					 kwlist,
>>>> +					 &py_cpus,
>>>> +					 &py_inst)) {
>>>> +		return NULL;
>>>> +	}
>>>> +
>>>> +	trace_seq_init(&seq);
>>>
>>> There is a global trace_seq object that can be used here. Also you have
>>> to check for error. Perhaps having:
>>>
>>> 	if (!init_print_seq())
>>> 		return NULL;
>>
>> Don't we need mutex protection if we use a global object?
>> As far as I know Python is intrinsically single threaded. Only one
>> thread can execute Python code at once.
>
> Are you sure about that? A quick search produced this:
> https://realpython.com/intro-to-python-threading/
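One point in support of the mutex question: the GIL serializes individual bytecodes, not compound operations, so shared Python state can still race between threads and a lock still matters. A minimal sketch (hypothetical counter, not the trace_seq code itself):

```python
import threading

# Hypothetical shared state. "counter += 1" compiles to several
# bytecodes (load, add, store), and the interpreter may switch
# threads between them, so a lock is still needed despite the GIL.
counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:  # protects the read-modify-write
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; without it, often less
```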
Hmm, I am not 100% sure, but I still think this is the case. In your link, if you scroll down a bit, it says:

"If you're not sure if you want to use Python threading, asyncio, or multiprocessing, then you can check out Speed Up Your Python Program With Concurrency."
where "Speed Up Your Python Program With Concurrency" is a link to: https://realpython.com/python-concurrency/

If you scroll down to the first table there, you will see that multiprocessing is the only real way to run Python on multiple CPUs.
You can also have a look here: https://docs.python.org/3/library/threading.html

"In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing or concurrent.futures.ProcessPoolExecutor. However, threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously."
Thanks! Y.
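The last sentence of the quoted docs is easy to demonstrate: I/O-bound tasks (simulated here with sleep, during which the GIL is released) do overlap under threading. A rough sketch:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_task(_):
    time.sleep(0.2)  # stands in for blocking I/O, which releases the GIL

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(io_task, range(4)))
elapsed = time.monotonic() - start

# Four 0.2 s waits overlap, so wall time is roughly 0.2 s, not 0.8 s.
print(f"{elapsed:.2f}s")
```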
Multiprocessing is the only way to parallelize the execution.