Hi Reinette,

On 3/20/23 11:52, Reinette Chatre wrote:
> Hi Babu,
>
> On 3/20/2023 8:07 AM, Moger, Babu wrote:
>> On 3/16/23 15:33, Reinette Chatre wrote:
>>> On 3/16/2023 12:51 PM, Moger, Babu wrote:
>>>> On 3/16/23 12:12, Reinette Chatre wrote:
>>>>> On 3/16/2023 9:27 AM, Moger, Babu wrote:
>>>>>>> -----Original Message-----
>>>>>>> From: Reinette Chatre <reinette.chatre@xxxxxxxxx>
>>>>>>> Sent: Wednesday, March 15, 2023 1:33 PM
>>>>>>> To: Moger, Babu <Babu.Moger@xxxxxxx>; corbet@xxxxxxx;
>>>>>>> tglx@xxxxxxxxxxxxx; mingo@xxxxxxxxxx; bp@xxxxxxxxx
>>>>>>> Cc: fenghua.yu@xxxxxxxxx; dave.hansen@xxxxxxxxxxxxxxx; x86@xxxxxxxxxx;
>>>>>>> hpa@xxxxxxxxx; paulmck@xxxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx;
>>>>>>> quic_neeraju@xxxxxxxxxxx; rdunlap@xxxxxxxxxxxxx;
>>>>>>> damien.lemoal@xxxxxxxxxxxxxxxxxx; songmuchun@xxxxxxxxxxxxx;
>>>>>>> peterz@xxxxxxxxxxxxx; jpoimboe@xxxxxxxxxx; pbonzini@xxxxxxxxxx;
>>>>>>> chang.seok.bae@xxxxxxxxx; pawan.kumar.gupta@xxxxxxxxxxxxxxx;
>>>>>>> jmattson@xxxxxxxxxx; daniel.sneddon@xxxxxxxxxxxxxxx; Das1, Sandipan
>>>>>>> <Sandipan.Das@xxxxxxx>; tony.luck@xxxxxxxxx; james.morse@xxxxxxx;
>>>>>>> linux-doc@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
>>>>>>> bagasdotme@xxxxxxxxx; eranian@xxxxxxxxxx; christophe.leroy@xxxxxxxxxx;
>>>>>>> jarkko@xxxxxxxxxx; adrian.hunter@xxxxxxxxx; quic_jiles@xxxxxxxxxxx;
>>>>>>> peternewman@xxxxxxxxxx
>>>>>>> Subject: Re: [PATCH v3 1/7] x86/resctrl: Add multiple tasks to the resctrl group
>>>>>>> at once
>>>>>>>
>>>>>>> Hi Babu,
>>>>>>>
>>>>>>> On 3/2/2023 12:24 PM, Babu Moger wrote:
>>>>>>>> The resctrl task assignment for MONITOR or CONTROL group needs to be
>>>>>>>> done one at a time. For example:
>>>>>>>>
>>>>>>>> $mount -t resctrl resctrl /sys/fs/resctrl/
>>>>>>>> $mkdir /sys/fs/resctrl/clos1
>>>>>>>> $echo 123 > /sys/fs/resctrl/clos1/tasks
>>>>>>>> $echo 456 > /sys/fs/resctrl/clos1/tasks
>>>>>>>> $echo 789 > /sys/fs/resctrl/clos1/tasks
>>>>>>>>
>>>>>>>> This is not user-friendly when dealing with hundreds of tasks. Also,
>>>>>>>> there is a syscall overhead for each command executed from user space.
>>>>>>>
>>>>>>> To support this change it may also be helpful to add that moving tasks
>>>>>>> takes the mutex, so attempting to move tasks in parallel will not achieve
>>>>>>> a significant performance gain.
>>>>>>
>>>>>> Agree. It may not be a significant performance gain. Will remove this line.
>>>>>
>>>>> It does not sound as though you are actually responding to my comment.
>>>>
>>>> I am confused. I am already saying there is syscall overhead for each
>>>> command if we move the tasks one by one. Now do you want me to add "moving
>>>> tasks takes the mutex so attempting to move tasks in parallel will not
>>>> achieve a significant performance gain"?
>>>>
>>>> That seems contradictory, so I wanted to remove the line about performance.
>>>> Did I still miss something?
>>>
>>> Where is the contradiction?
>>>
>>> Consider your example:
>>> $echo 123 > /sys/fs/resctrl/clos1/tasks
>>> $echo 456 > /sys/fs/resctrl/clos1/tasks
>>> $echo 789 > /sys/fs/resctrl/clos1/tasks
>>>
>>> Yes, there is syscall overhead for each of the above lines. My statement was in
>>> support of this work by stating that a user aiming to improve performance by
>>> attempting the above in parallel would not be able to achieve a significant
>>> performance gain since the calls would end up being serialized.
>>
>> Ok. Sure. Will add the text. I may modify it a little bit.
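
Just to make the proposed interface concrete: with this series the same
assignment can then be done with a single write of a comma-separated list
(the format accepted by the parsing code quoted further down), for example:

$echo 123,456,789 > /sys/fs/resctrl/clos1/tasks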
>>>
>>> You are providing two motivations (a) "user-friendly when dealing with
>>> hundreds of tasks", and (b) syscall overhead. Have you measured the
>>> improvement this solution provides?
>>
>> No. I have not measured the performance improvement.
>
> The changelog makes a claim that the current implementation has overhead
> that is removed with this change. There is no data to support this claim.

My main motivation for this change is to make it user-friendly, so that users
can look up the PIDs and assign multiple tasks at a time. Originally I did not
have the line about performance, and I do not want to claim a performance
benefit. I will remove the performance claims.

>
> ...
>
>>>>>>>> +
>>>>>>>> +	buf[nbytes - 1] = '\0';
>>>>>>>> +
>>>>>>>>  	rdtgrp = rdtgroup_kn_lock_live(of->kn);
>>>>>>>>  	if (!rdtgrp) {
>>>>>>>>  		rdtgroup_kn_unlock(of->kn);
>>>>>>>>  		return -ENOENT;
>>>>>>>>  	}
>>>>>>>> +
>>>>>>>> +next:
>>>>>>>> +	if (!buf || buf[0] == '\0')
>>>>>>>> +		goto unlock;
>>>>>>>> +
>>>>>>>> +	pid_str = strim(strsep(&buf, ","));
>>>>>>>> +
>>>>>>>
>>>>>>> Could lib/cmdline.c:get_option() be useful?
>>>>>>
>>>>>> Yes. We could use that also. It may not be required for a simple case
>>>>>> like this.
>>>>>
>>>>> Please keep an eye out for how much of it you end up duplicating ....
>>>>
>>>> Using get_options() will require at least two calls (one to get the length
>>>> and then one to read the integers). We would also need to allocate the
>>>> integer array dynamically. That is a lot of code if we go that route.
>>>>
>>>
>>> I did not ask about get_options(), I asked about get_option().
>>
>> If you insist, I will use get_option(). But we still have to loop through
>> the whole string until get_option() returns 0. I can try that.
>
>
> I just asked whether get_option() could be useful. Could you please point out
> what I said that made you think that I insist on this change being made? If it
> matches your usage, then know it is available, if it does not, then don't use
> it.

Ok. I don't see a major benefit to using get_option() here, so I am not
planning to use it.

>
> ...
>
>>>> I can say "The failure pid will be logged in
>>>> /sys/fs/resctrl/info/last_cmd_status file."
>>>
>>> That will not be accurate. Not all errors include the pid.
>>
>> Can you please suggest something?
>
> last_cmd_status provides a 512 char buffer to communicate details
> to the user. The buffer is cleared before the loop that moves all the
> tasks starts. If an error is encountered, a detailed message is written
> to the buffer. One option may be to append a string to the buffer that
> includes the pid? Perhaps something like:
> rdt_last_cmd_printf("Error encountered while moving task %d\n", pid);

Ok. Will try to add and test it.

>
> Please feel free to improve.
>
> Reinette
>

--
Thanks
Babu Moger
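
P.S. For reference, this is roughly the shape of the loop I have in mind once
the error message is added. It is only an untested sketch continuing the write
handler quoted above, written as a while loop rather than the goto in the
patch; it assumes the existing single-task move helper (rdtgroup_move_task())
keeps its current arguments, and the variable declarations are as in the
patch, so details may differ in what I actually post:

	/* Walk the comma-separated PID list, e.g. "123,456,789". */
	while (buf && buf[0] != '\0') {
		pid_str = strim(strsep(&buf, ","));

		/* Reject anything that is not a valid, non-negative pid. */
		if (kstrtoint(pid_str, 0, &pid) || pid < 0) {
			rdt_last_cmd_printf("Invalid pid %s\n", pid_str);
			ret = -EINVAL;
			break;
		}

		/* Move one task; report the failing pid via last_cmd_status. */
		ret = rdtgroup_move_task(pid, rdtgrp, of);
		if (ret) {
			rdt_last_cmd_printf("Error encountered while moving task %d\n",
					    pid);
			break;
		}
	}

A get_option() based loop would have much the same shape (calling it repeatedly
until it returns 0), which is why I do not see a major benefit in switching to
it here.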