Austin, thanks for the idea, those tests will give more reliable results.
Itai, thanks for the idea, I will read more about ftrace.

Another newbie question: when I put a read or write call inside a thread
with real-time priority, can I say that the call is real-time scheduled,
or do I have to code it in a different way to "make it" real time? In
other words, is the fact that the function is called from an RT thread
with high priority enough to say it runs in real time? I am asking this
because I'm still learning how the real-time context fits into the
PREEMPT_RT kernel concept. (I've sketched what I have in mind, plus the
clock-resolution test, in C at the bottom, below the quoted thread.)

On Thu, 6 Aug 2020 at 16:56, Itai Handler <itai.handler@xxxxxxxxx> wrote:
>
> Hi Michel,
>
> I think that you should first measure the resolution of your clock.
> You can do it by calling clock_gettime repeatedly and saving the results into an array.
> Don’t run this test at a real-time priority, because your measurements may be affected by RT throttling in that case.
>
> If your clock resolution is good enough, you should proceed to analyze it using ftrace.
> Steven Rostedt gave many talks about ftrace, and you should be able to find some of his videos and slides online.
>
> Itai Handler
>
>
> On 5 Aug 2020, at 22:52, Michel Macena Oliveira <michel@xxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > Hi Itai,
> > I get the end time after the tcdrain function. I found that my
> > CONFIG_HZ is 250, and now that you have said it, the 4 ms makes
> > sense. To be honest, I've searched about this and I still don't
> > understand this frequency and how it relates to time functions
> > such as clock_gettime. Is it a kind of frequency "given" to the
> > user to use as a time base? I'm lost about which context it
> > applies to: kernel space, user space, or some abstraction level
> > of the system.
> > Could you elaborate or give me some good material as a reference?
> >
> > I searched about ftrace, and if I understand correctly both it and
> > the CONFIG_HZ option have to be selected at kernel compile time.
> >
> >> On Wed, 5 Aug 2020 at 16:21, Itai Handler
> >> <itai.handler@xxxxxxxxx> wrote:
> >>
> >> Hi Michel,
> >>
> >> If I understand correctly, you’re measuring the duration of the write function to a serial port.
> >> I think that there is no guarantee that this function will return before the port finishes transmitting all the data.
> >> However, that doesn’t explain why you’re observing a longer-than-expected duration.
> >> The difference in times of 4 ms (from 8 to 12) suggests that the scheduler is involved.
> >> I guess you’re working with CONFIG_HZ_250?
> >> May I suggest that you use function graph tracing (ftrace) to analyze it?
> >>
> >> Itai Handler
> >>
> >>
> >>
> >>>> On 5 Aug 2020, at 21:29, Michel Macena Oliveira <michel@xxxxxxxxxxxxxxxxxxxx> wrote:
> >>>
> >>> Hi, thanks for your answers.
> >>>
> >>> I've written two simple programs that write a buffer of characters
> >>> to the serial port. I'm reading the characters on another computer
> >>> (Windows 10, Termite as the serial monitor). I did a simple timing
> >>> analysis to measure how long it takes to write a given number of
> >>> bytes to the port with both programs.
> >>> In the first program the write function is called from the main
> >>> function; in the second I put it inside a thread function with
> >>> sched priority 90.
> >>>
> >>> I did the timing analysis using the clock_gettime function with the
> >>> CLOCK_REALTIME flag to get the time before and after the write call
> >>> inside a testing loop. Then I took the time measurements, calculated
> >>> the average, and stored the maximum value.
> >>> I got the same results with the non-threaded (NT) and threaded (T)
> >>> programs, but I found something weird with both.
> >>>
> >>> For 1000 testing loops:
> >>>
> >>> buffer size: 1
> >>> Average Time: NT= 8.044138785 ms T= 8.126175254 ms
> >>> Maximum Time: NT= 12.069355000 ms T= 12.102010000 ms
> >>>
> >>> buffer size: 48
> >>> Average Time: NT= 8.021624309 ms T= 8.038524315 ms
> >>> Maximum Time: NT= 12.018302000 ms T= 12.054890000 ms
> >>>
> >>> buffer size: 49
> >>> Average Time: NT= 12.010010796 ms T= 12.044165909 ms
> >>> Maximum Time: NT= 16.019277000 ms T= 16.038130000 ms
> >>>
> >>> buffer size: 96
> >>> Average Time: NT= 12.057916205 ms T= 12.054206314 ms
> >>> Maximum Time: NT= 16.101405000 ms T= 16.040888000 ms
> >>>
> >>> buffer size: 144
> >>> Average Time: NT= 16.043416532 ms T= 16.020754198 ms
> >>> Maximum Time: NT= 20.069923000 ms T= 20.059298000 ms
> >>>
> >>> buffer size: 145
> >>> Average Time: NT= 20.039547930 ms T= 20.048668453 ms
> >>> Maximum Time: NT= 24.029462000 ms T= 24.038915000 ms
> >>>
> >>> The first weird thing, as you can see, is that the time taken stays
> >>> the same over a whole range of buffer sizes and only jumps at
> >>> multiples of 48 bytes. It also doesn't change between the
> >>> prioritized threaded version and the non-threaded version.
> >>>
> >>> The second weird thing is that some buffer sizes that are multiples
> >>> of 24 give the message "stack smashing detected". Two such values
> >>> are 72 and 120 bytes.
> >>>
> >>> Any ideas what this could be?
> >>>
> >>> My serial port settings:
> >>> - 115200 baud, 8 data bits, no parity bit, 1 stop bit
> >>> - ASYNC_LOW_LATENCY flag is set
> >>> - Raw input/output
> >>>
> >>> Also, with the settings above each transmitted frame is 10 bits
> >>> (given the 8N1 setting), of which 8 bits are useful data, so the
> >>> link delivers 11520 chars (or useful bytes) per second. For
> >>> example, sending 96 bytes should take 96/11520 seconds, or approx.
> >>> 8.33 ms. I know that there is overhead from system calls and other
> >>> OS complexities, but is there any way to reduce the time taken from
> >>> 12 ms to something at least closer to 8.33 ms?
> >>>
> >>>> On Thu, 16 Jul 2020 at 17:40, Austin Schuh
> >>>> <austin.linux@xxxxxxxxx> wrote:
> >>>>
> >>>>> On Thu, Jul 16, 2020 at 1:21 PM Michel Macena Oliveira
> >>>>> <michel@xxxxxxxxxxxxxxxxxxxx> wrote:
> >>>>>
> >>>>> On Thu, 16 Jul 2020 at 16:26, Itai Handler
> >>>>> <itai.handler@xxxxxxxxx> wrote:
> >>>>>>
> >>>>>> Hi Michel,
> >>>>>>
> >>>>>> On 7/16/20, Michel Macena Oliveira <michel@xxxxxxxxxxxxxxxxxxxx> wrote:
> >>>>>>> Hi, sorry for my newbie question,
> >>>>>>> What is the status of the serial port driver?
> >>>>>>> I've written some code to use serial ports on the generic Linux
> >>>>>>> kernel, and now I want to write something similar for the
> >>>>>>> real-time kernel. Do I have to use a specific library, flags or
> >>>>>>> anything else, or are the settings just the same?
> >>>>>>>
> >>>>>>> I'm asking because now I need to take the serial access into
> >>>>>>> account in the scheduler to ensure real-time behavior.
> >>>>>>>
> >>>>>>> Any help, advice, hints or tips would be appreciated.
> >>>>>>>
> >>>>>>
> >>>>>> Do you mean that you wrote some code that works with a /dev/tty* serial port?
> >>>>>>
> >>>>>> In some kernel versions (e.g. 4.19) we saw high latencies in the rx
> >>>>>> path of serial drivers.
> >>>>>> I think that this is because rx is handled in the context of a
> >>>>>> workqueue which also handles other system activities.
> >>>>>> We found a workaround for this problem. If I remember correctly,
> >>>>>> we create a dedicated thread for the port and assign it a high
> >>>>>> priority.
> >>>>>>
> >>>>>> Itai Handler
> >>>>>
> >>>>> Thanks for the answer.
> >>>>> Yes, I wrote code that works with a /dev/tty* serial port using
> >>>>> termios.h.
> >>>>> I thought of exactly what you suggested, using a dedicated thread,
> >>>>> but how do I implement it properly?
> >>>>> I mean, in a simple way: do I just need to put the open and write
> >>>>> calls inside the high-priority (RT) thread?
> >>>>>
> >>>>> Current kernel: 4.4.208-rt198 #1 SMP PREEMPT RT
> >>>>
> >>>> Itai is talking about the kernel work to handle the serial data, not
> >>>> the userspace work. We found the same thing, that the work goes
> >>>> through a work queue in the kernel, which adds unbounded latency.
> >>>>
> >>>> https://www.spinics.net/lists/linux-serial/msg07341.html looks
> >>>> relevant and matches my memory of what we did to solve this. There's
> >>>> either a flag in userspace or a patch to apply to the kernel (and
> >>>> then a flag) to skip the work queue.
> >>>>
> >>>> Austin
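
Here is, roughly, the threaded version I have in mind, so you can tell me
whether this is what "put the calls inside the RT thread" means. It is only
a minimal sketch: the device path, the priority 90 and the termios setup
are just the values from my tests, most error handling is left out, and it
has to be built with -lpthread and run with the privileges needed for
SCHED_FIFO.

#include <fcntl.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>

#define SERIAL_DEV "/dev/ttyS0"   /* placeholder, I open my real port here */

static void *serial_writer(void *arg)
{
    const char *msg = "test message";
    int fd = open(SERIAL_DEV, O_RDWR | O_NOCTTY);

    if (fd < 0) {
        perror("open");
        return NULL;
    }
    /* termios configuration (115200, 8N1, raw, ASYNC_LOW_LATENCY) left
     * out here; it is the same as in my earlier tests. */

    if (write(fd, msg, strlen(msg)) < 0)
        perror("write");
    tcdrain(fd);    /* wait until the UART has actually sent everything */

    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param = { .sched_priority = 90 };
    int ret;

    /* Ask for an explicit SCHED_FIFO priority instead of inheriting the
     * (non-RT) scheduling attributes of main(). */
    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &param);

    ret = pthread_create(&tid, &attr, serial_writer, NULL);
    if (ret != 0) {
        /* Typically EPERM when run without root / RT rlimits. */
        fprintf(stderr, "pthread_create: %s\n", strerror(ret));
        return 1;
    }
    pthread_join(tid, NULL);
    return 0;
}

My doubt is whether this alone is enough to call the write "real time",
or whether the kernel side (the workqueue you and Austin mentioned) still
has to be dealt with separately.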
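
And Itai, for the clock-resolution measurement, I understand you mean
something along these lines (again just a sketch, run at normal priority
as you suggested; the 1000 samples and CLOCK_MONOTONIC are arbitrary
choices on my part, I could use CLOCK_REALTIME just as well):

#include <stdio.h>
#include <time.h>

#define NSAMPLES 1000

int main(void)
{
    struct timespec ts[NSAMPLES];
    struct timespec res;
    long long min_delta = -1;
    int i;

    /* Sample the clock back to back into an array and only look at the
     * deltas afterwards, so printing doesn't pollute the measurement. */
    for (i = 0; i < NSAMPLES; i++)
        clock_gettime(CLOCK_MONOTONIC, &ts[i]);

    for (i = 1; i < NSAMPLES; i++) {
        long long delta =
            (ts[i].tv_sec - ts[i - 1].tv_sec) * 1000000000LL
            + (ts[i].tv_nsec - ts[i - 1].tv_nsec);
        /* Consecutive reads can be identical on a coarse clock, so only
         * non-zero steps say something about the usable resolution. */
        if (delta > 0 && (min_delta < 0 || delta < min_delta))
            min_delta = delta;
    }
    printf("smallest observed non-zero step: %lld ns\n", min_delta);

    /* Cross-check against what the kernel claims the resolution is. */
    clock_getres(CLOCK_MONOTONIC, &res);
    printf("clock_getres: %ld ns\n", res.tv_nsec);
    return 0;
}

Is this roughly what you had in mind, and is the smallest non-zero step
the number I should compare against the 4 ms jumps I am seeing?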