Increasing the Linux timer frequency could bring performance and optimization benefits

At some point, many of us have dared to play with the governors on our systems, usually in pursuit of performance improvements or for a particular purpose, such as a specific workload, application, or game.

These kinds of modifications are not usually shipped in general-purpose distributions, and understandably so, since they translate into an increase or decrease (depending on the case) in the resources our machine consumes.

I mention this because a Google engineer recently proposed changing the default setting of the Linux kernel timer, raising its interrupt rate from 250 Hz to 1000 Hz.

This change would mean more frequent task switching and a smaller time quantum in the task scheduler, which could improve efficiency in certain scenarios. The current 250 Hz setting is considered a balance between performance, latency, and power consumption.
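To make the numbers concrete, here is a quick sketch of the tick periods involved (the HZ values come from the proposal; everything else is simple arithmetic):

```python
# Tick period = 1 / HZ: the kernel timer interrupt fires once per tick.
for hz in (250, 1000):
    tick_ms = 1000 / hz  # period in milliseconds
    print(f"HZ={hz}: timer tick every {tick_ms:g} ms")
# At 250 Hz a tick lasts 4 ms; at 1000 Hz it shrinks to 1 ms,
# so scheduling decisions can be made four times as often.
```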

Motivation for the proposal

One of the main arguments in favor of the change is performance optimization on devices with 120 Hz displays, which are increasingly common in PCs and mobile devices. At the current 250 Hz setting, the time-quantization error amounts to about half of the frame time, which hurts resource-allocation efficiency.
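The "half of the frame time" figure checks out with simple arithmetic (my calculation, not taken from the proposal itself):

```python
frame_ms = 1000 / 120  # frame time on a 120 Hz display, about 8.33 ms
tick_ms = 1000 / 250   # timer tick at 250 Hz, exactly 4 ms
print(f"frame: {frame_ms:.2f} ms, tick: {tick_ms:g} ms, "
      f"ratio: {tick_ms / frame_ms:.0%}")  # the tick is ~48% of a frame
```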

Furthermore, it has been observed that the dynamic voltage and frequency scaling (DVFS) mechanism tends to adopt aggressive frequency-selection strategies to avoid slowdowns. This can result in unnecessary power consumption when a task has already finished processing but the processor continues to run at a higher frequency because its time quantum has not yet expired.

Increasing the task-switching frequency would allow:

  • Improved efficiency in dynamic frequency management (DVFS).
  • More precise allocation of task scheduler times.
  • Increased frequency of updating CPU load statistics.
  • Reduced waiting time for pending tasks.

Arguments against the change

For his part, another Google engineer expressed disagreement with the change, arguing that keeping the timer frequency at 250 Hz is more beneficial for low-power devices, such as IoT boards and mobile devices.

According to his assessment, increasing the frequency to 1000 Hz could raise power consumption. On Android devices, for example, increases of up to 7% in processor power consumption have been observed in certain situations.

Furthermore, a higher timer frequency would mean more frequent CPU wakeups. At 250 Hz, timers scheduled at t+1 ms, t+2 ms, t+3 ms, and t+4 ms are grouped into a single wakeup, while at 1000 Hz they would trigger four individual wakeups, which could increase power consumption.
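The batching effect can be sketched by rounding each timer's deadline up to the next tick boundary. This is a simplified model (the real kernel uses hrtimers and timer slack), but it captures the grouping idea:

```python
import math

def wakeups(deadlines_ms, hz):
    """Group timer deadlines by the tick boundary they round up to."""
    tick = 1000 / hz
    return sorted({math.ceil(t / tick) * tick for t in deadlines_ms})

timers = [1, 2, 3, 4]  # timers at t+1..t+4 ms, as in the example above
print(wakeups(timers, 250))   # [4.0] -> a single CPU wakeup at the 4 ms tick
print(wakeups(timers, 1000))  # [1.0, 2.0, 3.0, 4.0] -> four separate wakeups
```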

Performance test results

On this subject, the Phoronix portal conducted a series of tests on a PC with an AMD Ryzen 9 9950X CPU to evaluate the impact of the frequency change. The results were mixed:

  • Better performance at 1000 Hz in:
    Call.cpp
    Super Tux Kart
    Kernel compilation times
  • Better performance at 250 Hz in:
    Darktable
    PostgreSQL
    Unvanquished
    Xonotic
    Blender
    SVT-AV1
    RawTherapee

As for power consumption, the results were as follows:

  • At 1000 Hz:
    Average consumption: 144.2 W
    Minimum consumption: 0.18 W
    Maximum consumption: 202.13 W
  • At 250 Hz:
    Average consumption: 144.37 W
    Minimum consumption: 0.07 W
    Maximum consumption: 202 W
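Taken at face value, the average figures above differ by only a fraction of a percent (my arithmetic on the reported numbers, not a conclusion drawn by Phoronix):

```python
avg_1000, avg_250 = 144.2, 144.37  # average draw in watts, from the table
delta_pct = (avg_250 - avg_1000) / avg_250 * 100
print(f"average draw difference: {delta_pct:.2f}%")  # roughly 0.12%
```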

Finally, it is worth mentioning that increasing the kernel timer interrupt rate to 1000 Hz offers advantages in certain use cases, especially in applications that require more frequent task switching and on devices with high refresh rate displays. However, it also presents disadvantages in terms of power consumption, particularly on low-power devices and environments where energy efficiency is a priority.

For the moment, the proposal is still under debate within the community, and its adoption will depend on a deeper analysis of its impact across different usage scenarios.