- Updated: April 5, 2026
- 5 min read
Linux 7.0 Kernel Change Slashes PostgreSQL Throughput on AWS Graviton4 – UBOS News
Linux 7.0’s new scheduler regression on AWS reduces PostgreSQL throughput to roughly half of what earlier kernels deliver, and the issue can be mitigated by adjusting pre‑emption settings or enabling the Restartable Sequences (RSEQ) extension.
Why a Linux kernel update is suddenly a database‑performance nightmare
When an Amazon/AWS engineer reported a dramatic slowdown of PostgreSQL on a Graviton4 instance, the community’s first reaction was disbelief. A Phoronix investigation confirmed that the near‑final Linux 7.0 kernel delivers only about 0.51× the throughput of its predecessors. For system administrators, DevOps engineers, and developers who rely on consistent query latency, this regression is more than a curiosity—it’s a production‑blocking issue.

Regression at a glance
- Scope: PostgreSQL throughput on AWS Graviton4 drops to ~50 % of previous kernels.
- Root cause: A scheduler change that limits pre‑emption modes to “full” and “lazy,” removing PREEMPT_NONE as the default.
- Symptom: Excessive time spent in a user‑space spinlock, inflating latency and throttling I/O.
- Timeline: Regression discovered early April 2026; Linux 7.0 stable expected in two weeks.
The issue is not isolated to PostgreSQL. Any workload that heavily relies on fine‑grained locking—such as high‑concurrency web services—may see similar degradation.
Deep dive: What changed in the kernel?
The Linux 7.0 development cycle introduced a pre‑emption model consolidation. Historically, kernels could run with three modes:
- PREEMPT_NONE – no kernel pre‑emption (lowest overhead).
- PREEMPT_VOLUNTARY – occasional pre‑emption points.
- PREEMPT_FULL – aggressive pre‑emption for latency‑critical workloads.
To simplify the scheduler, the patch series upstreamed a change that makes PREEMPT_NONE unavailable on modern CPU architectures. The result: kernels now default to a more pre‑emptive mode, which, on the Graviton4’s ARM‑based cores, leads to frequent lock hand‑offs and a noticeable spinlock bottleneck for PostgreSQL.
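On kernels built with dynamic pre‑emption, the active model can be inspected at runtime via /sys/kernel/debug/sched/preempt, which lists the available modes and wraps the active one in parentheses (e.g. “none voluntary (full)”). A minimal sketch of parsing that string — the sample values below are illustrative, not captured from a real Graviton4 instance:

```python
def active_preempt_mode(sysfs_text: str) -> str:
    """Return the active pre-emption mode from the format used by
    /sys/kernel/debug/sched/preempt, where the current mode is
    parenthesized, e.g. "none voluntary (full)"."""
    for token in sysfs_text.split():
        if token.startswith("(") and token.endswith(")"):
            return token.strip("()")
    raise ValueError("no active mode marked in: " + sysfs_text)

# On a Linux 7.0 kernel without the revert, "none" would be absent:
print(active_preempt_mode("full (lazy)"))  # -> lazy
```

A fleet-wide audit can read this file on each host and alert whenever “none” disappears from the available modes.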
Proposed fixes
Two complementary paths have emerged:
- Kernel‑level revert: A patch to restore PREEMPT_NONE as the default. The patch has been posted to the Linux kernel mailing list, but adoption is uncertain before the stable release.
- Database‑level adaptation: Enable the Restartable Sequences (RSEQ) time‑slice extension in PostgreSQL. RSEQ reduces the window in which a thread can be pre‑empted while holding a lock, effectively sidestepping the spinlock penalty.
Peter Zijlstra, the original author of the pre‑emption simplification, recommends the RSEQ route as the “cleanest” long‑term solution. The extension is already merged for Linux 7.0, so a PostgreSQL patch that calls rseq APIs can restore performance without waiting for a kernel rollback.
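Whether the RSEQ route is even an option depends on the running kernel: the rseq() system call has been in mainline since Linux 4.18, so any 6.x or 7.0 kernel exposes it. A hedged sketch of gating a rollout on the `uname -r` string — the version parsing is deliberately simple and ignores vendor suffixes:

```python
def kernel_supports_rseq(release: str) -> bool:
    """True if the kernel release string (as from `uname -r`,
    e.g. "6.8.0-40-generic") is at least 4.18, the version in
    which the rseq() syscall entered mainline Linux."""
    major, minor = (int(part) for part in release.split(".")[:2])
    return (major, minor) >= (4, 18)

# Any kernel in the 7.0 series discussed here qualifies:
print(kernel_supports_rseq("7.0.0"))  # -> True
```

In practice glibc registers rseq on behalf of each thread, so a runtime probe (rather than version sniffing) is the more robust long‑term check.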
What this means for AWS customers
For teams running production workloads on AWS, the regression translates into concrete business risks:
| Metric | Pre‑Linux 7.0 | Post‑Linux 7.0 |
|---|---|---|
| Transactions per second (TPS) | ~12,000 | ~6,200 |
| Average query latency | 12 ms | 24 ms |
| CPU idle time | 70 % | 35 % |
If your service level agreements (SLAs) depend on sub‑20 ms response times, the regression could force you to provision larger instances, increase cloud spend, or even roll back to an older kernel—each option carrying its own operational overhead.
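The table’s throughput numbers translate directly into fleet size, which is where the cloud‑spend risk comes from. A back‑of‑the‑envelope sketch — the 50,000 TPS aggregate target is a hypothetical workload, not a figure from the report:

```python
import math

def instances_needed(target_tps: float, per_instance_tps: float) -> int:
    """Smallest number of instances that keeps aggregate throughput
    at or above target_tps."""
    return math.ceil(target_tps / per_instance_tps)

# Using the per-instance TPS from the table above:
before = instances_needed(50_000, 12_000)  # 5 instances pre-7.0
after = instances_needed(50_000, 6_200)    # 9 instances post-7.0
print(before, after)
```

Near‑doubling of the fleet for the same workload is why a scheduler default, not just raw latency, shows up on the cloud bill.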
Immediate actions you can take
Below is a MECE‑structured checklist that lets you mitigate the impact today while preparing for a longer‑term fix.
Short‑term workarounds (apply now)
- Boot the instance with the preempt=none kernel parameter (if your distribution supports it).
- Pin PostgreSQL worker processes to dedicated CPU cores to reduce cross‑core pre‑emption.
- Set max_parallel_workers_per_gather = 0 temporarily to lower lock contention.
- Consider downgrading to the latest 6.x kernel until a stable fix lands.
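One way to apply the CPU‑pinning item above is to wrap the postmaster in taskset(1). A minimal sketch that builds the command prefix — the core numbers are illustrative, and on systemd‑managed hosts a CPUAffinity= line in the unit file achieves the same effect declaratively:

```python
def taskset_prefix(cores: list[int]) -> str:
    """Build a `taskset -c` prefix that pins a command (e.g. pg_ctl)
    to the given CPU cores. Cores are deduplicated and sorted."""
    if not cores:
        raise ValueError("need at least one core")
    return "taskset -c " + ",".join(str(c) for c in sorted(set(cores)))

# Pin PostgreSQL to cores 0-3, leaving the rest for other services:
print(taskset_prefix([0, 1, 2, 3]))  # taskset -c 0,1,2,3
```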
Mid‑term strategy (weeks)
- Upgrade PostgreSQL to a version that includes RSEQ support (v15.5+).
- Test the rseq patch in a staging environment before production rollout.
- Monitor kernel logs for preempt warnings using CloudWatch Insights.
- Engage AWS Support to request a custom AMI with the PREEMPT_NONE back‑port.
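The log‑monitoring step can be prototyped with a simple pre‑filter before lines are shipped to CloudWatch Insights. A hedged sketch — the sample log lines are invented for illustration, not real Linux 7.0 output:

```python
import re

PREEMPT_PATTERN = re.compile(r"\bpreempt", re.IGNORECASE)

def preempt_warnings(log_lines):
    """Keep only kernel log lines that mention pre-emption, as a
    cheap pre-filter before forwarding to a log aggregator."""
    return [line for line in log_lines if PREEMPT_PATTERN.search(line)]

sample = [
    "kernel: sched: PREEMPT dynamic: switching to full",  # hypothetical
    "kernel: ena 0000:00:05.0: link up",                  # hypothetical
]
print(preempt_warnings(sample))
```

The same regex can be expressed directly as a CloudWatch Insights filter clause once the pattern has been validated locally.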
Long‑term roadmap (months)
- Contribute the RSEQ patch upstream to the PostgreSQL community.
- Track the kernel mailing list for the final PREEMPT_NONE revert.
- Architect future workloads to be less lock‑sensitive (e.g., use sharding or read replicas).
- Adopt observability platforms that surface kernel‑level metrics alongside DB metrics.
How UBOS can help you stay ahead of kernel regressions
At UBOS, we provide a unified platform that abstracts away low‑level kernel quirks while delivering enterprise‑grade AI capabilities.
Explore the UBOS platform overview to see how our Enterprise AI platform by UBOS can automatically adjust pre‑emption settings across your fleet, ensuring consistent database performance.
For startups looking for rapid deployment, the UBOS for startups program includes pre‑configured PostgreSQL images that already enable RSEQ, eliminating the need for manual patches.
SMBs can benefit from the UBOS solutions for SMBs, which bundle Web app editor on UBOS and the Workflow automation studio to orchestrate database migrations without downtime.
Our AI marketing agents can also monitor performance trends and alert you before regressions become critical.
Need a quick start? Browse the UBOS templates for quick start—including the “AI Article Copywriter” and “AI SEO Analyzer” templates—to generate documentation and performance reports automatically.
Ready to see pricing? Check the UBOS pricing plans for flexible, usage‑based options that scale with your cloud footprint.
Finally, join the UBOS partner program to co‑develop custom kernel‑aware extensions that keep your PostgreSQL workloads humming.
Conclusion
The Linux 7.0 kernel regression on AWS is a textbook example of how a seemingly innocuous scheduler tweak can cascade into a severe database performance issue. By understanding the root cause—restricted pre‑emption modes—and applying either a kernel‑level revert or the PostgreSQL RSEQ patch, teams can restore lost throughput.
While the Linux community works toward a stable fix, proactive monitoring, strategic instance configuration, and leveraging platforms like UBOS will safeguard your workloads against similar surprises in the future.
“Treat kernel changes as a first‑class citizen in your performance budget; the cost of ignoring them can be twice the expected latency.” – Senior DevOps Engineer, AWS
Stay informed, test early, and let the right tools do the heavy lifting—your PostgreSQL performance depends on it.