author    Lai Jiangshan <jiangshan.ljs@antgroup.com>  2026-01-23 17:03:03 +0800
committer Sean Christopherson <seanjc@google.com>     2026-03-12 10:36:01 -0700
commit    b3ae3ceb556945724d0c046ddb4ea0cf492a0ce6 (patch)
tree      3df01ebfdf04a8e792cc2388e6c5f45170908156
parent    ecb80629321306547f7ad13b0ca5ef9cf8cdbb77 (diff)
KVM: x86/mmu: Skip unsync when large pages are allowed
Use the large-page metadata to avoid pointless shadow-page lookups. If the
target GFN falls within a range where a large page is allowed, then there
cannot be a shadow page for that GFN; a shadow page in the range would itself
disallow using a large page. In that case, there is nothing to unsync and
mmu_try_to_unsync_pages() can return immediately.

This is always true for the TDP MMU without nested TDP, and it holds for a
significant fraction of cases with shadow paging, even when all SPs are 4K.

For shadow paging, this optimization theoretically avoids work for about
1/e ~= 37% of GFNs, assuming one guest page table per 2M of memory and that
each GPT falls randomly into one of the 2M buckets. In a simple test setup,
unsync was skipped in a much higher percentage of cases, mainly because the
guest buddy allocator clusters GPTs into fewer buckets.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Link: https://patch.msgid.link/20260123090304.32286-2-jiangshanlai@gmail.com
[sean: check for hugepage after write-tracking, update comment]
Signed-off-by: Sean Christopherson <seanjc@google.com>
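The early-return logic can be sketched in userspace C. This is a minimal,
hypothetical model, not the actual KVM code: `struct lpage_demo_slot`,
`hugepage_allowed()` and `demo_try_to_unsync()` are invented stand-ins for
KVM's per-memslot lpage_info bookkeeping, where a nonzero disallow count on a
2M range means something in that range (a write-tracked page or an existing
shadow page) forbids a large mapping.

```c
#include <stdbool.h>

#define PAGES_PER_2M	512u	/* 4K GFNs per 2M large-page range */
#define NR_2M_RANGES	8u	/* tiny demo slot: 8 * 2M */

/* Hypothetical, simplified model of KVM's per-2M disallow_lpage counters. */
struct lpage_demo_slot {
	unsigned int disallow_lpage[NR_2M_RANGES];
};

static bool hugepage_allowed(const struct lpage_demo_slot *slot,
			     unsigned long gfn)
{
	return slot->disallow_lpage[gfn / PAGES_PER_2M] == 0;
}

/*
 * Sketch of the optimization: if a large page is allowed at @gfn, no shadow
 * page can exist for that GFN (an SP in the range would have bumped the
 * disallow count), so there is nothing to unsync.  Returns 0 when unsync is
 * skipped, 1 when the (elided) slow path would run.
 */
int demo_try_to_unsync(const struct lpage_demo_slot *slot, unsigned long gfn)
{
	if (hugepage_allowed(slot, gfn))
		return 0;	/* nothing to unsync, return immediately */

	/* ... otherwise walk the SP hash and unsync, as before ... */
	return 1;
}
```

Note the check must come after write-tracking is accounted for (per the
bracketed note above): a write-tracked GFN also bumps the disallow count, so
ordering the checks wrongly would skip the tracking path.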
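The 1/e figure follows from a balls-into-bins argument: with one GPT per 2M on
average, a given 2M bucket is empty with probability (1 - 1/n)^n, which tends
to 1/e as n grows. A quick Monte Carlo check, using a hypothetical helper
`empty_bucket_fraction()` with a small local PRNG (nothing here is from the
patch itself):

```c
#include <stdlib.h>

/* Tiny 64-bit LCG; the high bits are uniform enough for this estimate. */
static unsigned long lcg_next(unsigned long *state)
{
	*state = *state * 6364136223846793005UL + 1442695040888963407UL;
	return *state >> 33;
}

/*
 * Drop n GPTs uniformly at random into n 2M buckets (i.e. a density of one
 * GPT per 2M) and return the fraction of buckets left empty.  Expected
 * value: (1 - 1/n)^n -> 1/e ~= 0.3679 for large n.
 */
double empty_bucket_fraction(unsigned int n, unsigned long seed)
{
	unsigned char *occupied = calloc(n, 1);
	unsigned long state = seed;
	unsigned int i, empty = 0;

	if (!occupied)
		return -1.0;

	for (i = 0; i < n; i++)
		occupied[lcg_next(&state) % n] = 1;
	for (i = 0; i < n; i++)
		if (!occupied[i])
			empty++;

	free(occupied);
	return (double)empty / n;
}
```

An empty bucket corresponds to a GFN range where a large page stays allowed,
so roughly that fraction of unsync attempts can bail out early; real guests do
better because, as noted above, the buddy allocator clusters GPTs.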