| author | Loïc Molinari <loic.molinari@collabora.com> | 2025-12-05 19:22:23 +0100 |
|---|---|---|
| committer | Boris Brezillon <boris.brezillon@collabora.com> | 2025-12-08 10:52:47 +0100 |
| commit | 211b9a39f2619b9c0d85bcd48aeb399397910b42 (patch) | |
| tree | 6df14032529c8857d7a8d99ae73b556d69b6dc96 | |
| parent | 9d2d49027c3a9628989c9ec70ebef9d241f49c1e (diff) | |
drm/shmem-helper: Map huge pages in fault handler
Attempt a PMD-sized PFN insertion into the VMA when the faulting address
handled by the fault handler lies within a huge page.
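As an illustrative sketch only (not the actual patch): drm_gem_shmem_try_map_pmd() is named in the changelog below, but its body here is assumed, and the exact signature of vmf_insert_pfn_pmd() varies across kernel versions; such a path could plausibly look like:

```c
/* Illustrative sketch only -- not the code from this patch. */
static vm_fault_t drm_gem_shmem_try_map_pmd(struct vm_fault *vmf,
					    struct folio *folio)
{
	unsigned long pmd_addr = vmf->address & PMD_MASK;

	/* Only attempt a PMD insertion when the fault hits a PMD-sized
	 * folio and the whole PMD range fits inside the VMA. */
	if (folio_order(folio) != PMD_ORDER ||
	    pmd_addr < vmf->vma->vm_start ||
	    pmd_addr + PMD_SIZE > vmf->vma->vm_end)
		return VM_FAULT_FALLBACK;

	/* Map the whole huge folio in one go; on failure the caller
	 * presumably falls back to a normal 4 KiB PFN insertion. */
	return vmf_insert_pfn_pmd(vmf, folio_pfn(folio), true);
}
```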
On builds with CONFIG_TRANSPARENT_HUGEPAGE enabled, if the mmap() user
address is PMD-size aligned, if the GEM object is backed by shmem
buffers on a mountpoint setting the 'huge=' option, and if the shmem
backing store manages to allocate a huge folio, CPU mappings then
benefit from significantly increased memcpy() performance. When these
conditions are met on a system with 4 KiB base pages and 2 MiB huge
pages, an aligned copy of 2 MiB raises a single page fault instead
of 512.
v4:
- implement map_pages instead of huge_fault
v6:
- get rid of map_pages handler for now (keep it for another series
along with arm64 contpte support)
v11:
- remove page fault validity check helper
- rename drm_gem_shmem_map_pmd() to drm_gem_shmem_try_map_pmd()
- add Boris R-b
v12:
- move up ret var decl in fault handler to minimize diff
Signed-off-by: Loïc Molinari <loic.molinari@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Link: https://patch.msgid.link/20251205182231.194072-3-loic.molinari@collabora.com
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>