From: Muchun Song
Subject: mm: rmap: fix cache flush on THP pages

Patch series "Fix some bugs related to rmap and dax", v7.

Patches 1-2 fix a cache flush bug; because subsequent patches depend on
those changes, they are placed in this series.  Patches 3-4 are
preparation for fixing a dax bug in patch 5.  Patch 6 is a code cleanup,
since the previous patch removes the usage of follow_invalidate_pte().


This patch (of 6):

The flush_cache_page() only removes a PAGE_SIZE sized range from the
cache.  However, it does not cover all the pages of a THP, only the head
page.  Replace it with flush_cache_range() to fix this issue.  So far no
problems have been observed in practice, probably because architectures
with virtually indexed caches are rare.

Link: https://lkml.kernel.org/r/20220403053957.10770-1-songmuchun@bytedance.com
Link: https://lkml.kernel.org/r/20220403053957.10770-2-songmuchun@bytedance.com
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
Reviewed-by: Dan Williams
Reviewed-by: Christoph Hellwig
Cc: Matthew Wilcox
Cc: Jan Kara
Cc: Al Viro
Cc: Alistair Popple
Cc: Yang Shi
Cc: Ralph Campbell
Cc: Hugh Dickins
Cc: Xiyu Yang
Cc: "Kirill A. Shutemov"
Cc: Ross Zwisler
Cc: Xiongchun Duan
Signed-off-by: Andrew Morton
---

 mm/rmap.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/mm/rmap.c~mm-rmap-fix-cache-flush-on-thp-pages
+++ a/mm/rmap.c
@@ -971,7 +971,8 @@ static bool page_mkclean_one(struct foli
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 				continue;
 
-			flush_cache_page(vma, address, folio_pfn(folio));
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			entry = pmdp_invalidate(vma, address, pmd);
 			entry = pmd_wrprotect(entry);
 			entry = pmd_mkclean(entry);
_
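
As a rough illustration (not part of the patch itself), the sketch below
shows why the single flush_cache_range() call over HPAGE_PMD_SIZE is the
right replacement: it covers every subpage of the PMD-mapped THP, which is
conceptually what repeated flush_cache_page() calls would have had to do.
It uses only identifiers visible in the hunk above plus the standard
PAGE_SIZE/PAGE_SHIFT definitions, and is not meant as an exact
architectural equivalence.

	/* Flush the whole PMD-mapped THP in one call ... */
	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);

	/* ... which is roughly what a per-subpage loop would have to do: */
	{
		unsigned long off;

		for (off = 0; off < HPAGE_PMD_SIZE; off += PAGE_SIZE)
			flush_cache_page(vma, address + off,
					 folio_pfn(folio) + (off >> PAGE_SHIFT));
	}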