From: Miaohe Lin
Subject: mm/swapfile: fix lost swap bits in unuse_pte()

This was found by code review only, not by any real report.  When swapping
is turned off, we could lose the bits stored in the swap ptes.  The new
rmap-exclusive bit is fine, since that turned into a page flag, but the
soft-dirty and uffd-wp bits are not.  Carry them over to the new present
pte.

Link: https://lkml.kernel.org/r/20220424091105.48374-3-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin
Suggested-by: Peter Xu
Reviewed-by: David Hildenbrand
Cc: Matthew Wilcox
Cc: Vlastimil Babka
Cc: David Howells
Cc: NeilBrown
Cc: Alistair Popple
Cc: Suren Baghdasaryan
Cc: Minchan Kim
Cc: Stephen Rothwell
Cc: Naoya Horiguchi
Signed-off-by: Andrew Morton
---

 mm/swapfile.c |   10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

--- a/mm/swapfile.c~mm-swapfile-fix-lost-swap-bits-in-unuse_pte
+++ a/mm/swapfile.c
@@ -1783,7 +1783,7 @@ static int unuse_pte(struct vm_area_stru
 {
 	struct page *swapcache;
 	spinlock_t *ptl;
-	pte_t *pte;
+	pte_t *pte, new_pte;
 	int ret = 1;
 
 	swapcache = page;
@@ -1832,8 +1832,12 @@ static int unuse_pte(struct vm_area_stru
 		page_add_new_anon_rmap(page, vma, addr);
 		lru_cache_add_inactive_or_unevictable(page, vma);
 	}
-	set_pte_at(vma->vm_mm, addr, pte,
-		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
+	new_pte = pte_mkold(mk_pte(page, vma->vm_page_prot));
+	if (pte_swp_soft_dirty(*pte))
+		new_pte = pte_mksoft_dirty(new_pte);
+	if (pte_swp_uffd_wp(*pte))
+		new_pte = pte_mkuffd_wp(new_pte);
+	set_pte_at(vma->vm_mm, addr, pte, new_pte);
 	swap_free(entry);
 out:
 	pte_unmap_unlock(pte, ptl);
_