From patchwork Tue Mar 17 13:43:09 2015
X-Patchwork-Submitter: Oleg Nesterov
X-Patchwork-Id: 5649
Date: Tue, 17 Mar 2015 14:43:09 +0100
From: Oleg Nesterov <oleg@redhat.com>
To: Andy Lutomirski
Cc: Hugh Dickins, Linus Torvalds, Jan Kratochvil, Sergio Durigan Junior,
 GDB Patches, Pedro Alves, "linux-kernel@vger.kernel.org",
 "linux-mm@kvack.org"
Subject: Re: install_special_mapping && vm_pgoff (Was: vvar, gup && coredump)
Message-ID: <20150317134309.GA365@redhat.com>
References: <87zj7r5fpz.fsf@redhat.com>
 <20150305205744.GA13165@host1.jankratochvil.net>
 <20150311200052.GA22654@redhat.com>
 <20150312143438.GA4338@redhat.com>
 <20150312165423.GA10073@redhat.com>
 <20150312174653.GA13086@redhat.com>
 <20150316190154.GA18472@redhat.com>
 <20150316194446.GA21791@redhat.com>
In-Reply-To: <20150316194446.GA21791@redhat.com>

On 03/16, Oleg Nesterov wrote:
>
> On 03/16, Andy Lutomirski wrote:
> >
> > Ick, you're probably right.  For what it's worth, the vdso *seems* to
> > be okay (on 64-bit only, and only if you don't poke at it too hard) if
> > you mremap it in one piece.  CRIU does that.
>
> I need to run away till tomorrow, but looking at this code even the
> "one piece" case doesn't look right if it was cow'ed. I'll verify
> tomorrow.

And I am still not sure this all is 100% correct, I got lost in this
code. Probably this is fine...

But at least the bug exposed by the test-case looks clear:

	do_linear_fault:

		vmf->pgoff = (((address & PAGE_MASK) - vma->vm_start)
				>> PAGE_SHIFT) + vma->vm_pgoff;
	...

	special_mapping_fault:

		pgoff = vmf->pgoff - vma->vm_pgoff;

So special_mapping_fault() can only work if this mapping starts from the
first page in ->pages[].

So perhaps we need _something like_ the (wrong/incomplete) patch below...

Or, really, perhaps we can create vdso_mapping? So that map_vdso() could
simply mmap the anon_inode file...

Oleg.

--- x/mm/mmap.c
+++ x/mm/mmap.c
@@ -2832,6 +2832,8 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
 	return 0;
 }
 
+bool is_special_vma(struct vm_area_struct *vma);
+
 /*
  * Copy the vma structure to a new location in the same mm,
  * prior to moving page table entries, to effect an mremap move.
@@ -2851,7 +2853,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 	 * If anonymous vma has not yet been faulted, update new pgoff
 	 * to match new location, to increase its chance of merging.
 	 */
-	if (unlikely(!vma->vm_file && !vma->anon_vma)) {
+	if (unlikely(!vma->vm_file && !is_special_vma(vma) && !vma->anon_vma)) {
 		pgoff = addr >> PAGE_SHIFT;
 		faulted_in_anon_vma = false;
 	}
@@ -2953,6 +2955,11 @@ static const struct vm_operations_struct legacy_special_mapping_vmops = {
 	.fault = special_mapping_fault,
 };
 
+bool is_special_vma(struct vm_area_struct *vma)
+{
+	return vma->vm_ops == &special_mapping_vmops;
+}
+
 static int special_mapping_fault(struct vm_area_struct *vma,
 				 struct vm_fault *vmf)
 {
@@ -2965,7 +2972,7 @@ static int special_mapping_fault(struct vm_area_struct *vma,
 	 * We are allowed to do this because we are the mm; do not copy
 	 * this code into drivers!
 	 */
-	pgoff = vmf->pgoff - vma->vm_pgoff;
+	pgoff = vmf->pgoff;
 
 	if (vma->vm_ops == &legacy_special_mapping_vmops)
 		pages = vma->vm_private_data;
@@ -3014,6 +3021,7 @@ static struct vm_area_struct *__install_special_mapping(
 	if (ret)
 		goto out;
 
+	vma->vm_pgoff = 0;
 	mm->total_vm += len >> PAGE_SHIFT;
 
 	perf_event_mmap(vma);