From patchwork Thu Nov 18 08:21:48 2021
X-Patchwork-Submitter: Jakub Jelinek
X-Patchwork-Id: 47874
Date: Thu, 18 Nov 2021 09:21:48 +0100
From: Jakub Jelinek
To: gcc-patches@gcc.gnu.org
Subject: [committed] libgomp: Ensure that either gomp_team is properly aligned [PR102838]
Message-ID: <20211118082148.GQ2710@tucnak>

Hi!

struct gomp_team has a struct gomp_work_share array inside of it.  If that
latter structure has a 64-byte aligned member in the middle, the whole
struct gomp_team needs to be 64-byte aligned, but we weren't allocating it
using gomp_aligned_alloc.  This patch fixes that, except that team_malloc
is special on gcn, so there, at least for now, I've decided to avoid the
aligned member and use the padding instead.
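To illustrate the underlying issue outside of libgomp (a minimal sketch,
not part of the patch, with made-up type names; ISO C11 aligned_alloc
stands in for what gomp_aligned_alloc provides inside libgomp):

/* Illustration only.  A 64-byte aligned member raises the alignment
   requirement of every enclosing aggregate, so plain malloc, which only
   guarantees malloc alignment, is not enough for the outer struct either.  */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdalign.h>

struct ws_like
{
  int data;
  /* Like the 64-byte aligned lock member inside gomp_work_share.  */
  int lock __attribute__((aligned (64)));
};

struct team_like
{
  long nthreads;
  /* Embedding the array makes the whole struct require 64-byte alignment,
     like the gomp_work_share array inside gomp_team.  */
  struct ws_like work_shares[8];
};

int
main (void)
{
  /* Both print 64: the member's alignment propagates upwards.  */
  printf ("alignof (struct ws_like) = %zu\n", alignof (struct ws_like));
  printf ("alignof (struct team_like) = %zu\n", alignof (struct team_like));

  /* sizeof is always a multiple of alignof, so the C11 size requirement
     of aligned_alloc is satisfied here.  */
  struct team_like *team
    = aligned_alloc (alignof (struct team_like), sizeof (struct team_like));
  if (team)
    {
      printf ("team %% 64 == %d\n", (int) ((uintptr_t) team % 64));
      free (team);
    }
  return 0;
}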
Bootstrapped/regtested on x86_64-linux and i686-linux, and in the PR Rainer
mentioned testing on Solaris; committed to trunk.

2021-11-18  Jakub Jelinek

	PR libgomp/102838
	* libgomp.h (GOMP_USE_ALIGNED_WORK_SHARES): Define if
	GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC is defined and __AMDGCN__ is not.
	(struct gomp_work_share): Use GOMP_USE_ALIGNED_WORK_SHARES instead of
	GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC.
	* work.c (alloc_work_share, gomp_work_share_start): Likewise.
	* team.c (gomp_new_team): If GOMP_USE_ALIGNED_WORK_SHARES, use
	gomp_aligned_alloc instead of team_malloc.

	Jakub

--- libgomp/libgomp.h.jj	2021-11-11 14:35:37.699347142 +0100
+++ libgomp/libgomp.h	2021-11-16 11:57:26.657271188 +0100
@@ -95,6 +95,10 @@ enum memmodel
 #define GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC 1
 #endif
 
+#if defined(GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC) && !defined(__AMDGCN__)
+#define GOMP_USE_ALIGNED_WORK_SHARES 1
+#endif
+
 extern void *gomp_malloc (size_t) __attribute__((malloc));
 extern void *gomp_malloc_cleared (size_t) __attribute__((malloc));
 extern void *gomp_realloc (void *, size_t);
@@ -348,7 +352,7 @@ struct gomp_work_share
      are in a different cache line.  */
 
   /* This lock protects the update of the following members.  */
-#ifdef GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC
+#ifdef GOMP_USE_ALIGNED_WORK_SHARES
   gomp_mutex_t lock __attribute__((aligned (64)));
 #else
   char pad[64 - offsetof (struct gomp_work_share_1st_cacheline, pad)];
--- libgomp/work.c.jj	2021-10-20 09:34:47.027331304 +0200
+++ libgomp/work.c	2021-11-16 11:58:10.136662003 +0100
@@ -78,7 +78,7 @@ alloc_work_share (struct gomp_team *team
       team->work_share_chunk *= 2;
       /* Allocating gomp_work_share structures aligned is just an
	 optimization, don't do it when using the fallback method.  */
-#ifdef GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC
+#ifdef GOMP_USE_ALIGNED_WORK_SHARES
       ws = gomp_aligned_alloc (__alignof (struct gomp_work_share),
			       team->work_share_chunk
			       * sizeof (struct gomp_work_share));
@@ -191,7 +191,7 @@ gomp_work_share_start (size_t ordered)
   /* Work sharing constructs can be orphaned.  */
   if (team == NULL)
     {
-#ifdef GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC
+#ifdef GOMP_USE_ALIGNED_WORK_SHARES
       ws = gomp_aligned_alloc (__alignof (struct gomp_work_share),
			       sizeof (*ws));
 #else
--- libgomp/team.c.jj	2021-11-11 14:35:37.699347142 +0100
+++ libgomp/team.c	2021-11-16 11:59:46.401311440 +0100
@@ -177,7 +177,12 @@ gomp_new_team (unsigned nthreads)
     {
       size_t extra = sizeof (team->ordered_release[0])
		     + sizeof (team->implicit_task[0]);
+#ifdef GOMP_USE_ALIGNED_WORK_SHARES
+      team = gomp_aligned_alloc (__alignof (struct gomp_team),
+				 sizeof (*team) + nthreads * extra);
+#else
       team = team_malloc (sizeof (*team) + nthreads * extra);
+#endif
 
 #ifndef HAVE_SYNC_BUILTINS
       gomp_mutex_init (&team->work_share_list_free_lock);