From patchwork Tue Jul 19 14:08:23 2022
X-Patchwork-Submitter: Wilco Dijkstra
X-Patchwork-Id: 56165
To: 'GNU C Library' <libc-alpha@sourceware.org>
Subject: [PATCH 1/4] Switch to builtin atomics
Date: Tue, 19 Jul 2022 14:08:23 +0000
From: Wilco Dijkstra

After the cleanup, it is now easy to switch to standard builtin atomics by
removing the !USE_ATOMIC_COMPILER_BUILTINS clause in atomic.h.  A small
adjustment is needed for m68k since it incorrectly claims to support
lock-free 64-bit atomics.

Passes build-many-glibcs and regression testing on AArch64.

diff --git a/include/atomic.h b/include/atomic.h
index 0f31ea77ba2095ea461bf84f89c9987317f63b35..bf6417621a34458a1d7585ee6a7a24cea6670a07 100644
--- a/include/atomic.h
+++ b/include/atomic.h
@@ -266,9 +266,6 @@
    C11.  Usually, a function named atomic_OP_MO(args) is equivalent to C11's
    atomic_OP_explicit(args, memory_order_MO); exceptions noted below.  */

-/* Each arch can request to use compiler built-ins for C11 atomics.  If it
-   does, all atomics will be based on these.  */
-#if USE_ATOMIC_COMPILER_BUILTINS

 /* We require 32b atomic operations; some archs also support 64b atomic
    operations.  */
@@ -383,166 +380,6 @@ void __atomic_link_error (void);
   ({ __atomic_check_size((mem)); \
     __atomic_fetch_xor ((mem), (operand), __ATOMIC_RELEASE); })

-#else /* !USE_ATOMIC_COMPILER_BUILTINS */
-
-/* By default, we assume that read, write, and full barriers are equivalent
-   to acquire, release, and seq_cst barriers.  Archs for which this does not
-   hold have to provide custom definitions of the fences.  */
-# ifndef atomic_thread_fence_acquire
-# define atomic_thread_fence_acquire() atomic_read_barrier ()
-# endif
-# ifndef atomic_thread_fence_release
-# define atomic_thread_fence_release() atomic_write_barrier ()
-# endif
-# ifndef atomic_thread_fence_seq_cst
-# define atomic_thread_fence_seq_cst() atomic_full_barrier ()
-# endif
-
-# ifndef atomic_load_relaxed
-# define atomic_load_relaxed(mem) \
-  ({ __typeof ((__typeof (*(mem))) *(mem)) __atg100_val; \
-     __asm ("" : "=r" (__atg100_val) : "0" (*(mem))); \
-     __atg100_val; })
-# endif
-# ifndef atomic_load_acquire
-# define atomic_load_acquire(mem) \
-  ({ __typeof (*(mem)) __atg101_val = atomic_load_relaxed (mem); \
-     atomic_thread_fence_acquire (); \
-     __atg101_val; })
-# endif
-
-# ifndef atomic_store_relaxed
-/* XXX Use inline asm here?  */
-# define atomic_store_relaxed(mem, val) do { *(mem) = (val); } while (0)
-# endif
-# ifndef atomic_store_release
-# define atomic_store_release(mem, val) \
-  do { \
-    atomic_thread_fence_release (); \
-    atomic_store_relaxed ((mem), (val)); \
-  } while (0)
-# endif
-
-/* On failure, this CAS has memory_order_relaxed semantics.  */
-/* XXX This potentially has one branch more than necessary, but archs
-   currently do not define a CAS that returns both the previous value and
-   the success flag.  */
-# ifndef atomic_compare_exchange_weak_acquire
-# define atomic_compare_exchange_weak_acquire(mem, expected, desired) \
-  ({ typeof (*(expected)) __atg102_expected = *(expected); \
-     *(expected) = \
-       atomic_compare_and_exchange_val_acq ((mem), (desired), *(expected)); \
-     *(expected) == __atg102_expected; })
-# endif
-# ifndef atomic_compare_exchange_weak_relaxed
-/* XXX Fall back to CAS with acquire MO because archs do not define a weaker
-   CAS.  */
-# define atomic_compare_exchange_weak_relaxed(mem, expected, desired) \
-  atomic_compare_exchange_weak_acquire ((mem), (expected), (desired))
-# endif
-# ifndef atomic_compare_exchange_weak_release
-# define atomic_compare_exchange_weak_release(mem, expected, desired) \
-  ({ typeof (*(expected)) __atg103_expected = *(expected); \
-     *(expected) = \
-       atomic_compare_and_exchange_val_rel ((mem), (desired), *(expected)); \
-     *(expected) == __atg103_expected; })
-# endif
-
-/* XXX Fall back to acquire MO because archs do not define a weaker
-   atomic_exchange.  */
-# ifndef atomic_exchange_relaxed
-# define atomic_exchange_relaxed(mem, val) \
-  atomic_exchange_acq ((mem), (val))
-# endif
-# ifndef atomic_exchange_acquire
-# define atomic_exchange_acquire(mem, val) \
-  atomic_exchange_acq ((mem), (val))
-# endif
-# ifndef atomic_exchange_release
-# define atomic_exchange_release(mem, val) \
-  atomic_exchange_rel ((mem), (val))
-# endif
-
-# ifndef atomic_fetch_add_acquire
-# define atomic_fetch_add_acquire(mem, operand) \
-  atomic_exchange_and_add_acq ((mem), (operand))
-# endif
-# ifndef atomic_fetch_add_relaxed
-/* XXX Fall back to acquire MO because the MO semantics of
-   atomic_exchange_and_add are not documented; the generic version falls back
-   to atomic_exchange_and_add_acq if atomic_exchange_and_add is not defined,
-   and vice versa.  */
-# define atomic_fetch_add_relaxed(mem, operand) \
-  atomic_fetch_add_acquire ((mem), (operand))
-# endif
-# ifndef atomic_fetch_add_release
-# define atomic_fetch_add_release(mem, operand) \
-  atomic_exchange_and_add_rel ((mem), (operand))
-# endif
-# ifndef atomic_fetch_add_acq_rel
-# define atomic_fetch_add_acq_rel(mem, operand) \
-  ({ atomic_thread_fence_release (); \
-     atomic_exchange_and_add_acq ((mem), (operand)); })
-# endif
-
-/* XXX Fall back to acquire MO because archs do not define a weaker
-   atomic_and_val.  */
-# ifndef atomic_fetch_and_relaxed
-# define atomic_fetch_and_relaxed(mem, operand) \
-  atomic_fetch_and_acquire ((mem), (operand))
-# endif
-/* XXX The default for atomic_and_val has acquire semantics, but this is not
-   documented.  */
-# ifndef atomic_fetch_and_acquire
-# define atomic_fetch_and_acquire(mem, operand) \
-  atomic_and_val ((mem), (operand))
-# endif
-# ifndef atomic_fetch_and_release
-/* XXX This unnecessarily has acquire MO.  */
-# define atomic_fetch_and_release(mem, operand) \
-  ({ atomic_thread_fence_release (); \
-     atomic_and_val ((mem), (operand)); })
-# endif
-
-/* XXX The default for atomic_or_val has acquire semantics, but this is not
-   documented.  */
-# ifndef atomic_fetch_or_acquire
-# define atomic_fetch_or_acquire(mem, operand) \
-  atomic_or_val ((mem), (operand))
-# endif
-/* XXX Fall back to acquire MO because archs do not define a weaker
-   atomic_or_val.  */
-# ifndef atomic_fetch_or_relaxed
-# define atomic_fetch_or_relaxed(mem, operand) \
-  atomic_fetch_or_acquire ((mem), (operand))
-# endif
-/* XXX Contains an unnecessary acquire MO because archs do not define a weaker
-   atomic_or_val.  */
-# ifndef atomic_fetch_or_release
-# define atomic_fetch_or_release(mem, operand) \
-  ({ atomic_thread_fence_release (); \
-     atomic_fetch_or_acquire ((mem), (operand)); })
-# endif
-
-# ifndef atomic_fetch_xor_release
-/* Failing the atomic_compare_exchange_weak_release reloads the value in
-   __atg104_expected, so we need only do the XOR again and retry.  */
-# define atomic_fetch_xor_release(mem, operand) \
-  ({ __typeof (mem) __atg104_memp = (mem); \
-     __typeof (*(mem)) __atg104_expected = (*__atg104_memp); \
-     __typeof (*(mem)) __atg104_desired; \
-     __typeof (*(mem)) __atg104_op = (operand); \
- \
-     do \
-       __atg104_desired = __atg104_expected ^ __atg104_op; \
-     while (__glibc_unlikely \
-	    (atomic_compare_exchange_weak_release ( \
-	       __atg104_memp, &__atg104_expected, __atg104_desired) \
-	     == 0)); \
-     __atg104_expected; })
-#endif
-
-#endif /* !USE_ATOMIC_COMPILER_BUILTINS */

 /* This operation does not affect synchronization semantics but can be
    used in the body of a spin loop to potentially improve its efficiency.  */
diff --git a/sysdeps/m68k/m680x0/m68020/atomic-machine.h b/sysdeps/m68k/m680x0/m68020/atomic-machine.h
index 8460fb61072dce030957b029d7c180de13089481..529aa0a70abdb6fde367a031b2f11e577af2c914 100644
--- a/sysdeps/m68k/m680x0/m68020/atomic-machine.h
+++ b/sysdeps/m68k/m680x0/m68020/atomic-machine.h
@@ -15,7 +15,8 @@
    License along with the GNU C Library.  If not, see
    <https://www.gnu.org/licenses/>.  */

-#define __HAVE_64B_ATOMICS 1
+/* GCC does not support lock-free 64-bit atomic_load/store.  */
+#define __HAVE_64B_ATOMICS 0
 #define USE_ATOMIC_COMPILER_BUILTINS 0

 /* XXX Is this actually correct?  */
From patchwork Tue Jul 19 14:12:35 2022
X-Patchwork-Submitter: Wilco Dijkstra
X-Patchwork-Id: 56169
To: 'GNU C Library' <libc-alpha@sourceware.org>
Subject: [PATCH 2/4] Use builtin atomics for compare_and_exchange_val/bool
Date: Tue, 19 Jul 2022 14:12:35 +0000
From: Wilco Dijkstra

Define atomic_compare_and_exchange_val/bool_* in terms of standard atomics.
Later cleanups will replace existing uses, since the interface is very
confusing.  A minor change is needed in nscd-client.h, which has a volatile
parameter.

diff --git a/include/atomic.h b/include/atomic.h
index bf6417621a34458a1d7585ee6a7a24cea6670a07..e59e210e83fa301f48c4cdb3676d37e4980cc4a2 100644
--- a/include/atomic.h
+++ b/include/atomic.h
@@ -78,37 +78,30 @@
 /* Atomically store NEWVAL in *MEM if *MEM is equal to OLDVAL.
    Return the old *MEM value.
    */
-#if !defined atomic_compare_and_exchange_val_acq \
-    && defined __arch_compare_and_exchange_val_32_acq
+#undef atomic_compare_and_exchange_val_acq
 # define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \
-  __atomic_val_bysize (__arch_compare_and_exchange_val,acq, \
-		       mem, newval, oldval)
-#endif
-
-
-#ifndef atomic_compare_and_exchange_val_rel
-# define atomic_compare_and_exchange_val_rel(mem, newval, oldval) \
-  atomic_compare_and_exchange_val_acq (mem, newval, oldval)
-#endif
+  ({ \
+    __typeof (*(mem)) __atg3_old = (oldval); \
+    atomic_compare_exchange_acquire (mem, (void*)&__atg3_old, newval); \
+    __atg3_old; \
+  })
+
+#undef atomic_compare_and_exchange_val_rel
+#define atomic_compare_and_exchange_val_rel(mem, newval, oldval) \
+  ({ \
+    __typeof (*(mem)) __atg3_old = (oldval); \
+    atomic_compare_exchange_release (mem, (void*)&__atg3_old, newval); \
+    __atg3_old; \
+  })
 
 /* Atomically store NEWVAL in *MEM if *MEM is equal to OLDVAL.
    Return zero if *MEM was changed or non-zero if no exchange
    happened.  */
-#ifndef atomic_compare_and_exchange_bool_acq
-# ifdef __arch_compare_and_exchange_bool_32_acq
-#  define atomic_compare_and_exchange_bool_acq(mem, newval, oldval) \
-  __atomic_bool_bysize (__arch_compare_and_exchange_bool,acq, \
-			mem, newval, oldval)
-# else
-#  define atomic_compare_and_exchange_bool_acq(mem, newval, oldval) \
-  ({ /* Cannot use __oldval here, because macros later in this file might \
-	call this macro with __oldval argument.  */ \
-     __typeof (oldval) __atg3_old = (oldval); \
-     atomic_compare_and_exchange_val_acq (mem, newval, __atg3_old) \
-       != __atg3_old; \
+#undef atomic_compare_and_exchange_bool_acq
+#define atomic_compare_and_exchange_bool_acq(mem, newval, oldval) \
+  ({ \
+    __typeof (*(mem)) __atg3_old = (oldval); \
+    !atomic_compare_exchange_acquire (mem, (void*)&__atg3_old, newval); \
   })
-# endif
-#endif
 
 /* Store NEWVALUE in *MEM and return the old value.
    */
@@ -333,6 +326,19 @@ void __atomic_link_error (void);
	 __atomic_compare_exchange_n ((mem), (expected), (desired), 1, \
				      __ATOMIC_RELEASE, __ATOMIC_RELAXED); })
 
+# define atomic_compare_exchange_relaxed(mem, expected, desired) \
+  ({ __atomic_check_size((mem)); \
+     __atomic_compare_exchange_n ((mem), (expected), (desired), 0, \
+				  __ATOMIC_RELAXED, __ATOMIC_RELAXED); })
+# define atomic_compare_exchange_acquire(mem, expected, desired) \
+  ({ __atomic_check_size((mem)); \
+     __atomic_compare_exchange_n ((mem), (expected), (desired), 0, \
+				  __ATOMIC_ACQUIRE, __ATOMIC_RELAXED); })
+# define atomic_compare_exchange_release(mem, expected, desired) \
+  ({ __atomic_check_size((mem)); \
+     __atomic_compare_exchange_n ((mem), (expected), (desired), 0, \
+				  __ATOMIC_RELEASE, __ATOMIC_RELAXED); })
+
 # define atomic_exchange_relaxed(mem, desired) \
   ({ __atomic_check_size((mem)); \
      __atomic_exchange_n ((mem), (desired), __ATOMIC_RELAXED); })
diff --git a/nscd/nscd-client.h b/nscd/nscd-client.h
index ca9e6def1a88ff14c5e8b39f0e236aa8a30f95ae..89bac1899f1173c63699013f27e58571d9df9994 100644
--- a/nscd/nscd-client.h
+++ b/nscd/nscd-client.h
@@ -367,8 +367,9 @@ struct locked_map_ptr
 /* Try acquiring lock for mapptr, returns true if it succeeds, false
    if not.
    */
 static inline bool
-__nscd_acquire_maplock (volatile struct locked_map_ptr *mapptr)
+__nscd_acquire_maplock (volatile struct locked_map_ptr *mapptr_in)
 {
+  struct locked_map_ptr *mapptr = (struct locked_map_ptr *) mapptr_in;
   int cnt = 0;
   while (__builtin_expect (atomic_compare_and_exchange_val_acq (&mapptr->lock,
								1, 0) != 0, 0))
To: 'GNU C Library'
Subject: [PATCH 3/4] Remove old atomics defines from include/atomics.h
Date: Tue, 19 Jul 2022 14:14:58 +0000
From: Wilco Dijkstra

Since they are no longer used, remove various old atomics from
include/atomics.h.

diff --git a/include/atomic.h b/include/atomic.h
index e59e210e83fa301f48c4cdb3676d37e4980cc4a2..40ad43346d3c5f27dac72b72cd8fe6612a4f11cd 100644
--- a/include/atomic.h
+++ b/include/atomic.h
@@ -104,60 +104,6 @@
   })
 
-
-/* Store NEWVALUE in *MEM and return the old value.  */
-#ifndef atomic_exchange_acq
-# define atomic_exchange_acq(mem, newvalue) \
-  ({ __typeof ((__typeof (*(mem))) *(mem)) __atg5_oldval; \
-     __typeof (mem) __atg5_memp = (mem); \
-     __typeof ((__typeof (*(mem))) *(mem)) __atg5_value = (newvalue); \
- \
-     do \
-       __atg5_oldval = *__atg5_memp; \
-     while (__builtin_expect \
-	    (atomic_compare_and_exchange_bool_acq (__atg5_memp, __atg5_value, \
-						   __atg5_oldval), 0)); \
- \
-     __atg5_oldval; })
-#endif
-
-#ifndef atomic_exchange_rel
-# define atomic_exchange_rel(mem, newvalue) atomic_exchange_acq (mem, newvalue)
-#endif
-
-
-/* Add VALUE to *MEM and return the old value of *MEM.
-   */
-#ifndef atomic_exchange_and_add_acq
-# ifdef atomic_exchange_and_add
-# define atomic_exchange_and_add_acq(mem, value) \
-  atomic_exchange_and_add (mem, value)
-# else
-# define atomic_exchange_and_add_acq(mem, value) \
-  ({ __typeof (*(mem)) __atg6_oldval; \
-     __typeof (mem) __atg6_memp = (mem); \
-     __typeof (*(mem)) __atg6_value = (value); \
- \
-     do \
-       __atg6_oldval = *__atg6_memp; \
-     while (__builtin_expect \
-	    (atomic_compare_and_exchange_bool_acq (__atg6_memp, \
-						   __atg6_oldval \
-						   + __atg6_value, \
-						   __atg6_oldval), 0)); \
- \
-     __atg6_oldval; })
-# endif
-#endif
-
-#ifndef atomic_exchange_and_add_rel
-# define atomic_exchange_and_add_rel(mem, value) \
-  atomic_exchange_and_add_acq(mem, value)
-#endif
-
-#ifndef atomic_exchange_and_add
-# define atomic_exchange_and_add(mem, value) \
-  atomic_exchange_and_add_acq(mem, value)
-#endif
-
 #ifndef atomic_max_relaxed
 # define atomic_max_relaxed(mem, value) \
   do { \
@@ -196,40 +142,6 @@
 #endif
 
-
-/* Atomically *mem &= mask and return the old value of *mem.  */
-#ifndef atomic_and_val
-# define atomic_and_val(mem, mask) \
-  ({ __typeof (*(mem)) __atg16_old; \
-     __typeof (mem) __atg16_memp = (mem); \
-     __typeof (*(mem)) __atg16_mask = (mask); \
- \
-     do \
-       __atg16_old = (*__atg16_memp); \
-     while (__builtin_expect \
-	    (atomic_compare_and_exchange_bool_acq (__atg16_memp, \
-						   __atg16_old & __atg16_mask, \
-						   __atg16_old), 0)); \
- \
-     __atg16_old; })
-#endif
-
-/* Atomically *mem |= mask and return the old value of *mem.
-   */
-#ifndef atomic_or_val
-# define atomic_or_val(mem, mask) \
-  ({ __typeof (*(mem)) __atg19_old; \
-     __typeof (mem) __atg19_memp = (mem); \
-     __typeof (*(mem)) __atg19_mask = (mask); \
- \
-     do \
-       __atg19_old = (*__atg19_memp); \
-     while (__builtin_expect \
-	    (atomic_compare_and_exchange_bool_acq (__atg19_memp, \
-						   __atg19_old | __atg19_mask, \
-						   __atg19_old), 0)); \
- \
-     __atg19_old; })
-#endif
-
 #ifndef atomic_full_barrier
 # define atomic_full_barrier() __asm ("" ::: "memory")
 #endif
To: 'GNU C Library'
Subject: [PATCH 4/4] Remove all target specific atomics
Date: Tue, 19 Jul 2022 14:17:23 +0000
From: Wilco Dijkstra

Finally remove the huge number of target-specific atomics since they are
no longer used.  The define of USE_ATOMIC_COMPILER_BUILTINS is removed,
and the atomic-machine.h headers now only contain a few basic defines for
barriers and whether 64-bit atomics are supported.

Passes buildmanyglibc and regress on AArch64.

diff --git a/include/atomic.h b/include/atomic.h
index 40ad43346d3c5f27dac72b72cd8fe6612a4f11cd..53bbf0423344ceda6cf98653ffa90e8d4f5d81aa 100644
--- a/include/atomic.h
+++ b/include/atomic.h
@@ -41,44 +41,8 @@
 #include
 
-/* Wrapper macros to call pre_NN_post (mem, ...) where NN is the
-   bit width of *MEM.  The calling macro puts parens around MEM
-   and following args.  */
-#define __atomic_val_bysize(pre, post, mem, ...) \
-  ({ \
-    __typeof ((__typeof (*(mem))) *(mem)) __atg1_result; \
-    if (sizeof (*mem) == 1) \
-      __atg1_result = pre##_8_##post (mem, __VA_ARGS__); \
-    else if (sizeof (*mem) == 2) \
-      __atg1_result = pre##_16_##post (mem, __VA_ARGS__); \
-    else if (sizeof (*mem) == 4) \
-      __atg1_result = pre##_32_##post (mem, __VA_ARGS__); \
-    else if (sizeof (*mem) == 8) \
-      __atg1_result = pre##_64_##post (mem, __VA_ARGS__); \
-    else \
-      abort (); \
-    __atg1_result; \
-  })
-
-#define __atomic_bool_bysize(pre, post, mem, ...) \
-  ({ \
-    int __atg2_result; \
-    if (sizeof (*mem) == 1) \
-      __atg2_result = pre##_8_##post (mem, __VA_ARGS__); \
-    else if (sizeof (*mem) == 2) \
-      __atg2_result = pre##_16_##post (mem, __VA_ARGS__); \
-    else if (sizeof (*mem) == 4) \
-      __atg2_result = pre##_32_##post (mem, __VA_ARGS__); \
-    else if (sizeof (*mem) == 8) \
-      __atg2_result = pre##_64_##post (mem, __VA_ARGS__); \
-    else \
-      abort (); \
-    __atg2_result; \
-  })
-
-
 /* Atomically store NEWVAL in *MEM if *MEM is equal to OLDVAL.
    Return the old *MEM value.  */
-#undef atomic_compare_and_exchange_val_acq
 # define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \
   ({ \
     __typeof (*(mem)) __atg3_old = (oldval); \
@@ -86,7 +50,6 @@
     __atg3_old; \
   })
 
-#undef atomic_compare_and_exchange_val_rel
 #define atomic_compare_and_exchange_val_rel(mem, newval, oldval) \
   ({ \
     __typeof (*(mem)) __atg3_old = (oldval); \
@@ -96,7 +59,6 @@
 /* Atomically store NEWVAL in *MEM if *MEM is equal to OLDVAL.
    Return zero if *MEM was changed or non-zero if no exchange happened.
 */
-#undef atomic_compare_and_exchange_bool_acq
 #define atomic_compare_and_exchange_bool_acq(mem, newval, oldval) \
   ({ \
     __typeof (*(mem)) __atg3_old = (oldval); \
@@ -143,7 +105,7 @@
 #ifndef atomic_full_barrier
-# define atomic_full_barrier() __asm ("" ::: "memory")
+# define atomic_full_barrier() __sync_synchronize()
 #endif

diff --git a/sysdeps/aarch64/atomic-machine.h b/sysdeps/aarch64/atomic-machine.h
index a7a600c86fa8ac6496d04a36c779542f76e7d7c9..2dc1c524e40e2e805161e6d9b1b385b85c53a5c8 100644
--- a/sysdeps/aarch64/atomic-machine.h
+++ b/sysdeps/aarch64/atomic-machine.h
@@ -20,90 +20,6 @@
 #define _AARCH64_ATOMIC_MACHINE_H 1
 #define __HAVE_64B_ATOMICS 1
-#define USE_ATOMIC_COMPILER_BUILTINS 1
 #define ATOMIC_EXCHANGE_USES_CAS 0
-/* Compare and exchange.
-   For all "bool" routines, we return FALSE if exchange succesful.  */
-
-# define __arch_compare_and_exchange_bool_8_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-                                  model, __ATOMIC_RELAXED); \
-  })
-
-# define __arch_compare_and_exchange_bool_16_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-                                  model, __ATOMIC_RELAXED); \
-  })
-
-# define __arch_compare_and_exchange_bool_32_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-                                  model, __ATOMIC_RELAXED); \
-  })
-
-# define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-                                  model, __ATOMIC_RELAXED); \
-  })
-
-# define __arch_compare_and_exchange_val_8_int(mem, newval, oldval, model) \
-  ({ \
-    typeof (*mem) __oldval = (oldval); \
-    __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \
-                                 model, __ATOMIC_RELAXED); \
-
__oldval; \ - }) - -# define __arch_compare_and_exchange_val_16_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -# define __arch_compare_and_exchange_val_32_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -# define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - - -/* Compare and exchange with "acquire" semantics, ie barrier after. */ - -# define atomic_compare_and_exchange_bool_acq(mem, new, old) \ - __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -# define atomic_compare_and_exchange_val_acq(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -/* Compare and exchange with "release" semantics, ie barrier before. */ - -# define atomic_compare_and_exchange_val_rel(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __ATOMIC_RELEASE) - -/* Barrier macro. */ -#define atomic_full_barrier() __sync_synchronize() - #endif diff --git a/sysdeps/alpha/atomic-machine.h b/sysdeps/alpha/atomic-machine.h index 115a9df5d77cd08bcb0a49d9b59f0c53b4a20d78..f384a2bf0b3376cf240dc25d501e1d64a94bffe1 100644 --- a/sysdeps/alpha/atomic-machine.h +++ b/sysdeps/alpha/atomic-machine.h @@ -18,313 +18,10 @@ #include #define __HAVE_64B_ATOMICS 1 -#define USE_ATOMIC_COMPILER_BUILTINS 0 /* XXX Is this actually correct? */ #define ATOMIC_EXCHANGE_USES_CAS 1 - -#define __MB " mb\n" - - -/* Compare and exchange. 
For all of the "xxx" routines, we expect a - "__prev" and a "__cmp" variable to be provided by the enclosing scope, - in which values are returned. */ - -#define __arch_compare_and_exchange_xxx_8_int(mem, new, old, mb1, mb2) \ -({ \ - unsigned long __tmp, __snew, __addr64; \ - __asm__ __volatile__ ( \ - mb1 \ - " andnot %[__addr8],7,%[__addr64]\n" \ - " insbl %[__new],%[__addr8],%[__snew]\n" \ - "1: ldq_l %[__tmp],0(%[__addr64])\n" \ - " extbl %[__tmp],%[__addr8],%[__prev]\n" \ - " cmpeq %[__prev],%[__old],%[__cmp]\n" \ - " beq %[__cmp],2f\n" \ - " mskbl %[__tmp],%[__addr8],%[__tmp]\n" \ - " or %[__snew],%[__tmp],%[__tmp]\n" \ - " stq_c %[__tmp],0(%[__addr64])\n" \ - " beq %[__tmp],1b\n" \ - mb2 \ - "2:" \ - : [__prev] "=&r" (__prev), \ - [__snew] "=&r" (__snew), \ - [__tmp] "=&r" (__tmp), \ - [__cmp] "=&r" (__cmp), \ - [__addr64] "=&r" (__addr64) \ - : [__addr8] "r" (mem), \ - [__old] "Ir" ((uint64_t)(uint8_t)(uint64_t)(old)), \ - [__new] "r" (new) \ - : "memory"); \ -}) - -#define __arch_compare_and_exchange_xxx_16_int(mem, new, old, mb1, mb2) \ -({ \ - unsigned long __tmp, __snew, __addr64; \ - __asm__ __volatile__ ( \ - mb1 \ - " andnot %[__addr16],7,%[__addr64]\n" \ - " inswl %[__new],%[__addr16],%[__snew]\n" \ - "1: ldq_l %[__tmp],0(%[__addr64])\n" \ - " extwl %[__tmp],%[__addr16],%[__prev]\n" \ - " cmpeq %[__prev],%[__old],%[__cmp]\n" \ - " beq %[__cmp],2f\n" \ - " mskwl %[__tmp],%[__addr16],%[__tmp]\n" \ - " or %[__snew],%[__tmp],%[__tmp]\n" \ - " stq_c %[__tmp],0(%[__addr64])\n" \ - " beq %[__tmp],1b\n" \ - mb2 \ - "2:" \ - : [__prev] "=&r" (__prev), \ - [__snew] "=&r" (__snew), \ - [__tmp] "=&r" (__tmp), \ - [__cmp] "=&r" (__cmp), \ - [__addr64] "=&r" (__addr64) \ - : [__addr16] "r" (mem), \ - [__old] "Ir" ((uint64_t)(uint16_t)(uint64_t)(old)), \ - [__new] "r" (new) \ - : "memory"); \ -}) - -#define __arch_compare_and_exchange_xxx_32_int(mem, new, old, mb1, mb2) \ -({ \ - __asm__ __volatile__ ( \ - mb1 \ - "1: ldl_l %[__prev],%[__mem]\n" \ - " cmpeq 
%[__prev],%[__old],%[__cmp]\n" \ - " beq %[__cmp],2f\n" \ - " mov %[__new],%[__cmp]\n" \ - " stl_c %[__cmp],%[__mem]\n" \ - " beq %[__cmp],1b\n" \ - mb2 \ - "2:" \ - : [__prev] "=&r" (__prev), \ - [__cmp] "=&r" (__cmp) \ - : [__mem] "m" (*(mem)), \ - [__old] "Ir" ((uint64_t)(int32_t)(uint64_t)(old)), \ - [__new] "Ir" (new) \ - : "memory"); \ -}) - -#define __arch_compare_and_exchange_xxx_64_int(mem, new, old, mb1, mb2) \ -({ \ - __asm__ __volatile__ ( \ - mb1 \ - "1: ldq_l %[__prev],%[__mem]\n" \ - " cmpeq %[__prev],%[__old],%[__cmp]\n" \ - " beq %[__cmp],2f\n" \ - " mov %[__new],%[__cmp]\n" \ - " stq_c %[__cmp],%[__mem]\n" \ - " beq %[__cmp],1b\n" \ - mb2 \ - "2:" \ - : [__prev] "=&r" (__prev), \ - [__cmp] "=&r" (__cmp) \ - : [__mem] "m" (*(mem)), \ - [__old] "Ir" ((uint64_t)(old)), \ - [__new] "Ir" (new) \ - : "memory"); \ -}) - -/* For all "bool" routines, we return FALSE if exchange succesful. */ - -#define __arch_compare_and_exchange_bool_8_int(mem, new, old, mb1, mb2) \ -({ unsigned long __prev; int __cmp; \ - __arch_compare_and_exchange_xxx_8_int(mem, new, old, mb1, mb2); \ - !__cmp; }) - -#define __arch_compare_and_exchange_bool_16_int(mem, new, old, mb1, mb2) \ -({ unsigned long __prev; int __cmp; \ - __arch_compare_and_exchange_xxx_16_int(mem, new, old, mb1, mb2); \ - !__cmp; }) - -#define __arch_compare_and_exchange_bool_32_int(mem, new, old, mb1, mb2) \ -({ unsigned long __prev; int __cmp; \ - __arch_compare_and_exchange_xxx_32_int(mem, new, old, mb1, mb2); \ - !__cmp; }) - -#define __arch_compare_and_exchange_bool_64_int(mem, new, old, mb1, mb2) \ -({ unsigned long __prev; int __cmp; \ - __arch_compare_and_exchange_xxx_64_int(mem, new, old, mb1, mb2); \ - !__cmp; }) - -/* For all "val" routines, return the old value whether exchange - successful or not. 
*/ - -#define __arch_compare_and_exchange_val_8_int(mem, new, old, mb1, mb2) \ -({ unsigned long __prev; int __cmp; \ - __arch_compare_and_exchange_xxx_8_int(mem, new, old, mb1, mb2); \ - (typeof (*mem))__prev; }) - -#define __arch_compare_and_exchange_val_16_int(mem, new, old, mb1, mb2) \ -({ unsigned long __prev; int __cmp; \ - __arch_compare_and_exchange_xxx_16_int(mem, new, old, mb1, mb2); \ - (typeof (*mem))__prev; }) - -#define __arch_compare_and_exchange_val_32_int(mem, new, old, mb1, mb2) \ -({ unsigned long __prev; int __cmp; \ - __arch_compare_and_exchange_xxx_32_int(mem, new, old, mb1, mb2); \ - (typeof (*mem))__prev; }) - -#define __arch_compare_and_exchange_val_64_int(mem, new, old, mb1, mb2) \ -({ unsigned long __prev; int __cmp; \ - __arch_compare_and_exchange_xxx_64_int(mem, new, old, mb1, mb2); \ - (typeof (*mem))__prev; }) - -/* Compare and exchange with "acquire" semantics, ie barrier after. */ - -#define atomic_compare_and_exchange_bool_acq(mem, new, old) \ - __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \ - mem, new, old, "", __MB) - -#define atomic_compare_and_exchange_val_acq(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, "", __MB) - -/* Compare and exchange with "release" semantics, ie barrier before. */ - -#define atomic_compare_and_exchange_val_rel(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __MB, "") - - -/* Atomically store value and return the previous value. 
*/ - -#define __arch_exchange_8_int(mem, value, mb1, mb2) \ -({ \ - unsigned long __tmp, __addr64, __sval; __typeof(*mem) __ret; \ - __asm__ __volatile__ ( \ - mb1 \ - " andnot %[__addr8],7,%[__addr64]\n" \ - " insbl %[__value],%[__addr8],%[__sval]\n" \ - "1: ldq_l %[__tmp],0(%[__addr64])\n" \ - " extbl %[__tmp],%[__addr8],%[__ret]\n" \ - " mskbl %[__tmp],%[__addr8],%[__tmp]\n" \ - " or %[__sval],%[__tmp],%[__tmp]\n" \ - " stq_c %[__tmp],0(%[__addr64])\n" \ - " beq %[__tmp],1b\n" \ - mb2 \ - : [__ret] "=&r" (__ret), \ - [__sval] "=&r" (__sval), \ - [__tmp] "=&r" (__tmp), \ - [__addr64] "=&r" (__addr64) \ - : [__addr8] "r" (mem), \ - [__value] "r" (value) \ - : "memory"); \ - __ret; }) - -#define __arch_exchange_16_int(mem, value, mb1, mb2) \ -({ \ - unsigned long __tmp, __addr64, __sval; __typeof(*mem) __ret; \ - __asm__ __volatile__ ( \ - mb1 \ - " andnot %[__addr16],7,%[__addr64]\n" \ - " inswl %[__value],%[__addr16],%[__sval]\n" \ - "1: ldq_l %[__tmp],0(%[__addr64])\n" \ - " extwl %[__tmp],%[__addr16],%[__ret]\n" \ - " mskwl %[__tmp],%[__addr16],%[__tmp]\n" \ - " or %[__sval],%[__tmp],%[__tmp]\n" \ - " stq_c %[__tmp],0(%[__addr64])\n" \ - " beq %[__tmp],1b\n" \ - mb2 \ - : [__ret] "=&r" (__ret), \ - [__sval] "=&r" (__sval), \ - [__tmp] "=&r" (__tmp), \ - [__addr64] "=&r" (__addr64) \ - : [__addr16] "r" (mem), \ - [__value] "r" (value) \ - : "memory"); \ - __ret; }) - -#define __arch_exchange_32_int(mem, value, mb1, mb2) \ -({ \ - signed int __tmp; __typeof(*mem) __ret; \ - __asm__ __volatile__ ( \ - mb1 \ - "1: ldl_l %[__ret],%[__mem]\n" \ - " mov %[__val],%[__tmp]\n" \ - " stl_c %[__tmp],%[__mem]\n" \ - " beq %[__tmp],1b\n" \ - mb2 \ - : [__ret] "=&r" (__ret), \ - [__tmp] "=&r" (__tmp) \ - : [__mem] "m" (*(mem)), \ - [__val] "Ir" (value) \ - : "memory"); \ - __ret; }) - -#define __arch_exchange_64_int(mem, value, mb1, mb2) \ -({ \ - unsigned long __tmp; __typeof(*mem) __ret; \ - __asm__ __volatile__ ( \ - mb1 \ - "1: ldq_l %[__ret],%[__mem]\n" \ - " mov 
%[__val],%[__tmp]\n" \ - " stq_c %[__tmp],%[__mem]\n" \ - " beq %[__tmp],1b\n" \ - mb2 \ - : [__ret] "=&r" (__ret), \ - [__tmp] "=&r" (__tmp) \ - : [__mem] "m" (*(mem)), \ - [__val] "Ir" (value) \ - : "memory"); \ - __ret; }) - -#define atomic_exchange_acq(mem, value) \ - __atomic_val_bysize (__arch_exchange, int, mem, value, "", __MB) - -#define atomic_exchange_rel(mem, value) \ - __atomic_val_bysize (__arch_exchange, int, mem, value, __MB, "") - - -/* Atomically add value and return the previous (unincremented) value. */ - -#define __arch_exchange_and_add_8_int(mem, value, mb1, mb2) \ - ({ __builtin_trap (); 0; }) - -#define __arch_exchange_and_add_16_int(mem, value, mb1, mb2) \ - ({ __builtin_trap (); 0; }) - -#define __arch_exchange_and_add_32_int(mem, value, mb1, mb2) \ -({ \ - signed int __tmp; __typeof(*mem) __ret; \ - __asm__ __volatile__ ( \ - mb1 \ - "1: ldl_l %[__ret],%[__mem]\n" \ - " addl %[__ret],%[__val],%[__tmp]\n" \ - " stl_c %[__tmp],%[__mem]\n" \ - " beq %[__tmp],1b\n" \ - mb2 \ - : [__ret] "=&r" (__ret), \ - [__tmp] "=&r" (__tmp) \ - : [__mem] "m" (*(mem)), \ - [__val] "Ir" ((signed int)(value)) \ - : "memory"); \ - __ret; }) - -#define __arch_exchange_and_add_64_int(mem, value, mb1, mb2) \ -({ \ - unsigned long __tmp; __typeof(*mem) __ret; \ - __asm__ __volatile__ ( \ - mb1 \ - "1: ldq_l %[__ret],%[__mem]\n" \ - " addq %[__ret],%[__val],%[__tmp]\n" \ - " stq_c %[__tmp],%[__mem]\n" \ - " beq %[__tmp],1b\n" \ - mb2 \ - : [__ret] "=&r" (__ret), \ - [__tmp] "=&r" (__tmp) \ - : [__mem] "m" (*(mem)), \ - [__val] "Ir" ((unsigned long)(value)) \ - : "memory"); \ - __ret; }) - -/* ??? Barrier semantics for atomic_exchange_and_add appear to be - undefined. Use full barrier for now, as that's safe. 
*/ -#define atomic_exchange_and_add(mem, value) \ - __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, __MB, __MB) - #define atomic_full_barrier() __asm ("mb" : : : "memory"); #define atomic_read_barrier() __asm ("mb" : : : "memory"); #define atomic_write_barrier() __asm ("wmb" : : : "memory"); diff --git a/sysdeps/arc/atomic-machine.h b/sysdeps/arc/atomic-machine.h index 3d17f7899083c859efd645878e0f0596da32a073..2d519e3bbfa9ce77f4a41e313b67a690569d032e 100644 --- a/sysdeps/arc/atomic-machine.h +++ b/sysdeps/arc/atomic-machine.h @@ -20,38 +20,9 @@ #define _ARC_BITS_ATOMIC_H 1 #define __HAVE_64B_ATOMICS 0 -#define USE_ATOMIC_COMPILER_BUILTINS 1 /* ARC does have legacy atomic EX reg, [mem] instruction but the micro-arch is not as optimal as LLOCK/SCOND specially for SMP. */ #define ATOMIC_EXCHANGE_USES_CAS 1 -#define __arch_compare_and_exchange_bool_8_acq(mem, newval, oldval) \ - (abort (), 0) -#define __arch_compare_and_exchange_bool_16_acq(mem, newval, oldval) \ - (abort (), 0) -#define __arch_compare_and_exchange_bool_64_acq(mem, newval, oldval) \ - (abort (), 0) - -#define __arch_compare_and_exchange_val_8_int(mem, newval, oldval, model) \ - (abort (), (__typeof (*mem)) 0) -#define __arch_compare_and_exchange_val_16_int(mem, newval, oldval, model) \ - (abort (), (__typeof (*mem)) 0) -#define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \ - (abort (), (__typeof (*mem)) 0) - -#define __arch_compare_and_exchange_val_32_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -#define atomic_compare_and_exchange_val_acq(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -#define atomic_full_barrier() ({ asm volatile ("dmb 3":::"memory"); }) - #endif /* _ARC_BITS_ATOMIC_H */ diff --git a/sysdeps/arm/atomic-machine.h 
b/sysdeps/arm/atomic-machine.h index 952404379748e6dc5dee1da7731fb5c6faab4e57..b172573ae74dc9d6c7618bfdb76f5fb0429469f8 100644 --- a/sysdeps/arm/atomic-machine.h +++ b/sysdeps/arm/atomic-machine.h @@ -17,122 +17,4 @@ . */ #define __HAVE_64B_ATOMICS 0 -#define USE_ATOMIC_COMPILER_BUILTINS 0 #define ATOMIC_EXCHANGE_USES_CAS 1 - -void __arm_link_error (void); - -#ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 -# define atomic_full_barrier() __sync_synchronize () -#else -# define atomic_full_barrier() __arm_assisted_full_barrier () -#endif - -/* An OS-specific atomic-machine.h file will define this macro if - the OS can provide something. If not, we'll fail to build - with a compiler that doesn't supply the operation. */ -#ifndef __arm_assisted_full_barrier -# define __arm_assisted_full_barrier() __arm_link_error() -#endif - -/* Use the atomic builtins provided by GCC in case the backend provides - a pattern to do this efficiently. */ -#ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 - -# define atomic_exchange_acq(mem, value) \ - __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_ACQUIRE) - -# define atomic_exchange_rel(mem, value) \ - __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_RELEASE) - -/* Atomic exchange (without compare). */ - -# define __arch_exchange_8_int(mem, newval, model) \ - (__arm_link_error (), (typeof (*mem)) 0) - -# define __arch_exchange_16_int(mem, newval, model) \ - (__arm_link_error (), (typeof (*mem)) 0) - -# define __arch_exchange_32_int(mem, newval, model) \ - __atomic_exchange_n (mem, newval, model) - -# define __arch_exchange_64_int(mem, newval, model) \ - (__arm_link_error (), (typeof (*mem)) 0) - -/* Compare and exchange with "acquire" semantics, ie barrier after. 
*/ - -# define atomic_compare_and_exchange_bool_acq(mem, new, old) \ - __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -# define atomic_compare_and_exchange_val_acq(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -/* Compare and exchange with "release" semantics, ie barrier before. */ - -# define atomic_compare_and_exchange_val_rel(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __ATOMIC_RELEASE) - -/* Compare and exchange. - For all "bool" routines, we return FALSE if exchange succesful. */ - -# define __arch_compare_and_exchange_bool_8_int(mem, newval, oldval, model) \ - ({__arm_link_error (); 0; }) - -# define __arch_compare_and_exchange_bool_16_int(mem, newval, oldval, model) \ - ({__arm_link_error (); 0; }) - -# define __arch_compare_and_exchange_bool_32_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - }) - -# define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \ - ({__arm_link_error (); 0; }) - -# define __arch_compare_and_exchange_val_8_int(mem, newval, oldval, model) \ - ({__arm_link_error (); oldval; }) - -# define __arch_compare_and_exchange_val_16_int(mem, newval, oldval, model) \ - ({__arm_link_error (); oldval; }) - -# define __arch_compare_and_exchange_val_32_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -# define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \ - ({__arm_link_error (); oldval; }) - -#else -# define __arch_compare_and_exchange_val_32_acq(mem, newval, oldval) \ - __arm_assisted_compare_and_exchange_val_32_acq ((mem), (newval), (oldval)) -#endif 
- -#ifndef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 -/* We don't support atomic operations on any non-word types. - So make them link errors. */ -# define __arch_compare_and_exchange_val_8_acq(mem, newval, oldval) \ - ({ __arm_link_error (); oldval; }) - -# define __arch_compare_and_exchange_val_16_acq(mem, newval, oldval) \ - ({ __arm_link_error (); oldval; }) - -# define __arch_compare_and_exchange_val_64_acq(mem, newval, oldval) \ - ({ __arm_link_error (); oldval; }) -#endif - -/* An OS-specific atomic-machine.h file will define this macro if - the OS can provide something. If not, we'll fail to build - with a compiler that doesn't supply the operation. */ -#ifndef __arm_assisted_compare_and_exchange_val_32_acq -# define __arm_assisted_compare_and_exchange_val_32_acq(mem, newval, oldval) \ - ({ __arm_link_error (); oldval; }) -#endif diff --git a/sysdeps/csky/atomic-machine.h b/sysdeps/csky/atomic-machine.h index 35853719674c3df82f87c0af22b6b6dd97152bed..4a7dc63be2044990852c52500943c90c898363be 100644 --- a/sysdeps/csky/atomic-machine.h +++ b/sysdeps/csky/atomic-machine.h @@ -20,48 +20,6 @@ #define __CSKY_ATOMIC_H_ #define __HAVE_64B_ATOMICS 0 -#define USE_ATOMIC_COMPILER_BUILTINS 1 #define ATOMIC_EXCHANGE_USES_CAS 1 -#define __arch_compare_and_exchange_bool_8_int(mem, newval, oldval, model) \ - (abort (), 0) - -#define __arch_compare_and_exchange_bool_16_int(mem, newval, oldval, model) \ - (abort (), 0) - -#define __arch_compare_and_exchange_bool_32_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - }) - -#define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \ - (abort (), 0) - -#define __arch_compare_and_exchange_val_8_int(mem, newval, oldval, model) \ - (abort (), (__typeof (*mem)) 0) - -#define __arch_compare_and_exchange_val_16_int(mem, newval, oldval, model) \ - (abort (), (__typeof (*mem)) 0) - -#define 
__arch_compare_and_exchange_val_32_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -#define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \ - (abort (), (__typeof (*mem)) 0) - -#define atomic_compare_and_exchange_bool_acq(mem, new, old) \ - __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -#define atomic_compare_and_exchange_val_acq(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - #endif /* atomic-machine.h */ diff --git a/sysdeps/generic/atomic-machine.h b/sysdeps/generic/atomic-machine.h index b1c22dc29dc712203255558f94aa49809cd49ba3..30bb9c81bbdac924b46c3d7df11a6b0d055d1cf9 100644 --- a/sysdeps/generic/atomic-machine.h +++ b/sysdeps/generic/atomic-machine.h @@ -18,24 +18,4 @@ #ifndef _ATOMIC_MACHINE_H #define _ATOMIC_MACHINE_H 1 -/* We have by default no support for atomic operations. So define - them non-atomic. If this is a problem somebody will have to come - up with real definitions. */ - -/* The only basic operation needed is compare and exchange. */ -#define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \ - ({ __typeof (mem) __gmemp = (mem); \ - __typeof (*mem) __gret = *__gmemp; \ - __typeof (*mem) __gnewval = (newval); \ - \ - if (__gret == (oldval)) \ - *__gmemp = __gnewval; \ - __gret; }) - -#define atomic_compare_and_exchange_bool_acq(mem, newval, oldval) \ - ({ __typeof (mem) __gmemp = (mem); \ - __typeof (*mem) __gnewval = (newval); \ - \ - *__gmemp == (oldval) ? 
(*__gmemp = __gnewval, 0) : 1; }) - #endif /* atomic-machine.h */ diff --git a/sysdeps/generic/malloc-machine.h b/sysdeps/generic/malloc-machine.h index 001a8e7e606c584dabacc9cbf6713f137bb9b4a7..ebd6983ecc14b5b314f457fc1766a9f86561d32f 100644 --- a/sysdeps/generic/malloc-machine.h +++ b/sysdeps/generic/malloc-machine.h @@ -22,18 +22,6 @@ #include -#ifndef atomic_full_barrier -# define atomic_full_barrier() __asm ("" ::: "memory") -#endif - -#ifndef atomic_read_barrier -# define atomic_read_barrier() atomic_full_barrier () -#endif - -#ifndef atomic_write_barrier -# define atomic_write_barrier() atomic_full_barrier () -#endif - #ifndef DEFAULT_TOP_PAD # define DEFAULT_TOP_PAD 131072 #endif diff --git a/sysdeps/ia64/atomic-machine.h b/sysdeps/ia64/atomic-machine.h index b2f5d2f4774cc2503c7595cb82f30f60fbcbe89c..6f31c7b2eea67b5d8766dea1c38df6eedc168ebf 100644 --- a/sysdeps/ia64/atomic-machine.h +++ b/sysdeps/ia64/atomic-machine.h @@ -18,63 +18,6 @@ #include #define __HAVE_64B_ATOMICS 1 -#define USE_ATOMIC_COMPILER_BUILTINS 0 /* XXX Is this actually correct? 
*/ #define ATOMIC_EXCHANGE_USES_CAS 0 - - -#define __arch_compare_and_exchange_bool_8_acq(mem, newval, oldval) \ - (abort (), 0) - -#define __arch_compare_and_exchange_bool_16_acq(mem, newval, oldval) \ - (abort (), 0) - -#define __arch_compare_and_exchange_bool_32_acq(mem, newval, oldval) \ - (!__sync_bool_compare_and_swap ((mem), (int) (long) (oldval), \ - (int) (long) (newval))) - -#define __arch_compare_and_exchange_bool_64_acq(mem, newval, oldval) \ - (!__sync_bool_compare_and_swap ((mem), (long) (oldval), \ - (long) (newval))) - -#define __arch_compare_and_exchange_val_8_acq(mem, newval, oldval) \ - (abort (), (__typeof (*mem)) 0) - -#define __arch_compare_and_exchange_val_16_acq(mem, newval, oldval) \ - (abort (), (__typeof (*mem)) 0) - -#define __arch_compare_and_exchange_val_32_acq(mem, newval, oldval) \ - __sync_val_compare_and_swap ((mem), (int) (long) (oldval), \ - (int) (long) (newval)) - -#define __arch_compare_and_exchange_val_64_acq(mem, newval, oldval) \ - __sync_val_compare_and_swap ((mem), (long) (oldval), (long) (newval)) - -/* Atomically store newval and return the old value. 
*/ -#define atomic_exchange_acq(mem, value) \ - __sync_lock_test_and_set (mem, value) - -#define atomic_exchange_rel(mem, value) \ - (__sync_synchronize (), __sync_lock_test_and_set (mem, value)) - -#define atomic_exchange_and_add(mem, value) \ - __sync_fetch_and_add ((mem), (value)) - -#define atomic_decrement_if_positive(mem) \ - ({ __typeof (*mem) __oldval, __val; \ - __typeof (mem) __memp = (mem); \ - \ - __val = (*__memp); \ - do \ - { \ - __oldval = __val; \ - if (__builtin_expect (__val <= 0, 0)) \ - break; \ - __val = atomic_compare_and_exchange_val_acq (__memp, __oldval - 1, \ - __oldval); \ - } \ - while (__builtin_expect (__val != __oldval, 0)); \ - __oldval; }) - -#define atomic_full_barrier() __sync_synchronize () diff --git a/sysdeps/m68k/coldfire/atomic-machine.h b/sysdeps/m68k/coldfire/atomic-machine.h index 8fd08c626cf1619df9975b9bae9664595a5a05d7..1503703ed36b825f6e9f2cb2ed1516cd80bd9947 100644 --- a/sysdeps/m68k/coldfire/atomic-machine.h +++ b/sysdeps/m68k/coldfire/atomic-machine.h @@ -20,25 +20,8 @@ /* If we have just non-atomic operations, we can as well make them wide. */ #define __HAVE_64B_ATOMICS 1 -#define USE_ATOMIC_COMPILER_BUILTINS 0 /* XXX Is this actually correct? */ #define ATOMIC_EXCHANGE_USES_CAS 1 -/* The only basic operation needed is compare and exchange. */ -#define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \ - ({ __typeof (mem) __gmemp = (mem); \ - __typeof (*mem) __gret = *__gmemp; \ - __typeof (*mem) __gnewval = (newval); \ - \ - if (__gret == (oldval)) \ - *__gmemp = __gnewval; \ - __gret; }) - -#define atomic_compare_and_exchange_bool_acq(mem, newval, oldval) \ - ({ __typeof (mem) __gmemp = (mem); \ - __typeof (*mem) __gnewval = (newval); \ - \ - *__gmemp == (oldval) ? 
(*__gmemp = __gnewval, 0) : 1; }) - #endif diff --git a/sysdeps/m68k/m680x0/m68020/atomic-machine.h b/sysdeps/m68k/m680x0/m68020/atomic-machine.h index 529aa0a70abdb6fde367a031b2f11e577af2c914..d356b55c9f9082db8dde734c254e01a631201206 100644 --- a/sysdeps/m68k/m680x0/m68020/atomic-machine.h +++ b/sysdeps/m68k/m680x0/m68020/atomic-machine.h @@ -17,111 +17,6 @@ /* GCC does not support lock-free 64-bit atomic_load/store. */ #define __HAVE_64B_ATOMICS 0 -#define USE_ATOMIC_COMPILER_BUILTINS 0 /* XXX Is this actually correct? */ #define ATOMIC_EXCHANGE_USES_CAS 1 - -#define __arch_compare_and_exchange_val_8_acq(mem, newval, oldval) \ - ({ __typeof (*(mem)) __ret; \ - __asm __volatile ("cas%.b %0,%2,%1" \ - : "=d" (__ret), "+m" (*(mem)) \ - : "d" (newval), "0" (oldval)); \ - __ret; }) - -#define __arch_compare_and_exchange_val_16_acq(mem, newval, oldval) \ - ({ __typeof (*(mem)) __ret; \ - __asm __volatile ("cas%.w %0,%2,%1" \ - : "=d" (__ret), "+m" (*(mem)) \ - : "d" (newval), "0" (oldval)); \ - __ret; }) - -#define __arch_compare_and_exchange_val_32_acq(mem, newval, oldval) \ - ({ __typeof (*(mem)) __ret; \ - __asm __volatile ("cas%.l %0,%2,%1" \ - : "=d" (__ret), "+m" (*(mem)) \ - : "d" (newval), "0" (oldval)); \ - __ret; }) - -# define __arch_compare_and_exchange_val_64_acq(mem, newval, oldval) \ - ({ __typeof (*(mem)) __ret; \ - __typeof (mem) __memp = (mem); \ - __asm __volatile ("cas2%.l %0:%R0,%1:%R1,(%2):(%3)" \ - : "=d" (__ret) \ - : "d" ((__typeof (*(mem))) (newval)), "r" (__memp), \ - "r" ((char *) __memp + 4), "0" (oldval) \ - : "memory"); \ - __ret; }) - -#define atomic_exchange_acq(mem, newvalue) \ - ({ __typeof (*(mem)) __result = *(mem); \ - if (sizeof (*(mem)) == 1) \ - __asm __volatile ("1: cas%.b %0,%2,%1;" \ - " jbne 1b" \ - : "=d" (__result), "+m" (*(mem)) \ - : "d" (newvalue), "0" (__result)); \ - else if (sizeof (*(mem)) == 2) \ - __asm __volatile ("1: cas%.w %0,%2,%1;" \ - " jbne 1b" \ - : "=d" (__result), "+m" (*(mem)) \ - : "d" (newvalue), "0" 
(__result)); \ - else if (sizeof (*(mem)) == 4) \ - __asm __volatile ("1: cas%.l %0,%2,%1;" \ - " jbne 1b" \ - : "=d" (__result), "+m" (*(mem)) \ - : "d" (newvalue), "0" (__result)); \ - else \ - { \ - __typeof (mem) __memp = (mem); \ - __asm __volatile ("1: cas2%.l %0:%R0,%1:%R1,(%2):(%3);" \ - " jbne 1b" \ - : "=d" (__result) \ - : "d" ((__typeof (*(mem))) (newvalue)), \ - "r" (__memp), "r" ((char *) __memp + 4), \ - "0" (__result) \ - : "memory"); \ - } \ - __result; }) - -#define atomic_exchange_and_add(mem, value) \ - ({ __typeof (*(mem)) __result = *(mem); \ - __typeof (*(mem)) __temp; \ - if (sizeof (*(mem)) == 1) \ - __asm __volatile ("1: move%.b %0,%2;" \ - " add%.b %3,%2;" \ - " cas%.b %0,%2,%1;" \ - " jbne 1b" \ - : "=d" (__result), "+m" (*(mem)), \ - "=&d" (__temp) \ - : "d" (value), "0" (__result)); \ - else if (sizeof (*(mem)) == 2) \ - __asm __volatile ("1: move%.w %0,%2;" \ - " add%.w %3,%2;" \ - " cas%.w %0,%2,%1;" \ - " jbne 1b" \ - : "=d" (__result), "+m" (*(mem)), \ - "=&d" (__temp) \ - : "d" (value), "0" (__result)); \ - else if (sizeof (*(mem)) == 4) \ - __asm __volatile ("1: move%.l %0,%2;" \ - " add%.l %3,%2;" \ - " cas%.l %0,%2,%1;" \ - " jbne 1b" \ - : "=d" (__result), "+m" (*(mem)), \ - "=&d" (__temp) \ - : "d" (value), "0" (__result)); \ - else \ - { \ - __typeof (mem) __memp = (mem); \ - __asm __volatile ("1: move%.l %0,%1;" \ - " move%.l %R0,%R1;" \ - " add%.l %R2,%R1;" \ - " addx%.l %2,%1;" \ - " cas2%.l %0:%R0,%1:%R1,(%3):(%4);" \ - " jbne 1b" \ - : "=d" (__result), "=&d" (__temp) \ - : "d" ((__typeof (*(mem))) (value)), "r" (__memp), \ - "r" ((char *) __memp + 4), "0" (__result) \ - : "memory"); \ - } \ - __result; }) diff --git a/sysdeps/microblaze/atomic-machine.h b/sysdeps/microblaze/atomic-machine.h index 5781b4440bf22a747fb90c4e7cd5476f14fb8573..4e7ccce21e59453f5233bdf82b22215d9a6d17b3 100644 --- a/sysdeps/microblaze/atomic-machine.h +++ b/sysdeps/microblaze/atomic-machine.h @@ -19,156 +19,6 @@ #include #define 
__HAVE_64B_ATOMICS 0 -#define USE_ATOMIC_COMPILER_BUILTINS 0 /* XXX Is this actually correct? */ #define ATOMIC_EXCHANGE_USES_CAS 1 - - -/* Microblaze does not have byte and halfword forms of load and reserve and - store conditional. So for microblaze we stub out the 8- and 16-bit forms. */ -#define __arch_compare_and_exchange_bool_8_acq(mem, newval, oldval) \ - (abort (), 0) - -#define __arch_compare_and_exchange_bool_16_acq(mem, newval, oldval) \ - (abort (), 0) - -#define __arch_compare_and_exchange_val_32_acq(mem, newval, oldval) \ - ({ \ - __typeof (*(mem)) __tmp; \ - __typeof (mem) __memp = (mem); \ - int test; \ - __asm __volatile ( \ - " addc r0, r0, r0;" \ - "1: lwx %0, %3, r0;" \ - " addic %1, r0, 0;" \ - " bnei %1, 1b;" \ - " cmp %1, %0, %4;" \ - " bnei %1, 2f;" \ - " swx %5, %3, r0;" \ - " addic %1, r0, 0;" \ - " bnei %1, 1b;" \ - "2:" \ - : "=&r" (__tmp), \ - "=&r" (test), \ - "=m" (*__memp) \ - : "r" (__memp), \ - "r" (oldval), \ - "r" (newval) \ - : "cc", "memory"); \ - __tmp; \ - }) - -#define __arch_compare_and_exchange_val_64_acq(mem, newval, oldval) \ - (abort (), (__typeof (*mem)) 0) - -#define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_compare_and_exchange_val_32_acq (mem, newval, oldval); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_compare_and_exchange_val_64_acq (mem, newval, oldval); \ - else \ - abort (); \ - __result; \ - }) - -#define atomic_compare_and_exchange_val_rel(mem, newval, oldval) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_compare_and_exchange_val_32_acq (mem, newval, oldval); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_compare_and_exchange_val_64_acq (mem, newval, oldval); \ - else \ - abort (); \ - __result; \ - }) - -#define __arch_atomic_exchange_32_acq(mem, value) \ - ({ \ - __typeof (*(mem)) __tmp; \ - __typeof (mem) __memp = (mem); \ - int test; \ - 
__asm __volatile ( \ - " addc r0, r0, r0;" \ - "1: lwx %0, %4, r0;" \ - " addic %1, r0, 0;" \ - " bnei %1, 1b;" \ - " swx %3, %4, r0;" \ - " addic %1, r0, 0;" \ - " bnei %1, 1b;" \ - : "=&r" (__tmp), \ - "=&r" (test), \ - "=m" (*__memp) \ - : "r" (value), \ - "r" (__memp) \ - : "cc", "memory"); \ - __tmp; \ - }) - -#define __arch_atomic_exchange_64_acq(mem, newval) \ - (abort (), (__typeof (*mem)) 0) - -#define atomic_exchange_acq(mem, value) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_atomic_exchange_32_acq (mem, value); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_atomic_exchange_64_acq (mem, value); \ - else \ - abort (); \ - __result; \ - }) - -#define atomic_exchange_rel(mem, value) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_atomic_exchange_32_acq (mem, value); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_atomic_exchange_64_acq (mem, value); \ - else \ - abort (); \ - __result; \ - }) - -#define __arch_atomic_exchange_and_add_32(mem, value) \ - ({ \ - __typeof (*(mem)) __tmp; \ - __typeof (mem) __memp = (mem); \ - int test; \ - __asm __volatile ( \ - " addc r0, r0, r0;" \ - "1: lwx %0, %4, r0;" \ - " addic %1, r0, 0;" \ - " bnei %1, 1b;" \ - " add %1, %3, %0;" \ - " swx %1, %4, r0;" \ - " addic %1, r0, 0;" \ - " bnei %1, 1b;" \ - : "=&r" (__tmp), \ - "=&r" (test), \ - "=m" (*__memp) \ - : "r" (value), \ - "r" (__memp) \ - : "cc", "memory"); \ - __tmp; \ - }) - -#define __arch_atomic_exchange_and_add_64(mem, value) \ - (abort (), (__typeof (*mem)) 0) - -#define atomic_exchange_and_add(mem, value) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_atomic_exchange_and_add_32 (mem, value); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_atomic_exchange_and_add_64 (mem, value); \ - else \ - abort (); \ - __result; \ - }) diff --git a/sysdeps/mips/atomic-machine.h b/sysdeps/mips/atomic-machine.h index 
88805ee30b11d8af96e296b8794a8a0d2cedfeb7..1e611c2153996d28e14611c60189f52d0919b79c 100644 --- a/sysdeps/mips/atomic-machine.h +++ b/sysdeps/mips/atomic-machine.h @@ -21,29 +21,12 @@ #include -#if _MIPS_SIM == _ABIO32 && __mips < 2 -#define MIPS_PUSH_MIPS2 ".set mips2\n\t" -#else -#define MIPS_PUSH_MIPS2 -#endif - #if _MIPS_SIM == _ABIO32 || _MIPS_SIM == _ABIN32 #define __HAVE_64B_ATOMICS 0 #else #define __HAVE_64B_ATOMICS 1 #endif -/* See the comments in about the use of the sync instruction. */ -#ifndef MIPS_SYNC -# define MIPS_SYNC sync -#endif - -#define MIPS_SYNC_STR_2(X) #X -#define MIPS_SYNC_STR_1(X) MIPS_SYNC_STR_2(X) -#define MIPS_SYNC_STR MIPS_SYNC_STR_1(MIPS_SYNC) - -#define USE_ATOMIC_COMPILER_BUILTINS 1 - /* MIPS is an LL/SC machine. However, XLP has a direct atomic exchange instruction which will be used by __atomic_exchange_n. */ #ifdef _MIPS_ARCH_XLP @@ -52,133 +35,4 @@ # define ATOMIC_EXCHANGE_USES_CAS 1 #endif -/* Compare and exchange. - For all "bool" routines, we return FALSE if exchange successful. 
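As an aside for reviewers (not part of the patch): the val/bool convention noted in the comment above, where the "val" form returns the previous *MEM value and the "bool" form returns FALSE on success, maps onto the GCC/C11 __atomic builtins roughly as in this sketch. The function names here are illustrative, not glibc's.

```c
#include <assert.h>

/* Illustrative sketch, not glibc code: the "val" form returns the
   previous *mem value; the "bool" form returns 0 (FALSE) when the
   exchange succeeded, matching the glibc convention.  */
static int
cas_val_acq (int *mem, int newval, int oldval)
{
  int expected = oldval;
  /* On failure the builtin writes the value actually seen back into
     'expected', so 'expected' always ends up as the previous *mem.  */
  __atomic_compare_exchange_n (mem, &expected, newval, 0,
                               __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
  return expected;
}

static int
cas_bool_acq (int *mem, int newval, int oldval)
{
  int expected = oldval;
  return !__atomic_compare_exchange_n (mem, &expected, newval, 0,
                                       __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}
```

The 8- and 16-bit variants in the removed code abort because these targets lack sub-word LL/SC, not because the convention differs.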
*/ - -#define __arch_compare_and_exchange_bool_8_int(mem, newval, oldval, model) \ - (abort (), 0) - -#define __arch_compare_and_exchange_bool_16_int(mem, newval, oldval, model) \ - (abort (), 0) - -#define __arch_compare_and_exchange_bool_32_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - }) - -#define __arch_compare_and_exchange_val_8_int(mem, newval, oldval, model) \ - (abort (), (typeof(*mem)) 0) - -#define __arch_compare_and_exchange_val_16_int(mem, newval, oldval, model) \ - (abort (), (typeof(*mem)) 0) - -#define __arch_compare_and_exchange_val_32_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -#if _MIPS_SIM == _ABIO32 - /* We can't do an atomic 64-bit operation in O32. */ -# define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \ - (abort (), 0) -# define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \ - (abort (), (typeof(*mem)) 0) -#else -# define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \ - __arch_compare_and_exchange_bool_32_int (mem, newval, oldval, model) -# define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \ - __arch_compare_and_exchange_val_32_int (mem, newval, oldval, model) -#endif - -/* Compare and exchange with "acquire" semantics, ie barrier after. */ - -#define atomic_compare_and_exchange_bool_acq(mem, new, old) \ - __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -#define atomic_compare_and_exchange_val_acq(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -/* Compare and exchange with "release" semantics, ie barrier before. 
*/ - -#define atomic_compare_and_exchange_val_rel(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __ATOMIC_RELEASE) - - -/* Atomic exchange (without compare). */ - -#define __arch_exchange_8_int(mem, newval, model) \ - (abort (), (typeof(*mem)) 0) - -#define __arch_exchange_16_int(mem, newval, model) \ - (abort (), (typeof(*mem)) 0) - -#define __arch_exchange_32_int(mem, newval, model) \ - __atomic_exchange_n (mem, newval, model) - -#if _MIPS_SIM == _ABIO32 -/* We can't do an atomic 64-bit operation in O32. */ -# define __arch_exchange_64_int(mem, newval, model) \ - (abort (), (typeof(*mem)) 0) -#else -# define __arch_exchange_64_int(mem, newval, model) \ - __atomic_exchange_n (mem, newval, model) -#endif - -#define atomic_exchange_acq(mem, value) \ - __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_ACQUIRE) - -#define atomic_exchange_rel(mem, value) \ - __atomic_val_bysize (__arch_exchange, int, mem, value, __ATOMIC_RELEASE) - - -/* Atomically add value and return the previous (unincremented) value. */ - -#define __arch_exchange_and_add_8_int(mem, value, model) \ - (abort (), (typeof(*mem)) 0) - -#define __arch_exchange_and_add_16_int(mem, value, model) \ - (abort (), (typeof(*mem)) 0) - -#define __arch_exchange_and_add_32_int(mem, value, model) \ - __atomic_fetch_add (mem, value, model) - -#if _MIPS_SIM == _ABIO32 -/* We can't do an atomic 64-bit operation in O32. 
*/ -# define __arch_exchange_and_add_64_int(mem, value, model) \ - (abort (), (typeof(*mem)) 0) -#else -# define __arch_exchange_and_add_64_int(mem, value, model) \ - __atomic_fetch_add (mem, value, model) -#endif - -#define atomic_exchange_and_add_acq(mem, value) \ - __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, \ - __ATOMIC_ACQUIRE) - -#define atomic_exchange_and_add_rel(mem, value) \ - __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, \ - __ATOMIC_RELEASE) - -/* TODO: More atomic operations could be implemented efficiently; only the - basic requirements are done. */ - -#ifdef __mips16 -# define atomic_full_barrier() __sync_synchronize () - -#else /* !__mips16 */ -# define atomic_full_barrier() \ - __asm__ __volatile__ (".set push\n\t" \ - MIPS_PUSH_MIPS2 \ - MIPS_SYNC_STR "\n\t" \ - ".set pop" : : : "memory") -#endif /* !__mips16 */ - #endif /* atomic-machine.h */ diff --git a/sysdeps/or1k/atomic-machine.h b/sysdeps/or1k/atomic-machine.h index 0d27298d70037a771de9abb1033cbfcb48cdb1b8..90a10867b3f9cf97a0f2f521f6759a0008ef5b82 100644 --- a/sysdeps/or1k/atomic-machine.h +++ b/sysdeps/or1k/atomic-machine.h @@ -22,50 +22,6 @@ #include #define __HAVE_64B_ATOMICS 0 -#define USE_ATOMIC_COMPILER_BUILTINS 1 #define ATOMIC_EXCHANGE_USES_CAS 1 -#define __arch_compare_and_exchange_bool_8_int(mem, newval, oldval, model) \ - (abort (), 0) - -#define __arch_compare_and_exchange_bool_16_int(mem, newval, oldval, model) \ - (abort (), 0) - -#define __arch_compare_and_exchange_bool_32_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - }) - -#define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \ - (abort (), 0) - -#define __arch_compare_and_exchange_val_8_int(mem, newval, oldval, model) \ - (abort (), (__typeof (*mem)) 0) - -#define __arch_compare_and_exchange_val_16_int(mem, newval, oldval, model) \ - 
(abort (), (__typeof (*mem)) 0) - -#define __arch_compare_and_exchange_val_32_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -#define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \ - (abort (), (__typeof (*mem)) 0) - -#define atomic_compare_and_exchange_bool_acq(mem, new, old) \ - __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -#define atomic_compare_and_exchange_val_acq(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -#define atomic_full_barrier() ({ asm volatile ("l.msync" ::: "memory"); }) - #endif /* atomic-machine.h */ diff --git a/sysdeps/powerpc/atomic-machine.h b/sysdeps/powerpc/atomic-machine.h deleted file mode 100644 index f2114322f53699009aea29b9503492b1d5a03e2e..0000000000000000000000000000000000000000 --- a/sysdeps/powerpc/atomic-machine.h +++ /dev/null @@ -1,261 +0,0 @@ -/* Atomic operations. PowerPC Common version. - Copyright (C) 2003-2022 Free Software Foundation, Inc. - This file is part of the GNU C Library. - - The GNU C Library is free software; you can redistribute it and/or - modify it under the terms of the GNU Lesser General Public - License as published by the Free Software Foundation; either - version 2.1 of the License, or (at your option) any later version. - - The GNU C Library is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - Lesser General Public License for more details. - - You should have received a copy of the GNU Lesser General Public - License along with the GNU C Library; if not, see - . */ - -/* - * Never include sysdeps/powerpc/atomic-machine.h directly. 
- * Always use include/atomic.h which will include either - * sysdeps/powerpc/powerpc32/atomic-machine.h - * or - * sysdeps/powerpc/powerpc64/atomic-machine.h - * as appropriate and which in turn include this file. - */ - -/* - * Powerpc does not have byte and halfword forms of load and reserve and - * store conditional. So for powerpc we stub out the 8- and 16-bit forms. - */ -#define __arch_compare_and_exchange_bool_8_acq(mem, newval, oldval) \ - (abort (), 0) - -#define __arch_compare_and_exchange_bool_16_acq(mem, newval, oldval) \ - (abort (), 0) - -#define __ARCH_ACQ_INSTR "isync" -#ifndef __ARCH_REL_INSTR -# define __ARCH_REL_INSTR "sync" -#endif - -#ifndef MUTEX_HINT_ACQ -# define MUTEX_HINT_ACQ -#endif -#ifndef MUTEX_HINT_REL -# define MUTEX_HINT_REL -#endif - -#define atomic_full_barrier() __asm ("sync" ::: "memory") - -#define __arch_compare_and_exchange_val_32_acq(mem, newval, oldval) \ - ({ \ - __typeof (*(mem)) __tmp; \ - __typeof (mem) __memp = (mem); \ - __asm __volatile ( \ - "1: lwarx %0,0,%1" MUTEX_HINT_ACQ "\n" \ - " cmpw %0,%2\n" \ - " bne 2f\n" \ - " stwcx. %3,0,%1\n" \ - " bne- 1b\n" \ - "2: " __ARCH_ACQ_INSTR \ - : "=&r" (__tmp) \ - : "b" (__memp), "r" (oldval), "r" (newval) \ - : "cr0", "memory"); \ - __tmp; \ - }) - -#define __arch_compare_and_exchange_val_32_rel(mem, newval, oldval) \ - ({ \ - __typeof (*(mem)) __tmp; \ - __typeof (mem) __memp = (mem); \ - __asm __volatile (__ARCH_REL_INSTR "\n" \ - "1: lwarx %0,0,%1" MUTEX_HINT_REL "\n" \ - " cmpw %0,%2\n" \ - " bne 2f\n" \ - " stwcx. %3,0,%1\n" \ - " bne- 1b\n" \ - "2: " \ - : "=&r" (__tmp) \ - : "b" (__memp), "r" (oldval), "r" (newval) \ - : "cr0", "memory"); \ - __tmp; \ - }) - -#define __arch_atomic_exchange_32_acq(mem, value) \ - ({ \ - __typeof (*mem) __val; \ - __asm __volatile ( \ - "1: lwarx %0,0,%2" MUTEX_HINT_ACQ "\n" \ - " stwcx. 
%3,0,%2\n" \ - " bne- 1b\n" \ - " " __ARCH_ACQ_INSTR \ - : "=&r" (__val), "=m" (*mem) \ - : "b" (mem), "r" (value), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define __arch_atomic_exchange_32_rel(mem, value) \ - ({ \ - __typeof (*mem) __val; \ - __asm __volatile (__ARCH_REL_INSTR "\n" \ - "1: lwarx %0,0,%2" MUTEX_HINT_REL "\n" \ - " stwcx. %3,0,%2\n" \ - " bne- 1b" \ - : "=&r" (__val), "=m" (*mem) \ - : "b" (mem), "r" (value), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define __arch_atomic_exchange_and_add_32(mem, value) \ - ({ \ - __typeof (*mem) __val, __tmp; \ - __asm __volatile ("1: lwarx %0,0,%3\n" \ - " add %1,%0,%4\n" \ - " stwcx. %1,0,%3\n" \ - " bne- 1b" \ - : "=&b" (__val), "=&r" (__tmp), "=m" (*mem) \ - : "b" (mem), "r" (value), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define __arch_atomic_exchange_and_add_32_acq(mem, value) \ - ({ \ - __typeof (*mem) __val, __tmp; \ - __asm __volatile ("1: lwarx %0,0,%3" MUTEX_HINT_ACQ "\n" \ - " add %1,%0,%4\n" \ - " stwcx. %1,0,%3\n" \ - " bne- 1b\n" \ - __ARCH_ACQ_INSTR \ - : "=&b" (__val), "=&r" (__tmp), "=m" (*mem) \ - : "b" (mem), "r" (value), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define __arch_atomic_exchange_and_add_32_rel(mem, value) \ - ({ \ - __typeof (*mem) __val, __tmp; \ - __asm __volatile (__ARCH_REL_INSTR "\n" \ - "1: lwarx %0,0,%3" MUTEX_HINT_REL "\n" \ - " add %1,%0,%4\n" \ - " stwcx. %1,0,%3\n" \ - " bne- 1b" \ - : "=&b" (__val), "=&r" (__tmp), "=m" (*mem) \ - : "b" (mem), "r" (value), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define __arch_atomic_decrement_if_positive_32(mem) \ - ({ int __val, __tmp; \ - __asm __volatile ("1: lwarx %0,0,%3\n" \ - " cmpwi 0,%0,0\n" \ - " addi %1,%0,-1\n" \ - " ble 2f\n" \ - " stwcx. 
%1,0,%3\n" \ - " bne- 1b\n" \ - "2: " __ARCH_ACQ_INSTR \ - : "=&b" (__val), "=&r" (__tmp), "=m" (*mem) \ - : "b" (mem), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_compare_and_exchange_val_32_acq(mem, newval, oldval); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_compare_and_exchange_val_64_acq(mem, newval, oldval); \ - else \ - abort (); \ - __result; \ - }) - -#define atomic_compare_and_exchange_val_rel(mem, newval, oldval) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_compare_and_exchange_val_32_rel(mem, newval, oldval); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_compare_and_exchange_val_64_rel(mem, newval, oldval); \ - else \ - abort (); \ - __result; \ - }) - -#define atomic_exchange_acq(mem, value) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_atomic_exchange_32_acq (mem, value); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_atomic_exchange_64_acq (mem, value); \ - else \ - abort (); \ - __result; \ - }) - -#define atomic_exchange_rel(mem, value) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_atomic_exchange_32_rel (mem, value); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_atomic_exchange_64_rel (mem, value); \ - else \ - abort (); \ - __result; \ - }) - -#define atomic_exchange_and_add(mem, value) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_atomic_exchange_and_add_32 (mem, value); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_atomic_exchange_and_add_64 (mem, value); \ - else \ - abort (); \ - __result; \ - }) -#define atomic_exchange_and_add_acq(mem, value) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_atomic_exchange_and_add_32_acq 
(mem, value); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_atomic_exchange_and_add_64_acq (mem, value); \ - else \ - abort (); \ - __result; \ - }) -#define atomic_exchange_and_add_rel(mem, value) \ - ({ \ - __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_atomic_exchange_and_add_32_rel (mem, value); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_atomic_exchange_and_add_64_rel (mem, value); \ - else \ - abort (); \ - __result; \ - }) - -/* Decrement *MEM if it is > 0, and return the old value. */ -#define atomic_decrement_if_positive(mem) \ - ({ __typeof (*(mem)) __result; \ - if (sizeof (*mem) == 4) \ - __result = __arch_atomic_decrement_if_positive_32 (mem); \ - else if (sizeof (*mem) == 8) \ - __result = __arch_atomic_decrement_if_positive_64 (mem); \ - else \ - abort (); \ - __result; \ - }) diff --git a/sysdeps/powerpc/powerpc32/atomic-machine.h b/sysdeps/powerpc/powerpc32/atomic-machine.h index 5a82e75399615c40d02aca420116c6ac76e9d627..f72d4be13709e38006255d236efb0e94f3976e68 100644 --- a/sysdeps/powerpc/powerpc32/atomic-machine.h +++ b/sysdeps/powerpc/powerpc32/atomic-machine.h @@ -32,65 +32,11 @@ # define MUTEX_HINT_REL #endif +#define __ARCH_ACQ_INSTR "isync" + #define __HAVE_64B_ATOMICS 0 -#define USE_ATOMIC_COMPILER_BUILTINS 0 #define ATOMIC_EXCHANGE_USES_CAS 1 -/* - * The 32-bit exchange_bool is different on powerpc64 because the subf - * does signed 64-bit arithmetic while the lwarx is 32-bit unsigned - * (a load word and zero (high 32) form). So powerpc64 has a slightly - * different version in sysdeps/powerpc/powerpc64/atomic-machine.h. - */ -#define __arch_compare_and_exchange_bool_32_acq(mem, newval, oldval) \ -({ \ - unsigned int __tmp; \ - __asm __volatile ( \ - "1: lwarx %0,0,%1" MUTEX_HINT_ACQ "\n" \ - " subf. %0,%2,%0\n" \ - " bne 2f\n" \ - " stwcx. 
%3,0,%1\n" \ - " bne- 1b\n" \ - "2: " __ARCH_ACQ_INSTR \ - : "=&r" (__tmp) \ - : "b" (mem), "r" (oldval), "r" (newval) \ - : "cr0", "memory"); \ - __tmp != 0; \ -}) - -/* Powerpc32 processors don't implement the 64-bit (doubleword) forms of - load and reserve (ldarx) and store conditional (stdcx.) instructions. - So for powerpc32 we stub out the 64-bit forms. */ -#define __arch_compare_and_exchange_bool_64_acq(mem, newval, oldval) \ - (abort (), 0) - -#define __arch_compare_and_exchange_val_64_acq(mem, newval, oldval) \ - (abort (), (__typeof (*mem)) 0) - -#define __arch_compare_and_exchange_val_64_rel(mem, newval, oldval) \ - (abort (), (__typeof (*mem)) 0) - -#define __arch_atomic_exchange_64_acq(mem, value) \ - ({ abort (); (*mem) = (value); }) - -#define __arch_atomic_exchange_64_rel(mem, value) \ - ({ abort (); (*mem) = (value); }) - -#define __arch_atomic_exchange_and_add_64(mem, value) \ - ({ abort (); (*mem) = (value); }) - -#define __arch_atomic_exchange_and_add_64_acq(mem, value) \ - ({ abort (); (*mem) = (value); }) - -#define __arch_atomic_exchange_and_add_64_rel(mem, value) \ - ({ abort (); (*mem) = (value); }) - -#define __arch_atomic_decrement_val_64(mem) \ - ({ abort (); (*mem)--; }) - -#define __arch_atomic_decrement_if_positive_64(mem) \ - ({ abort (); (*mem)--; }) - #ifdef _ARCH_PWR4 /* * Newer powerpc64 processors support the new "light weight" sync (lwsync) @@ -101,7 +47,6 @@ /* * "light weight" sync can also be used for the release barrier. */ -# define __ARCH_REL_INSTR "lwsync" # define atomic_write_barrier() __asm ("lwsync" ::: "memory") #else /* @@ -112,9 +57,3 @@ # define atomic_read_barrier() __asm ("sync" ::: "memory") # define atomic_write_barrier() __asm ("sync" ::: "memory") #endif - -/* - * Include the rest of the atomic ops macros which are common to both - * powerpc32 and powerpc64. 
- */ -#include_next diff --git a/sysdeps/powerpc/powerpc64/atomic-machine.h b/sysdeps/powerpc/powerpc64/atomic-machine.h index 7ac9ef6ab4c32550ba9de54873ace239757b6a77..fcb1592be9ad6a3981f56c513deac2f5f8ac5bb7 100644 --- a/sysdeps/powerpc/powerpc64/atomic-machine.h +++ b/sysdeps/powerpc/powerpc64/atomic-machine.h @@ -32,183 +32,11 @@ # define MUTEX_HINT_REL #endif +#define __ARCH_ACQ_INSTR "isync" + #define __HAVE_64B_ATOMICS 1 -#define USE_ATOMIC_COMPILER_BUILTINS 0 #define ATOMIC_EXCHANGE_USES_CAS 1 -/* The 32-bit exchange_bool is different on powerpc64 because the subf - does signed 64-bit arithmetic while the lwarx is 32-bit unsigned - (a load word and zero (high 32) form) load. - In powerpc64 register values are 64-bit by default, including oldval. - The value in old val unknown sign extension, lwarx loads the 32-bit - value as unsigned. So we explicitly clear the high 32 bits in oldval. */ -#define __arch_compare_and_exchange_bool_32_acq(mem, newval, oldval) \ -({ \ - unsigned int __tmp, __tmp2; \ - __asm __volatile (" clrldi %1,%1,32\n" \ - "1: lwarx %0,0,%2" MUTEX_HINT_ACQ "\n" \ - " subf. %0,%1,%0\n" \ - " bne 2f\n" \ - " stwcx. %4,0,%2\n" \ - " bne- 1b\n" \ - "2: " __ARCH_ACQ_INSTR \ - : "=&r" (__tmp), "=r" (__tmp2) \ - : "b" (mem), "1" (oldval), "r" (newval) \ - : "cr0", "memory"); \ - __tmp != 0; \ -}) - -/* - * Only powerpc64 processors support Load doubleword and reserve index (ldarx) - * and Store doubleword conditional indexed (stdcx) instructions. So here - * we define the 64-bit forms. - */ -#define __arch_compare_and_exchange_bool_64_acq(mem, newval, oldval) \ -({ \ - unsigned long __tmp; \ - __asm __volatile ( \ - "1: ldarx %0,0,%1" MUTEX_HINT_ACQ "\n" \ - " subf. %0,%2,%0\n" \ - " bne 2f\n" \ - " stdcx. 
%3,0,%1\n" \ - " bne- 1b\n" \ - "2: " __ARCH_ACQ_INSTR \ - : "=&r" (__tmp) \ - : "b" (mem), "r" (oldval), "r" (newval) \ - : "cr0", "memory"); \ - __tmp != 0; \ -}) - -#define __arch_compare_and_exchange_val_64_acq(mem, newval, oldval) \ - ({ \ - __typeof (*(mem)) __tmp; \ - __typeof (mem) __memp = (mem); \ - __asm __volatile ( \ - "1: ldarx %0,0,%1" MUTEX_HINT_ACQ "\n" \ - " cmpd %0,%2\n" \ - " bne 2f\n" \ - " stdcx. %3,0,%1\n" \ - " bne- 1b\n" \ - "2: " __ARCH_ACQ_INSTR \ - : "=&r" (__tmp) \ - : "b" (__memp), "r" (oldval), "r" (newval) \ - : "cr0", "memory"); \ - __tmp; \ - }) - -#define __arch_compare_and_exchange_val_64_rel(mem, newval, oldval) \ - ({ \ - __typeof (*(mem)) __tmp; \ - __typeof (mem) __memp = (mem); \ - __asm __volatile (__ARCH_REL_INSTR "\n" \ - "1: ldarx %0,0,%1" MUTEX_HINT_REL "\n" \ - " cmpd %0,%2\n" \ - " bne 2f\n" \ - " stdcx. %3,0,%1\n" \ - " bne- 1b\n" \ - "2: " \ - : "=&r" (__tmp) \ - : "b" (__memp), "r" (oldval), "r" (newval) \ - : "cr0", "memory"); \ - __tmp; \ - }) - -#define __arch_atomic_exchange_64_acq(mem, value) \ - ({ \ - __typeof (*mem) __val; \ - __asm __volatile (__ARCH_REL_INSTR "\n" \ - "1: ldarx %0,0,%2" MUTEX_HINT_ACQ "\n" \ - " stdcx. %3,0,%2\n" \ - " bne- 1b\n" \ - " " __ARCH_ACQ_INSTR \ - : "=&r" (__val), "=m" (*mem) \ - : "b" (mem), "r" (value), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define __arch_atomic_exchange_64_rel(mem, value) \ - ({ \ - __typeof (*mem) __val; \ - __asm __volatile (__ARCH_REL_INSTR "\n" \ - "1: ldarx %0,0,%2" MUTEX_HINT_REL "\n" \ - " stdcx. %3,0,%2\n" \ - " bne- 1b" \ - : "=&r" (__val), "=m" (*mem) \ - : "b" (mem), "r" (value), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define __arch_atomic_exchange_and_add_64(mem, value) \ - ({ \ - __typeof (*mem) __val, __tmp; \ - __asm __volatile ("1: ldarx %0,0,%3\n" \ - " add %1,%0,%4\n" \ - " stdcx. 
%1,0,%3\n" \ - " bne- 1b" \ - : "=&b" (__val), "=&r" (__tmp), "=m" (*mem) \ - : "b" (mem), "r" (value), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define __arch_atomic_exchange_and_add_64_acq(mem, value) \ - ({ \ - __typeof (*mem) __val, __tmp; \ - __asm __volatile ("1: ldarx %0,0,%3" MUTEX_HINT_ACQ "\n" \ - " add %1,%0,%4\n" \ - " stdcx. %1,0,%3\n" \ - " bne- 1b\n" \ - __ARCH_ACQ_INSTR \ - : "=&b" (__val), "=&r" (__tmp), "=m" (*mem) \ - : "b" (mem), "r" (value), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define __arch_atomic_exchange_and_add_64_rel(mem, value) \ - ({ \ - __typeof (*mem) __val, __tmp; \ - __asm __volatile (__ARCH_REL_INSTR "\n" \ - "1: ldarx %0,0,%3" MUTEX_HINT_REL "\n" \ - " add %1,%0,%4\n" \ - " stdcx. %1,0,%3\n" \ - " bne- 1b" \ - : "=&b" (__val), "=&r" (__tmp), "=m" (*mem) \ - : "b" (mem), "r" (value), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define __arch_atomic_decrement_val_64(mem) \ - ({ \ - __typeof (*(mem)) __val; \ - __asm __volatile ("1: ldarx %0,0,%2\n" \ - " subi %0,%0,1\n" \ - " stdcx. %0,0,%2\n" \ - " bne- 1b" \ - : "=&b" (__val), "=m" (*mem) \ - : "b" (mem), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - -#define __arch_atomic_decrement_if_positive_64(mem) \ - ({ int __val, __tmp; \ - __asm __volatile ("1: ldarx %0,0,%3\n" \ - " cmpdi 0,%0,0\n" \ - " addi %1,%0,-1\n" \ - " ble 2f\n" \ - " stdcx. %1,0,%3\n" \ - " bne- 1b\n" \ - "2: " __ARCH_ACQ_INSTR \ - : "=&b" (__val), "=&r" (__tmp), "=m" (*mem) \ - : "b" (mem), "m" (*mem) \ - : "cr0", "memory"); \ - __val; \ - }) - /* * All powerpc64 processors support the new "light weight" sync (lwsync). */ @@ -216,11 +44,4 @@ /* * "light weight" sync can also be used for the release barrier. */ -#define __ARCH_REL_INSTR "lwsync" #define atomic_write_barrier() __asm ("lwsync" ::: "memory") - -/* - * Include the rest of the atomic ops macros which are common to both - * powerpc32 and powerpc64. 
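For reviewers unfamiliar with the removed ldarx/stdcx. sequences: the __arch_atomic_decrement_if_positive_64 loop above decrements *MEM only when it is positive and always returns the old value. An equivalent weak compare-exchange loop, sketched with illustrative names (not glibc code):

```c
#include <assert.h>

/* Illustrative sketch, not glibc code: what the removed
   __arch_atomic_decrement_if_positive_64 ldarx/stdcx. loop computes.
   Decrement *mem only if it is > 0; always return the old value.  */
static long
decrement_if_positive (long *mem)
{
  long old = __atomic_load_n (mem, __ATOMIC_RELAXED);
  /* A failed weak compare-exchange refreshes 'old', mirroring the
     ldarx reload when stdcx. loses the reservation.  */
  while (old > 0
         && !__atomic_compare_exchange_n (mem, &old, old - 1, 1,
                                          __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
    ;
  return old;
}
```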
- */ -#include_next diff --git a/sysdeps/s390/atomic-machine.h b/sysdeps/s390/atomic-machine.h index d2fc3cf240888ca3569c6d3b1287cc87209cab89..3e25dcf44126001382e3b98aa2f82d29e29f1424 100644 --- a/sysdeps/s390/atomic-machine.h +++ b/sysdeps/s390/atomic-machine.h @@ -15,24 +15,6 @@ License along with the GNU C Library; if not, see . */ -/* Activate all C11 atomic builtins. - - Note: - E.g. in nptl/pthread_key_delete.c if compiled with GCCs 6 and before, - an extra stack-frame is generated and the old value is stored on stack - before cs instruction but it never loads this value from stack. - An unreleased GCC 7 omit those stack operations. - - E.g. in nptl/pthread_once.c the condition code of cs instruction is - evaluated by a sequence of ipm, sra, compare and jump instructions instead - of one conditional jump instruction. This also occurs with an unreleased - GCC 7. - - The atomic_fetch_abc_def C11 builtins are now using load-and-abc instructions - on z196 zarch and higher cpus instead of a loop with compare-and-swap - instruction. */ -#define USE_ATOMIC_COMPILER_BUILTINS 1 - #ifdef __s390x__ # define __HAVE_64B_ATOMICS 1 #else @@ -40,43 +22,3 @@ #endif #define ATOMIC_EXCHANGE_USES_CAS 1 - -/* Implement some of the non-C11 atomic macros from include/atomic.h - with help of the C11 atomic builtins. The other non-C11 atomic macros - are using the macros defined here. */ - -/* Atomically store NEWVAL in *MEM if *MEM is equal to OLDVAL. - Return the old *MEM value. 
*/ -#define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \ - ({ __atomic_check_size((mem)); \ - typeof ((__typeof (*(mem))) *(mem)) __atg1_oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__atg1_oldval, \ - newval, 1, __ATOMIC_ACQUIRE, \ - __ATOMIC_RELAXED); \ - __atg1_oldval; }) -#define atomic_compare_and_exchange_val_rel(mem, newval, oldval) \ - ({ __atomic_check_size((mem)); \ - typeof ((__typeof (*(mem))) *(mem)) __atg1_2_oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__atg1_2_oldval, \ - newval, 1, __ATOMIC_RELEASE, \ - __ATOMIC_RELAXED); \ - __atg1_2_oldval; }) - -/* Atomically store NEWVAL in *MEM if *MEM is equal to OLDVAL. - Return zero if *MEM was changed or non-zero if no exchange happened. */ -#define atomic_compare_and_exchange_bool_acq(mem, newval, oldval) \ - ({ __atomic_check_size((mem)); \ - typeof ((__typeof (*(mem))) *(mem)) __atg2_oldval = (oldval); \ - !__atomic_compare_exchange_n (mem, (void *) &__atg2_oldval, newval, \ - 1, __ATOMIC_ACQUIRE, \ - __ATOMIC_RELAXED); }) - -/* Add VALUE to *MEM and return the old value of *MEM. */ -/* The gcc builtin uses load-and-add instruction on z196 zarch and higher cpus - instead of a loop with compare-and-swap instruction. */ -# define atomic_exchange_and_add_acq(mem, operand) \ - ({ __atomic_check_size((mem)); \ - __atomic_fetch_add ((mem), (operand), __ATOMIC_ACQUIRE); }) -# define atomic_exchange_and_add_rel(mem, operand) \ - ({ __atomic_check_size((mem)); \ - __atomic_fetch_add ((mem), (operand), __ATOMIC_RELEASE); }) diff --git a/sysdeps/sparc/atomic-machine.h b/sysdeps/sparc/atomic-machine.h index 653c2035f76bbf8cd5ef31463807f199528b417f..a7042f1ee546b9f238153cb923409d42eb45cc03 100644 --- a/sysdeps/sparc/atomic-machine.h +++ b/sysdeps/sparc/atomic-machine.h @@ -24,34 +24,10 @@ #else # define __HAVE_64B_ATOMICS 0 #endif -#define USE_ATOMIC_COMPILER_BUILTINS 1 /* XXX Is this actually correct? 
*/ #define ATOMIC_EXCHANGE_USES_CAS __HAVE_64B_ATOMICS -/* Compare and exchange. - For all "bool" routines, we return FALSE if exchange successful. */ - -#define __arch_compare_and_exchange_val_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -#define atomic_compare_and_exchange_val_acq(mem, new, old) \ - ({ \ - __typeof ((__typeof (*(mem))) *(mem)) __result; \ - if (sizeof (*mem) == 4 \ - || (__HAVE_64B_ATOMICS && sizeof (*mem) == 8)) \ - __result = __arch_compare_and_exchange_val_int (mem, new, old, \ - __ATOMIC_ACQUIRE); \ - else \ - abort (); \ - __result; \ - }) - #ifdef __sparc_v9__ # define atomic_full_barrier() \ __asm __volatile ("membar #LoadLoad | #LoadStore" \ diff --git a/sysdeps/unix/sysv/linux/hppa/atomic-machine.h b/sysdeps/unix/sysv/linux/hppa/atomic-machine.h index 393a056ece1add048f574f720cfdc71015964efa..9c9fecbefef037e3b7e8c291e722d093b811dd69 100644 --- a/sysdeps/unix/sysv/linux/hppa/atomic-machine.h +++ b/sysdeps/unix/sysv/linux/hppa/atomic-machine.h @@ -18,87 +18,10 @@ #ifndef _ATOMIC_MACHINE_H #define _ATOMIC_MACHINE_H 1 -#define atomic_full_barrier() __sync_synchronize () - #define __HAVE_64B_ATOMICS 0 -#define USE_ATOMIC_COMPILER_BUILTINS 0 - -/* We use the compiler atomic load and store builtins as the generic - defines are not atomic. In particular, we need to use compare and - exchange for stores as the implementation is synthesized. 
*/ -void __atomic_link_error (void); -#define __atomic_check_size_ls(mem) \ - if ((sizeof (*mem) != 1) && (sizeof (*mem) != 2) && sizeof (*mem) != 4) \ - __atomic_link_error (); - -#define atomic_load_relaxed(mem) \ - ({ __atomic_check_size_ls((mem)); \ - __atomic_load_n ((mem), __ATOMIC_RELAXED); }) -#define atomic_load_acquire(mem) \ - ({ __atomic_check_size_ls((mem)); \ - __atomic_load_n ((mem), __ATOMIC_ACQUIRE); }) - -#define atomic_store_relaxed(mem, val) \ - do { \ - __atomic_check_size_ls((mem)); \ - __atomic_store_n ((mem), (val), __ATOMIC_RELAXED); \ - } while (0) -#define atomic_store_release(mem, val) \ - do { \ - __atomic_check_size_ls((mem)); \ - __atomic_store_n ((mem), (val), __ATOMIC_RELEASE); \ - } while (0) /* XXX Is this actually correct? */ #define ATOMIC_EXCHANGE_USES_CAS 1 -/* prev = *addr; - if (prev == old) - *addr = new; - return prev; */ - -/* Use the kernel atomic light weight syscalls on hppa. */ -#define _LWS "0xb0" -#define _LWS_CAS "0" -/* Note r31 is the link register. */ -#define _LWS_CLOBBER "r1", "r23", "r22", "r20", "r31", "memory" -/* String constant for -EAGAIN. */ -#define _ASM_EAGAIN "-11" -/* String constant for -EDEADLOCK. */ -#define _ASM_EDEADLOCK "-45" - -/* The only basic operation needed is compare and exchange. The mem - pointer must be word aligned. We no longer loop on deadlock. 
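Context for the ATOMIC_EXCHANGE_USES_CAS define that this patch keeps: when compare-and-swap is the only primitive available (as with the hppa kernel light-weight syscall above), a plain atomic exchange has to be synthesized as a CAS retry loop, roughly like this sketch (illustrative names, not glibc code):

```c
#include <assert.h>

/* Illustrative sketch, not glibc code: with only a compare-and-swap
   primitive (such as the hppa light-weight syscall), a plain atomic
   exchange is a CAS retry loop, which is what
   ATOMIC_EXCHANGE_USES_CAS == 1 advertises to generic code.  */
static int
exchange_via_cas (int *mem, int newval)
{
  int old = __atomic_load_n (mem, __ATOMIC_RELAXED);
  /* 'old' is refreshed by each failed attempt; retry until we win.  */
  while (!__atomic_compare_exchange_n (mem, &old, newval, 1,
                                       __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
    ;
  return old;
}
```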
*/ -#define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \ - ({ \ - register long lws_errno asm("r21"); \ - register unsigned long lws_ret asm("r28"); \ - register unsigned long lws_mem asm("r26") = (unsigned long)(mem); \ - register unsigned long lws_old asm("r25") = (unsigned long)(oldval);\ - register unsigned long lws_new asm("r24") = (unsigned long)(newval);\ - __asm__ __volatile__( \ - "0: \n\t" \ - "ble " _LWS "(%%sr2, %%r0) \n\t" \ - "ldi " _LWS_CAS ", %%r20 \n\t" \ - "cmpiclr,<> " _ASM_EAGAIN ", %%r21, %%r0\n\t" \ - "b,n 0b \n\t" \ - "cmpclr,= %%r0, %%r21, %%r0 \n\t" \ - "iitlbp %%r0,(%%sr0, %%r0) \n\t" \ - : "=r" (lws_ret), "=r" (lws_errno) \ - : "r" (lws_mem), "r" (lws_old), "r" (lws_new) \ - : _LWS_CLOBBER \ - ); \ - \ - (__typeof (oldval)) lws_ret; \ - }) - -#define atomic_compare_and_exchange_bool_acq(mem, newval, oldval) \ - ({ \ - __typeof__ (*mem) ret; \ - ret = atomic_compare_and_exchange_val_acq(mem, newval, oldval); \ - /* Return 1 if it was already acquired. */ \ - (ret != oldval); \ - }) - #endif /* _ATOMIC_MACHINE_H */ diff --git a/sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h b/sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h index 67467fe9d6de19060a0c2f53031a9c9af4dea102..6f83fb2965bd162f0f76e0e3586472ade39af607 100644 --- a/sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h +++ b/sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h @@ -20,34 +20,11 @@ #include -/* Coldfire has no atomic compare-and-exchange operation, but the - kernel provides userspace atomicity operations. Use them. */ - #define __HAVE_64B_ATOMICS 0 -#define USE_ATOMIC_COMPILER_BUILTINS 0 /* XXX Is this actually correct? */ #define ATOMIC_EXCHANGE_USES_CAS 1 -/* The only basic operation needed is compare and exchange. */ -#define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \ - ({ \ - /* Use temporary variables to workaround call-clobberness of \ - the registers. 
*/ \ - __typeof (mem) _mem = mem; \ - __typeof (oldval) _oldval = oldval; \ - __typeof (newval) _newval = newval; \ - register uint32_t _d0 asm ("d0") = SYS_ify (atomic_cmpxchg_32); \ - register uint32_t *_a0 asm ("a0") = (uint32_t *) _mem; \ - register uint32_t _d2 asm ("d2") = (uint32_t) _oldval; \ - register uint32_t _d1 asm ("d1") = (uint32_t) _newval; \ - \ - asm ("trap #0" \ - : "+d" (_d0), "+m" (*_a0) \ - : "a" (_a0), "d" (_d2), "d" (_d1)); \ - (__typeof (oldval)) _d0; \ - }) - # define atomic_full_barrier() \ (INTERNAL_SYSCALL_CALL (atomic_barrier), (void) 0) diff --git a/sysdeps/unix/sysv/linux/nios2/atomic-machine.h b/sysdeps/unix/sysv/linux/nios2/atomic-machine.h index 951aa463797a5acb2f7360e79d8495edf9343130..4b4b714f93f4c4b9f7f650d70d2301299a45e2f5 100644 --- a/sysdeps/unix/sysv/linux/nios2/atomic-machine.h +++ b/sysdeps/unix/sysv/linux/nios2/atomic-machine.h @@ -20,64 +20,8 @@ #define _NIOS2_ATOMIC_MACHINE_H 1 #define __HAVE_64B_ATOMICS 0 -#define USE_ATOMIC_COMPILER_BUILTINS 0 /* XXX Is this actually correct? 
*/ #define ATOMIC_EXCHANGE_USES_CAS 1 -#define __arch_compare_and_exchange_val_8_acq(mem, newval, oldval) \ - (abort (), (__typeof (*mem)) 0) -#define __arch_compare_and_exchange_val_16_acq(mem, newval, oldval) \ - (abort (), (__typeof (*mem)) 0) -#define __arch_compare_and_exchange_val_64_acq(mem, newval, oldval) \ - (abort (), (__typeof (*mem)) 0) - -#define __arch_compare_and_exchange_bool_8_acq(mem, newval, oldval) \ - (abort (), 0) -#define __arch_compare_and_exchange_bool_16_acq(mem, newval, oldval) \ - (abort (), 0) -#define __arch_compare_and_exchange_bool_64_acq(mem, newval, oldval) \ - (abort (), 0) - -#define __arch_compare_and_exchange_val_32_acq(mem, newval, oldval) \ - ({ \ - register int r2 asm ("r2"); \ - register int* r4 asm ("r4") = (int*)(mem); \ - register int r5 asm ("r5"); \ - register int r6 asm ("r6") = (int)(newval); \ - int retval, orig_oldval = (int)(oldval); \ - long kernel_cmpxchg = 0x1004; \ - while (1) \ - { \ - r5 = *r4; \ - if (r5 != orig_oldval) \ - { \ - retval = r5; \ - break; \ - } \ - asm volatile ("callr %1\n" \ - : "=r" (r2) \ - : "r" (kernel_cmpxchg), "r" (r4), "r" (r5), "r" (r6) \ - : "ra", "memory"); \ - if (!r2) { retval = orig_oldval; break; } \ - } \ - (__typeof (*(mem))) retval; \ - }) - -#define __arch_compare_and_exchange_bool_32_acq(mem, newval, oldval) \ - ({ \ - register int r2 asm ("r2"); \ - register int *r4 asm ("r4") = (int*)(mem); \ - register int r5 asm ("r5") = (int)(oldval); \ - register int r6 asm ("r6") = (int)(newval); \ - long kernel_cmpxchg = 0x1004; \ - asm volatile ("callr %1\n" \ - : "=r" (r2) \ - : "r" (kernel_cmpxchg), "r" (r4), "r" (r5), "r" (r6) \ - : "ra", "memory"); \ - r2; \ - }) - -#define atomic_full_barrier() ({ asm volatile ("sync"); }) - #endif /* _NIOS2_ATOMIC_MACHINE_H */ diff --git a/sysdeps/unix/sysv/linux/riscv/atomic-machine.h b/sysdeps/unix/sysv/linux/riscv/atomic-machine.h index c5eb5c639fb59d7395c0a2d8f4fd72452845914b..b0ebe09ce1fa4e15064dd57d83cadb8a1976f86d 100644 --- 
a/sysdeps/unix/sysv/linux/riscv/atomic-machine.h +++ b/sysdeps/unix/sysv/linux/riscv/atomic-machine.h @@ -19,127 +19,11 @@ #ifndef _LINUX_RISCV_BITS_ATOMIC_H #define _LINUX_RISCV_BITS_ATOMIC_H 1 -#define atomic_full_barrier() __sync_synchronize () - #ifdef __riscv_atomic # define __HAVE_64B_ATOMICS (__riscv_xlen >= 64) -# define USE_ATOMIC_COMPILER_BUILTINS 1 # define ATOMIC_EXCHANGE_USES_CAS 0 -/* Compare and exchange. - For all "bool" routines, we return FALSE if exchange succesful. */ - -# define __arch_compare_and_exchange_bool_8_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - }) - -# define __arch_compare_and_exchange_bool_16_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - }) - -# define __arch_compare_and_exchange_bool_32_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - }) - -# define __arch_compare_and_exchange_bool_64_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - !__atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - }) - -# define __arch_compare_and_exchange_val_8_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -# define __arch_compare_and_exchange_val_16_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -# define __arch_compare_and_exchange_val_32_int(mem, newval, oldval, model) \ - ({ 
\ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -# define __arch_compare_and_exchange_val_64_int(mem, newval, oldval, model) \ - ({ \ - typeof (*mem) __oldval = (oldval); \ - __atomic_compare_exchange_n (mem, (void *) &__oldval, newval, 0, \ - model, __ATOMIC_RELAXED); \ - __oldval; \ - }) - -/* Atomic compare and exchange. */ - -# define atomic_compare_and_exchange_bool_acq(mem, new, old) \ - __atomic_bool_bysize (__arch_compare_and_exchange_bool, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -# define atomic_compare_and_exchange_val_acq(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __ATOMIC_ACQUIRE) - -# define atomic_compare_and_exchange_val_rel(mem, new, old) \ - __atomic_val_bysize (__arch_compare_and_exchange_val, int, \ - mem, new, old, __ATOMIC_RELEASE) - -/* Atomic exchange (without compare). */ - -# define __arch_exchange_8_int(mem, newval, model) \ - __atomic_exchange_n (mem, newval, model) - -# define __arch_exchange_16_int(mem, newval, model) \ - __atomic_exchange_n (mem, newval, model) - -# define __arch_exchange_32_int(mem, newval, model) \ - __atomic_exchange_n (mem, newval, model) - -# define __arch_exchange_64_int(mem, newval, model) \ - __atomic_exchange_n (mem, newval, model) - -/* Atomically add value and return the previous (unincremented) value. 
*/ - -# define __arch_exchange_and_add_8_int(mem, value, model) \ - __atomic_fetch_add (mem, value, model) - -# define __arch_exchange_and_add_16_int(mem, value, model) \ - __atomic_fetch_add (mem, value, model) - -# define __arch_exchange_and_add_32_int(mem, value, model) \ - __atomic_fetch_add (mem, value, model) - -# define __arch_exchange_and_add_64_int(mem, value, model) \ - __atomic_fetch_add (mem, value, model) - -# define atomic_exchange_and_add_acq(mem, value) \ - __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, \ - __ATOMIC_ACQUIRE) - -# define atomic_exchange_and_add_rel(mem, value) \ - __atomic_val_bysize (__arch_exchange_and_add, int, mem, value, \ - __ATOMIC_RELEASE) - /* Miscellaneous. */ # define asm_amo(which, ordering, mem, value) ({ \ diff --git a/sysdeps/unix/sysv/linux/sh/atomic-machine.h b/sysdeps/unix/sysv/linux/sh/atomic-machine.h index 582d67db61e89d654862d9e15665f2fec94a1202..71848194daa98ad0391c029a8c7d9dba5ba5fe3d 100644 --- a/sysdeps/unix/sysv/linux/sh/atomic-machine.h +++ b/sysdeps/unix/sysv/linux/sh/atomic-machine.h @@ -17,136 +17,6 @@ . */ #define __HAVE_64B_ATOMICS 0 -#define USE_ATOMIC_COMPILER_BUILTINS 0 /* XXX Is this actually correct? */ #define ATOMIC_EXCHANGE_USES_CAS 1 - -/* SH kernel has implemented a gUSA ("g" User Space Atomicity) support - for the user space atomicity. The atomicity macros use this scheme. - - Reference: - Niibe Yutaka, "gUSA: Simple and Efficient User Space Atomicity - Emulation with Little Kernel Modification", Linux Conference 2002, - Japan. http://lc.linux.or.jp/lc2002/papers/niibe0919h.pdf (in - Japanese). - - B.N. Bershad, D. Redell, and J. Ellis, "Fast Mutual Exclusion for - Uniprocessors", Proceedings of the Fifth Architectural Support for - Programming Languages and Operating Systems (ASPLOS), pp. 223-233, - October 1992. 
http://www.cs.washington.edu/homes/bershad/Papers/Rcs.ps - - SuperH ABI: - r15: -(size of atomic instruction sequence) < 0 - r0: end point - r1: saved stack pointer -*/ - -#define __arch_compare_and_exchange_val_8_acq(mem, newval, oldval) \ - ({ __typeof (*(mem)) __result; \ - __asm __volatile ("\ - mova 1f,r0\n\ - .align 2\n\ - mov r15,r1\n\ - mov #(0f-1f),r15\n\ - 0: mov.b @%1,%0\n\ - cmp/eq %0,%3\n\ - bf 1f\n\ - mov.b %2,@%1\n\ - 1: mov r1,r15"\ - : "=&r" (__result) : "u" (mem), "u" (newval), "u" (oldval) \ - : "r0", "r1", "t", "memory"); \ - __result; }) - -#define __arch_compare_and_exchange_val_16_acq(mem, newval, oldval) \ - ({ __typeof (*(mem)) __result; \ - __asm __volatile ("\ - mova 1f,r0\n\ - mov r15,r1\n\ - .align 2\n\ - mov #(0f-1f),r15\n\ - mov #-8,r15\n\ - 0: mov.w @%1,%0\n\ - cmp/eq %0,%3\n\ - bf 1f\n\ - mov.w %2,@%1\n\ - 1: mov r1,r15"\ - : "=&r" (__result) : "u" (mem), "u" (newval), "u" (oldval) \ - : "r0", "r1", "t", "memory"); \ - __result; }) - -#define __arch_compare_and_exchange_val_32_acq(mem, newval, oldval) \ - ({ __typeof (*(mem)) __result; \ - __asm __volatile ("\ - mova 1f,r0\n\ - .align 2\n\ - mov r15,r1\n\ - mov #(0f-1f),r15\n\ - 0: mov.l @%1,%0\n\ - cmp/eq %0,%3\n\ - bf 1f\n\ - mov.l %2,@%1\n\ - 1: mov r1,r15"\ - : "=&r" (__result) : "u" (mem), "u" (newval), "u" (oldval) \ - : "r0", "r1", "t", "memory"); \ - __result; }) - -/* XXX We do not really need 64-bit compare-and-exchange. At least - not in the moment. Using it would mean causing portability - problems since not many other 32-bit architectures have support for - such an operation. So don't define any code for now. 
*/ - -# define __arch_compare_and_exchange_val_64_acq(mem, newval, oldval) \ - (abort (), (__typeof (*mem)) 0) - -#define atomic_exchange_and_add(mem, value) \ - ({ __typeof (*(mem)) __result, __tmp, __value = (value); \ - if (sizeof (*(mem)) == 1) \ - __asm __volatile ("\ - mova 1f,r0\n\ - .align 2\n\ - mov r15,r1\n\ - mov #(0f-1f),r15\n\ - 0: mov.b @%2,%0\n\ - mov %1,r2\n\ - add %0,r2\n\ - mov.b r2,@%2\n\ - 1: mov r1,r15"\ - : "=&r" (__result), "=&r" (__tmp) : "u" (mem), "1" (__value) \ - : "r0", "r1", "r2", "memory"); \ - else if (sizeof (*(mem)) == 2) \ - __asm __volatile ("\ - mova 1f,r0\n\ - .align 2\n\ - mov r15,r1\n\ - mov #(0f-1f),r15\n\ - 0: mov.w @%2,%0\n\ - mov %1,r2\n\ - add %0,r2\n\ - mov.w r2,@%2\n\ - 1: mov r1,r15"\ - : "=&r" (__result), "=&r" (__tmp) : "u" (mem), "1" (__value) \ - : "r0", "r1", "r2", "memory"); \ - else if (sizeof (*(mem)) == 4) \ - __asm __volatile ("\ - mova 1f,r0\n\ - .align 2\n\ - mov r15,r1\n\ - mov #(0f-1f),r15\n\ - 0: mov.l @%2,%0\n\ - mov %1,r2\n\ - add %0,r2\n\ - mov.l r2,@%2\n\ - 1: mov r1,r15"\ - : "=&r" (__result), "=&r" (__tmp) : "u" (mem), "1" (__value) \ - : "r0", "r1", "r2", "memory"); \ - else \ - { \ - __typeof (mem) memp = (mem); \ - do \ - __result = *memp; \ - while (__arch_compare_and_exchange_val_64_acq \ - (memp, __result + __value, __result) == __result); \ - (void) __value; \ - } \ - __result; }) diff --git a/sysdeps/x86/atomic-machine.h b/sysdeps/x86/atomic-machine.h index 2e06877034def3cc3c1cecb128cb770ac02acd78..b9be51c52d8cbef2a95a62192c8ef7011e7f2c12 100644 --- a/sysdeps/x86/atomic-machine.h +++ b/sysdeps/x86/atomic-machine.h @@ -19,36 +19,19 @@ #ifndef _X86_ATOMIC_MACHINE_H #define _X86_ATOMIC_MACHINE_H 1 -#include #include /* For mach. */ -#include /* For cast_to_integer. 
*/ - -#define LOCK_PREFIX "lock;" - -#define USE_ATOMIC_COMPILER_BUILTINS 1 #ifdef __x86_64__ # define __HAVE_64B_ATOMICS 1 -# define SP_REG "rsp" #else /* Since the Pentium, i386 CPUs have supported 64-bit atomics, but the i386 psABI supplement provides only 4-byte alignment for uint64_t inside structs, so it is currently not possible to use 64-bit atomics on this platform. */ # define __HAVE_64B_ATOMICS 0 -# define SP_REG "esp" #endif #define ATOMIC_EXCHANGE_USES_CAS 0 -#define atomic_compare_and_exchange_val_acq(mem, newval, oldval) \ - __sync_val_compare_and_swap (mem, oldval, newval) -#define atomic_compare_and_exchange_bool_acq(mem, newval, oldval) \ - (! __sync_bool_compare_and_swap (mem, oldval, newval)) - -/* We don't use mfence because it is supposedly slower due to having to - provide stronger guarantees (e.g., regarding self-modifying code). */ -#define atomic_full_barrier() \ - __asm __volatile (LOCK_PREFIX "orl $0, (%%" SP_REG ")" ::: "memory") #define atomic_read_barrier() __asm ("" ::: "memory") #define atomic_write_barrier() __asm ("" ::: "memory")