From patchwork Fri Mar 19 13:27:10 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Szabolcs Nagy
X-Patchwork-Id: 42685
To: libc-alpha@sourceware.org, DJ Delorie
Subject: [PATCH 4/6] malloc: Rename chunk2rawmem
Date: Fri, 19 Mar 2021 13:27:10 +0000
Message-Id: <39adedfcc466045b1087f037f35ca437991e1c17.1616155129.git.szabolcs.nagy@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: 
References: 
From: Szabolcs Nagy

The previous patch ensured that all chunk to mem computations use
chunk2rawmem, so now we can rename it to chunk2mem, and in the few
cases where the tag of mem is relevant chunk2mem_tag can be used.

Replaced tag_at (chunk2rawmem (x)) with chunk2mem_tag (x).
Renamed chunk2rawmem to chunk2mem.
---
 malloc/hooks.c  |  4 +--
 malloc/malloc.c | 82 ++++++++++++++++++++++++-------------------------
 2 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/malloc/hooks.c b/malloc/hooks.c
index e888adcdc3..c91f9502ba 100644
--- a/malloc/hooks.c
+++ b/malloc/hooks.c
@@ -279,7 +279,7 @@ free_check (void *mem, const void *caller)
   else
     {
       /* Mark the chunk as belonging to the library again.  */
-      (void)tag_region (chunk2rawmem (p), memsize (p));
+      (void)tag_region (chunk2mem (p), memsize (p));
       _int_free (&main_arena, p, 1);
       __libc_lock_unlock (main_arena.mutex);
     }
@@ -330,7 +330,7 @@ realloc_check (void *oldmem, size_t bytes, const void *caller)
 #if HAVE_MREMAP
       mchunkptr newp = mremap_chunk (oldp, chnb);
       if (newp)
-        newmem = tag_at (chunk2rawmem (newp));
+        newmem = chunk2mem_tag (newp);
       else
 #endif
     {
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 9ddb65f029..6f87b7bdb1 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -1307,12 +1307,12 @@ nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 /* Convert a chunk address to a user mem pointer without correcting
    the tag.  */
-#define chunk2rawmem(p) ((void*)((char*)(p) + CHUNK_HDR_SZ))
+#define chunk2mem(p) ((void*)((char*)(p) + CHUNK_HDR_SZ))
 
-/* Convert between user mem pointers and chunk pointers, updating any
-   memory tags on the pointer to respect the tag value at that
-   location.  */
-#define chunk2mem(p) ((void *)tag_at (((char*)(p) + CHUNK_HDR_SZ)))
+/* Convert a chunk address to a user mem pointer and extract the right tag.  */
+#define chunk2mem_tag(p) ((void*)tag_at ((char*)(p) + CHUNK_HDR_SZ))
+
+/* Convert a user mem pointer to a chunk address and extract the right tag.  */
 #define mem2chunk(mem) ((mchunkptr)tag_at (((char*)(mem) - CHUNK_HDR_SZ)))
 
 /* The smallest possible chunk */
@@ -1328,7 +1328,7 @@ nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 #define aligned_OK(m)  (((unsigned long)(m) & MALLOC_ALIGN_MASK) == 0)
 
 #define misaligned_chunk(p) \
-  ((uintptr_t)(MALLOC_ALIGNMENT == CHUNK_HDR_SZ ? (p) : chunk2rawmem (p)) \
+  ((uintptr_t)(MALLOC_ALIGNMENT == CHUNK_HDR_SZ ? (p) : chunk2mem (p)) \
    & MALLOC_ALIGN_MASK)
 
 /* pad request bytes into a usable size -- internal version */
@@ -2128,7 +2128,7 @@ do_check_chunk (mstate av, mchunkptr p)
       /* chunk is page-aligned */
       assert (((prev_size (p) + sz) & (GLRO (dl_pagesize) - 1)) == 0);
       /* mem is aligned */
-      assert (aligned_OK (chunk2rawmem (p)));
+      assert (aligned_OK (chunk2mem (p)));
     }
 }
 
@@ -2152,7 +2152,7 @@ do_check_free_chunk (mstate av, mchunkptr p)
   if ((unsigned long) (sz) >= MINSIZE)
     {
       assert ((sz & MALLOC_ALIGN_MASK) == 0);
-      assert (aligned_OK (chunk2rawmem (p)));
+      assert (aligned_OK (chunk2mem (p)));
       /* ... matching footer field */
       assert (prev_size (next_chunk (p)) == sz);
       /* ... and is fully consolidated */
@@ -2231,7 +2231,7 @@ do_check_remalloced_chunk (mstate av, mchunkptr p, INTERNAL_SIZE_T s)
   assert ((sz & MALLOC_ALIGN_MASK) == 0);
   assert ((unsigned long) (sz) >= MINSIZE);
   /* ... and alignment */
-  assert (aligned_OK (chunk2rawmem (p)));
+  assert (aligned_OK (chunk2mem (p)));
   /* chunk is less than MINSIZE more than request */
   assert ((long) (sz) - (long) (s) >= 0);
   assert ((long) (sz) - (long) (s + MINSIZE) < 0);
@@ -2501,16 +2501,16 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
 
           if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
             {
-              /* For glibc, chunk2rawmem increases the address by
+              /* For glibc, chunk2mem increases the address by
                  CHUNK_HDR_SZ and MALLOC_ALIGN_MASK is
                  CHUNK_HDR_SZ-1.  Each mmap'ed area is page
                  aligned and therefore definitely
                  MALLOC_ALIGN_MASK-aligned.  */
-              assert (((INTERNAL_SIZE_T) chunk2rawmem (mm) & MALLOC_ALIGN_MASK) == 0);
+              assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
               front_misalign = 0;
             }
           else
-            front_misalign = (INTERNAL_SIZE_T) chunk2rawmem (mm) & MALLOC_ALIGN_MASK;
+            front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;
           if (front_misalign > 0)
             {
               correction = MALLOC_ALIGNMENT - front_misalign;
@@ -2536,7 +2536,7 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
 
           check_chunk (av, p);
 
-          return chunk2rawmem (p);
+          return chunk2mem (p);
         }
     }
   }
@@ -2757,7 +2757,7 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
 
           /* Guarantee alignment of first new chunk made from this space */
 
-          front_misalign = (INTERNAL_SIZE_T) chunk2rawmem (brk) & MALLOC_ALIGN_MASK;
+          front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
           if (front_misalign > 0)
             {
               /*
@@ -2815,10 +2815,10 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
         {
           if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
             /* MORECORE/mmap must correctly align */
-            assert (((unsigned long) chunk2rawmem (brk) & MALLOC_ALIGN_MASK) == 0);
+            assert (((unsigned long) chunk2mem (brk) & MALLOC_ALIGN_MASK) == 0);
           else
             {
-              front_misalign = (INTERNAL_SIZE_T) chunk2rawmem (brk) & MALLOC_ALIGN_MASK;
+              front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
               if (front_misalign > 0)
                 {
                   /*
@@ -2906,7 +2906,7 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
       set_head (p, nb | PREV_INUSE | (av != &main_arena ? NON_MAIN_ARENA : 0));
       set_head (remainder, remainder_size | PREV_INUSE);
       check_malloced_chunk (av, p, nb);
-      return chunk2rawmem (p);
+      return chunk2mem (p);
     }
 
   /* catch all failure paths */
@@ -3004,7 +3004,7 @@ munmap_chunk (mchunkptr p)
   if (DUMPED_MAIN_ARENA_CHUNK (p))
     return;
 
-  uintptr_t mem = (uintptr_t) chunk2rawmem (p);
+  uintptr_t mem = (uintptr_t) chunk2mem (p);
   uintptr_t block = (uintptr_t) p - prev_size (p);
   size_t total_size = prev_size (p) + size;
   /* Unfortunately we have to do the compilers job by hand here.  Normally
@@ -3038,7 +3038,7 @@ mremap_chunk (mchunkptr p, size_t new_size)
   assert (chunk_is_mmapped (p));
 
   uintptr_t block = (uintptr_t) p - offset;
-  uintptr_t mem = (uintptr_t) chunk2rawmem(p);
+  uintptr_t mem = (uintptr_t) chunk2mem(p);
   size_t total_size = offset + size;
   if (__glibc_unlikely ((block | total_size) & (pagesize - 1)) != 0
       || __glibc_unlikely (!powerof2 (mem & (pagesize - 1))))
@@ -3059,7 +3059,7 @@ mremap_chunk (mchunkptr p, size_t new_size)
 
   p = (mchunkptr) (cp + offset);
 
-  assert (aligned_OK (chunk2rawmem (p)));
+  assert (aligned_OK (chunk2mem (p)));
 
   assert (prev_size (p) == offset);
   set_head (p, (new_size - offset) | IS_MMAPPED);
@@ -3104,7 +3104,7 @@ static __thread tcache_perthread_struct *tcache = NULL;
 static __always_inline void
 tcache_put (mchunkptr chunk, size_t tc_idx)
 {
-  tcache_entry *e = (tcache_entry *) chunk2rawmem (chunk);
+  tcache_entry *e = (tcache_entry *) chunk2mem (chunk);
 
   /* Mark this chunk as "in the tcache" so the test in _int_free will
      detect a double free.  */
@@ -3324,7 +3324,7 @@ __libc_free (void *mem)
       MAYBE_INIT_TCACHE ();
 
       /* Mark the chunk as belonging to the library again.  */
-      (void)tag_region (chunk2rawmem (p), memsize (p));
+      (void)tag_region (chunk2mem (p), memsize (p));
 
       ar_ptr = arena_for_chunk (p);
       _int_free (ar_ptr, p, 0);
@@ -3419,7 +3419,7 @@ __libc_realloc (void *oldmem, size_t bytes)
       newp = mremap_chunk (oldp, nb);
       if (newp)
         {
-          void *newmem = tag_at (chunk2rawmem (newp));
+          void *newmem = chunk2mem_tag (newp);
           /* Give the new block a different tag.  This helps to ensure
              that stale handles to the previous mapping are not reused.
              There's a performance hit for both us and the
@@ -3468,7 +3468,7 @@ __libc_realloc (void *oldmem, size_t bytes)
         {
           size_t sz = memsize (oldp);
           memcpy (newp, oldmem, sz);
-          (void) tag_region (chunk2rawmem (oldp), sz);
+          (void) tag_region (chunk2mem (oldp), sz);
           _int_free (ar_ptr, oldp, 0);
         }
     }
@@ -3860,7 +3860,7 @@ _int_malloc (mstate av, size_t bytes)
            }
        }
 #endif
-      void *p = chunk2rawmem (victim);
+      void *p = chunk2mem (victim);
       alloc_perturb (p, bytes);
       return p;
     }
@@ -3918,7 +3918,7 @@ _int_malloc (mstate av, size_t bytes)
                }
            }
 #endif
-          void *p = chunk2rawmem (victim);
+          void *p = chunk2mem (victim);
           alloc_perturb (p, bytes);
           return p;
         }
@@ -4019,7 +4019,7 @@ _int_malloc (mstate av, size_t bytes)
                 set_foot (remainder, remainder_size);
 
               check_malloced_chunk (av, victim, nb);
-              void *p = chunk2rawmem (victim);
+              void *p = chunk2mem (victim);
               alloc_perturb (p, bytes);
               return p;
             }
@@ -4051,7 +4051,7 @@ _int_malloc (mstate av, size_t bytes)
            {
 #endif
              check_malloced_chunk (av, victim, nb);
-             void *p = chunk2rawmem (victim);
+             void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
 #if USE_TCACHE
@@ -4213,7 +4213,7 @@ _int_malloc (mstate av, size_t bytes)
                     set_foot (remainder, remainder_size);
                 }
               check_malloced_chunk (av, victim, nb);
-              void *p = chunk2rawmem (victim);
+              void *p = chunk2mem (victim);
               alloc_perturb (p, bytes);
               return p;
             }
@@ -4321,7 +4321,7 @@ _int_malloc (mstate av, size_t bytes)
                     set_foot (remainder, remainder_size);
                 }
               check_malloced_chunk (av, victim, nb);
-              void *p = chunk2rawmem (victim);
+              void *p = chunk2mem (victim);
               alloc_perturb (p, bytes);
               return p;
             }
@@ -4359,7 +4359,7 @@ _int_malloc (mstate av, size_t bytes)
           set_head (remainder, remainder_size | PREV_INUSE);
 
           check_malloced_chunk (av, victim, nb);
-          void *p = chunk2rawmem (victim);
+          void *p = chunk2mem (victim);
           alloc_perturb (p, bytes);
           return p;
         }
@@ -4427,7 +4427,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
     if (tcache != NULL && tc_idx < mp_.tcache_bins)
       {
        /* Check to see if it's already in the tcache.  */
-       tcache_entry *e = (tcache_entry *) chunk2rawmem (p);
+       tcache_entry *e = (tcache_entry *) chunk2mem (p);
 
        /* This test succeeds on double free.  However, we don't 100%
           trust it (it also matches random payload data at a 1 in
@@ -4499,7 +4499,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
          malloc_printerr ("free(): invalid next size (fast)");
       }
 
-    free_perturb (chunk2rawmem(p), size - CHUNK_HDR_SZ);
+    free_perturb (chunk2mem(p), size - CHUNK_HDR_SZ);
 
     atomic_store_relaxed (&av->have_fastchunks, true);
     unsigned int idx = fastbin_index(size);
@@ -4572,7 +4572,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
        || __builtin_expect (nextsize >= av->system_mem, 0))
       malloc_printerr ("free(): invalid next size (normal)");
 
-    free_perturb (chunk2rawmem(p), size - CHUNK_HDR_SZ);
+    free_perturb (chunk2mem(p), size - CHUNK_HDR_SZ);
 
     /* consolidate backward */
     if (!prev_inuse(p)) {
@@ -4836,7 +4836,7 @@ _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
               av->top = chunk_at_offset (oldp, nb);
               set_head (av->top, (newsize - nb) | PREV_INUSE);
               check_inuse_chunk (av, oldp);
-              return tag_new_usable (chunk2rawmem (oldp));
+              return tag_new_usable (chunk2mem (oldp));
             }
 
           /* Try to expand forward into next chunk;  split off remainder below */
@@ -4869,7 +4869,7 @@ _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
             }
           else
             {
-              void *oldmem = chunk2rawmem (oldp);
+              void *oldmem = chunk2mem (oldp);
               size_t sz = memsize (oldp);
               (void) tag_region (oldmem, sz);
               newmem = tag_new_usable (newmem);
@@ -4906,7 +4906,7 @@ _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
     }
 
   check_inuse_chunk (av, newp);
-  return tag_new_usable (chunk2rawmem (newp));
+  return tag_new_usable (chunk2mem (newp));
 }
 
 /*
@@ -4972,7 +4972,7 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
         {
           set_prev_size (newp, prev_size (p) + leadsize);
           set_head (newp, newsize | IS_MMAPPED);
-          return chunk2rawmem (newp);
+          return chunk2mem (newp);
         }
 
       /* Otherwise, give back leader, use the rest */
@@ -4984,7 +4984,7 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
       p = newp;
 
       assert (newsize >= nb &&
-              (((unsigned long) (chunk2rawmem (p))) % alignment) == 0);
+              (((unsigned long) (chunk2mem (p))) % alignment) == 0);
     }
 
   /* Also give back spare room at the end */
@@ -5003,7 +5003,7 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
     }
 
   check_inuse_chunk (av, p);
-  return chunk2rawmem (p);
+  return chunk2mem (p);
 }
 
 
@@ -5038,7 +5038,7 @@ mtrim (mstate av, size_t pad)
                                 + sizeof (struct malloc_chunk)
                                 + psm1) & ~psm1);
 
-        assert ((char *) chunk2rawmem (p) + 2 * CHUNK_HDR_SZ
+        assert ((char *) chunk2mem (p) + 2 * CHUNK_HDR_SZ
                 <= paligned_mem);
         assert ((char *) p + size > paligned_mem);