From patchwork Mon Oct 2 14:24:49 2017
X-Patchwork-Submitter: Wilco Dijkstra
X-Patchwork-Id: 23278
From: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
To: Florian Weimer, "libc-alpha@sourceware.org", "dj@redhat.com"
CC: nd
Subject: Re: [PATCH][malloc] Improve malloc initialization sequence
Date: Mon, 2 Oct 2017 14:24:49 +0000
In-Reply-To: <9c906f7e-7daa-d555-2bcf-db9c1999bf85@redhat.com>
Florian Weimer wrote:
> The locking is unnecessary.  You should remove it and call
> malloc_init_state before the tunables preprocessor conditional.
>
> I believe this fixes bug 22159, so please reference this bug (both in
> the commit message and the ChangeLog).

Sure, here is the updated version:

The current malloc initialization is quite convoluted. Instead of
sometimes calling malloc_consolidate from ptmalloc_init, call
malloc_init_state early so that the main_arena is always initialized.
The special initialization can now be removed from malloc_consolidate.
This also fixes BZ #22159.

GLIBC builds and tests pass, including --enable-tunables=no. OK for commit?

ChangeLog:
2017-10-02  Wilco Dijkstra

	[BZ #22159]
	* malloc/arena.c (ptmalloc_init): Call malloc_init_state.
	* malloc/malloc.c (malloc_consolidate): Remove initialization.

diff --git a/malloc/arena.c b/malloc/arena.c
index 9e5a62d260bf2f5e6d76da4ccaf7b7dcb388c296..85b985e193d513b633bd148b275515a29a710584 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -307,13 +307,9 @@ ptmalloc_init (void)
 
   thread_arena = &main_arena;
 
-#if HAVE_TUNABLES
-  /* Ensure initialization/consolidation and do it under a lock so that a
-     thread attempting to use the arena in parallel waits on us till we
-     finish.  */
-  __libc_lock_lock (main_arena.mutex);
-  malloc_consolidate (&main_arena);
+  malloc_init_state (&main_arena);
 
+#if HAVE_TUNABLES
   TUNABLE_GET (check, int32_t, TUNABLE_CALLBACK (set_mallopt_check));
   TUNABLE_GET (top_pad, size_t, TUNABLE_CALLBACK (set_top_pad));
   TUNABLE_GET (perturb, int32_t, TUNABLE_CALLBACK (set_perturb_byte));
@@ -322,13 +318,12 @@ ptmalloc_init (void)
   TUNABLE_GET (mmap_max, int32_t, TUNABLE_CALLBACK (set_mmaps_max));
   TUNABLE_GET (arena_max, size_t, TUNABLE_CALLBACK (set_arena_max));
   TUNABLE_GET (arena_test, size_t, TUNABLE_CALLBACK (set_arena_test));
-#if USE_TCACHE
+# if USE_TCACHE
   TUNABLE_GET (tcache_max, size_t, TUNABLE_CALLBACK (set_tcache_max));
   TUNABLE_GET (tcache_count, size_t, TUNABLE_CALLBACK (set_tcache_count));
   TUNABLE_GET (tcache_unsorted_limit, size_t,
               TUNABLE_CALLBACK (set_tcache_unsorted_limit));
-#endif
-  __libc_lock_unlock (main_arena.mutex);
+# endif
 #else
   const char *s = NULL;
   if (__glibc_likely (_environ != NULL))
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 88cfd25766eba6787faeb7195d95b73d7a4637ab..162e423e7bd18a07e4e97dc618be406d8bc9c529 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -4410,12 +4410,7 @@ static void malloc_consolidate(mstate av)
   mchunkptr bck;
   mchunkptr fwd;
 
-  /*
-    If max_fast is 0, we know that av hasn't
-    yet been initialized, in which case do so below
-  */
-
-  if (get_max_fast () != 0) {
+  {
     atomic_store_relaxed (&av->have_fastchunks, false);
 
     unsorted_bin = unsorted_chunks(av);
@@ -4484,10 +4479,6 @@ static void malloc_consolidate(mstate av)
       }
     } while (fb++ != maxfb);
   }
-  else {
-    malloc_init_state(av);
-    check_malloc_state(av);
-  }
 }
 
 /*
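
For readers who do not have the malloc internals paged in, here is a minimal
standalone sketch of the control-flow change (this is not the glibc code; the
toy_* names are invented purely for illustration). Previously
malloc_consolidate treated get_max_fast () == 0 as an "arena not yet
initialized" sentinel and initialized on first use; with this patch
ptmalloc_init calls malloc_init_state unconditionally and malloc_consolidate
only consolidates:

/* Toy model of the initialization change -- NOT the glibc implementation.
   All toy_* names are invented for illustration only.  */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct toy_arena
{
  size_t max_fast;        /* 0 doubles as "not yet initialized" in the old scheme */
  bool have_fastchunks;
};

/* Analogue of malloc_init_state: set up an arena exactly once.  */
static void
toy_init_state (struct toy_arena *av)
{
  av->max_fast = 64;      /* some nonzero default, like set_max_fast (DEFAULT_MXFAST) */
  av->have_fastchunks = false;
}

/* Old scheme: consolidation doubles as lazy initialization.  */
static void
toy_consolidate_old (struct toy_arena *av)
{
  if (av->max_fast != 0)
    av->have_fastchunks = false;   /* walk fastbins and merge chunks (elided) */
  else
    toy_init_state (av);           /* first call ever: initialize instead */
}

/* New scheme: consolidation only consolidates...  */
static void
toy_consolidate_new (struct toy_arena *av)
{
  av->have_fastchunks = false;     /* walk fastbins and merge chunks (elided) */
}

/* ...because the init path sets up main_arena unconditionally.  */
static void
toy_ptmalloc_init (struct toy_arena *av)
{
  toy_init_state (av);             /* always initialize here */
  /* read tunables etc.; no lock and no consolidate call needed */
}

int
main (void)
{
  struct toy_arena a = { 0 }, b = { 0 };

  toy_consolidate_old (&a);        /* initializes as a side effect */
  toy_ptmalloc_init (&b);          /* initialization is explicit and eager */
  toy_consolidate_new (&b);

  printf ("old scheme max_fast=%zu, new scheme max_fast=%zu\n",
          a.max_fast, b.max_fast);
  return 0;
}

The point of the toy model is only that initialization now happens in exactly
one place, so the sentinel check in malloc_consolidate and the locking around
the first consolidation are no longer needed.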