From patchwork Tue Oct 3 12:22:45 2023
X-Patchwork-Submitter: Adhemerval Zanella
X-Patchwork-Id: 77014
From: Adhemerval Zanella
To: libc-alpha@sourceware.org, Noah Goldstein, Paul Eggert, Florian Weimer
Subject: [PATCH v8 1/7] string: Add internal memswap implementation
Date: Tue, 3 Oct 2023 09:22:45 -0300
Message-Id: <20231003122251.3325435-2-adhemerval.zanella@linaro.org>
In-Reply-To: <20231003122251.3325435-1-adhemerval.zanella@linaro.org>
References: <20231003122251.3325435-1-adhemerval.zanella@linaro.org>

The prototype is:

  void __memswap (void *restrict p1, void *restrict p2, size_t n)

The function swaps the contents of the two memory blocks P1 and P2 of
length N.  Memory overlap is NOT handled.  It will be used in the qsort
optimization.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.

Reviewed-by: Noah Goldstein
---
 string/Makefile           |  12 +++
 string/test-memswap.c     | 192 ++++++++++++++++++++++++++++++++++++++
 sysdeps/generic/memswap.h |  41 ++++++++
 3 files changed, 245 insertions(+)
 create mode 100644 string/test-memswap.c
 create mode 100644 sysdeps/generic/memswap.h

diff --git a/string/Makefile b/string/Makefile
index 8cdfd5b000..fb101db778 100644
--- a/string/Makefile
+++ b/string/Makefile
@@ -209,6 +209,18 @@ tests := \
   tst-xbzero-opt \
   # tests
 
+tests-static-internal := \
+  test-memswap \
+# tests-static-internal
+
+tests-internal := \
+  $(tests-static-internal) \
+  # tests-internal
+
+tests-static := \
+  $(tests-static-internal) \
+  # tests-static
+
 # Both tests require the .mo translation files generated by msgfmt.
 tests-translation := \
   tst-strerror \
diff --git a/string/test-memswap.c b/string/test-memswap.c
new file mode 100644
index 0000000000..162beb91e3
--- /dev/null
+++ b/string/test-memswap.c
@@ -0,0 +1,192 @@
+/* Test and measure memswap functions.
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.
*/ + +#include +#include +#include + +#define TEST_MAIN +#define BUF1PAGES 3 +#include "test-string.h" + +static unsigned char *ref1; +static unsigned char *ref2; + +static void +do_one_test (unsigned char *p1, unsigned char *ref1, unsigned char *p2, + unsigned char *ref2, size_t len) +{ + __memswap (p1, p2, len); + + TEST_COMPARE_BLOB (p1, len, ref2, len); + TEST_COMPARE_BLOB (p2, len, ref1, len); +} + +static inline void +do_test (size_t align1, size_t align2, size_t len) +{ + align1 &= page_size; + if (align1 + len >= page_size) + return; + + align2 &= page_size; + if (align2 + len >= page_size) + return; + + unsigned char *p1 = buf1 + align1; + unsigned char *p2 = buf2 + align2; + for (size_t repeats = 0; repeats < 2; ++repeats) + { + size_t i, j; + for (i = 0, j = 1; i < len; i++, j += 23) + { + ref1[i] = p1[i] = j; + ref2[i] = p2[i] = UCHAR_MAX - j; + } + + do_one_test (p1, ref1, p2, ref2, len); + } +} + +static void +do_random_tests (void) +{ + for (size_t n = 0; n < ITERATIONS; n++) + { + size_t len, size, size1, size2, align1, align2; + + if (n == 0) + { + len = getpagesize (); + size = len + 512; + size1 = size; + size2 = size; + align1 = 512; + align2 = 512; + } + else + { + if ((random () & 255) == 0) + size = 65536; + else + size = 768; + if (size > page_size) + size = page_size; + size1 = size; + size2 = size; + size_t i = random (); + if (i & 3) + size -= 256; + if (i & 1) + size1 -= 256; + if (i & 2) + size2 -= 256; + if (i & 4) + { + len = random () % size; + align1 = size1 - len - (random () & 31); + align2 = size2 - len - (random () & 31); + if (align1 > size1) + align1 = 0; + if (align2 > size2) + align2 = 0; + } + else + { + align1 = random () & 63; + align2 = random () & 63; + len = random () % size; + if (align1 + len > size1) + align1 = size1 - len; + if (align2 + len > size2) + align2 = size2 - len; + } + } + unsigned char *p1 = buf1 + page_size - size1; + unsigned char *p2 = buf2 + page_size - size2; + size_t j = align1 + len + 256; + if (j > size1) + j = size1; + for (size_t i = 0; i < j; ++i) + ref1[i] = p1[i] = random () & 255; + + j = align2 + len + 256; + if (j > size2) + j = size2; + + for (size_t i = 0; i < j; ++i) + ref2[i] = p2[i] = random () & 255; + + do_one_test (p1 + align1, ref1 + align1, p2 + align2, ref2 + align2, len); + } +} + +static int +test_main (void) +{ + test_init (); + /* Use the start of buf1 for reference buffers. */ + ref1 = buf1; + ref2 = buf1 + page_size; + buf1 = ref2 + page_size; + + printf ("%23s", ""); + printf ("\t__memswap\n"); + + for (size_t i = 0; i < 18; ++i) + { + do_test (0, 0, 1 << i); + do_test (i, 0, 1 << i); + do_test (0, i, 1 << i); + do_test (i, i, 1 << i); + } + + for (size_t i = 0; i < 32; ++i) + { + do_test (0, 0, i); + do_test (i, 0, i); + do_test (0, i, i); + do_test (i, i, i); + } + + for (size_t i = 3; i < 32; ++i) + { + if ((i & (i - 1)) == 0) + continue; + do_test (0, 0, 16 * i); + do_test (i, 0, 16 * i); + do_test (0, i, 16 * i); + do_test (i, i, 16 * i); + } + + for (size_t i = 19; i <= 25; ++i) + { + do_test (255, 0, 1 << i); + do_test (0, 4000, 1 << i); + do_test (0, 255, i); + do_test (0, 4000, i); + } + + do_test (0, 0, getpagesize ()); + + do_random_tests (); + + return 0; +} + +#include diff --git a/sysdeps/generic/memswap.h b/sysdeps/generic/memswap.h new file mode 100644 index 0000000000..f09dae1ebb --- /dev/null +++ b/sysdeps/generic/memswap.h @@ -0,0 +1,41 @@ +/* Swap the content of two memory blocks, overlap is NOT handled. + Copyright (C) 2023 Free Software Foundation, Inc. 
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <string.h>
+
+static inline void
+__memswap (void *__restrict p1, void *__restrict p2, size_t n)
+{
+  /* Use multiple small memcpys with constant size to enable inlining on most
+     targets.  */
+  enum { SWAP_GENERIC_SIZE = 32 };
+  unsigned char tmp[SWAP_GENERIC_SIZE];
+  while (n > SWAP_GENERIC_SIZE)
+    {
+      memcpy (tmp, p1, SWAP_GENERIC_SIZE);
+      p1 = __mempcpy (p1, p2, SWAP_GENERIC_SIZE);
+      p2 = __mempcpy (p2, tmp, SWAP_GENERIC_SIZE);
+      n -= SWAP_GENERIC_SIZE;
+    }
+  while (n > 0)
+    {
+      unsigned char t = ((unsigned char *) p1)[--n];
+      ((unsigned char *) p1)[n] = ((unsigned char *) p2)[n];
+      ((unsigned char *) p2)[n] = t;
+    }
+}
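For context, the block-swap technique above can be exercised outside of
glibc with a small standalone program.  The sketch below mirrors the
chunked-copy loop using only the public memcpy (no internal __mempcpy),
so it can be compiled as-is; the memswap_sketch name and the demo buffers
are illustrative only and are not part of the patch.

#include <stdio.h>
#include <string.h>

/* Swap fixed-size chunks through a small stack buffer so the compiler can
   inline the memcpy calls, then finish the tail byte by byte.  Blocks must
   not overlap, mirroring the __memswap contract.  */
static void
memswap_sketch (void *p1, void *p2, size_t n)
{
  enum { CHUNK = 32 };
  unsigned char tmp[CHUNK];
  unsigned char *a = p1, *b = p2;

  while (n >= CHUNK)
    {
      memcpy (tmp, a, CHUNK);
      memcpy (a, b, CHUNK);
      memcpy (b, tmp, CHUNK);
      a += CHUNK;
      b += CHUNK;
      n -= CHUNK;
    }
  while (n > 0)
    {
      --n;
      unsigned char t = a[n];
      a[n] = b[n];
      b[n] = t;
    }
}

int
main (void)
{
  char x[70] = "first buffer: 0123456789012345678901234567890123456789";
  char y[70] = "second buffer: abcdefghijklmnopqrstuvwxyzabcdefghijklmn";

  memswap_sketch (x, y, sizeof x);   /* Non-overlapping blocks only.  */
  printf ("%s\n%s\n", x, y);
  return 0;
}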
From patchwork Tue Oct 3 12:22:46 2023
X-Patchwork-Submitter: Adhemerval Zanella
X-Patchwork-Id: 77016
From: Adhemerval Zanella
To: libc-alpha@sourceware.org, Noah Goldstein, Paul Eggert, Florian Weimer
Subject: [PATCH v8 2/7] stdlib: Optimization qsort{_r} swap implementation
Date: Tue, 3 Oct 2023 09:22:46 -0300
Message-Id: <20231003122251.3325435-3-adhemerval.zanella@linaro.org>
In-Reply-To: <20231003122251.3325435-1-adhemerval.zanella@linaro.org>
References: <20231003122251.3325435-1-adhemerval.zanella@linaro.org>

The optimization takes into consideration that the most common element
sizes are either 32 or 64 bits and that inputs are aligned to the word
boundary.  This is similar to what msort does.  For the generic case the
swap operation uses memcpy/mempcpy with a small fixed-size buffer (so the
compiler may inline the operations).

Checked on x86_64-linux-gnu.

Reviewed-by: Noah Goldstein
---
 stdlib/qsort.c | 95 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 77 insertions(+), 18 deletions(-)

diff --git a/stdlib/qsort.c b/stdlib/qsort.c
index 728a0ed370..072ccdfb95 100644
--- a/stdlib/qsort.c
+++ b/stdlib/qsort.c
@@ -21,22 +21,73 @@
 #include
 #include
+#include
 #include
 #include
+#include
 
-/* Byte-wise swap two items of size SIZE. */
-#define SWAP(a, b, size) \
-  do \
-    { \
-      size_t __size = (size); \
-      char *__a = (a), *__b = (b); \
-      do \
-        { \
-          char __tmp = *__a; \
-          *__a++ = *__b; \
-          *__b++ = __tmp; \
-        } while (--__size > 0); \
-    } while (0)
+/* Swap SIZE bytes between addresses A and B.  These helpers are provided
+   along the generic one as an optimization.  */
+
+enum swap_type_t
+  {
+    SWAP_WORDS_64,
+    SWAP_WORDS_32,
+    SWAP_BYTES
+  };
+
+/* If this function returns true, elements can be safely copied using word
+   loads and stores.  Otherwise, it might not be safe.  BASE (as an integer)
+   must be a multiple of the word alignment.  SIZE must be a multiple of
+   WORDSIZE.  Since WORDSIZE must be a multiple of the word alignment, and
+   WORDSIZE is a power of two on all supported platforms, this function for
+   speed merely checks that BASE and SIZE are both multiples of the word
+   size.
*/ +static inline bool +is_aligned (const void *base, size_t size, size_t wordsize) +{ + return (((uintptr_t) base | size) & (wordsize - 1)) == 0; +} + +static inline void +swap_words_64 (void * restrict a, void * restrict b, size_t n) +{ + typedef uint64_t __attribute__ ((__may_alias__)) u64_alias_t; + do + { + n -= 8; + u64_alias_t t = *(u64_alias_t *)(a + n); + *(u64_alias_t *)(a + n) = *(u64_alias_t *)(b + n); + *(u64_alias_t *)(b + n) = t; + } while (n); +} + +static inline void +swap_words_32 (void * restrict a, void * restrict b, size_t n) +{ + typedef uint32_t __attribute__ ((__may_alias__)) u32_alias_t; + do + { + n -= 4; + u32_alias_t t = *(u32_alias_t *)(a + n); + *(u32_alias_t *)(a + n) = *(u32_alias_t *)(b + n); + *(u32_alias_t *)(b + n) = t; + } while (n); +} + +/* Replace the indirect call with a serie of if statements. It should help + the branch predictor. */ +static void +do_swap (void * restrict a, void * restrict b, size_t size, + enum swap_type_t swap_type) +{ + if (swap_type == SWAP_WORDS_64) + swap_words_64 (a, b, size); + else if (swap_type == SWAP_WORDS_32) + swap_words_32 (a, b, size); + else + __memswap (a, b, size); +} /* Discontinue quicksort algorithm when partition gets below this size. This particular magic number was chosen to work best on a Sun 4/260. */ @@ -96,6 +147,14 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, /* Avoid lossage with unsigned arithmetic below. */ return; + enum swap_type_t swap_type; + if (is_aligned (pbase, size, 8)) + swap_type = SWAP_WORDS_64; + else if (is_aligned (pbase, size, 4)) + swap_type = SWAP_WORDS_32; + else + swap_type = SWAP_BYTES; + if (total_elems > MAX_THRESH) { char *lo = base_ptr; @@ -119,13 +178,13 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, char *mid = lo + size * ((hi - lo) / size >> 1); if ((*cmp) ((void *) mid, (void *) lo, arg) < 0) - SWAP (mid, lo, size); + do_swap (mid, lo, size, swap_type); if ((*cmp) ((void *) hi, (void *) mid, arg) < 0) - SWAP (mid, hi, size); + do_swap (mid, hi, size, swap_type); else goto jump_over; if ((*cmp) ((void *) mid, (void *) lo, arg) < 0) - SWAP (mid, lo, size); + do_swap (mid, lo, size, swap_type); jump_over:; left_ptr = lo + size; @@ -144,7 +203,7 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, if (left_ptr < right_ptr) { - SWAP (left_ptr, right_ptr, size); + do_swap (left_ptr, right_ptr, size, swap_type); if (mid == left_ptr) mid = right_ptr; else if (mid == right_ptr) @@ -216,7 +275,7 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, tmp_ptr = run_ptr; if (tmp_ptr != base_ptr) - SWAP (tmp_ptr, base_ptr, size); + do_swap (tmp_ptr, base_ptr, size, swap_type); /* Insertion sort, running from left-hand-side up to right-hand-side. 
 */

From patchwork Tue Oct 3 12:22:47 2023
X-Patchwork-Submitter: Adhemerval Zanella
X-Patchwork-Id: 77015
Received: from mandiga..
([2804:1b3:a7c1:feaf:31ef:b40c:b4e5:77c]) by smtp.gmail.com with ESMTPSA id b1-20020a170902d30100b001c5de2f1686sm1403881plc.99.2023.10.03.05.23.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 05:23:01 -0700 (PDT) From: Adhemerval Zanella To: libc-alpha@sourceware.org, Noah Goldstein , Paul Eggert , Florian Weimer Subject: [PATCH v8 3/7] stdlib: Move insertion sort out qsort Date: Tue, 3 Oct 2023 09:22:47 -0300 Message-Id: <20231003122251.3325435-4-adhemerval.zanella@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231003122251.3325435-1-adhemerval.zanella@linaro.org> References: <20231003122251.3325435-1-adhemerval.zanella@linaro.org> MIME-Version: 1.0 X-Spam-Status: No, score=-12.5 required=5.0 tests=BAYES_00, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, DKIM_VALID_EF, GIT_PATCH_0, RCVD_IN_DNSWL_NONE, SPF_HELO_NONE, SPF_PASS, TXREP autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on server2.sourceware.org X-BeenThere: libc-alpha@sourceware.org X-Mailman-Version: 2.1.30 Precedence: list List-Id: Libc-alpha mailing list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: libc-alpha-bounces+patchwork=sourceware.org@sourceware.org --- stdlib/qsort.c | 101 ++++++++++++++++++++++++++----------------------- 1 file changed, 54 insertions(+), 47 deletions(-) Reviewed-by: Noah Goldstein diff --git a/stdlib/qsort.c b/stdlib/qsort.c index 072ccdfb95..5691249a9b 100644 --- a/stdlib/qsort.c +++ b/stdlib/qsort.c @@ -111,6 +111,58 @@ typedef struct #define STACK_NOT_EMPTY (stack < top) +static inline void +insertion_sort_qsort_partitions (void *const pbase, size_t total_elems, + size_t size, enum swap_type_t swap_type, + __compar_d_fn_t cmp, void *arg) +{ + char *base_ptr = (char *) pbase; + char *const end_ptr = &base_ptr[size * (total_elems - 1)]; + char *tmp_ptr = base_ptr; +#define min(x, y) ((x) < (y) ? (x) : (y)) + const size_t max_thresh = MAX_THRESH * size; + char *thresh = min(end_ptr, base_ptr + max_thresh); + char *run_ptr; + + /* Find smallest element in first threshold and place it at the + array's beginning. This is the smallest array element, + and the operation speeds up insertion sort's inner loop. */ + + for (run_ptr = tmp_ptr + size; run_ptr <= thresh; run_ptr += size) + if (cmp (run_ptr, tmp_ptr, arg) < 0) + tmp_ptr = run_ptr; + + if (tmp_ptr != base_ptr) + do_swap (tmp_ptr, base_ptr, size, swap_type); + + /* Insertion sort, running from left-hand-side up to right-hand-side. */ + + run_ptr = base_ptr + size; + while ((run_ptr += size) <= end_ptr) + { + tmp_ptr = run_ptr - size; + while (cmp (run_ptr, tmp_ptr, arg) < 0) + tmp_ptr -= size; + + tmp_ptr += size; + if (tmp_ptr != run_ptr) + { + char *trav; + + trav = run_ptr + size; + while (--trav >= run_ptr) + { + char c = *trav; + char *hi, *lo; + + for (hi = lo = trav; (lo -= size) >= tmp_ptr; hi = lo) + *hi = *lo; + *hi = c; + } + } + } +} + /* Order size using quicksort. This implementation incorporates four optimizations discussed in Sedgewick: @@ -257,51 +309,6 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, for partitions below MAX_THRESH size. BASE_PTR points to the beginning of the array to sort, and END_PTR points at the very last element in the array (*not* one beyond it!). */ - -#define min(x, y) ((x) < (y) ? 
(x) : (y)) - - { - char *const end_ptr = &base_ptr[size * (total_elems - 1)]; - char *tmp_ptr = base_ptr; - char *thresh = min(end_ptr, base_ptr + max_thresh); - char *run_ptr; - - /* Find smallest element in first threshold and place it at the - array's beginning. This is the smallest array element, - and the operation speeds up insertion sort's inner loop. */ - - for (run_ptr = tmp_ptr + size; run_ptr <= thresh; run_ptr += size) - if ((*cmp) ((void *) run_ptr, (void *) tmp_ptr, arg) < 0) - tmp_ptr = run_ptr; - - if (tmp_ptr != base_ptr) - do_swap (tmp_ptr, base_ptr, size, swap_type); - - /* Insertion sort, running from left-hand-side up to right-hand-side. */ - - run_ptr = base_ptr + size; - while ((run_ptr += size) <= end_ptr) - { - tmp_ptr = run_ptr - size; - while ((*cmp) ((void *) run_ptr, (void *) tmp_ptr, arg) < 0) - tmp_ptr -= size; - - tmp_ptr += size; - if (tmp_ptr != run_ptr) - { - char *trav; - - trav = run_ptr + size; - while (--trav >= run_ptr) - { - char c = *trav; - char *hi, *lo; - - for (hi = lo = trav; (lo -= size) >= tmp_ptr; hi = lo) - *hi = *lo; - *hi = c; - } - } - } - } + insertion_sort_qsort_partitions (pbase, total_elems, size, swap_type, cmp, + arg); } From patchwork Tue Oct 3 12:22:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Adhemerval Zanella X-Patchwork-Id: 77018 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id EBFD73875462 for ; Tue, 3 Oct 2023 12:23:57 +0000 (GMT) X-Original-To: libc-alpha@sourceware.org Delivered-To: libc-alpha@sourceware.org Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by sourceware.org (Postfix) with ESMTPS id 087093857342 for ; Tue, 3 Oct 2023 12:23:06 +0000 (GMT) DMARC-Filter: OpenDMARC Filter v1.4.2 sourceware.org 087093857342 Authentication-Results: sourceware.org; dmarc=pass (p=none dis=none) header.from=linaro.org Authentication-Results: sourceware.org; spf=pass smtp.mailfrom=linaro.org Received: by mail-pl1-x62b.google.com with SMTP id d9443c01a7336-1c60a514f3aso6446685ad.3 for ; Tue, 03 Oct 2023 05:23:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1696335784; x=1696940584; darn=sourceware.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=lalwyLl0SNT9fXfI43KwDyeZlEexF65og/8SxMtD+Sk=; b=kdI0ucU/S2UiaIsGdKAZAb09BUru01CQ7r7DP+daGpyVAIMnkin9Jnv1d0HCQ1id8V jNj3Lo8Kq1nIYcYspKvRy46L2cZbnybAebhQphhROjEo1YpTJrXO4MKFVXzcA8soVFVf 7qGD7flySseeVMpRM6KA6ZC1TMNml1NUQQQzvkssVPUuki0iKaFSx6H7fa0EJz27MKil N3xJ0dZa0GLRHtT1kkh45BiMAj7gZyx/nsjxpNoYQ3Wagl4qTu64SwGK5Y05VAyG+fLH 6cetc54zTI2KtV+c1TdqOpZiQw0Y+mQjSK4j3UIVUiXxFde8iw0K+mZadi7sYKpwJUPv gVyg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1696335784; x=1696940584; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=lalwyLl0SNT9fXfI43KwDyeZlEexF65og/8SxMtD+Sk=; b=Dl2H+YHB+lg8Ba3HJRHvn6tTb629KW2Cielo7gKn7yH+xHZm5AKaNEd2BmtwQ69OBT fldzKkTbgf16l2xuQJK4kzTozXTIb0bUBs1GvqBb/Rg1yBQGAgXvzCPKug8DjLn/kfzV 4Fs8cBCzLDL60yN6ffyHOQR0Z8gNskurSdTcvFw1VpidDtgiGsFlCDrl/iqNRQWjX0KM 
hXF7jipmLKDoTOq4MxS1ab9ODqjfdsqQfViTnMSxAGwcqwtYE07APVVUZwRSPog+3Fee 3SUF6aJtZ/GnlXRr1qQjk/qVOlUuC9kSZezwPCcO1riVWezbM74pIsn6ih5mCj3GcU5t nP8w== X-Gm-Message-State: AOJu0Yz0KCJWKZyaDnoEgiQd/+znI864aL7vko/2AhKMgUynxqz3140Y 4/iBdGNWKEDSuSDCgU3LCVYAktkdfj1e/fDriIC23g== X-Google-Smtp-Source: AGHT+IHtreGKeWdpKM+6B66G/X6BhaImTYHltO9GmnUwrYfqWtE3ZAg7PoodExQNANO46j+eTMEmfQ== X-Received: by 2002:a17:902:ecce:b0:1c7:1fbc:b9e7 with SMTP id a14-20020a170902ecce00b001c71fbcb9e7mr16325402plh.43.1696335784339; Tue, 03 Oct 2023 05:23:04 -0700 (PDT) Received: from mandiga.. ([2804:1b3:a7c1:feaf:31ef:b40c:b4e5:77c]) by smtp.gmail.com with ESMTPSA id b1-20020a170902d30100b001c5de2f1686sm1403881plc.99.2023.10.03.05.23.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 03 Oct 2023 05:23:03 -0700 (PDT) From: Adhemerval Zanella To: libc-alpha@sourceware.org, Noah Goldstein , Paul Eggert , Florian Weimer Subject: [PATCH v8 4/7] stdlib: qsort: Move some macros to inline function Date: Tue, 3 Oct 2023 09:22:48 -0300 Message-Id: <20231003122251.3325435-5-adhemerval.zanella@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231003122251.3325435-1-adhemerval.zanella@linaro.org> References: <20231003122251.3325435-1-adhemerval.zanella@linaro.org> MIME-Version: 1.0 X-Spam-Status: No, score=-12.5 required=5.0 tests=BAYES_00, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, DKIM_VALID_EF, GIT_PATCH_0, RCVD_IN_DNSWL_NONE, SPF_HELO_NONE, SPF_PASS, TXREP autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on server2.sourceware.org X-BeenThere: libc-alpha@sourceware.org X-Mailman-Version: 2.1.30 Precedence: list List-Id: Libc-alpha mailing list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: libc-alpha-bounces+patchwork=sourceware.org@sourceware.org --- stdlib/qsort.c | 35 +++++++++++++++++++++++------------ 1 file changed, 23 insertions(+), 12 deletions(-) Reviewed-by: Noah Goldstein diff --git a/stdlib/qsort.c b/stdlib/qsort.c index 5691249a9b..80706b3357 100644 --- a/stdlib/qsort.c +++ b/stdlib/qsort.c @@ -100,15 +100,28 @@ typedef struct char *hi; } stack_node; -/* The next 4 #defines implement a very fast in-line stack abstraction. */ /* The stack needs log (total_elements) entries (we could even subtract log(MAX_THRESH)). Since total_elements has type size_t, we get as upper bound for log (total_elements): bits per byte (CHAR_BIT) * sizeof(size_t). */ -#define STACK_SIZE (CHAR_BIT * sizeof (size_t)) -#define PUSH(low, high) ((void) ((top->lo = (low)), (top->hi = (high)), ++top)) -#define POP(low, high) ((void) (--top, (low = top->lo), (high = top->hi))) -#define STACK_NOT_EMPTY (stack < top) +enum { STACK_SIZE = CHAR_BIT * sizeof (size_t) }; + +static inline stack_node * +push (stack_node *top, char *lo, char *hi) +{ + top->lo = lo; + top->hi = hi; + return ++top; +} + +static inline stack_node * +pop (stack_node *top, char **lo, char **hi) +{ + --top; + *lo = top->lo; + *hi = top->hi; + return top; +} static inline void @@ -212,11 +225,9 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, char *lo = base_ptr; char *hi = &lo[size * (total_elems - 1)]; stack_node stack[STACK_SIZE]; - stack_node *top = stack; - - PUSH (NULL, NULL); + stack_node *top = stack + 1; - while (STACK_NOT_EMPTY) + while (stack < top) { char *left_ptr; char *right_ptr; @@ -281,7 +292,7 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, { if ((size_t) (hi - left_ptr) <= max_thresh) /* Ignore both small partitions. 
*/ - POP (lo, hi); + top = pop (top, &lo, &hi); else /* Ignore small left partition. */ lo = left_ptr; @@ -292,13 +303,13 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, else if ((right_ptr - lo) > (hi - left_ptr)) { /* Push larger left partition indices. */ - PUSH (lo, right_ptr); + top = push (top, lo, right_ptr); lo = left_ptr; } else { /* Push larger right partition indices. */ - PUSH (left_ptr, hi); + top = push (top, left_ptr, hi); hi = right_ptr; } } From patchwork Tue Oct 3 12:22:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Adhemerval Zanella X-Patchwork-Id: 77019 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id 99543385B53E for ; Tue, 3 Oct 2023 12:24:01 +0000 (GMT) X-Original-To: libc-alpha@sourceware.org Delivered-To: libc-alpha@sourceware.org Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com [IPv6:2607:f8b0:4864:20::631]) by sourceware.org (Postfix) with ESMTPS id 084983858C33 for ; Tue, 3 Oct 2023 12:23:08 +0000 (GMT) DMARC-Filter: OpenDMARC Filter v1.4.2 sourceware.org 084983858C33 Authentication-Results: sourceware.org; dmarc=pass (p=none dis=none) header.from=linaro.org Authentication-Results: sourceware.org; spf=pass smtp.mailfrom=linaro.org Received: by mail-pl1-x631.google.com with SMTP id d9443c01a7336-1bf55a81eeaso6430265ad.0 for ; Tue, 03 Oct 2023 05:23:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1696335786; x=1696940586; darn=sourceware.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=9m8Ahb3Yj9ux4Gp/JQXWajleurIMjvebrE9EcJ39dr0=; b=Aole4ElhFdJbe9vNDWbIEKgL6XO+u+mTRY4g0UafLm5kfYkpC7dQFu88jFe1BwTItA rlgl+ldtsyBjnkfpLMOzAJ4wODq3UPN6D1Gvxu53Qt78m+8OrnYTPUH8jKcJL6h7aqxH ezvTHRFIFsbFDmBwzSnmIwUutbT6wjivnw8ADtVpZwFgI788fneRh7/msq1dCziELq7f KC4pCYJYcICO3OZGXrkhuxT+Zr2W0KFJVcbdY2VWNFywe2syk9h8bmrztfwcFL+GDXwb FIlllQiEV8LmZrawAe/IDrWubxdmlOnxSKu1BIf+0wu+JU5w0phs+BjOJEx/KkDW8iaL N47g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1696335786; x=1696940586; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=9m8Ahb3Yj9ux4Gp/JQXWajleurIMjvebrE9EcJ39dr0=; b=toEk00+utmyGxmQYzzJ63PxqOLgOBAVtxt1o8b22YeTKPvjB2Abzxudz7vsM+bqOE3 Dk8ffI6cI5KCgSfWVqJki5bgrJ2tgO/CYdqoa07wxlPEu85av50lUcD6I4vJOfZsbFu8 stwsepLWWm1bEpbNdSASIQFju+JHDBGY3KsNLI36W9lq/ZBHy4lIkh+p0ym5LCbG0Dq1 hA4KRm4Qm+8ss6vUussPgkhDDGDuYOMibxxanDnQPOfQhHJ/S3SR0CXjh0Tam2XKklW0 eajTalI2JPTGgLLBy9kKZo035KqTQHUa6KM0Z/h5LJ3Q4v6PXCHhF8vshW8eGTn+6JyU 0aGA== X-Gm-Message-State: AOJu0YwihRuIZvT4e4zH4CwWPHOPFI7V8iPO6mnQdqEyaQ02eFPPWGS9 SqBlRvhztNxQV1MBVZMLqIWHsoewoVkUw29S05QXIw== X-Google-Smtp-Source: AGHT+IEHmWTx0n1Dw6zjjqbVPwJGqqoMDFsK7UcdrF4HuqX63hMLQ4ZdpUvgdG7blqwB0tAHaUPyIA== X-Received: by 2002:a17:902:a983:b0:1c7:3002:6ba3 with SMTP id bh3-20020a170902a98300b001c730026ba3mr11439399plb.8.1696335786426; Tue, 03 Oct 2023 05:23:06 -0700 (PDT) Received: from mandiga.. 
From: Adhemerval Zanella
To: libc-alpha@sourceware.org, Noah Goldstein, Paul Eggert, Florian Weimer
Subject: [PATCH v8 5/7] stdlib: Implement introsort for qsort (BZ 19305)
Date: Tue, 3 Oct 2023 09:22:49 -0300
Message-Id: <20231003122251.3325435-6-adhemerval.zanella@linaro.org>
In-Reply-To: <20231003122251.3325435-1-adhemerval.zanella@linaro.org>
References: <20231003122251.3325435-1-adhemerval.zanella@linaro.org>

This patch makes the quicksort implementation act as an introsort, to
avoid worst-case quadratic behavior (thus making it O(n log n)).  It
switches to heapsort when the recursion depth reaches
2*log2(total elements).  The heapsort is a textbook implementation.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.

Reviewed-by: Noah Goldstein
---
 stdlib/qsort.c | 89 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 82 insertions(+), 7 deletions(-)

diff --git a/stdlib/qsort.c b/stdlib/qsort.c
index 80706b3357..d5f205affc 100644
--- a/stdlib/qsort.c
+++ b/stdlib/qsort.c
@@ -98,6 +98,7 @@ typedef struct
   {
     char *lo;
     char *hi;
+    size_t depth;
   } stack_node;
 
 /* The stack needs log (total_elements) entries (we could even subtract
@@ -107,22 +108,85 @@ typedef struct
 enum { STACK_SIZE = CHAR_BIT * sizeof (size_t) };
 
 static inline stack_node *
-push (stack_node *top, char *lo, char *hi)
+push (stack_node *top, char *lo, char *hi, size_t depth)
 {
   top->lo = lo;
   top->hi = hi;
+  top->depth = depth;
   return ++top;
 }
 
 static inline stack_node *
-pop (stack_node *top, char **lo, char **hi)
+pop (stack_node *top, char **lo, char **hi, size_t *depth)
 {
   --top;
   *lo = top->lo;
   *hi = top->hi;
+  *depth = top->depth;
   return top;
 }
 
+/* NB: N is inclusive bound for BASE.  */
+static inline void
+siftdown (void *base, size_t size, size_t k, size_t n,
+          enum swap_type_t swap_type, __compar_d_fn_t cmp, void *arg)
+{
+  while (k <= n / 2)
+    {
+      size_t j = 2 * k;
+      if (j < n && cmp (base + (j * size), base + ((j + 1) * size), arg) < 0)
+        j++;
+
+      if (cmp (base + (k * size), base + (j * size), arg) >= 0)
+        break;
+
+      do_swap (base + (size * j), base + (k * size), size, swap_type);
+      k = j;
+    }
+}
+
+static inline void
+heapify (void *base, size_t size, size_t n, enum swap_type_t swap_type,
+         __compar_d_fn_t cmp, void *arg)
+{
+  size_t k = n / 2;
+  while (1)
+    {
+      siftdown (base, size, k, n, swap_type, cmp, arg);
+      if (k-- == 0)
+        break;
+    }
+}
+
+/* A non-recursive heapsort, used in the introsort implementation as a
+   fallback routine with worst-case performance of O(n log n) and worst-case
+   space complexity of O(1).  It sorts the array starting at BASE and ending
+   at END, with each element of SIZE bytes.
The SWAP_TYPE is the callback + function used to swap elements, and CMP is the function used to compare + elements. */ +static void +heapsort_r (void *base, void *end, size_t size, enum swap_type_t swap_type, + __compar_d_fn_t cmp, void *arg) +{ + const size_t count = ((uintptr_t) end - (uintptr_t) base) / size; + + if (count < 2) + return; + + size_t n = count - 1; + + /* Build the binary heap, largest value at the base[0]. */ + heapify (base, size, n, swap_type, cmp, arg); + + /* On each iteration base[0:n] is the binary heap, while base[n:count] + is sorted. */ + while (n > 0) + { + do_swap (base, base + (n * size), size, swap_type); + n--; + siftdown (base, size, 0, n, swap_type, cmp, arg); + } +} static inline void insertion_sort_qsort_partitions (void *const pbase, size_t total_elems, @@ -208,7 +272,7 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, const size_t max_thresh = MAX_THRESH * size; - if (total_elems == 0) + if (total_elems <= 1) /* Avoid lossage with unsigned arithmetic below. */ return; @@ -220,15 +284,26 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, else swap_type = SWAP_BYTES; + /* Maximum depth before quicksort switches to heapsort. */ + size_t depth = 2 * (sizeof (size_t) * CHAR_BIT - 1 + - __builtin_clzl (total_elems)); + if (total_elems > MAX_THRESH) { char *lo = base_ptr; char *hi = &lo[size * (total_elems - 1)]; stack_node stack[STACK_SIZE]; - stack_node *top = stack + 1; + stack_node *top = push (stack, NULL, NULL, depth); while (stack < top) { + if (depth == 0) + { + heapsort_r (lo, hi, size, swap_type, cmp, arg); + top = pop (top, &lo, &hi, &depth); + continue; + } + char *left_ptr; char *right_ptr; @@ -292,7 +367,7 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, { if ((size_t) (hi - left_ptr) <= max_thresh) /* Ignore both small partitions. */ - top = pop (top, &lo, &hi); + top = pop (top, &lo, &hi, &depth); else /* Ignore small left partition. */ lo = left_ptr; @@ -303,13 +378,13 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, else if ((right_ptr - lo) > (hi - left_ptr)) { /* Push larger left partition indices. */ - top = push (top, lo, right_ptr); + top = push (top, lo, right_ptr, depth - 1); lo = left_ptr; } else { /* Push larger right partition indices. 
 */
-              top = push (top, left_ptr, hi);
+              top = push (top, left_ptr, hi, depth - 1);
               hi = right_ptr;
             }
         }
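For context, the introsort scheme this patch adds can be summarized
without the glibc plumbing by the standalone sketch below.  It sorts a
plain int array: it recurses as a quicksort, and once a depth budget of
roughly 2*log2(n) is exhausted it falls back to a heapsort, bounding the
worst case at O(n log n).  It uses recursion and a simple Lomuto
partition instead of the patch's explicit stack and median-of-three
pivot; all names here (introsort, heap_sort, partition) are illustrative
and are not glibc APIs.

#include <stdio.h>
#include <stdlib.h>

static void
sift_down (int *a, size_t start, size_t end)   /* END is inclusive.  */
{
  size_t root = start;
  while (2 * root + 1 <= end)
    {
      size_t child = 2 * root + 1;
      if (child + 1 <= end && a[child] < a[child + 1])
        child++;
      if (a[root] >= a[child])
        return;
      int t = a[root]; a[root] = a[child]; a[child] = t;
      root = child;
    }
}

static void
heap_sort (int *a, size_t n)
{
  if (n < 2)
    return;
  for (size_t start = n / 2; start-- > 0; )   /* Build the max-heap.  */
    sift_down (a, start, n - 1);
  for (size_t end = n - 1; end > 0; end--)    /* Pop the maximum repeatedly.  */
    {
      int t = a[0]; a[0] = a[end]; a[end] = t;
      sift_down (a, 0, end - 1);
    }
}

static size_t
partition (int *a, size_t n)   /* Lomuto partition, pivot = last element.  */
{
  int pivot = a[n - 1];
  size_t i = 0;
  for (size_t j = 0; j + 1 < n; j++)
    if (a[j] <= pivot)
      {
        int t = a[i]; a[i] = a[j]; a[j] = t;
        i++;
      }
  int t = a[i]; a[i] = a[n - 1]; a[n - 1] = t;
  return i;                     /* Final index of the pivot.  */
}

static void
introsort (int *a, size_t n, unsigned depth)
{
  if (n < 2)
    return;
  if (depth == 0)
    {
      heap_sort (a, n);         /* Depth limit hit: bound the worst case.  */
      return;
    }
  size_t p = partition (a, n);
  introsort (a, p, depth - 1);
  introsort (a + p + 1, n - p - 1, depth - 1);
}

int
main (void)
{
  int v[] = { 9, 1, 8, 2, 7, 3, 6, 4, 5, 0 };
  size_t n = sizeof v / sizeof v[0];
  unsigned depth = 0;
  for (size_t m = n; m > 1; m >>= 1)   /* Roughly log2(n).  */
    depth++;
  introsort (v, n, 2 * depth);
  for (size_t i = 0; i < n; i++)
    printf ("%d ", v[i]);
  printf ("\n");
  return 0;
}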
From patchwork Tue Oct 3 12:22:50 2023
X-Patchwork-Submitter: Adhemerval Zanella
X-Patchwork-Id: 77020
From: Adhemerval Zanella
To: libc-alpha@sourceware.org, Noah Goldstein, Paul Eggert, Florian Weimer
Subject: [PATCH v8 6/7] stdlib: Remove use of mergesort on qsort (BZ 21719)
Date: Tue, 3 Oct 2023 09:22:50 -0300
Message-Id: <20231003122251.3325435-7-adhemerval.zanella@linaro.org>
In-Reply-To: <20231003122251.3325435-1-adhemerval.zanella@linaro.org>
References: <20231003122251.3325435-1-adhemerval.zanella@linaro.org>

This patch removes the mergesort optimization from the qsort
implementation and uses the introsort instead.  The mergesort
implementation has some issues:

- It is AS-safe only for certain type sizes (if the total size is less
  than 1 KB, with large element sizes also forcing memory allocation),
  which contradicts the function documentation.  Although not required
  by the C standard, it is preferable and doable to have an O(1) space
  implementation.

- The malloc for certain element sizes and element counts adds arbitrary
  latency (which might be even worse if malloc is interposed).

- To avoid triggering swap from the memory allocation, the implementation
  relies on system information that might be virtualized (for instance
  VMs with overcommitted memory), which might lead to swap being used
  even if the system advertises more memory than it actually has.  The
  check also has the downside of issuing syscalls where none are expected
  (although only once per execution).

- The mergesort is suboptimal on an already sorted array (BZ#21719).

The introsort implementation is already optimized to use constant extra
space (due to the limit on the total number of elements imposed by the
maximum VM size) and thus can be used to avoid the malloc usage issues.

The resulting performance is slower due to the use of quicksort,
especially in the worst-case scenario (partially sorted or sorted
arrays), and due to the fact that mergesort uses slightly improved swap
operations.

This change also renders the BZ#21719 fix unneeded (since it is meant to
fix the sorted-input performance degradation for mergesort).

The manual is also updated to indicate the function is now
async-cancel safe.

Checked on x86_64-linux-gnu.
Reviewed-by: Noah Goldstein --- include/stdlib.h | 2 - manual/argp.texi | 2 +- manual/locale.texi | 3 +- manual/search.texi | 7 +- stdlib/Makefile | 2 - stdlib/msort.c | 309 --------------------------------------------- stdlib/qsort.c | 14 +- 7 files changed, 16 insertions(+), 323 deletions(-) delete mode 100644 stdlib/msort.c diff --git a/include/stdlib.h b/include/stdlib.h index 0ed8271d9b..580da9be15 100644 --- a/include/stdlib.h +++ b/include/stdlib.h @@ -149,8 +149,6 @@ extern int __posix_openpt (int __oflag) attribute_hidden; extern int __add_to_environ (const char *name, const char *value, const char *combines, int replace) attribute_hidden; -extern void _quicksort (void *const pbase, size_t total_elems, - size_t size, __compar_d_fn_t cmp, void *arg); extern int __on_exit (void (*__func) (int __status, void *__arg), void *__arg); diff --git a/manual/argp.texi b/manual/argp.texi index 0023441812..b77ad68285 100644 --- a/manual/argp.texi +++ b/manual/argp.texi @@ -735,7 +735,7 @@ for options, bad phase of the moon, etc. @c hol_set_group ok @c hol_find_entry ok @c hol_sort @mtslocale @acucorrupt -@c qsort dup @acucorrupt +@c qsort dup @c hol_entry_qcmp @mtslocale @c hol_entry_cmp @mtslocale @c group_cmp ok diff --git a/manual/locale.texi b/manual/locale.texi index 720e0ca952..f6afa5dc44 100644 --- a/manual/locale.texi +++ b/manual/locale.texi @@ -253,7 +253,7 @@ The symbols in this section are defined in the header file @file{locale.h}. @c calculate_head_size ok @c __munmap ok @c compute_hashval ok -@c qsort dup @acucorrupt +@c qsort dup @c rangecmp ok @c malloc @ascuheap @acsmem @c strdup @ascuheap @acsmem @@ -275,7 +275,6 @@ The symbols in this section are defined in the header file @file{locale.h}. @c realloc @ascuheap @acsmem @c realloc @ascuheap @acsmem @c fclose @ascuheap @asulock @acsmem @acsfd @aculock -@c qsort @ascuheap @acsmem @c alias_compare dup @c libc_lock_unlock @aculock @c _nl_explode_name @ascuheap @acsmem diff --git a/manual/search.texi b/manual/search.texi index 5691bf2f2b..a550858478 100644 --- a/manual/search.texi +++ b/manual/search.texi @@ -159,7 +159,7 @@ To sort an array using an arbitrary comparison function, use the @deftypefun void qsort (void *@var{array}, size_t @var{count}, size_t @var{size}, comparison_fn_t @var{compare}) @standards{ISO, stdlib.h} -@safety{@prelim{}@mtsafe{}@assafe{}@acunsafe{@acucorrupt{}}} +@safety{@prelim{}@mtsafe{}@assafe{}@acsafe{}} The @code{qsort} function sorts the array @var{array}. The array contains @var{count} elements, each of which is of size @var{size}. @@ -199,9 +199,8 @@ Functions}): The @code{qsort} function derives its name from the fact that it was originally implemented using the ``quick sort'' algorithm. -The implementation of @code{qsort} in this library might not be an -in-place sort and might thereby use an extra amount of memory to store -the array. +The implementation of @code{qsort} in this library is an in-place sort +and uses a constant extra space (allocated on the stack). 
@end deftypefun @node Search/Sort Example diff --git a/stdlib/Makefile b/stdlib/Makefile index 25e42a77e7..095518eef4 100644 --- a/stdlib/Makefile +++ b/stdlib/Makefile @@ -96,7 +96,6 @@ routines := \ mbtowc \ mrand48 \ mrand48_r \ - msort \ nrand48 \ nrand48_r \ old_atexit \ @@ -380,7 +379,6 @@ generated += \ # generated CFLAGS-bsearch.c += $(uses-callbacks) -CFLAGS-msort.c += $(uses-callbacks) CFLAGS-qsort.c += $(uses-callbacks) CFLAGS-system.c += -fexceptions CFLAGS-system.os = -fomit-frame-pointer diff --git a/stdlib/msort.c b/stdlib/msort.c deleted file mode 100644 index bbaa5e9f82..0000000000 --- a/stdlib/msort.c +++ /dev/null @@ -1,309 +0,0 @@ -/* An alternative to qsort, with an identical interface. - This file is part of the GNU C Library. - Copyright (C) 1992-2023 Free Software Foundation, Inc. - - The GNU C Library is free software; you can redistribute it and/or - modify it under the terms of the GNU Lesser General Public - License as published by the Free Software Foundation; either - version 2.1 of the License, or (at your option) any later version. - - The GNU C Library is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - Lesser General Public License for more details. - - You should have received a copy of the GNU Lesser General Public - License along with the GNU C Library; if not, see - . */ - -#include -#include -#include -#include -#include -#include -#include -#include - -struct msort_param -{ - size_t s; - size_t var; - __compar_d_fn_t cmp; - void *arg; - char *t; -}; -static void msort_with_tmp (const struct msort_param *p, void *b, size_t n); - -static void -msort_with_tmp (const struct msort_param *p, void *b, size_t n) -{ - char *b1, *b2; - size_t n1, n2; - - if (n <= 1) - return; - - n1 = n / 2; - n2 = n - n1; - b1 = b; - b2 = (char *) b + (n1 * p->s); - - msort_with_tmp (p, b1, n1); - msort_with_tmp (p, b2, n2); - - char *tmp = p->t; - const size_t s = p->s; - __compar_d_fn_t cmp = p->cmp; - void *arg = p->arg; - switch (p->var) - { - case 0: - while (n1 > 0 && n2 > 0) - { - if ((*cmp) (b1, b2, arg) <= 0) - { - *(uint32_t *) tmp = *(uint32_t *) b1; - b1 += sizeof (uint32_t); - --n1; - } - else - { - *(uint32_t *) tmp = *(uint32_t *) b2; - b2 += sizeof (uint32_t); - --n2; - } - tmp += sizeof (uint32_t); - } - break; - case 1: - while (n1 > 0 && n2 > 0) - { - if ((*cmp) (b1, b2, arg) <= 0) - { - *(uint64_t *) tmp = *(uint64_t *) b1; - b1 += sizeof (uint64_t); - --n1; - } - else - { - *(uint64_t *) tmp = *(uint64_t *) b2; - b2 += sizeof (uint64_t); - --n2; - } - tmp += sizeof (uint64_t); - } - break; - case 2: - while (n1 > 0 && n2 > 0) - { - unsigned long *tmpl = (unsigned long *) tmp; - unsigned long *bl; - - tmp += s; - if ((*cmp) (b1, b2, arg) <= 0) - { - bl = (unsigned long *) b1; - b1 += s; - --n1; - } - else - { - bl = (unsigned long *) b2; - b2 += s; - --n2; - } - while (tmpl < (unsigned long *) tmp) - *tmpl++ = *bl++; - } - break; - case 3: - while (n1 > 0 && n2 > 0) - { - if ((*cmp) (*(const void **) b1, *(const void **) b2, arg) <= 0) - { - *(void **) tmp = *(void **) b1; - b1 += sizeof (void *); - --n1; - } - else - { - *(void **) tmp = *(void **) b2; - b2 += sizeof (void *); - --n2; - } - tmp += sizeof (void *); - } - break; - default: - while (n1 > 0 && n2 > 0) - { - if ((*cmp) (b1, b2, arg) <= 0) - { - tmp = (char *) __mempcpy (tmp, b1, s); - b1 += s; - --n1; - } - else - { - tmp = (char *) __mempcpy (tmp, b2, s); - b2 
+= s; - --n2; - } - } - break; - } - - if (n1 > 0) - memcpy (tmp, b1, n1 * s); - memcpy (b, p->t, (n - n2) * s); -} - - -void -__qsort_r (void *b, size_t n, size_t s, __compar_d_fn_t cmp, void *arg) -{ - size_t size = n * s; - char *tmp = NULL; - struct msort_param p; - - /* For large object sizes use indirect sorting. */ - if (s > 32) - size = 2 * n * sizeof (void *) + s; - - if (size < 1024) - /* The temporary array is small, so put it on the stack. */ - p.t = __alloca (size); - else - { - /* We should avoid allocating too much memory since this might - have to be backed up by swap space. */ - static long int phys_pages; - static int pagesize; - - if (pagesize == 0) - { - phys_pages = __sysconf (_SC_PHYS_PAGES); - - if (phys_pages == -1) - /* Error while determining the memory size. So let's - assume there is enough memory. Otherwise the - implementer should provide a complete implementation of - the `sysconf' function. */ - phys_pages = (long int) (~0ul >> 1); - - /* The following determines that we will never use more than - a quarter of the physical memory. */ - phys_pages /= 4; - - /* Make sure phys_pages is written to memory. */ - atomic_write_barrier (); - - pagesize = __sysconf (_SC_PAGESIZE); - } - - /* Just a comment here. We cannot compute - phys_pages * pagesize - and compare the needed amount of memory against this value. - The problem is that some systems might have more physical - memory then can be represented with a `size_t' value (when - measured in bytes. */ - - /* If the memory requirements are too high don't allocate memory. */ - if (size / pagesize > (size_t) phys_pages) - { - _quicksort (b, n, s, cmp, arg); - return; - } - - /* It's somewhat large, so malloc it. */ - int save = errno; - tmp = malloc (size); - __set_errno (save); - if (tmp == NULL) - { - /* Couldn't get space, so use the slower algorithm - that doesn't need a temporary array. */ - _quicksort (b, n, s, cmp, arg); - return; - } - p.t = tmp; - } - - p.s = s; - p.var = 4; - p.cmp = cmp; - p.arg = arg; - - if (s > 32) - { - /* Indirect sorting. */ - char *ip = (char *) b; - void **tp = (void **) (p.t + n * sizeof (void *)); - void **t = tp; - void *tmp_storage = (void *) (tp + n); - - while ((void *) t < tmp_storage) - { - *t++ = ip; - ip += s; - } - p.s = sizeof (void *); - p.var = 3; - msort_with_tmp (&p, p.t + n * sizeof (void *), n); - - /* tp[0] .. tp[n - 1] is now sorted, copy around entries of - the original array. Knuth vol. 3 (2nd ed.) exercise 5.2-10. 
*/ - char *kp; - size_t i; - for (i = 0, ip = (char *) b; i < n; i++, ip += s) - if ((kp = tp[i]) != ip) - { - size_t j = i; - char *jp = ip; - memcpy (tmp_storage, ip, s); - - do - { - size_t k = (kp - (char *) b) / s; - tp[j] = jp; - memcpy (jp, kp, s); - j = k; - jp = kp; - kp = tp[k]; - } - while (kp != ip); - - tp[j] = jp; - memcpy (jp, tmp_storage, s); - } - } - else - { - if ((s & (sizeof (uint32_t) - 1)) == 0 - && ((uintptr_t) b) % __alignof__ (uint32_t) == 0) - { - if (s == sizeof (uint32_t)) - p.var = 0; - else if (s == sizeof (uint64_t) - && ((uintptr_t) b) % __alignof__ (uint64_t) == 0) - p.var = 1; - else if ((s & (sizeof (unsigned long) - 1)) == 0 - && ((uintptr_t) b) - % __alignof__ (unsigned long) == 0) - p.var = 2; - } - msort_with_tmp (&p, b, n); - } - free (tmp); -} -libc_hidden_def (__qsort_r) -weak_alias (__qsort_r, qsort_r) - - -void -qsort (void *b, size_t n, size_t s, __compar_fn_t cmp) -{ - return __qsort_r (b, n, s, (__compar_d_fn_t) cmp, NULL); -} -libc_hidden_def (qsort) diff --git a/stdlib/qsort.c b/stdlib/qsort.c index d5f205affc..fd32a165e7 100644 --- a/stdlib/qsort.c +++ b/stdlib/qsort.c @@ -19,7 +19,6 @@ Engineering a sort function; Jon Bentley and M. Douglas McIlroy; Software - Practice and Experience; Vol. 23 (11), 1249-1265, 1993. */ -#include #include #include #include @@ -265,8 +264,8 @@ insertion_sort_qsort_partitions (void *const pbase, size_t total_elems, stack size is needed (actually O(1) in this case)! */ void -_quicksort (void *const pbase, size_t total_elems, size_t size, - __compar_d_fn_t cmp, void *arg) +__qsort_r (void *const pbase, size_t total_elems, size_t size, + __compar_d_fn_t cmp, void *arg) { char *base_ptr = (char *) pbase; @@ -398,3 +397,12 @@ _quicksort (void *const pbase, size_t total_elems, size_t size, insertion_sort_qsort_partitions (pbase, total_elems, size, swap_type, cmp, arg); } +libc_hidden_def (__qsort_r) +weak_alias (__qsort_r, qsort_r) + +void +qsort (void *b, size_t n, size_t s, __compar_fn_t cmp) +{ + return __qsort_r (b, n, s, (__compar_d_fn_t) cmp, NULL); +} +libc_hidden_def (qsort) From patchwork Tue Oct 3 12:22:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Adhemerval Zanella X-Patchwork-Id: 77017 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id F125C386192C for ; Tue, 3 Oct 2023 12:23:48 +0000 (GMT) X-Original-To: libc-alpha@sourceware.org Delivered-To: libc-alpha@sourceware.org Received: from mail-pg1-x536.google.com (mail-pg1-x536.google.com [IPv6:2607:f8b0:4864:20::536]) by sourceware.org (Postfix) with ESMTPS id 68E263856965 for ; Tue, 3 Oct 2023 12:23:12 +0000 (GMT) DMARC-Filter: OpenDMARC Filter v1.4.2 sourceware.org 68E263856965 Authentication-Results: sourceware.org; dmarc=pass (p=none dis=none) header.from=linaro.org Authentication-Results: sourceware.org; spf=pass smtp.mailfrom=linaro.org Received: by mail-pg1-x536.google.com with SMTP id 41be03b00d2f7-584a761b301so536731a12.3 for ; Tue, 03 Oct 2023 05:23:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1696335791; x=1696940591; darn=sourceware.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=7TcY49J55BYUyAkq/YRTuBz+HDpV/m8KrGvfXTPX2bw=; 
From: Adhemerval Zanella
To: libc-alpha@sourceware.org, Noah Goldstein , Paul Eggert , Florian Weimer
Subject: [PATCH v8 7/7] stdlib: Add more qsort{_r} coverage
Date: Tue, 3 Oct 2023 09:22:51 -0300
Message-Id: <20231003122251.3325435-8-adhemerval.zanella@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231003122251.3325435-1-adhemerval.zanella@linaro.org>
References: <20231003122251.3325435-1-adhemerval.zanella@linaro.org>
MIME-Version: 1.0

This patch adds qsort and qsort_r tests that trigger the worst-case
scenario for quicksort (for which glibc currently lacks coverage). The
tests are run with random input, with different internal types (uint8_t,
uint16_t, uint32_t, uint64_t, and a large element size), and with
different numbers of elements.

Checked on x86_64-linux-gnu and i686-linux-gnu.
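[Editorial note: the "worst case" inputs exercised by the new test are specific
data shapes rather than a single pathological permutation. The standalone
sketch below (not part of the patch) shows one such shape, the bitonic pattern
that tst-qsort3.c generates: an ascending run followed by a descending run.
The helper names fill_bitonic and is_sorted are invented for this illustration.]

/* Minimal sketch: build a bitonic array, sort it with qsort, verify order.  */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

static int
int_cmp (const void *a, const void *b)
{
  int ia = *(const int *) a;
  int ib = *(const int *) b;
  return (ia > ib) - (ia < ib);
}

/* Fill BUF with N elements: first half ascending, second half descending,
   mirroring the Bitonic case in fill_array.  */
static void
fill_bitonic (int *buf, size_t n)
{
  size_t i = 0;
  for (; i < n / 2; i++)
    buf[i] = (int) i;
  for (; i < n; i++)
    buf[i] = (int) (n - 1 - i);
}

static int
is_sorted (const int *buf, size_t n)
{
  for (size_t i = 1; i < n; i++)
    if (buf[i - 1] > buf[i])
      return 0;
  return 1;
}

int
main (void)
{
  enum { N = 4256 };            /* One of the element counts the test uses.  */
  int *buf = malloc (N * sizeof (int));
  assert (buf != NULL);

  fill_bitonic (buf, N);
  qsort (buf, N, sizeof (int), int_cmp);
  assert (is_sorted (buf, N));

  puts ("bitonic input sorted correctly");
  free (buf);
  return 0;
}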
Reviewed-by: Noah Goldstein --- stdlib/Makefile | 1 + stdlib/tst-qsort3.c | 366 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 367 insertions(+) create mode 100644 stdlib/tst-qsort3.c diff --git a/stdlib/Makefile b/stdlib/Makefile index 095518eef4..6af606136e 100644 --- a/stdlib/Makefile +++ b/stdlib/Makefile @@ -214,6 +214,7 @@ tests := \ tst-on_exit \ tst-qsort \ tst-qsort2 \ + tst-qsort3 \ tst-quick_exit \ tst-rand48 \ tst-rand48-2 \ diff --git a/stdlib/tst-qsort3.c b/stdlib/tst-qsort3.c new file mode 100644 index 0000000000..421560d744 --- /dev/null +++ b/stdlib/tst-qsort3.c @@ -0,0 +1,366 @@ +/* qsort(_r) tests to trigger worst case for quicksort. + Copyright (C) 2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +typedef enum +{ + Sorted, + Random, + Repeated, + Bitonic, + Duplicated, +} arraytype_t; + +/* Ratio of total of elements which will be repeated. */ +static const double RepeatedRatio = 0.2; + +/* Ratio of duplicated element . */ +static const double DuplicatedRatio = 0.4; + +struct array_t +{ + arraytype_t type; + const char *name; +} static const arraytypes[] = +{ + { Sorted, "Sorted" }, + { Random, "Random" }, + { Repeated, "Repeated" }, + { Bitonic, "Bitonic" }, + { Duplicated, "Duplicated" }, +}; + +/* Return the index of BASE as interpreted as an array of elements + of size SIZE. */ +static inline void * +arr (void *base, size_t idx, size_t size) +{ + return (void*)((uintptr_t)base + (idx * size)); +} + +/* Functions used to check qsort. */ +static int +uint8_t_cmp (const void *a, const void *b) +{ + uint8_t ia = *(uint8_t*)a; + uint8_t ib = *(uint8_t*)b; + return (ia > ib) - (ia < ib); +} + +static int +uint16_t_cmp (const void *a, const void *b) +{ + uint16_t ia = *(uint16_t*)a; + uint16_t ib = *(uint16_t*)b; + return (ia > ib) - (ia < ib); +} + +static int +uint32_t_cmp (const void *a, const void *b) +{ + uint32_t ia = *(uint32_t*)a; + uint32_t ib = *(uint32_t*)b; + return (ia > ib) - (ia < ib); +} + +static int +uint64_t_cmp (const void *a, const void *b) +{ + uint64_t ia = *(uint64_t*)a; + uint64_t ib = *(uint64_t*)b; + return (ia > ib) - (ia < ib); +} + +#define LARGE_SIZE 47 + +static int +large_cmp (const void *a, const void *b) +{ + return memcmp (a, b, LARGE_SIZE); +} + +/* Function used to check qsort_r. 
*/ +typedef enum +{ + UINT8_CMP_T, + UINT16_CMP_T, + UINT32_CMP_T, + UINT64_CMP_T, + LARGE_CMP_T +} type_cmp_t; + +static type_cmp_t +uint_t_cmp_type (size_t sz) +{ + switch (sz) + { + case sizeof (uint8_t): return UINT8_CMP_T; + case sizeof (uint16_t): return UINT16_CMP_T; + case sizeof (uint64_t): return UINT64_CMP_T; + case sizeof (uint32_t): return UINT32_CMP_T; + default: return LARGE_CMP_T; + } +} + +static int +uint_t_cmp (const void *a, const void *b, void *arg) +{ + type_cmp_t type = *(type_cmp_t*) arg; + switch (type) + { + case UINT8_CMP_T: return uint8_t_cmp (a, b); + case UINT32_CMP_T: return uint32_t_cmp (a, b); + case UINT16_CMP_T: return uint16_t_cmp (a, b); + case UINT64_CMP_T: return uint64_t_cmp (a, b); + default: return large_cmp (a, b); + } +} + +static void +seq (void *elem, size_t type_size, int value) +{ + if (type_size == sizeof (uint8_t)) + *(uint8_t*)elem = value; + else if (type_size == sizeof (uint16_t)) + *(uint16_t*)elem = value; + else if (type_size == sizeof (uint32_t)) + *(uint32_t*)elem = value; + else if (type_size == sizeof (uint64_t)) + *(uint64_t*)elem = value; + else + memset (elem, value, type_size); +} + +static void +fill_array (void *array, void *refarray, size_t nmemb, size_t type_size, + arraytype_t type) +{ + size_t size = nmemb * type_size; + + switch (type) + { + case Sorted: + for (size_t i = 0; i < nmemb; i++) + seq (arr (array, i, type_size), type_size, i); + break; + + case Random: + arc4random_buf (array, size); + break; + + case Repeated: + { + arc4random_buf (array, size); + + void *randelem = xmalloc (type_size); + arc4random_buf (randelem, type_size); + + /* Repeat REPEATED elements (based on RepeatRatio ratio) in the random + array. */ + size_t repeated = (size_t)(nmemb * RepeatedRatio); + for (size_t i = 0; i < repeated; i++) + { + size_t pos = arc4random_uniform (nmemb - 1); + memcpy (arr (array, pos, type_size), randelem, type_size); + } + free (randelem); + } + break; + + case Bitonic: + { + size_t i; + for (i = 0; i < nmemb / 2; i++) + seq (arr (array, i, type_size), type_size, i); + for ( ; i < nmemb; i++) + seq (arr (array, i, type_size), type_size, (nmemb - 1) - i); + } + break; + + case Duplicated: + { + int randelem1 = arc4random (); + for (size_t i = 0; i < nmemb; i++) + seq (arr (array, i, type_size), type_size, randelem1); + + size_t duplicates = (size_t)(nmemb * DuplicatedRatio); + int randelem2 = arc4random (); + for (size_t i = 0; i < duplicates; i++) + { + size_t pos = arc4random_uniform (nmemb - 1); + seq (arr (array, pos, type_size), type_size, randelem2); + } + } + break; + } + + memcpy (refarray, array, size); +} + +typedef int (*cmpfunc_t)(const void *, const void *); + +/* Simple insertion sort to use as reference sort. */ +static void +qsort_r_ref (void *p, size_t n, size_t s, __compar_d_fn_t cmp, void *arg) +{ + if (n <= 1) + return; + + int i = 1; + char tmp[s]; + while (i < n) + { + memcpy (tmp, arr (p, i, s), s); + int j = i - 1; + while (j >= 0 && cmp (arr (p, j, s), tmp, arg) > 0) + { + memcpy (arr (p, j + 1, s), arr (p, j, s), s); + j = j - 1; + } + memcpy (arr (p, j + 1, s), tmp, s); + i = i + 1; + } +} + +static void +qsort_ref (void *b, size_t n, size_t s, __compar_fn_t cmp) +{ + return qsort_r_ref (b, n, s, (__compar_d_fn_t) cmp, NULL); +} + +/* Check if ARRAY of total NMEMB element of size SIZE is sorted + based on CMPFUNC. 
*/ +static void +check_array (void *array, void *refarray, size_t nmemb, size_t type_size, + cmpfunc_t cmpfunc) +{ + for (size_t i = 1; i < nmemb; i++) + { + int ret = cmpfunc (arr (array, i, type_size), + arr (array, i-1, type_size)); + TEST_VERIFY_EXIT (ret >= 0); + } + + size_t size = nmemb * type_size; + TEST_COMPARE_BLOB (array, size, refarray, size); +} + +static void +check_qsort (void *buf, void *refbuf, size_t nelem, size_t type_size, + arraytype_t type, cmpfunc_t cmpfunc) +{ + fill_array (buf, refbuf, nelem, type_size, type); + + qsort (buf, nelem, type_size, cmpfunc); + qsort_ref (refbuf, nelem, type_size, cmpfunc); + + check_array (buf, refbuf, nelem, type_size, cmpfunc); +} + +static void +check_qsort_r (void *buf, void *refbuf, size_t nelem, size_t type_size, + arraytype_t type, cmpfunc_t cmpfunc) +{ + fill_array (buf, refbuf, nelem, type_size, type); + + type_cmp_t typecmp = uint_t_cmp_type (type_size); + + qsort_r (buf, nelem, type_size, uint_t_cmp, &typecmp); + qsort_r_ref (refbuf, nelem, type_size, uint_t_cmp, &typecmp); + + check_array (buf, refbuf, nelem, type_size, cmpfunc); +} + +static int +do_test (void) +{ + /* Some random sizes. */ + static const size_t nelems[] = { 0, 1, 7, 20, 32, 100, 256, 1024, 4256 }; + size_t max_nelems = 0; + for (int i = 0; i < array_length (nelems); i++) + if (nelems[i] > max_nelems) + max_nelems = nelems[i]; + + static const struct test_t + { + size_t type_size; + cmpfunc_t cmpfunc; + } + tests[] = + { + { sizeof (uint8_t), uint8_t_cmp }, + { sizeof (uint16_t), uint16_t_cmp }, + { sizeof (uint32_t), uint32_t_cmp }, + { sizeof (uint64_t), uint64_t_cmp }, + /* Test swap with large elements. */ + { LARGE_SIZE, large_cmp }, + }; + size_t max_type_size = 0; + for (int i = 0; i < array_length (tests); i++) + if (tests[i].type_size > max_type_size) + max_type_size = tests[i].type_size; + + void *buf = reallocarray (NULL, max_nelems, max_type_size); + TEST_VERIFY_EXIT (buf != NULL); + void *refbuf = reallocarray (NULL, max_nelems, max_type_size); + TEST_VERIFY_EXIT (refbuf != NULL); + + for (const struct test_t *test = tests; test < array_end (tests); ++test) + { + if (test_verbose > 0) + printf ("info: testing qsort with type_size=%zu\n", test->type_size); + for (const struct array_t *arraytype = arraytypes; + arraytype < array_end (arraytypes); + ++arraytype) + { + if (test_verbose > 0) + printf (" distribution=%s\n", arraytype->name); + for (const size_t *nelem = nelems; + nelem < array_end (nelems); + ++nelem) + { + if (test_verbose > 0) + printf (" nelem=%zu, total size=%zu\n", *nelem, + *nelem * test->type_size); + + check_qsort (buf, refbuf, *nelem, test->type_size, + arraytype->type, test->cmpfunc); + check_qsort_r (buf, refbuf, *nelem, test->type_size, + arraytype->type, test->cmpfunc); + } + } + } + + free (buf); + free (refbuf); + + return 0; +} + +#include
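[Editorial note: as a closing illustration of how the new test validates
results, here is a reduced, standalone sketch of the same checking pattern
outside the glibc test framework: sort with qsort_r, passing the element width
through the context argument (as uint_t_cmp does with type_cmp_t), then compare
the outcome byte-for-byte against a simple insertion-sort reference (analogous
to qsort_r_ref). It assumes the GNU qsort_r signature provided by glibc; the
names width_cmp and ref_isort are invented for the example, and plain rand()
stands in for arc4random_buf.]

#define _GNU_SOURCE
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Comparator that reads its element width from ARG.  */
static int
width_cmp (const void *a, const void *b, void *arg)
{
  size_t width = *(size_t *) arg;
  if (width == sizeof (uint32_t))
    {
      uint32_t ia = *(const uint32_t *) a, ib = *(const uint32_t *) b;
      return (ia > ib) - (ia < ib);
    }
  return memcmp (a, b, width);   /* Fallback for "large" elements.  */
}

/* Reference insertion sort used to cross-check qsort_r.  */
static void
ref_isort (void *base, size_t n, size_t s,
           int (*cmp) (const void *, const void *, void *), void *arg)
{
  char *b = base, tmp[s];
  for (size_t i = 1; i < n; i++)
    {
      memcpy (tmp, b + i * s, s);
      size_t j = i;
      while (j > 0 && cmp (b + (j - 1) * s, tmp, arg) > 0)
        {
          memcpy (b + j * s, b + (j - 1) * s, s);
          j--;
        }
      memcpy (b + j * s, tmp, s);
    }
}

int
main (void)
{
  enum { N = 100 };
  size_t width = sizeof (uint32_t);
  uint32_t buf[N], ref[N];

  srand (1);
  for (size_t i = 0; i < N; i++)
    buf[i] = (uint32_t) rand ();
  memcpy (ref, buf, sizeof buf);

  qsort_r (buf, N, width, width_cmp, &width);
  ref_isort (ref, N, width, width_cmp, &width);

  /* Both sorts must produce the identical byte sequence.  */
  assert (memcmp (buf, ref, sizeof buf) == 0);
  return 0;
}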