From patchwork Fri Sep 3 17:11:40 2021
X-Patchwork-Submitter: Adhemerval Zanella Netto
X-Patchwork-Id: 44843
To: libc-alpha@sourceware.org
Subject: [PATCH v3 3/7] stdlib: Optimization qsort{_r} swap implementation (BZ #19305)
Date: Fri, 3 Sep 2021 14:11:40 -0300
Message-Id: <20210903171144.952737-4-adhemerval.zanella@linaro.org>
In-Reply-To: <20210903171144.952737-1-adhemerval.zanella@linaro.org>
References: <20210903171144.952737-1-adhemerval.zanella@linaro.org>
From: Adhemerval Zanella Netto
Reply-To: Adhemerval Zanella

This patch adds an optimized swap operation to qsort, based on the
previous msort one.  It takes into consideration that the most common
element sizes are either 32 or 64 bits [1] and that inputs are aligned
to a word boundary.  This is similar to the optimization done in
lib/sort.c from Linux.

Instead of a byte-wise operation, three variants are provided:

  1. Using uint32_t loads and stores.
  2. Using uint64_t loads and stores.
  3. A generic one using a temporary buffer and memcpy/mempcpy.

Options 1 and 2 are selected only if the architecture defines
_STRING_ARCH_unaligned or if the base pointer is aligned to the
required type.

It also fixes BZ #19305 by checking the total number of elements
against 1 besides 0.

Checked on x86_64-linux-gnu.

[1] https://sourceware.org/pipermail/libc-alpha/2018-August/096984.html
---
 stdlib/qsort.c | 109 +++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 91 insertions(+), 18 deletions(-)

diff --git a/stdlib/qsort.c b/stdlib/qsort.c
index 23f2d28314..59458d151b 100644
--- a/stdlib/qsort.c
+++ b/stdlib/qsort.c
@@ -24,20 +24,85 @@
 #include <limits.h>
 #include <stdlib.h>
 #include <string.h>
+#include <stdbool.h>
 
-/* Byte-wise swap two items of size SIZE. */
-#define SWAP(a, b, size) \
-  do \
-    { \
-      size_t __size = (size); \
-      char *__a = (a), *__b = (b); \
-      do \
-	{ \
-	  char __tmp = *__a; \
-	  *__a++ = *__b; \
-	  *__b++ = __tmp; \
-	} while (--__size > 0); \
-    } while (0)
+/* Swap SIZE bytes between addresses A and B.  These helpers are provided
+   along with the generic one as an optimization.  */
+
+typedef void (*swap_func_t)(void * restrict, void * restrict, size_t);
+
+/* Return true if elements can be copied using word loads and stores.
+   The size must be a multiple of the alignment, and so must the base address.
+   */
+static inline bool
+is_aligned_to_copy (const void *base, size_t size, size_t align)
+{
+  unsigned char lsbits = size;
+#if !_STRING_ARCH_unaligned
+  lsbits |= (unsigned char)(uintptr_t) base;
+#endif
+  return (lsbits & (align - 1)) == 0;
+}
+
+#define SWAP_WORDS_64 (swap_func_t)0
+#define SWAP_WORDS_32 (swap_func_t)1
+#define SWAP_BYTES    (swap_func_t)2
+
+static void
+swap_words_64 (void * restrict a, void * restrict b, size_t n)
+{
+  do
+    {
+      n -= 8;
+      uint64_t t = *(uint64_t *)(a + n);
+      *(uint64_t *)(a + n) = *(uint64_t *)(b + n);
+      *(uint64_t *)(b + n) = t;
+    } while (n);
+}
+
+static void
+swap_words_32 (void * restrict a, void * restrict b, size_t n)
+{
+  do
+    {
+      n -= 4;
+      uint32_t t = *(uint32_t *)(a + n);
+      *(uint32_t *)(a + n) = *(uint32_t *)(b + n);
+      *(uint32_t *)(b + n) = t;
+    } while (n);
+}
+
+static void
+swap_bytes (void * restrict a, void * restrict b, size_t n)
+{
+  /* Use multiple small memcpys with constant size to enable inlining
+     on most targets.  */
+  enum { SWAP_GENERIC_SIZE = 32 };
+  unsigned char tmp[SWAP_GENERIC_SIZE];
+  while (n > SWAP_GENERIC_SIZE)
+    {
+      memcpy (tmp, a, SWAP_GENERIC_SIZE);
+      a = memcpy (a, b, SWAP_GENERIC_SIZE) + SWAP_GENERIC_SIZE;
+      b = memcpy (b, tmp, SWAP_GENERIC_SIZE) + SWAP_GENERIC_SIZE;
+      n -= SWAP_GENERIC_SIZE;
+    }
+  memcpy (tmp, a, n);
+  memcpy (a, b, n);
+  memcpy (b, tmp, n);
+}
+
+/* Replace the indirect call with a series of if statements.  It should help
+   the branch predictor.  */
+static void
+do_swap (void * restrict a, void * restrict b, size_t size,
+	 swap_func_t swap_func)
+{
+  if (swap_func == SWAP_WORDS_64)
+    swap_words_64 (a, b, size);
+  else if (swap_func == SWAP_WORDS_32)
+    swap_words_32 (a, b, size);
+  else
+    swap_bytes (a, b, size);
+}
 
 /* Discontinue quicksort algorithm when partition gets below this size.
    This particular magic number was chosen to work best on a Sun 4/260. */
@@ -97,6 +162,14 @@ _quicksort (void *const pbase, size_t total_elems, size_t size,
     /* Avoid lossage with unsigned arithmetic below.  */
     return;
 
+  swap_func_t swap_func;
+  if (is_aligned_to_copy (pbase, size, 8))
+    swap_func = SWAP_WORDS_64;
+  else if (is_aligned_to_copy (pbase, size, 4))
+    swap_func = SWAP_WORDS_32;
+  else
+    swap_func = SWAP_BYTES;
+
   if (total_elems > MAX_THRESH)
     {
       char *lo = base_ptr;
@@ -120,13 +193,13 @@ _quicksort (void *const pbase, size_t total_elems, size_t size,
 	  char *mid = lo + size * ((hi - lo) / size >> 1);
 
 	  if ((*cmp) ((void *) mid, (void *) lo, arg) < 0)
-	    SWAP (mid, lo, size);
+	    do_swap (mid, lo, size, swap_func);
 	  if ((*cmp) ((void *) hi, (void *) mid, arg) < 0)
-	    SWAP (mid, hi, size);
+	    do_swap (mid, hi, size, swap_func);
 	  else
 	    goto jump_over;
 	  if ((*cmp) ((void *) mid, (void *) lo, arg) < 0)
-	    SWAP (mid, lo, size);
+	    do_swap (mid, lo, size, swap_func);
 	jump_over:;
 
 	  left_ptr  = lo + size;
@@ -145,7 +218,7 @@ _quicksort (void *const pbase, size_t total_elems, size_t size,
 	      if (left_ptr < right_ptr)
 		{
-		  SWAP (left_ptr, right_ptr, size);
+		  do_swap (left_ptr, right_ptr, size, swap_func);
 		  if (mid == left_ptr)
 		    mid = right_ptr;
 		  else if (mid == right_ptr)
@@ -217,7 +290,7 @@ _quicksort (void *const pbase, size_t total_elems, size_t size,
 	  tmp_ptr = run_ptr;
 
       if (tmp_ptr != base_ptr)
-	SWAP (tmp_ptr, base_ptr, size);
+	do_swap (tmp_ptr, base_ptr, size, swap_func);
 
       /* Insertion sort, running from left-hand-side up to right-hand-side.  */
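
As a side note (not part of the patch): below is a minimal, self-contained
sketch of which swap variant the alignment check ends up selecting for some
typical element types.  is_aligned_to_copy mirrors the hunk above with the
!_STRING_ARCH_unaligned branch assumed active, and pick_swap is a
hypothetical helper used only to illustrate the selection that _quicksort
performs once per call; the element types in main are merely examples.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Same check as the patch, with unaligned word access assumed unavailable.  */
static inline bool
is_aligned_to_copy (const void *base, size_t size, size_t align)
{
  unsigned char lsbits = size;
  lsbits |= (unsigned char)(uintptr_t) base;
  return (lsbits & (align - 1)) == 0;
}

/* Hypothetical helper mirroring the selection done in _quicksort.  */
static const char *
pick_swap (const void *base, size_t size)
{
  if (is_aligned_to_copy (base, size, 8))
    return "swap_words_64";
  else if (is_aligned_to_copy (base, size, 4))
    return "swap_words_32";
  return "swap_bytes";
}

int
main (void)
{
  int32_t i32[8];                 /* 4-byte elements, 4-byte aligned.  */
  int64_t i64[8];                 /* 8-byte elements, 8-byte aligned on
                                     most ABIs.  */
  struct { char c[5]; } odd[8];   /* 5-byte elements, no word variant.  */

  printf ("int32_t: %s\n", pick_swap (i32, sizeof i32[0]));
  printf ("int64_t: %s\n", pick_swap (i64, sizeof i64[0]));
  printf ("5 bytes: %s\n", pick_swap (odd, sizeof odd[0]));
  return 0;
}

On a typical 64-bit target this prints swap_words_32, swap_words_64 and
swap_bytes respectively, illustrating that the variant is chosen once up
front from the element size and base alignment rather than on every swap.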