From patchwork Sun Feb 18 08:26:21 2024
X-Patchwork-Submitter: James Tirta Halim
X-Patchwork-Id: 85934
From: James Tirta Halim <tirtajames45@gmail.com>
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, James Tirta Halim <tirtajames45@gmail.com>
Subject: [PATCH v6] sysdeps/x86_64/multiarch/memmem-avx2.c: add memmem-avx2.c
Date: Sun, 18 Feb 2024 15:26:21 +0700
Message-ID: <20240218082621.131128-1-tirtajames45@gmail.com>
X-Mailer: git-send-email 2.43.2
List-Id: Libc-alpha mailing list <libc-alpha.sourceware.org>

Find the rarest byte in NE.  Find the parts of HS that match the rare
byte and the byte after it.  If found, shift back to the start of NE in
HS and vector-compare the first VEC_SIZE bytes with NE.  If they match,
compare the rest with MEMCMPEQ.  (A scalar sketch of this strategy
follows the changelog below.)

Timings (Core i3-1115G4):

            basic_memmem  twoway_memmem  __memmem_avx512  __memmem_avx2  __memmem_generic
  Total:    6.80124e+06   1.06087e+06    219483           345385         768041
  Average:  25958.9       4049.11        837.721          1318.26        2931.45

Passes make check.

Changes in v1:
1. Add memmem-avx2.c

Changes in v2:
1. Add avx512 support with a generic header file
2. Use __memcmpeq instead of memcmp
3. Remove scalar loop
4. Fix unsafe unaligned load

Changes in v3:
1. Avoid checking for alignment to the start of the page since that
   will be rare
2. Use __memcmpeq instead of __memcmpeq_avx2 (it generates undefined
   reference errors)
3. Add memmem.c (needs review)
4. Add __memcmpeq_avx2 and __memcmpeq_avx512 to ifunc-impl-list.c
   (needs review)
5. Add libc_hidden_builtin_def and MEMMEM to memmem.c (needs review)

Changes in v4:
1. Correct the cpu feature checks in ifunc-impl-list.c and memmem.c to
   use AVX512BW and BMI1 for AVX512, and AVX2 and BMI1 for AVX2
2. Correct the Makefile to use the appropriate flags
3. Rename memmem-vectorized-avx.h to memmem-avx-base.h
4. Remove unused vector macros (POPCNT and LZCNT)

Changes in v5:
1. Rename SHIFT to RARE, OFF to OFF_S, OFF2 to OFF_E
2. Remove conditional for VEC_SIZE and ONES, and remove unused MASK_SIZE
3. Add comments
4. Limit needle length to VEC_SIZE when finding the rare byte

Changes in v6:
1. Fix patch apply error in memmem.c
2. Correctly use MIN (ne_len, VEC_SIZE) when checking whether RARE is
   found at the end of the needle
3. Always do an unaligned load in the tail code
4. Rename rarebyte_table to ___rarebyte_table
5. Add memmem-avx-base.c, in which ___rarebyte_table is defined
6. Add memmem-avx-base to the Makefile
7. Add always_inline to find_rarest_byte
8. Change ((m << off) >> off) to (m & (ONES >> off))
9. Change void * to unsigned char * in find_rarest_byte
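For reviewers, a scalar sketch of the search strategy described above
(editorial illustration only, not part of the patch: rare_rank is a
hypothetical stand-in for the ___rarebyte_table ranking, and the
MIN (ne_len, VEC_SIZE) cap on the rare-byte scan is omitted for
brevity):

#include <stddef.h>
#include <string.h>

/* Toy rank standing in for ___rarebyte_table: lower is rarer.  */
static unsigned char
rare_rank (unsigned char c)
{
  return c;
}

static void *
memmem_sketch (const unsigned char *hs, size_t hs_len,
               const unsigned char *ne, size_t ne_len)
{
  if (ne_len == 0)
    return (void *) hs;
  if (ne_len == 1)
    return memchr (hs, ne[0], hs_len);
  if (hs_len < ne_len)
    return NULL;
  /* Find the rarest byte in NE, avoiding the last position so that
     the byte after it also exists.  */
  size_t rare = 0;
  for (size_t i = 1; i + 1 < ne_len; i++)
    if (rare_rank (ne[i]) < rare_rank (ne[rare]))
      rare = i;
  /* Scan HS for the rare byte and the byte after it; the vector code
     in the patch tests VEC_SIZE positions per iteration instead.  On
     a hit, shift back to the candidate start and compare the whole
     needle (the patch vector-compares the first VEC_SIZE bytes and
     uses MEMCMPEQ for the rest).  */
  for (size_t j = rare; j + (ne_len - rare) <= hs_len; j++)
    if (hs[j] == ne[rare] && hs[j + 1] == ne[rare + 1]
        && memcmp (hs + j - rare, ne, ne_len) == 0)
      return (void *) (hs + j - rare);
  return NULL;
}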
---
 string/memmem.c                            |   7 +-
 sysdeps/x86_64/multiarch/Makefile          |   6 +
 sysdeps/x86_64/multiarch/ifunc-impl-list.c |  12 ++
 sysdeps/x86_64/multiarch/memmem-avx-base.c |  20 +++
 sysdeps/x86_64/multiarch/memmem-avx-base.h | 183 +++++++++++++++++++++
 sysdeps/x86_64/multiarch/memmem-avx2.c     |   3 +
 sysdeps/x86_64/multiarch/memmem-avx512.c   |  12 ++
 sysdeps/x86_64/multiarch/memmem.c          |  67 ++++++++
 8 files changed, 309 insertions(+), 1 deletion(-)
 create mode 100644 sysdeps/x86_64/multiarch/memmem-avx-base.c
 create mode 100644 sysdeps/x86_64/multiarch/memmem-avx-base.h
 create mode 100644 sysdeps/x86_64/multiarch/memmem-avx2.c
 create mode 100644 sysdeps/x86_64/multiarch/memmem-avx512.c
 create mode 100644 sysdeps/x86_64/multiarch/memmem.c

diff --git a/string/memmem.c b/string/memmem.c
index a4117f8e1e..a315c7d0b5 100644
--- a/string/memmem.c
+++ b/string/memmem.c
@@ -25,6 +25,10 @@
 # define __memmem memmem
 #endif
 
+#ifndef MEMMEM
+# define MEMMEM __memmem
+#endif
+
 #define RETURN_TYPE void *
 #define AVAILABLE(h, h_l, j, n_l) ((j) <= (h_l) - (n_l))
 #define FASTSEARCH(S,C,N) (void*) memchr ((void *)(S), (C), (N))
@@ -50,7 +54,7 @@
    The limit also implies worst-case performance is linear.
    Needles larger than 256 characters use the linear-time Two-Way algorithm.  */
 void *
-__memmem (const void *haystack, size_t hs_len,
+MEMMEM (const void *haystack, size_t hs_len,
 	  const void *needle, size_t ne_len)
 {
   const unsigned char *hs = (const unsigned char *) haystack;
@@ -127,3 +131,4 @@
 libc_hidden_def (__memmem)
 weak_alias (__memmem, memmem)
 libc_hidden_weak (memmem)
+libc_hidden_builtin_def (MEMMEM)

diff --git a/sysdeps/x86_64/multiarch/Makefile b/sysdeps/x86_64/multiarch/Makefile
index d3d2270394..0b46d5f341 100644
--- a/sysdeps/x86_64/multiarch/Makefile
+++ b/sysdeps/x86_64/multiarch/Makefile
@@ -15,6 +15,9 @@ sysdep_routines += \
   memcmpeq-avx2-rtm \
   memcmpeq-evex \
   memcmpeq-sse2 \
+  memmem-avx-base \
+  memmem-avx2 \
+  memmem-avx512 \
   memmove-avx-unaligned-erms \
   memmove-avx-unaligned-erms-rtm \
   memmove-avx512-no-vzeroupper \
@@ -122,6 +125,9 @@ sysdep_routines += \
   varshift \
 # sysdep_routines
 
+CFLAGS-memmem-avx2.c += -mavx2 -mbmi -O3
+CFLAGS-memmem-avx512.c += -mavx512f -mavx512bw -mbmi -O3
+
 CFLAGS-strcspn-sse4.c += -msse4
 CFLAGS-strpbrk-sse4.c += -msse4
 CFLAGS-strspn-sse4.c += -msse4

diff --git a/sysdeps/x86_64/multiarch/ifunc-impl-list.c b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
index c4a21d4b7c..5fe1440235 100644
--- a/sysdeps/x86_64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
@@ -798,6 +798,18 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
			      __strstr_avx512)
	      IFUNC_IMPL_ADD (array, i, strstr, 1, __strstr_sse2_unaligned)
	      IFUNC_IMPL_ADD (array, i, strstr, 1, __strstr_generic))
+
+  /* Support sysdeps/x86_64/multiarch/memmem.c.  */
+  IFUNC_IMPL (i, name, memmem,
+	      IFUNC_IMPL_ADD (array, i, memmem,
+			      (CPU_FEATURE_USABLE (AVX512BW)
+			       && CPU_FEATURE_USABLE (BMI1)),
+			      __memmem_avx512)
+	      IFUNC_IMPL_ADD (array, i, memmem,
+			      (CPU_FEATURE_USABLE (AVX2)
+			       && CPU_FEATURE_USABLE (BMI1)),
+			      __memmem_avx2)
+	      IFUNC_IMPL_ADD (array, i, memmem, 1, __memmem_generic))
 
   /* Support sysdeps/x86_64/multiarch/wcschr.c.  */
   IFUNC_IMPL (i, name, wcschr,
diff --git a/sysdeps/x86_64/multiarch/memmem-avx-base.c b/sysdeps/x86_64/multiarch/memmem-avx-base.c
new file mode 100644
index 0000000000..212d75c96f
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memmem-avx-base.c
@@ -0,0 +1,20 @@
+const unsigned char ___rarebyte_table[256] attribute_hidden
+    = { 0,   1,   13,  56,  59,  60,  61,  62,  63,  232, 248, 2,   158, 4,
+	5,   6,   7,   8,   9,   10,  14,  20,  26,  29,  37,  46,  52,  53,
+	54,  55,  57,  58,  255, 172, 242, 193, 162, 174, 178, 182, 218, 219,
+	212, 180, 249, 197, 221, 210, 253, 231, 230, 224, 225, 226, 227, 223,
+	222, 220, 176, 213, 184, 229, 188, 164, 159, 209, 181, 203, 189, 216,
+	196, 192, 185, 205, 161, 168, 215, 187, 211, 194, 195, 165, 206, 204,
+	214, 198, 173, 179, 175, 183, 167, 202, 239, 201, 160, 241, 163, 246,
+	233, 238, 240, 254, 237, 208, 234, 250, 169, 186, 236, 217, 245, 243,
+	228, 170, 247, 244, 251, 235, 199, 200, 252, 207, 177, 191, 171, 190,
+	166, 3,   140, 134, 124, 126, 86,  128, 95,  117, 114, 93,  81,  87,
+	132, 96,  112, 97,  103, 82,  139, 89,  98,  88,  119, 74,  156, 115,
+	104, 75,  120, 106, 76,  155, 90,  122, 107, 125, 152, 145, 136, 137,
+	101, 116, 102, 108, 99,  141, 77,  78,  118, 79,  109, 100, 150, 73,
+	94,  72,  121, 151, 113, 135, 110, 105, 83,  91,  11,  12,  64,  149,
+	146, 111, 65,  69,  66,  15,  16,  17,  18,  19,  130, 92,  144, 123,
+	21,  22,  23,  24,  131, 133, 127, 142, 25,  70,  129, 27,  28,  67,
+	153, 84,  143, 138, 147, 157, 148, 68,  71,  30,  31,  32,  33,  34,
+	35,  36,  154, 38,  39,  40,  41,  42,  80,  43,  44,  45,  47,  48,
+	85,  49,  50,  51 };

diff --git a/sysdeps/x86_64/multiarch/memmem-avx-base.h b/sysdeps/x86_64/multiarch/memmem-avx-base.h
new file mode 100644
index 0000000000..1333eac5b5
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memmem-avx-base.h
@@ -0,0 +1,183 @@
+#include <immintrin.h>
+#include <inttypes.h>
+#include <string.h>
+#include <libc-pointer-arith.h>
+
+#ifndef FUNC_NAME
+# define FUNC_NAME __memmem_avx2
+#endif
+#ifndef VEC
+# define VEC __m256i
+#endif
+#ifndef MASK
+# define MASK uint32_t
+#endif
+#ifndef LOAD
+# define LOAD(x) _mm256_load_si256 (x)
+#endif
+#ifndef LOADU
+# define LOADU(x) _mm256_loadu_si256 (x)
+#endif
+#ifndef CMPEQ8_MASK
+# define CMPEQ8_MASK(x, y) _mm256_movemask_epi8 (_mm256_cmpeq_epi8 (x, y))
+#endif
+#ifndef SETONE8
+# define SETONE8(x) _mm256_set1_epi8 (x)
+#endif
+#ifndef TZCNT
+# define TZCNT(x) _tzcnt_u32 (x)
+#endif
+#ifndef BLSR
+# define BLSR(x) _blsr_u32 (x)
+#endif
+#define VEC_SIZE sizeof (VEC)
+#define ONES ((MASK) -1)
+
+#ifndef MEMCMPEQ
+# define MEMCMPEQ __memcmpeq
+#endif
+#ifndef MEMCPY
+# define MEMCPY memcpy
+#endif
+#ifndef MEMCHR
+# define MEMCHR memchr
+#endif
+#ifndef PAGE_SIZE
+# define PAGE_SIZE 4096
+#endif
+#define MIN(x, y) (((x) < (y)) ? (x) : (y))
+
+/* Lower is rarer.  The table is based on the *.c and *.h files in
+   glibc.  */
+extern const unsigned char ___rarebyte_table[256] attribute_hidden;
+
+static inline void *__attribute__ ((always_inline))
+find_rarest_byte (const unsigned char *rare, size_t n)
+{
+  const unsigned char *p = (const unsigned char *) rare;
+  int c_rare = ___rarebyte_table[*rare];
+  int c;
+  for (; n--; ++p)
+    {
+      c = ___rarebyte_table[*p];
+      if (c < c_rare)
+	{
+	  rare = p;
+	  c_rare = c;
+	}
+    }
+  return (void *) rare;
+}
+
+void *
+FUNC_NAME (const void *hs, size_t hs_len, const void *ne, size_t ne_len)
+{
+  if (ne_len == 1)
+    return (void *) MEMCHR (hs, *(unsigned char *) ne, hs_len);
+  if (__glibc_unlikely (ne_len == 0))
+    return (void *) hs;
+  if (__glibc_unlikely (hs_len < ne_len))
+    return NULL;
+  VEC hv0, hv1, hv, nv;
+  MASK i, hm0, hm1, m, cmpm;
+  const unsigned int matchsh = ne_len < VEC_SIZE ? VEC_SIZE - ne_len : 0;
+  const MASK matchm = ONES << matchsh;
+  const unsigned char *h = (const unsigned char *) hs;
+  const unsigned char *const end = h + hs_len - ne_len;
+  const unsigned char *hp;
+  size_t rare = PTR_DIFF (find_rarest_byte ((const unsigned char *) ne,
+					    MIN (ne_len, VEC_SIZE)), ne);
+  /* RARE will always be the first byte to find.
+     If RARE is at the end of the needle, use the byte before it.  */
+  if (rare == MIN (ne_len, VEC_SIZE) - 1)
+    --rare;
+  const VEC nv0 = SETONE8 (*((char *) ne + rare));
+  const VEC nv1 = SETONE8 (*((char *) ne + rare + 1));
+  unsigned int off_e = (PTR_DIFF (end, h) < VEC_SIZE)
+			   ? VEC_SIZE - (unsigned int) (end - h) - 1
+			   : 0;
+  /* Start from the position of RARE.  */
+  h += rare;
+  /* Load the needle vector.  */
+  if (((uintptr_t) ne & (PAGE_SIZE - 1)) > (PAGE_SIZE - VEC_SIZE)
+      || ne_len >= VEC_SIZE)
+    nv = LOADU ((VEC *) ne);
+  else
+    MEMCPY (&nv, ne, MIN (VEC_SIZE, ne_len));
+  const unsigned int off_s = PTR_DIFF (h, PTR_ALIGN_DOWN (h, VEC_SIZE));
+  /* Align down to VEC_SIZE.  */
+  h -= off_s;
+  hv0 = LOAD ((const VEC *) h);
+  hm0 = (MASK) CMPEQ8_MASK (hv0, nv0);
+  hm1 = (MASK) CMPEQ8_MASK (hv0, nv1) >> 1;
+  /* Clear the irrelevant bits from aligning down (OFF_S) and ones that
+     are out of bounds (OFF_E).  */
+  m = ((hm0 & hm1) >> off_s) & (ONES >> off_e);
+  while (m)
+    {
+      i = TZCNT (m);
+      m = BLSR (m);
+      hp = h + off_s + i - rare;
+      if (PTR_DIFF (PTR_ALIGN_UP (hp, PAGE_SIZE), hp) >= VEC_SIZE)
+	{
+	  /* Do a vector compare if we are not crossing a page.  */
+	  hv = LOADU ((VEC *) hp);
+	  cmpm = (MASK) CMPEQ8_MASK (hv, nv) << matchsh;
+	  /* Compare only the relevant bits of the needle vector.  */
+	  if (cmpm == matchm)
+	    /* Compare the rest of the needle.  */
+	    if (ne_len <= VEC_SIZE
+		|| !MEMCMPEQ (hp + VEC_SIZE, (const char *) ne + VEC_SIZE,
+			      ne_len - VEC_SIZE))
+	      return (void *) hp;
+	}
+      else
+	{
+	  if (!MEMCMPEQ (hp, ne, ne_len))
+	    return (void *) hp;
+	}
+    }
+  h += VEC_SIZE - 1;
+  for (; h - rare + VEC_SIZE <= end; h += VEC_SIZE)
+    {
+      hv0 = LOADU ((const VEC *) h);
+      hv1 = LOAD ((const VEC *) (h + 1));
+      hm1 = (MASK) CMPEQ8_MASK (hv1, nv1);
+      hm0 = (MASK) CMPEQ8_MASK (hv0, nv0);
+      m = hm0 & hm1;
+      while (m)
+	{
+	match:
+	  i = TZCNT (m);
+	  m = BLSR (m);
+	  hp = h + i - rare;
+	  if (PTR_DIFF (PTR_ALIGN_UP (hp, PAGE_SIZE), hp) >= VEC_SIZE)
+	    {
+	      hv = LOADU ((VEC *) hp);
+	      cmpm = (MASK) CMPEQ8_MASK (hv, nv) << matchsh;
+	      if (cmpm == matchm)
+		if (ne_len <= VEC_SIZE
+		    || !MEMCMPEQ (hp + VEC_SIZE, (const char *) ne + VEC_SIZE,
+				  ne_len - VEC_SIZE))
+		  return (void *) hp;
+	    }
+	  else
+	    {
+	      if (!MEMCMPEQ (hp, ne, ne_len))
+		return (void *) hp;
+	    }
+	}
+    }
+  if (h - rare <= end)
+    {
+      off_e = VEC_SIZE - (unsigned int) (end - (h - rare)) - 1;
+      hv0 = LOADU ((const VEC *) h);
+      hv1 = LOAD ((const VEC *) (h + 1));
+      hm1 = (MASK) CMPEQ8_MASK (hv1, nv1);
+      hm0 = (MASK) CMPEQ8_MASK (hv0, nv0);
+      /* Clear the irrelevant bits that are out of bounds.  */
+      m = hm0 & hm1 & (ONES >> off_e);
+      if (m)
+	goto match;
+    }
+  return NULL;
+}

diff --git a/sysdeps/x86_64/multiarch/memmem-avx2.c b/sysdeps/x86_64/multiarch/memmem-avx2.c
new file mode 100644
index 0000000000..91f5d5d331
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memmem-avx2.c
@@ -0,0 +1,3 @@
+#define FUNC_NAME __memmem_avx2
+
+#include "memmem-avx-base.h"

diff --git a/sysdeps/x86_64/multiarch/memmem-avx512.c b/sysdeps/x86_64/multiarch/memmem-avx512.c
new file mode 100644
index 0000000000..76016c1cfe
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memmem-avx512.c
@@ -0,0 +1,12 @@
+#define VEC __m512i
+#define MASK uint64_t
+#define LOAD(x) _mm512_load_si512 (x)
+#define LOADU(x) _mm512_loadu_si512 (x)
+#define CMPEQ8_MASK(x, y) _mm512_cmpeq_epi8_mask (x, y)
+#define SETONE8(x) _mm512_set1_epi8 (x)
+#define TZCNT(x) _tzcnt_u64 (x)
+#define BLSR(x) _blsr_u64 (x)
+
+#define FUNC_NAME __memmem_avx512
+
+#include "memmem-avx-base.h"

diff --git a/sysdeps/x86_64/multiarch/memmem.c b/sysdeps/x86_64/multiarch/memmem.c
new file mode 100644
index 0000000000..8fe7b77d33
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memmem.c
@@ -0,0 +1,67 @@
+/* Multiple versions of memmem.
+   All versions must be listed in ifunc-impl-list.c.
+   Copyright (C) 2012-2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+/* Redefine memmem so that the compiler won't complain about the type
+   mismatch with the IFUNC selector in strong_alias, below.  */
+#undef memmem
+#define memmem __redirect_memmem
+#include <string.h>
+#undef memmem
+
+#define MEMMEM __memmem_generic
+#ifdef SHARED
+# undef libc_hidden_builtin_def
+# define libc_hidden_builtin_def(name) \
+  __hidden_ver1 (__memmem_generic, __GI_memmem, __memmem_generic);
+#endif
+
+#include "string/memmem.c"
+
+extern __typeof (__redirect_memmem) __memmem_avx2 attribute_hidden;
+extern __typeof (__redirect_memmem) __memmem_generic attribute_hidden;
+extern __typeof (__redirect_memmem) __memmem_avx512 attribute_hidden;
+
+#define SYMBOL_NAME memmem
+
+#include "init-arch.h"
+
+/* Avoid DWARF definition DIE on ifunc symbol so that GDB can handle
+   ifunc symbol properly.  */
+extern __typeof (__redirect_memmem) __libc_memmem;
+
+static inline void *
+IFUNC_SELECTOR (void)
+{
+  const struct cpu_features *cpu_features = __get_cpu_features ();
+
+  if (!CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_AVX512)
+      && CPU_FEATURE_USABLE_P (cpu_features, AVX512BW)
+      && CPU_FEATURE_USABLE_P (cpu_features, BMI1))
+    return __memmem_avx512;
+
+  if (CPU_FEATURE_USABLE_P (cpu_features, AVX2)
+      && CPU_FEATURE_USABLE_P (cpu_features, BMI1))
+    return __memmem_avx2;
+
+  return __memmem_generic;
+}
+
+libc_ifunc_redirected (__redirect_memmem, __libc_memmem, IFUNC_SELECTOR ());
+#undef memmem
+strong_alias (__libc_memmem, __memmem)
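
Not part of the patch: a minimal standalone smoke test (in addition to
make check) that drives memmem through the IFUNC; the haystack and
needle values are made up for illustration:

#define _GNU_SOURCE
#include <assert.h>
#include <stdio.h>
#include <string.h>

int
main (void)
{
  const char hay[] = "find the rarest byte in the needle";
  /* Short needle: exercises the vector path (ne_len < VEC_SIZE).  */
  assert (memmem (hay, sizeof (hay) - 1, "rarest", 6) == hay + 9);
  /* Empty needle returns the haystack; a single byte goes to memchr.  */
  assert (memmem (hay, sizeof (hay) - 1, "", 0) == hay);
  assert (memmem (hay, sizeof (hay) - 1, "b", 1) == hay + 16);
  /* Absent needle returns NULL.  */
  assert (memmem (hay, sizeof (hay) - 1, "bytes", 5) == NULL);
  puts ("ok");
  return 0;
}

Running it on machines with and without AVX-512 (or with AVX-512
disabled) exercises both selector branches.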