From patchwork Mon Sep 13 23:05:05 2021
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 44963
X-Patchwork-Delegate: carlos@redhat.com
To: libc-alpha@sourceware.org
Subject: [PATCH 2/5] x86_64: Add sse2 optimized bcmp implementation in memcmp.S
Date: Mon, 13 Sep 2021 18:05:05 -0500
Message-Id: <20210913230506.546749-2-goldstein.w.n@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210913230506.546749-1-goldstein.w.n@gmail.com>
References: <20210913230506.546749-1-goldstein.w.n@gmail.com>
From: Noah Goldstein
Reply-To: Noah Goldstein

No bug. This commit does not modify any of the existing memcmp
implementation. It just adds bcmp ifdefs that skip the obvious cases
where computing the proper 1/-1 return value required by memcmp is not
needed.

test-memcmp, test-bcmp, and test-wmemcmp are all passing.
---
 sysdeps/x86_64/memcmp.S | 55 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 51 insertions(+), 4 deletions(-)

diff --git a/sysdeps/x86_64/memcmp.S b/sysdeps/x86_64/memcmp.S
index dfd0269db2..21607e7c91 100644
--- a/sysdeps/x86_64/memcmp.S
+++ b/sysdeps/x86_64/memcmp.S
@@ -49,34 +49,63 @@ L(s2b):
 	movzwl	(%rdi), %eax
 	movzwl	(%rdi, %rsi), %edx
 	subq	$2, %r10
+#ifdef USE_AS_BCMP
+	je	L(finz1)
+#else
 	je	L(fin2_7)
+#endif
 	addq	$2, %rdi
 	cmpl	%edx, %eax
+#ifdef USE_AS_BCMP
+	jnz	L(neq_early)
+#else
 	jnz	L(fin2_7)
+#endif
 L(s4b):
 	testq	$4, %r10
 	jz	L(s8b)
 	movl	(%rdi), %eax
 	movl	(%rdi, %rsi), %edx
 	subq	$4, %r10
+#ifdef USE_AS_BCMP
+	je	L(finz1)
+#else
 	je	L(fin2_7)
+#endif
 	addq	$4, %rdi
 	cmpl	%edx, %eax
+#ifdef USE_AS_BCMP
+	jnz	L(neq_early)
+#else
 	jnz	L(fin2_7)
+#endif
 L(s8b):
 	testq	$8, %r10
 	jz	L(s16b)
 	movq	(%rdi), %rax
 	movq	(%rdi, %rsi), %rdx
 	subq	$8, %r10
+#ifdef USE_AS_BCMP
+	je	L(sub_return8)
+#else
 	je	L(fin2_7)
+#endif
 	addq	$8, %rdi
 	cmpq	%rdx, %rax
+#ifdef USE_AS_BCMP
+	jnz	L(neq_early)
+#else
 	jnz	L(fin2_7)
+#endif
 L(s16b):
 	movdqu	(%rdi), %xmm1
 	movdqu	(%rdi, %rsi), %xmm0
 	pcmpeqb	%xmm0, %xmm1
+#ifdef USE_AS_BCMP
+	pmovmskb %xmm1, %eax
+	subl	$0xffff, %eax
+	ret
+#else
 	pmovmskb %xmm1, %edx
 	xorl	%eax, %eax
 	subl	$0xffff, %edx
@@ -86,7 +115,7 @@ L(s16b):
 	movzbl	(%rcx), %eax
 	movzbl	(%rsi, %rcx), %edx
 	jmp	L(finz1)
-
+#endif
 	.p2align 4,, 4
 L(finr1b):
 	movzbl	(%rdi), %eax
@@ -95,7 +124,15 @@ L(finz1):
 	subl	%edx, %eax
 L(exit):
 	ret
-
+#ifdef USE_AS_BCMP
+	.p2align 4,, 4
+L(sub_return8):
+	subq	%rdx, %rax
+	movl	%eax, %edx
+	shrq	$32, %rax
+	orl	%edx, %eax
+	ret
+#else
 	.p2align 4,, 4
 L(fin2_7):
 	cmpq	%rdx, %rax
@@ -111,12 +148,17 @@ L(fin2_7):
 	movzbl	%dl, %edx
 	subl	%edx, %eax
 	ret
-
+#endif
 	.p2align 4,, 4
 L(finz):
 	xorl	%eax, %eax
 	ret
-
+#ifdef USE_AS_BCMP
+	.p2align 4,, 4
+L(neq_early):
+	movl	$1, %eax
+	ret
+#endif
 /* For blocks bigger than 32 bytes
    1. Advance one of the addr pointer to be 16B aligned.
    2. Treat the case of both addr pointers aligned to 16B
@@ -246,11 +288,16 @@ L(mt16):
 
 	.p2align 4,, 4
 L(neq):
+#ifdef USE_AS_BCMP
+	movl	$1, %eax
+	ret
+#else
 	bsfl	%edx, %ecx
 	movzbl	(%rdi, %rcx), %eax
 	addq	%rdi, %rsi
 	movzbl	(%rsi,%rcx), %edx
 	jmp	L(finz1)
+#endif
 
 	.p2align 4,, 4
 L(ATR):
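
The difference between the two contracts that the USE_AS_BCMP paths
exploit can be summarized in C. The sketch below is illustrative only
and is not part of the patch: the helper names bcmp16_sketch and
memcmp16_sketch are made up, and SSE2 intrinsics (<emmintrin.h>) are
assumed in order to mirror the pcmpeqb/pmovmskb sequence in L(s16b).
bcmp only has to produce zero/nonzero, while memcmp must locate the
first differing byte and return its signed difference.

#include <emmintrin.h>	/* SSE2 intrinsics (assumed for illustration).  */
#include <stddef.h>

/* bcmp-style check of one 16-byte block: compare all bytes at once and
   test whether the equality mask covers all 16 lanes.  This mirrors the
   USE_AS_BCMP branch of L(s16b) (pcmpeqb; pmovmskb; subl $0xffff).  */
static int
bcmp16_sketch (const void *s1, const void *s2)
{
  __m128i a = _mm_loadu_si128 ((const __m128i *) s1);
  __m128i b = _mm_loadu_si128 ((const __m128i *) s2);
  int mask = _mm_movemask_epi8 (_mm_cmpeq_epi8 (a, b));
  return mask ^ 0xffff;		/* 0 iff all 16 bytes are equal.  */
}

/* memcmp-style check of the same block: the result must be the signed
   difference of the first mismatching byte pair, which is the extra
   work the bcmp ifdefs skip.  */
static int
memcmp16_sketch (const void *s1, const void *s2)
{
  const unsigned char *p1 = s1;
  const unsigned char *p2 = s2;
  for (size_t i = 0; i < 16; i++)
    if (p1[i] != p2[i])
      return (int) p1[i] - (int) p2[i];
  return 0;
}

In the assembly the same idea lets the early-out paths return any
nonzero value (L(neq_early) returns 1, L(sub_return8) folds the 64-bit
difference) instead of falling through to the byte-extraction code in
L(fin2_7)/L(finz1).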