From patchwork Tue Apr 19 21:28:09 2022
X-Patchwork-Submitter: Adhemerval Zanella Netto
X-Patchwork-Id: 53053
From: Adhemerval Zanella Netto
Reply-To: Adhemerval Zanella
To: libc-alpha@sourceware.org
Subject: [PATCH v3 6/9] x86: Add AVX2 optimized chacha20
Date: Tue, 19 Apr 2022 18:28:09 -0300
Message-Id: <20220419212812.2688764-7-adhemerval.zanella@linaro.org>
In-Reply-To: <20220419212812.2688764-1-adhemerval.zanella@linaro.org>
References: <20220419212812.2688764-1-adhemerval.zanella@linaro.org>

Add a vectorized ChaCha20 implementation based on libgcrypt
cipher/chacha20-amd64-avx2.S.  It is used only if AVX2 is supported
and enabled by the architecture.  As in the generic implementation,
the last step, which XORs the keystream with the input, is omitted.

On a Ryzen 9 5900X it shows the following improvements (using
formatted bench-arc4random data):

SSE2:
Function                                 MB/s
--------------------------------------------------
arc4random [single-thread]               637.06
arc4random_buf(16) [single-thread]       856.62
arc4random_buf(32) [single-thread]       1129.41
arc4random_buf(48) [single-thread]       1260.61
arc4random_buf(64) [single-thread]       1330.56
arc4random_buf(80) [single-thread]       1353.84
arc4random_buf(96) [single-thread]       1376.53
arc4random_buf(112) [single-thread]      1405.74
arc4random_buf(128) [single-thread]      1422.59
--------------------------------------------------

AVX2:
Function                                 MB/s
--------------------------------------------------
arc4random [single-thread]               809.53
arc4random_buf(16) [single-thread]       1242.56
arc4random_buf(32) [single-thread]       1915.90
arc4random_buf(48) [single-thread]       2230.03
arc4random_buf(64) [single-thread]       2429.68
arc4random_buf(80) [single-thread]       2489.70
arc4random_buf(96) [single-thread]       2598.88
arc4random_buf(112) [single-thread]      2699.93
arc4random_buf(128) [single-thread]      2747.31
--------------------------------------------------

Checked on x86_64-linux-gnu.
---
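Reviewer note (editorial, not part of the commit message): below is a
minimal C sketch of the operation the new assembly vectorizes, based on
the public ChaCha20 reference (RFC 8439); all names here are
illustrative only.  It shows why the final XOR with the input can be
skipped: arc4random only consumes the raw keystream.

#include <stdint.h>
#include <string.h>

#define ROTL32(v, c) (((v) << (c)) | ((v) >> (32 - (c))))

/* One ChaCha20 quarter round; QUARTERROUND2 in the assembly runs two
   of these on two vector groups at once.  */
#define QR(a, b, c, d)                          \
  do {                                          \
    a += b; d ^= a; d = ROTL32 (d, 16);         \
    c += d; b ^= c; b = ROTL32 (b, 12);         \
    a += b; d ^= a; d = ROTL32 (d, 8);          \
    c += d; b ^= c; b = ROTL32 (b, 7);          \
  } while (0)

/* Produce one 64-byte keystream block from STATE into DST.  Unlike a
   full cipher, the final XOR with an input buffer is omitted because
   the caller only consumes the keystream.  */
static void
chacha20_block (uint32_t state[16], uint8_t *dst)
{
  uint32_t x[16];
  memcpy (x, state, sizeof x);
  for (int i = 0; i < 20; i += 2)       /* 10 double rounds.  */
    {
      QR (x[0], x[4], x[8],  x[12]);    /* Column rounds.  */
      QR (x[1], x[5], x[9],  x[13]);
      QR (x[2], x[6], x[10], x[14]);
      QR (x[3], x[7], x[11], x[15]);
      QR (x[0], x[5], x[10], x[15]);    /* Diagonal rounds.  */
      QR (x[1], x[6], x[11], x[12]);
      QR (x[2], x[7], x[8],  x[13]);
      QR (x[3], x[4], x[9],  x[14]);
    }
  for (int i = 0; i < 16; i++)
    {
      uint32_t v = x[i] + state[i];
      memcpy (dst + 4 * i, &v, 4);      /* Store; assumes little endian.  */
    }
  if (++state[12] == 0)                 /* 64-bit counter in words 12-13.  */
    state[13]++;
}

The assembly below runs this with each state word broadcast across one
ymm register, producing eight such blocks per loop iteration.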
 LICENSES                       |   5 +-
 sysdeps/x86_64/Makefile        |   1 +
 sysdeps/x86_64/chacha20-avx2.S | 313 +++++++++++++++++++++++++++++++++
 sysdeps/x86_64/chacha20_arch.h |  22 ++-
 4 files changed, 333 insertions(+), 8 deletions(-)
 create mode 100644 sysdeps/x86_64/chacha20-avx2.S

diff --git a/LICENSES b/LICENSES
index 415991e208..05a5c07fcf 100644
--- a/LICENSES
+++ b/LICENSES
@@ -390,8 +390,9 @@ Copyright 2001 by Stephen L. Moshier
 License along with this library; if not, see
 <https://www.gnu.org/licenses/>.
 */
 
-sysdeps/aarch64/chacha20.S and sysdeps/x86_64/chacha20-sse2.S
-import code from libgcrypt, with the following notices:
+sysdeps/aarch64/chacha20.S, sysdeps/x86_64/chacha20-sse2.S, and
+sysdeps/x86_64/chacha20-avx2.S import code from libgcrypt, with the
+following notices:
 
 Copyright (C) 2017-2019 Jussi Kivilinna
 
diff --git a/sysdeps/x86_64/Makefile b/sysdeps/x86_64/Makefile
index c8fbc30857..0fa8897404 100644
--- a/sysdeps/x86_64/Makefile
+++ b/sysdeps/x86_64/Makefile
@@ -8,6 +8,7 @@ endif
 ifeq ($(subdir),stdlib)
 sysdep_routines += \
   chacha20-sse2 \
+  chacha20-avx2 \
   # sysdep_routines
 endif
 
diff --git a/sysdeps/x86_64/chacha20-avx2.S b/sysdeps/x86_64/chacha20-avx2.S
new file mode 100644
index 0000000000..fb76865890
--- /dev/null
+++ b/sysdeps/x86_64/chacha20-avx2.S
@@ -0,0 +1,313 @@
+/* Optimized AVX2 implementation of ChaCha20 cipher.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+
+/* Based on D. J. Bernstein reference implementation at
+   http://cr.yp.to/chacha.html:
+
+   chacha-regs.c version 20080118
+   D. J. Bernstein
+   Public domain.  */
+
+#ifdef PIC
+# define rRIP (%rip)
+#else
+# define rRIP
+#endif
+
+/* register macros */
+#define INPUT %rdi
+#define DST   %rsi
+#define SRC   %rdx
+#define NBLKS %rcx
+#define ROUND %eax
+
+/* stack structure */
+#define STACK_VEC_X12 (32)
+#define STACK_VEC_X13 (32 + STACK_VEC_X12)
+#define STACK_TMP     (32 + STACK_VEC_X13)
+#define STACK_TMP1    (32 + STACK_TMP)
+
+#define STACK_MAX     (32 + STACK_TMP1)
+
+/* vector registers */
+#define X0  %ymm0
+#define X1  %ymm1
+#define X2  %ymm2
+#define X3  %ymm3
+#define X4  %ymm4
+#define X5  %ymm5
+#define X6  %ymm6
+#define X7  %ymm7
+#define X8  %ymm8
+#define X9  %ymm9
+#define X10 %ymm10
+#define X11 %ymm11
+#define X12 %ymm12
+#define X13 %ymm13
+#define X14 %ymm14
+#define X15 %ymm15
+
+#define X0h  %xmm0
+#define X1h  %xmm1
+#define X2h  %xmm2
+#define X3h  %xmm3
+#define X4h  %xmm4
+#define X5h  %xmm5
+#define X6h  %xmm6
+#define X7h  %xmm7
+#define X8h  %xmm8
+#define X9h  %xmm9
+#define X10h %xmm10
+#define X11h %xmm11
+#define X12h %xmm12
+#define X13h %xmm13
+#define X14h %xmm14
+#define X15h %xmm15
+
+/**********************************************************************
+  helper macros
+ **********************************************************************/
+
+/* 4x4 32-bit integer matrix transpose */
+#define transpose_4x4(x0,x1,x2,x3,t1,t2) \
+        vpunpckhdq x1, x0, t2; \
+        vpunpckldq x1, x0, x0; \
+        \
+        vpunpckldq x3, x2, t1; \
+        vpunpckhdq x3, x2, x2; \
+        \
+        vpunpckhqdq t1, x0, x1; \
+        vpunpcklqdq t1, x0, x0; \
+        \
+        vpunpckhqdq x2, t2, x3; \
+        vpunpcklqdq x2, t2, x2;
+
+/* 2x2 128-bit matrix transpose */
+#define transpose_16byte_2x2(x0,x1,t1) \
+        vmovdqa x0, t1; \
+        vperm2i128 $0x20, x1, x0, x0; \
+        vperm2i128 $0x31, x1, t1, x1;
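+
+/* Editorial note (not from libgcrypt): each of the 16 ChaCha20 state
+   words is broadcast across a full ymm register below, so the rounds
+   compute eight independent blocks in parallel, one per 32-bit lane.
+   The two transposes above are used at the end to turn the eight
+   per-word lanes back into eight contiguous 64-byte output blocks.  */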
+
+/**********************************************************************
+  8-way chacha20
+ **********************************************************************/
+
+#define ROTATE2(v1,v2,c,tmp) \
+        vpsrld $(32 - (c)), v1, tmp; \
+        vpslld $(c), v1, v1; \
+        vpaddb tmp, v1, v1; \
+        vpsrld $(32 - (c)), v2, tmp; \
+        vpslld $(c), v2, v2; \
+        vpaddb tmp, v2, v2;
+
+#define ROTATE_SHUF_2(v1,v2,shuf) \
+        vpshufb shuf, v1, v1; \
+        vpshufb shuf, v2, v2;
+
+#define XOR(ds,s) \
+        vpxor s, ds, ds;
+
+#define PLUS(ds,s) \
+        vpaddd s, ds, ds;
+
+#define QUARTERROUND2(a1,b1,c1,d1,a2,b2,c2,d2,ign,tmp1,\
+                      interleave_op1,interleave_op2,\
+                      interleave_op3,interleave_op4) \
+        vbroadcasti128 .Lshuf_rol16 rRIP, tmp1; \
+        interleave_op1; \
+        PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
+        ROTATE_SHUF_2(d1, d2, tmp1); \
+        interleave_op2; \
+        PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
+        ROTATE2(b1, b2, 12, tmp1); \
+        vbroadcasti128 .Lshuf_rol8 rRIP, tmp1; \
+        interleave_op3; \
+        PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
+        ROTATE_SHUF_2(d1, d2, tmp1); \
+        interleave_op4; \
+        PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
+        ROTATE2(b1, b2, 7, tmp1);
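+
+/* Editorial note (not from libgcrypt): the 16-bit and 8-bit rotations
+   are done with a byte shuffle (vpshufb) using the .Lshuf_rol16 and
+   .Lshuf_rol8 masks instead of a shift pair.  In ROTATE2 the two shift
+   results have no overlapping bits, so vpaddb combines them exactly as
+   a bitwise OR would.  */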
+
+        .section .text.avx2, "ax", @progbits
+        .align 32
+chacha20_data:
+L(shuf_rol16):
+        .byte 2,3,0,1,6,7,4,5,10,11,8,9,14,15,12,13
+L(shuf_rol8):
+        .byte 3,0,1,2,7,4,5,6,11,8,9,10,15,12,13,14
+L(inc_counter):
+        .byte 0,1,2,3,4,5,6,7
+L(unsigned_cmp):
+        .long 0x80000000
+
+        .hidden __chacha20_avx2_blocks8
+ENTRY (__chacha20_avx2_blocks8)
+        /* input:
+         *      %rdi: input
+         *      %rsi: dst
+         *      %rdx: src
+         *      %rcx: nblks (multiple of 8)
+         */
+        vzeroupper;
+
+        pushq %rbp;
+        cfi_adjust_cfa_offset(8);
+        cfi_rel_offset(rbp, 0)
+        movq %rsp, %rbp;
+        cfi_def_cfa_register(rbp);
+
+        subq $STACK_MAX, %rsp;
+        andq $~31, %rsp;
+
+L(loop8):
+        mov $20, ROUND;
+
+        /* Construct counter vectors X12 and X13 */
+        vpmovzxbd L(inc_counter) rRIP, X0;
+        vpbroadcastd L(unsigned_cmp) rRIP, X2;
+        vpbroadcastd (12 * 4)(INPUT), X12;
+        vpbroadcastd (13 * 4)(INPUT), X13;
+        vpaddd X0, X12, X12;
+        vpxor X2, X0, X0;
+        vpxor X2, X12, X1;
+        vpcmpgtd X1, X0, X0;
+        vpsubd X0, X13, X13;
+        vmovdqa X12, (STACK_VEC_X12)(%rsp);
+        vmovdqa X13, (STACK_VEC_X13)(%rsp);
+
+        /* Load vectors */
+        vpbroadcastd (0 * 4)(INPUT), X0;
+        vpbroadcastd (1 * 4)(INPUT), X1;
+        vpbroadcastd (2 * 4)(INPUT), X2;
+        vpbroadcastd (3 * 4)(INPUT), X3;
+        vpbroadcastd (4 * 4)(INPUT), X4;
+        vpbroadcastd (5 * 4)(INPUT), X5;
+        vpbroadcastd (6 * 4)(INPUT), X6;
+        vpbroadcastd (7 * 4)(INPUT), X7;
+        vpbroadcastd (8 * 4)(INPUT), X8;
+        vpbroadcastd (9 * 4)(INPUT), X9;
+        vpbroadcastd (10 * 4)(INPUT), X10;
+        vpbroadcastd (11 * 4)(INPUT), X11;
+        vpbroadcastd (14 * 4)(INPUT), X14;
+        vpbroadcastd (15 * 4)(INPUT), X15;
+        vmovdqa X15, (STACK_TMP)(%rsp);
+
+L(round2):
+        QUARTERROUND2(X0, X4,  X8, X12,   X1, X5,  X9, X13, tmp:=,X15,,,,)
+        vmovdqa (STACK_TMP)(%rsp), X15;
+        vmovdqa X8, (STACK_TMP)(%rsp);
+        QUARTERROUND2(X2, X6, X10, X14,   X3, X7, X11, X15, tmp:=,X8,,,,)
+        QUARTERROUND2(X0, X5, X10, X15,   X1, X6, X11, X12, tmp:=,X8,,,,)
+        vmovdqa (STACK_TMP)(%rsp), X8;
+        vmovdqa X15, (STACK_TMP)(%rsp);
+        QUARTERROUND2(X2, X7,  X8, X13,   X3, X4,  X9, X14, tmp:=,X15,,,,)
+        sub $2, ROUND;
+        jnz L(round2);
+
+        vmovdqa X8, (STACK_TMP1)(%rsp);
+
+        /* tmp := X15 */
+        vpbroadcastd (0 * 4)(INPUT), X15;
+        PLUS(X0, X15);
+        vpbroadcastd (1 * 4)(INPUT), X15;
+        PLUS(X1, X15);
+        vpbroadcastd (2 * 4)(INPUT), X15;
+        PLUS(X2, X15);
+        vpbroadcastd (3 * 4)(INPUT), X15;
+        PLUS(X3, X15);
+        vpbroadcastd (4 * 4)(INPUT), X15;
+        PLUS(X4, X15);
+        vpbroadcastd (5 * 4)(INPUT), X15;
+        PLUS(X5, X15);
+        vpbroadcastd (6 * 4)(INPUT), X15;
+        PLUS(X6, X15);
+        vpbroadcastd (7 * 4)(INPUT), X15;
+        PLUS(X7, X15);
+        transpose_4x4(X0, X1, X2, X3, X8, X15);
+        transpose_4x4(X4, X5, X6, X7, X8, X15);
+        vmovdqa (STACK_TMP1)(%rsp), X8;
+        transpose_16byte_2x2(X0, X4, X15);
+        transpose_16byte_2x2(X1, X5, X15);
+        transpose_16byte_2x2(X2, X6, X15);
+        transpose_16byte_2x2(X3, X7, X15);
+        vmovdqa (STACK_TMP)(%rsp), X15;
+        vmovdqu X0, (64 * 0 + 16 * 0)(DST);
+        vmovdqu X1, (64 * 1 + 16 * 0)(DST);
+        vpbroadcastd (8 * 4)(INPUT), X0;
+        PLUS(X8, X0);
+        vpbroadcastd (9 * 4)(INPUT), X0;
+        PLUS(X9, X0);
+        vpbroadcastd (10 * 4)(INPUT), X0;
+        PLUS(X10, X0);
+        vpbroadcastd (11 * 4)(INPUT), X0;
+        PLUS(X11, X0);
+        vmovdqa (STACK_VEC_X12)(%rsp), X0;
+        PLUS(X12, X0);
+        vmovdqa (STACK_VEC_X13)(%rsp), X0;
+        PLUS(X13, X0);
+        vpbroadcastd (14 * 4)(INPUT), X0;
+        PLUS(X14, X0);
+        vpbroadcastd (15 * 4)(INPUT), X0;
+        PLUS(X15, X0);
+        vmovdqu X2, (64 * 2 + 16 * 0)(DST);
+        vmovdqu X3, (64 * 3 + 16 * 0)(DST);
+
+        /* Update counter */
+        addq $8, (12 * 4)(INPUT);
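+        /* Editorial note (not from libgcrypt): state words 12 and 13
+           form a 64-bit little-endian block counter, so the single
+           64-bit add above advances it by the eight blocks just
+           produced; the vector setup at L(loop8) performs the matching
+           per-lane carry into X13.  */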
+
+        transpose_4x4(X8, X9, X10, X11, X0, X1);
+        transpose_4x4(X12, X13, X14, X15, X0, X1);
+        vmovdqu X4, (64 * 4 + 16 * 0)(DST);
+        vmovdqu X5, (64 * 5 + 16 * 0)(DST);
+        transpose_16byte_2x2(X8, X12, X0);
+        transpose_16byte_2x2(X9, X13, X0);
+        transpose_16byte_2x2(X10, X14, X0);
+        transpose_16byte_2x2(X11, X15, X0);
+        vmovdqu X6, (64 * 6 + 16 * 0)(DST);
+        vmovdqu X7, (64 * 7 + 16 * 0)(DST);
+        vmovdqu X8, (64 * 0 + 16 * 2)(DST);
+        vmovdqu X9, (64 * 1 + 16 * 2)(DST);
+        vmovdqu X10, (64 * 2 + 16 * 2)(DST);
+        vmovdqu X11, (64 * 3 + 16 * 2)(DST);
+        vmovdqu X12, (64 * 4 + 16 * 2)(DST);
+        vmovdqu X13, (64 * 5 + 16 * 2)(DST);
+        vmovdqu X14, (64 * 6 + 16 * 2)(DST);
+        vmovdqu X15, (64 * 7 + 16 * 2)(DST);
+
+        sub $8, NBLKS;
+        lea (8 * 64)(DST), DST;
+        lea (8 * 64)(SRC), SRC;
+        jnz L(loop8);
+
+        /* clear the used vector registers and stack */
+        vpxor X0, X0, X0;
+        vmovdqa X0, (STACK_VEC_X12)(%rsp);
+        vmovdqa X0, (STACK_VEC_X13)(%rsp);
+        vmovdqa X0, (STACK_TMP)(%rsp);
+        vmovdqa X0, (STACK_TMP1)(%rsp);
+        vzeroall;
+
+        /* eax zeroed by round loop. */
+        leave;
+        cfi_adjust_cfa_offset(-8)
+        cfi_def_cfa_register(%rsp);
+        ret;
+        int3;
+END(__chacha20_avx2_blocks8)
diff --git a/sysdeps/x86_64/chacha20_arch.h b/sysdeps/x86_64/chacha20_arch.h
index 5738c840a9..bfdc6c0a36 100644
--- a/sysdeps/x86_64/chacha20_arch.h
+++ b/sysdeps/x86_64/chacha20_arch.h
@@ -23,16 +23,26 @@ unsigned int __chacha20_sse2_blocks4 (uint32_t *state, uint8_t *dst,
 				      const uint8_t *src, size_t nblks)
      attribute_hidden;
+unsigned int __chacha20_avx2_blocks8 (uint32_t *state, uint8_t *dst,
+				      const uint8_t *src, size_t nblks)
+     attribute_hidden;
 
 static inline void
 chacha20_crypt (uint32_t *state, uint8_t *dst, const uint8_t *src,
 		size_t bytes)
 {
-  _Static_assert (CHACHA20_BUFSIZE % 4 == 0,
-		  "CHACHA20_BUFSIZE not multiple of 4");
-  _Static_assert (CHACHA20_BUFSIZE >= CHACHA20_BLOCK_SIZE * 4,
-		  "CHACHA20_BUFSIZE <= CHACHA20_BLOCK_SIZE * 4");
+  _Static_assert (CHACHA20_BUFSIZE % 4 == 0 && CHACHA20_BUFSIZE % 8 == 0,
+		  "CHACHA20_BUFSIZE not multiple of 4 or 8");
+  _Static_assert (CHACHA20_BUFSIZE >= CHACHA20_BLOCK_SIZE * 8,
+		  "CHACHA20_BUFSIZE < CHACHA20_BLOCK_SIZE * 8");
+  const struct cpu_features* cpu_features = __get_cpu_features ();
 
-  __chacha20_sse2_blocks4 (state, dst, src,
-			   CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
+  /* AVX2 version uses vzeroupper, so disable it if RTM is enabled.  */
+  if (CPU_FEATURE_USABLE_P (cpu_features, AVX2)
+      && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER))
+    __chacha20_avx2_blocks8 (state, dst, src,
+			     CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
+  else
+    __chacha20_sse2_blocks4 (state, dst, src,
+			     CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
 }
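
Reviewer note (editorial, not part of the patch): the chacha20_crypt
change above follows glibc's usual inline-dispatch pattern.  A minimal
standalone C sketch of the same logic, with plain int flags standing in
for CPU_FEATURE_USABLE_P (..., AVX2) and
CPU_FEATURES_ARCH_P (..., Prefer_No_VZEROUPPER):

#include <stddef.h>
#include <stdint.h>

/* The real declarations live in sysdeps/x86_64/chacha20_arch.h.  */
extern unsigned int __chacha20_sse2_blocks4 (uint32_t *state, uint8_t *dst,
                                             const uint8_t *src, size_t nblks);
extern unsigned int __chacha20_avx2_blocks8 (uint32_t *state, uint8_t *dst,
                                             const uint8_t *src, size_t nblks);

/* Sketch: pick the 8-block AVX2 kernel when usable, otherwise fall
   back to the 4-block SSE2 one.  */
static void
chacha20_crypt_sketch (uint32_t *state, uint8_t *dst, const uint8_t *src,
                       size_t bytes, int have_avx2, int prefer_no_vzeroupper)
{
  size_t nblks = bytes / 64;    /* CHACHA20_BLOCK_SIZE is 64 bytes.  */

  /* The AVX2 kernel executes vzeroupper/vzeroall, which aborts RTM
     transactions, hence the Prefer_No_VZEROUPPER check.  */
  if (have_avx2 && !prefer_no_vzeroupper)
    __chacha20_avx2_blocks8 (state, dst, src, nblks);  /* nblks % 8 == 0 */
  else
    __chacha20_sse2_blocks4 (state, dst, src, nblks);  /* nblks % 4 == 0 */
}

The updated _Static_asserts guarantee the nblks precondition: the
internal buffer is a multiple of eight 64-byte blocks, so either kernel
can consume it whole.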