From patchwork Wed Apr 13 20:23:58 2022
X-Patchwork-Submitter: Adhemerval Zanella Netto
X-Patchwork-Id: 52879
From: Adhemerval Zanella Netto
To: libc-alpha@sourceware.org
Subject: [PATCH 4/7] x86: Add SSSE3 optimized chacha20
Date: Wed, 13 Apr 2022 17:23:58 -0300
Message-Id: <20220413202401.408267-5-adhemerval.zanella@linaro.org>
In-Reply-To: <20220413202401.408267-1-adhemerval.zanella@linaro.org>
References: <20220413202401.408267-1-adhemerval.zanella@linaro.org>

It adds a vectorized ChaCha20 implementation based on libgcrypt
cipher/chacha20-amd64-ssse3.S.  It is used only if SSSE3 is supported
and enabled by the architecture.

On a Ryzen 9 5900X it shows the following improvements (using
formatted bench-arc4random data):

GENERIC
Function                                   MB/s
--------------------------------------------------
arc4random [single-thread]               375.06
arc4random_buf(0) [single-thread]        498.50
arc4random_buf(16) [single-thread]       576.86
arc4random_buf(32) [single-thread]       615.76
arc4random_buf(64) [single-thread]       633.97
--------------------------------------------------
arc4random [multi-thread]                359.86
arc4random_buf(0) [multi-thread]         479.27
arc4random_buf(16) [multi-thread]        543.65
arc4random_buf(32) [multi-thread]        581.98
arc4random_buf(64) [multi-thread]        603.01
--------------------------------------------------

SSSE3:
Function                                   MB/s
--------------------------------------------------
arc4random [single-thread]               576.55
arc4random_buf(0) [single-thread]        961.77
arc4random_buf(16) [single-thread]      1309.38
arc4random_buf(32) [single-thread]      1558.69
arc4random_buf(64) [single-thread]      1728.54
--------------------------------------------------
arc4random [multi-thread]                589.52
arc4random_buf(0) [multi-thread]         967.39
arc4random_buf(16) [multi-thread]       1319.27
arc4random_buf(32) [multi-thread]       1552.96
arc4random_buf(64) [multi-thread]       1734.27
--------------------------------------------------

Checked on x86_64-linux-gnu.
---
 LICENSES                        |  20 ++
 sysdeps/generic/chacha20_arch.h |  24 +++
 sysdeps/x86_64/Makefile         |   6 +
 sysdeps/x86_64/chacha20-ssse3.S | 330 ++++++++++++++++++++++++++++++++
 sysdeps/x86_64/chacha20_arch.h  |  42 ++++
 5 files changed, 422 insertions(+)
 create mode 100644 sysdeps/generic/chacha20_arch.h
 create mode 100644 sysdeps/x86_64/chacha20-ssse3.S
 create mode 100644 sysdeps/x86_64/chacha20_arch.h

diff --git a/LICENSES b/LICENSES
index 530893b1dc..2563abd9e2 100644
--- a/LICENSES
+++ b/LICENSES
@@ -389,3 +389,23 @@ Copyright 2001 by Stephen L. Moshier
 You should have received a copy of the GNU Lesser General Public
 License along with this library; if not, see
 .  */
+
+sysdeps/x86_64/chacha20-ssse3.S import code from libgcrypt, with the
+following notices:
+
+Copyright (C) 2017-2019 Jussi Kivilinna
+
+This file is part of Libgcrypt.
+
+Libgcrypt is free software; you can redistribute it and/or modify
+it under the terms of the GNU Lesser General Public License as
+published by the Free Software Foundation; either version 2.1 of
+the License, or (at your option) any later version.
+
+Libgcrypt is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this program; if not, see .

diff --git a/sysdeps/generic/chacha20_arch.h b/sysdeps/generic/chacha20_arch.h
new file mode 100644
index 0000000000..d7200ac583
--- /dev/null
+++ b/sysdeps/generic/chacha20_arch.h
@@ -0,0 +1,24 @@
+/* Chacha20 implementation, generic interface.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   .  */
+
+static inline void
+chacha20_crypt (struct chacha20_state *state, uint8_t *dst,
+		const uint8_t *src, size_t bytes)
+{
+  chacha20_crypt_generic (state, dst, src, bytes);
+}

diff --git a/sysdeps/x86_64/Makefile b/sysdeps/x86_64/Makefile
index 79365aff2a..f43b6a1180 100644
--- a/sysdeps/x86_64/Makefile
+++ b/sysdeps/x86_64/Makefile
@@ -5,6 +5,12 @@ ifeq ($(subdir),csu)
 gen-as-const-headers += link-defines.sym
 endif
 
+ifeq ($(subdir),stdlib)
+sysdep_routines += \
+  chacha20-ssse3 \
+  # sysdep_routines
+endif
+
 ifeq ($(subdir),gmon)
 sysdep_routines += _mcount
 # We cannot compile _mcount.S with -pg because that would create

diff --git a/sysdeps/x86_64/chacha20-ssse3.S b/sysdeps/x86_64/chacha20-ssse3.S
new file mode 100644
index 0000000000..f221daf634
--- /dev/null
+++ b/sysdeps/x86_64/chacha20-ssse3.S
@@ -0,0 +1,330 @@
+/* Optimized SSSE3 implementation of ChaCha20 cipher.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   .  */
+
+/* Based on D. J. Bernstein reference implementation at
+   http://cr.yp.to/chacha.html:
+
+   chacha-regs.c version 20080118
+   D. J. Bernstein
+   Public domain.  */
+
+#include 
+
+#ifdef PIC
+# define rRIP (%rip)
+#else
+# define rRIP
+#endif
+
+/* register macros */
+#define INPUT %rdi
+#define DST   %rsi
+#define SRC   %rdx
+#define NBLKS %rcx
+#define ROUND %eax
+
+/* stack structure */
+#define STACK_VEC_X12 (16)
+#define STACK_VEC_X13 (16 + STACK_VEC_X12)
+#define STACK_TMP     (16 + STACK_VEC_X13)
+#define STACK_TMP1    (16 + STACK_TMP)
+#define STACK_TMP2    (16 + STACK_TMP1)
+
+#define STACK_MAX     (16 + STACK_TMP2)
+
+/* vector registers */
+#define X0  %xmm0
+#define X1  %xmm1
+#define X2  %xmm2
+#define X3  %xmm3
+#define X4  %xmm4
+#define X5  %xmm5
+#define X6  %xmm6
+#define X7  %xmm7
+#define X8  %xmm8
+#define X9  %xmm9
+#define X10 %xmm10
+#define X11 %xmm11
+#define X12 %xmm12
+#define X13 %xmm13
+#define X14 %xmm14
+#define X15 %xmm15
+
+/**********************************************************************
+  helper macros
+ **********************************************************************/
+
+/* 4x4 32-bit integer matrix transpose */
+#define transpose_4x4(x0, x1, x2, x3, t1, t2, t3) \
+	movdqa x0, t2; \
+	punpckhdq x1, t2; \
+	punpckldq x1, x0; \
+	\
+	movdqa x2, t1; \
+	punpckldq x3, t1; \
+	punpckhdq x3, x2; \
+	\
+	movdqa x0, x1; \
+	punpckhqdq t1, x1; \
+	punpcklqdq t1, x0; \
+	\
+	movdqa t2, x3; \
+	punpckhqdq x2, x3; \
+	punpcklqdq x2, t2; \
+	movdqa t2, x2;
+
+/* fill xmm register with 32-bit value from memory */
+#define pbroadcastd(mem32, xreg) \
+	movd mem32, xreg; \
+	pshufd $0, xreg, xreg;
+
+/* xor with unaligned memory operand */
+#define pxor_u(umem128, xreg, t) \
+	movdqu umem128, t; \
+	pxor t, xreg;
+
+/* xor register with unaligned src and save to unaligned dst */
+#define xor_src_dst(dst, src, offset, xreg, t) \
+	pxor_u(offset(src), xreg, t); \
+	movdqu xreg, offset(dst);
+
+#define clear(x) pxor x,x;
+
+/**********************************************************************
+  4-way chacha20
+ **********************************************************************/
+
+#define ROTATE2(v1,v2,c,tmp1,tmp2) \
+	movdqa v1, tmp1; \
+	movdqa v2, tmp2; \
+	psrld $(32 - (c)), v1; \
+	pslld $(c), tmp1; \
+	paddb tmp1, v1; \
+	psrld $(32 - (c)), v2; \
+	pslld $(c), tmp2; \
+	paddb tmp2, v2;
+
+#define ROTATE_SHUF_2(v1,v2,shuf) \
+	pshufb shuf, v1; \
+	pshufb shuf, v2;
+
+#define XOR(ds,s) \
+	pxor s, ds;
+
+#define PLUS(ds,s) \
+	paddd s, ds;
+
+#define QUARTERROUND2(a1,b1,c1,d1,a2,b2,c2,d2,ign,tmp1,tmp2,\
+		      interleave_op1,interleave_op2) \
+	movdqa L(shuf_rol16) rRIP, tmp1; \
+	interleave_op1; \
+	PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
+	ROTATE_SHUF_2(d1, d2, tmp1); \
+	PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
+	ROTATE2(b1, b2, 12, tmp1, tmp2); \
+	movdqa L(shuf_rol8) rRIP, tmp1; \
+	interleave_op2; \
+	PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
+	ROTATE_SHUF_2(d1, d2, tmp1); \
+	PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
+	ROTATE2(b1, b2, 7, tmp1, tmp2);
+
+	.text
+
+chacha20_data:
+	.align 16
+L(shuf_rol16):
+	.byte 2,3,0,1,6,7,4,5,10,11,8,9,14,15,12,13
+L(shuf_rol8):
+	.byte 3,0,1,2,7,4,5,6,11,8,9,10,15,12,13,14
+L(counter1):
+	.long 1,0,0,0
+L(inc_counter):
+	.long 0,1,2,3
+L(unsigned_cmp):
+	.long 0x80000000,0x80000000,0x80000000,0x80000000
+
+ENTRY (__chacha20_ssse3_blocks8)
+	/* input:
+	 *	%rdi: input
+	 *	%rsi: dst
+	 *	%rdx: src
+	 *	%rcx: nblks (multiple of 4)
+	 */
+
+	pushq %rbp;
+	cfi_adjust_cfa_offset(8);
+	cfi_rel_offset(rbp, 0)
+	movq %rsp, %rbp;
+	cfi_def_cfa_register(%rbp);
+
+	subq $STACK_MAX, %rsp;
+	andq $~15, %rsp;
+
+L(loop4):
+	mov $20, ROUND;
+
+	/* Construct counter vectors X12 and X13 */
+	movdqa L(inc_counter) rRIP, X0;
+	movdqa L(unsigned_cmp) rRIP, X2;
+	pbroadcastd((12 * 4)(INPUT), X12);
+	pbroadcastd((13 * 4)(INPUT), X13);
+	paddd X0, X12;
+	movdqa X12, X1;
+	pxor X2, X0;
+	pxor X2, X1;
+	pcmpgtd X1, X0;
+	psubd X0, X13;
+	movdqa X12, (STACK_VEC_X12)(%rsp);
+	movdqa X13, (STACK_VEC_X13)(%rsp);
+
+	/* Load vectors */
+	pbroadcastd((0 * 4)(INPUT), X0);
+	pbroadcastd((1 * 4)(INPUT), X1);
+	pbroadcastd((2 * 4)(INPUT), X2);
+	pbroadcastd((3 * 4)(INPUT), X3);
+	pbroadcastd((4 * 4)(INPUT), X4);
+	pbroadcastd((5 * 4)(INPUT), X5);
+	pbroadcastd((6 * 4)(INPUT), X6);
+	pbroadcastd((7 * 4)(INPUT), X7);
+	pbroadcastd((8 * 4)(INPUT), X8);
+	pbroadcastd((9 * 4)(INPUT), X9);
+	pbroadcastd((10 * 4)(INPUT), X10);
+	pbroadcastd((11 * 4)(INPUT), X11);
+	pbroadcastd((14 * 4)(INPUT), X14);
+	pbroadcastd((15 * 4)(INPUT), X15);
+	movdqa X11, (STACK_TMP)(%rsp);
+	movdqa X15, (STACK_TMP1)(%rsp);
+
+L(round2_4):
+	QUARTERROUND2(X0, X4,  X8, X12,   X1, X5,  X9, X13, tmp:=,X11,X15,,)
+	movdqa (STACK_TMP)(%rsp), X11;
+	movdqa (STACK_TMP1)(%rsp), X15;
+	movdqa X8, (STACK_TMP)(%rsp);
+	movdqa X9, (STACK_TMP1)(%rsp);
+	QUARTERROUND2(X2, X6, X10, X14,   X3, X7, X11, X15, tmp:=,X8,X9,,)
+	QUARTERROUND2(X0, X5, X10, X15,   X1, X6, X11, X12, tmp:=,X8,X9,,)
+	movdqa (STACK_TMP)(%rsp), X8;
+	movdqa (STACK_TMP1)(%rsp), X9;
+	movdqa X11, (STACK_TMP)(%rsp);
+	movdqa X15, (STACK_TMP1)(%rsp);
+	QUARTERROUND2(X2, X7,  X8, X13,   X3, X4,  X9, X14, tmp:=,X11,X15,,)
+	sub $2, ROUND;
+	jnz L(round2_4);
+
+	/* tmp := X15 */
+	movdqa (STACK_TMP)(%rsp), X11;
+	pbroadcastd((0 * 4)(INPUT), X15);
+	PLUS(X0, X15);
+	pbroadcastd((1 * 4)(INPUT), X15);
+	PLUS(X1, X15);
+	pbroadcastd((2 * 4)(INPUT), X15);
+	PLUS(X2, X15);
+	pbroadcastd((3 * 4)(INPUT), X15);
+	PLUS(X3, X15);
+	pbroadcastd((4 * 4)(INPUT), X15);
+	PLUS(X4, X15);
+	pbroadcastd((5 * 4)(INPUT), X15);
+	PLUS(X5, X15);
+	pbroadcastd((6 * 4)(INPUT), X15);
+	PLUS(X6, X15);
+	pbroadcastd((7 * 4)(INPUT), X15);
+	PLUS(X7, X15);
+	pbroadcastd((8 * 4)(INPUT), X15);
+	PLUS(X8, X15);
+	pbroadcastd((9 * 4)(INPUT), X15);
+	PLUS(X9, X15);
+	pbroadcastd((10 * 4)(INPUT), X15);
+	PLUS(X10, X15);
+	pbroadcastd((11 * 4)(INPUT), X15);
+	PLUS(X11, X15);
+	movdqa (STACK_VEC_X12)(%rsp), X15;
+	PLUS(X12, X15);
+	movdqa (STACK_VEC_X13)(%rsp), X15;
+	PLUS(X13, X15);
+	movdqa X13, (STACK_TMP)(%rsp);
+	pbroadcastd((14 * 4)(INPUT), X15);
+	PLUS(X14, X15);
+	movdqa (STACK_TMP1)(%rsp), X15;
+	movdqa X14, (STACK_TMP1)(%rsp);
+	pbroadcastd((15 * 4)(INPUT), X13);
+	PLUS(X15, X13);
+	movdqa X15, (STACK_TMP2)(%rsp);
+
+	/* Update counter */
+	addq $4, (12 * 4)(INPUT);
+
+	transpose_4x4(X0, X1, X2, X3, X13, X14, X15);
+	xor_src_dst(DST, SRC, (64 * 0 + 16 * 0), X0, X15);
+	xor_src_dst(DST, SRC, (64 * 1 + 16 * 0), X1, X15);
+	xor_src_dst(DST, SRC, (64 * 2 + 16 * 0), X2, X15);
+	xor_src_dst(DST, SRC, (64 * 3 + 16 * 0), X3, X15);
+	transpose_4x4(X4, X5, X6, X7, X0, X1, X2);
+	movdqa (STACK_TMP)(%rsp), X13;
+	movdqa (STACK_TMP1)(%rsp), X14;
+	movdqa (STACK_TMP2)(%rsp), X15;
+	xor_src_dst(DST, SRC, (64 * 0 + 16 * 1), X4, X0);
+	xor_src_dst(DST, SRC, (64 * 1 + 16 * 1), X5, X0);
+	xor_src_dst(DST, SRC, (64 * 2 + 16 * 1), X6, X0);
+	xor_src_dst(DST, SRC, (64 * 3 + 16 * 1), X7, X0);
+	transpose_4x4(X8, X9, X10, X11, X0, X1, X2);
+	xor_src_dst(DST, SRC, (64 * 0 + 16 * 2), X8, X0);
+	xor_src_dst(DST, SRC, (64 * 1 + 16 * 2), X9, X0);
+	xor_src_dst(DST, SRC, (64 * 2 + 16 * 2), X10, X0);
+	xor_src_dst(DST, SRC, (64 * 3 + 16 * 2), X11, X0);
+	transpose_4x4(X12, X13, X14, X15, X0, X1, X2);
+	xor_src_dst(DST, SRC, (64 * 0 + 16 * 3), X12, X0);
+	xor_src_dst(DST, SRC, (64 * 1 + 16 * 3), X13, X0);
+	xor_src_dst(DST, SRC, (64 * 2 + 16 * 3), X14, X0);
+	xor_src_dst(DST, SRC, (64 * 3 + 16 * 3), X15, X0);
+
+	sub $4, NBLKS;
+	lea (4 * 64)(DST), DST;
+	lea (4 * 64)(SRC), SRC;
+	jnz L(loop4);
+
+	/* clear the used vector registers and stack */
+	clear(X0);
+	movdqa X0, (STACK_VEC_X12)(%rsp);
+	movdqa X0, (STACK_VEC_X13)(%rsp);
+	movdqa X0, (STACK_TMP)(%rsp);
+	movdqa X0, (STACK_TMP1)(%rsp);
+	movdqa X0, (STACK_TMP2)(%rsp);
+	clear(X1);
+	clear(X2);
+	clear(X3);
+	clear(X4);
+	clear(X5);
+	clear(X6);
+	clear(X7);
+	clear(X8);
+	clear(X9);
+	clear(X10);
+	clear(X11);
+	clear(X12);
+	clear(X13);
+	clear(X14);
+	clear(X15);
+
+	/* eax zeroed by round loop.  */
+	leave;
+	cfi_adjust_cfa_offset(-8)
+	cfi_def_cfa_register(%rsp);
+	ret;
+	int3;
+END (__chacha20_ssse3_blocks8)

diff --git a/sysdeps/x86_64/chacha20_arch.h b/sysdeps/x86_64/chacha20_arch.h
new file mode 100644
index 0000000000..37a4fdfb1f
--- /dev/null
+++ b/sysdeps/x86_64/chacha20_arch.h
@@ -0,0 +1,42 @@
+/* Chacha20 implementation, used on arc4random.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   .  */
+
+#include 
+#include 
+#include 
+
+unsigned int __chacha20_ssse3_blocks8 (uint32_t *state, uint8_t *dst,
+				       const uint8_t *src, size_t nblks);
+
+static inline void
+chacha20_crypt (struct chacha20_state *state, uint8_t *dst, const uint8_t *src,
+		size_t bytes)
+{
+  if (CPU_FEATURE_USABLE_P (cpu_features, SSSE3)
+      && bytes >= CHACHA20_BLOCK_SIZE * 4)
+    {
+      size_t nblocks = bytes / CHACHA20_BLOCK_SIZE;
+      nblocks -= nblocks % 4;
+      __chacha20_ssse3_blocks8 (state->ctx, dst, src, nblocks);
+      bytes -= nblocks * CHACHA20_BLOCK_SIZE;
+      dst += nblocks * CHACHA20_BLOCK_SIZE;
+      src += nblocks * CHACHA20_BLOCK_SIZE;
+    }
+
+  if (bytes > 0)
+    chacha20_crypt_generic (state, dst, src, bytes);
+}
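Reviewer note (not part of the patch): for anyone checking the QUARTERROUND2 macro against the reference, here is a scalar C sketch of the ChaCha20 quarter round it vectorizes. The SSSE3 code runs the same add/xor/rotate sequence on whole XMM registers, two column pairs at a time, with the 16- and 8-bit rotates done via pshufb. Function names below are illustrative only.

```c
#include <stdint.h>

/* Rotate a 32-bit word left by c bits (what ROTATE2/ROTATE_SHUF_2 do
   per lane in the assembly).  */
static uint32_t
rotl32 (uint32_t v, int c)
{
  return (v << c) | (v >> (32 - c));
}

/* One ChaCha20 quarter round on a single column (a, b, c, d).  */
static void
quarter_round (uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
{
  *a += *b; *d ^= *a; *d = rotl32 (*d, 16);
  *c += *d; *b ^= *c; *b = rotl32 (*b, 12);
  *a += *b; *d ^= *a; *d = rotl32 (*d, 8);
  *c += *d; *b ^= *c; *b = rotl32 (*b, 7);
}
```

The RFC 8439 section 2.1.1 test vector can be used to sanity-check this against the assembly's per-lane behavior.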
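A second note on the counter setup in L(loop4): the code needs an unsigned per-lane "did the low counter word wrap" test, but pcmpgtd is signed only, so both operands are first XORed with L(unsigned_cmp) (0x80000000) to bias them into signed range. A hedged C model of that trick follows; the helper name is mine, not from the patch.

```c
#include <stdint.h>

/* Model of the pcmpgtd carry test: lane i of the low counter word is
   ctr_lo + i, and the lane overflowed iff, unsigned, i > ctr_lo + i.
   XORing both sides with 0x80000000 turns that into the signed compare
   SSE2 provides; psubd of the resulting all-ones mask then adds the
   carry to the high counter word.  */
static uint32_t
counter_carry (uint32_t ctr_lo, uint32_t lane)
{
  uint32_t sum = ctr_lo + lane;            /* paddd */
  int32_t biased_lane = (int32_t) (lane ^ 0x80000000u); /* pxor */
  int32_t biased_sum = (int32_t) (sum ^ 0x80000000u);   /* pxor */
  return biased_lane > biased_sum ? 1u : 0u;            /* pcmpgtd */
}
```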
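Finally, on the dispatch in sysdeps/x86_64/chacha20_arch.h: the SSSE3 kernel only ever consumes whole groups of four 64-byte blocks, and the remainder falls through to chacha20_crypt_generic. A small sketch of that split arithmetic (the helper name is illustrative, not from glibc):

```c
#include <stddef.h>

enum { CHACHA20_BLOCK = 64 };  /* CHACHA20_BLOCK_SIZE in the patch */

/* Bytes the 4-way SSSE3 kernel would consume for a given request;
   anything left over is handled by the generic code.  */
static size_t
ssse3_bytes (size_t bytes)
{
  size_t nblocks = bytes / CHACHA20_BLOCK;
  nblocks -= nblocks % 4;   /* __chacha20_ssse3_blocks8 needs 4n blocks */
  return nblocks * CHACHA20_BLOCK;
}
```

Requests below 256 bytes (4 blocks) never enter the SSSE3 path at all, which matches the `bytes >= CHACHA20_BLOCK_SIZE * 4` guard.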