From patchwork Mon Apr 7 05:57:18 2014
X-Patchwork-Submitter: ling.ma.program@gmail.com
X-Patchwork-Id: 415
From: ling.ma.program@gmail.com
To: libc-alpha@sourceware.org
Cc: rth@twiddle.net, aj@suse.com, neleai@seznam.cz,
 liubov.dmitrieva@gmail.com, hjl.tools@gmail.com, Ling Ma
Subject: Re: [PATCH RFC] Improve 64bit memset performance for Haswell CPU
 with AVX2 instruction
Date: Mon, 7 Apr 2014 01:57:18 -0400
Message-Id: <1396850238-29041-1-git-send-email-ling.ma@alipay.com>

From: Ling Ma

This patch takes advantage of Haswell memory bandwidth: it reduces branch
mispredictions by avoiding branch instructions and forces the destination
to be aligned using AVX instructions.  The CPU2006 403.gcc benchmark also
indicates that this patch improves performance by 22.9% to 59% compared
with the original SSE2 memset.

                memset-AVX    memset-SSE2   AVX vs SSE2
gcc.166.i       1877958334     2495113045   1.328630673
gcc.200.i       3507448572     4869401205   1.388302952
gcc.cp-decl.i   1742510758     2282801367   1.310064432
gcc.c-typeck.i  9546331594    12158804366   1.273662479
gcc.expr2.i     5067111165     6470777800   1.277015165
gcc.expr.i      3434703577     4420252661   1.286938614
gcc.g23.i       5141096267     6318410858   1.22900069
gcc.s04.i       8652255048    10923077090   1.262454358
gcc.scilab.i    1209694573     1925173588   1.591454265
---
We fixed the code and re-tested all cases, including SSE2 and AVX2.
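As background for the small-size paths in the patch below: instead of a byte
loop, each size class is covered by a store from the front of the buffer and
a store from the end, which simply overlap when the length is not a multiple
of the store width.  A minimal C sketch of the idea for the 8..16-byte case
(the helper name `memset_head_tail` and the splat constant are ours for
illustration, not part of the patch):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Fill n bytes (8 <= n <= 16) with byte c using exactly two 8-byte
   stores, no loop and no extra branch.  The stores overlap when n < 16,
   which is harmless because they write the same value.  This mirrors the
   L(less_16bytes) path: mov %rcx,(%rdi) / mov %rcx,-0x08(%r8).  */
void memset_head_tail (unsigned char *p, int c, size_t n)
{
  /* Splat the fill byte across a 64-bit word, as the vmovd/vpshufb
     sequence does for %rcx in the assembly.  */
  uint64_t v = 0x0101010101010101ULL * (uint8_t) c;
  memcpy (p, &v, 8);          /* bytes [0, 8)    */
  memcpy (p + n - 8, &v, 8);  /* bytes [n-8, n)  */
}
```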
 ChangeLog                              |   9 ++
 sysdeps/x86_64/multiarch/Makefile      |   4 +-
 sysdeps/x86_64/multiarch/memset-avx2.S | 192 +++++++++++++++++++++++++++++++++
 sysdeps/x86_64/multiarch/memset.S      |  59 ++++++++++
 sysdeps/x86_64/multiarch/memset_chk.S  |  44 ++++++++
 5 files changed, 307 insertions(+), 1 deletion(-)
 create mode 100644 sysdeps/x86_64/multiarch/memset-avx2.S
 create mode 100644 sysdeps/x86_64/multiarch/memset.S
 create mode 100644 sysdeps/x86_64/multiarch/memset_chk.S

diff --git a/ChangeLog b/ChangeLog
index ab23a3a..851fe9e 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,12 @@
+2014-04-04  Ling Ma
+
+	* sysdeps/x86_64/multiarch/Makefile: Add memset-avx2
+	* sysdeps/x86_64/multiarch/memset-avx2.S: New file for AVX2 memset
+	* sysdeps/x86_64/multiarch/memset.S: New file for multiple memset
+	versions
+	* sysdeps/x86_64/multiarch/memset_chk.S: New file for multiple memset_chk
+	versions
+
 2014-04-04  Sihai Yao

 	* sysdeps/x86_64/multiarch/ifunc-defines.sym: Add COMMON_CPU_INDEX_7 and
diff --git a/sysdeps/x86_64/multiarch/Makefile b/sysdeps/x86_64/multiarch/Makefile
index 57a3c13..42df96f 100644
--- a/sysdeps/x86_64/multiarch/Makefile
+++ b/sysdeps/x86_64/multiarch/Makefile
@@ -17,7 +17,9 @@ sysdep_routines += strncat-c stpncpy-c strncpy-c strcmp-ssse3 \
 		   strcpy-sse2-unaligned strncpy-sse2-unaligned \
 		   stpcpy-sse2-unaligned stpncpy-sse2-unaligned \
 		   strcat-sse2-unaligned strncat-sse2-unaligned \
-		   strchr-sse2-no-bsf memcmp-ssse3 strstr-sse2-unaligned
+		   strchr-sse2-no-bsf memcmp-ssse3 strstr-sse2-unaligned \
+		   memset-avx2
+
 ifeq (yes,$(config-cflags-sse4))
 sysdep_routines += strcspn-c strpbrk-c strspn-c varshift
 CFLAGS-varshift.c += -msse4
diff --git a/sysdeps/x86_64/multiarch/memset-avx2.S b/sysdeps/x86_64/multiarch/memset-avx2.S
new file mode 100644
index 0000000..5d4a487
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memset-avx2.S
@@ -0,0 +1,192 @@
+/* memset with AVX2
+   Copyright (C) 2014 Free Software Foundation, Inc.
+   Contributed by Alibaba Group.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+
+#if !defined NOT_IN_libc
+
+#include "asm-syntax.h"
+#ifndef ALIGN
+# define ALIGN(n)	.p2align n
+#endif
+#ifndef MEMSET
+# define MEMSET		__memset_avx2
+# define MEMSET_CHK	__memset_chk_avx2
+#endif
+
+	.section .text.avx2,"ax",@progbits
+#if defined PIC
+ENTRY (MEMSET_CHK)
+	cmpq	%rdx, %rcx
+	jb	HIDDEN_JUMPTARGET (__chk_fail)
+END (MEMSET_CHK)
+#endif
+
+ENTRY (MEMSET)
+	vpxor	%xmm0, %xmm0, %xmm0
+	vmovd	%esi, %xmm1
+	lea	(%rdi, %rdx), %r8
+	vpshufb	%xmm0, %xmm1, %xmm0
+	mov	%rdi, %rax
+	cmp	$256, %rdx
+	jae	L(256bytesormore)
+	vmovd	%xmm0, %rcx
+	cmp	$128, %rdx
+	jb	L(less_128bytes)
+	vmovups	%xmm0, (%rdi)
+	vmovups	%xmm0, 0x10(%rdi)
+	vmovups	%xmm0, 0x20(%rdi)
+	vmovups	%xmm0, 0x30(%rdi)
+	vmovups	%xmm0, 0x40(%rdi)
+	vmovups	%xmm0, 0x50(%rdi)
+	vmovups	%xmm0, 0x60(%rdi)
+	vmovups	%xmm0, 0x70(%rdi)
+	vmovups	%xmm0, -0x80(%r8)
+	vmovups	%xmm0, -0x70(%r8)
+	vmovups	%xmm0, -0x60(%r8)
+	vmovups	%xmm0, -0x50(%r8)
+	vmovups	%xmm0, -0x40(%r8)
+	vmovups	%xmm0, -0x30(%r8)
+	vmovups	%xmm0, -0x20(%r8)
+	vmovups	%xmm0, -0x10(%r8)
+	ret
+	ALIGN(4)
+L(less_128bytes):
+	cmp	$64, %edx
+	jb	L(less_64bytes)
+	vmovups	%xmm0, (%rdi)
+	vmovups	%xmm0, 0x10(%rdi)
+	vmovups	%xmm0, 0x20(%rdi)
+	vmovups	%xmm0, 0x30(%rdi)
+	vmovups	%xmm0, -0x40(%r8)
+	vmovups	%xmm0, -0x30(%r8)
+	vmovups	%xmm0, -0x20(%r8)
+	vmovups	%xmm0, -0x10(%r8)
+	ret
+	ALIGN(4)
+L(less_64bytes):
+	cmp	$32, %edx
+	jb	L(less_32bytes)
+	vmovups	%xmm0, (%rdi)
+	vmovups	%xmm0, 0x10(%rdi)
+	vmovups	%xmm0, -0x20(%r8)
+	vmovups	%xmm0, -0x10(%r8)
+	ret
+	ALIGN(4)
+L(less_32bytes):
+	cmp	$16, %edx
+	jb	L(less_16bytes)
+	vmovups	%xmm0, (%rdi)
+	vmovups	%xmm0, -0x10(%r8)
+	ret
+	ALIGN(4)
+L(less_16bytes):
+	cmp	$8, %edx
+	jb	L(less_8bytes)
+	mov	%rcx, (%rdi)
+	mov	%rcx, -0x08(%r8)
+	ret
+	ALIGN(4)
+L(less_8bytes):
+	cmp	$4, %edx
+	jb	L(less_4bytes)
+	mov	%ecx, (%rdi)
+	mov	%ecx, -0x04(%r8)
+	ALIGN(4)
+L(less_4bytes):
+	cmp	$2, %edx
+	jb	L(less_2bytes)
+	mov	%cx, (%rdi)
+	mov	%cx, -0x02(%r8)
+	ret
+	ALIGN(4)
+L(less_2bytes):
+	cmp	$1, %edx
+	jb	L(less_1bytes)
+	mov	%cl, (%rdi)
+L(less_1bytes):
+	ret
+
+	ALIGN(4)
+L(256bytesormore):
+	vinserti128 $1, %xmm0, %ymm0, %ymm0
+	vmovups	%ymm0, (%rdi)
+	mov	%rdi, %r9
+	and	$-0x20, %rdi
+	add	$32, %rdi
+	sub	%rdi, %r9
+	add	%r9, %rdx
+	cmp	$4096, %rdx
+	ja	L(gobble_data)
+
+	sub	$0x80, %rdx
+L(gobble_128_loop):
+	vmovaps	%ymm0, (%rdi)
+	vmovaps	%ymm0, 0x20(%rdi)
+	vmovaps	%ymm0, 0x40(%rdi)
+	vmovaps	%ymm0, 0x60(%rdi)
+	lea	0x80(%rdi), %rdi
+	sub	$0x80, %rdx
+	jae	L(gobble_128_loop)
+	vmovups	%ymm0, -0x80(%r8)
+	vmovups	%ymm0, -0x60(%r8)
+	vmovups	%ymm0, -0x40(%r8)
+	vmovups	%ymm0, -0x20(%r8)
+	vzeroupper
+	ret
+
+	ALIGN(4)
+L(gobble_data):
+#ifdef SHARED_CACHE_SIZE_HALF
+	mov	$SHARED_CACHE_SIZE_HALF, %r9
+#else
+	mov	__x86_shared_cache_size_half(%rip), %r9
+#endif
+	shl	$4, %r9
+	cmp	%r9, %rdx
+	ja	L(gobble_big_data)
+	mov	%rax, %r9
+	mov	%esi, %eax
+	mov	%rdx, %rcx
+	rep	stosb
+	mov	%r9, %rax
+	vzeroupper
+	ret
+
+	ALIGN(4)
+L(gobble_big_data):
+	sub	$0x80, %rdx
+L(gobble_big_data_loop):
+	vmovntdq	%ymm0, (%rdi)
+	vmovntdq	%ymm0, 0x20(%rdi)
+	vmovntdq	%ymm0, 0x40(%rdi)
+	vmovntdq	%ymm0, 0x60(%rdi)
+	lea	0x80(%rdi), %rdi
+	sub	$0x80, %rdx
+	jae	L(gobble_big_data_loop)
+	vmovups	%ymm0, -0x80(%r8)
+	vmovups	%ymm0, -0x60(%r8)
+	vmovups	%ymm0, -0x40(%r8)
+	vmovups	%ymm0, -0x20(%r8)
+	vzeroupper
+	sfence
+	ret
+
+END (MEMSET)
+#endif
diff --git a/sysdeps/x86_64/multiarch/memset.S b/sysdeps/x86_64/multiarch/memset.S
new file mode 100644
index 0000000..df903af
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memset.S
@@ -0,0 +1,59 @@
+/* Multiple versions of memset
+   Copyright (C) 2014 Free Software Foundation, Inc.
+   Contributed by Alibaba Group.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <shlib-compat.h>
+#include <init-arch.h>
+
+/* Define multiple versions only for the definition in lib.  */
+#ifndef NOT_IN_libc
+ENTRY(memset)
+	.type	memset, @gnu_indirect_function
+	cmpl	$0, __cpu_features+KIND_OFFSET(%rip)
+	jne	1f
+	call	__init_cpu_features
+1:	leaq	__memset_sse2(%rip), %rax
+	testl	$bit_AVX2_Usable, __cpu_features+FEATURE_OFFSET+index_AVX2_Usable(%rip)
+	jz	2f
+	leaq	__memset_avx2(%rip), %rax
+2:	ret
+END(memset)
+#endif
+
+#if !defined NOT_IN_libc
+# undef memset
+# define memset __memset_sse2
+
+# undef __memset_chk
+# define __memset_chk __memset_chk_sse2
+
+# ifdef SHARED
+# undef libc_hidden_builtin_def
+/* It doesn't make sense to send libc-internal memset calls through a PLT.
+   The speedup we get from using GPR instruction is likely eaten away
+   by the indirect call in the PLT.  */
+# define libc_hidden_builtin_def(name) \
+	.globl __GI_memset; __GI_memset = __memset_sse2
+# endif
+
+# undef strong_alias
+# define strong_alias(original, alias)
+#endif
+
+#include "../memset.S"
diff --git a/sysdeps/x86_64/multiarch/memset_chk.S b/sysdeps/x86_64/multiarch/memset_chk.S
new file mode 100644
index 0000000..f048dac
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memset_chk.S
@@ -0,0 +1,44 @@
+/* Multiple versions of memset_chk
+   Copyright (C) 2014 Free Software Foundation, Inc.
+   Contributed by Alibaba Group.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <init-arch.h>
+
+/* Define multiple versions only for the definition in lib.  */
+#ifndef NOT_IN_libc
+# ifdef SHARED
+ENTRY(__memset_chk)
+	.type	__memset_chk, @gnu_indirect_function
+	cmpl	$0, __cpu_features+KIND_OFFSET(%rip)
+	jne	1f
+	call	__init_cpu_features
+1:	leaq	__memset_chk_sse2(%rip), %rax
+	testl	$bit_AVX2_Usable, __cpu_features+FEATURE_OFFSET+index_AVX2_Usable(%rip)
+	jz	2f
+	leaq	__memset_chk_avx2(%rip), %rax
+2:	ret
+END(__memset_chk)
+
+strong_alias (__memset_chk, __memset_zero_constant_len_parameter)
+	.section .gnu.warning.__memset_zero_constant_len_parameter
+	.string "memset used with constant zero length parameter; this could be due to transposed parameters"
+# else
+# include "../memset_chk.S"
+# endif
+#endif
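For readers unfamiliar with IFUNC resolvers, the dispatch logic in memset.S
and memset_chk.S above can be sketched in C.  This is only an illustration
under stated assumptions: `cpu_has_avx2` stands in for glibc's
`__cpu_features` AVX2_Usable bit, and the two stubs stand in for
`__memset_sse2`/`__memset_avx2`; none of these names are part of the patch.

```c
#include <stddef.h>
#include <string.h>

typedef void *(*memset_fn) (void *, int, size_t);

static int cpu_has_avx2 = 0;   /* assumption: feature bit probed at startup */

/* Stubs standing in for the real implementations; both just defer to
   the libc memset so the sketch stays runnable.  */
static void *memset_sse2_stub (void *d, int c, size_t n)
{ return memset (d, c, n); }
static void *memset_avx2_stub (void *d, int c, size_t n)
{ return memset (d, c, n); }

/* Mirrors the leaq/testl/jz sequence in the resolver: default to the
   SSE2 routine, upgrade to AVX2 only when the feature bit is set.  The
   dynamic linker calls the resolver once and binds the result, so the
   feature test is not paid on every memset call.  */
memset_fn resolve_memset (void)
{
  memset_fn fn = memset_sse2_stub;
  if (cpu_has_avx2)
    fn = memset_avx2_stub;
  return fn;
}
```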