From patchwork Tue Mar 22 11:08:28 2016
X-Patchwork-Submitter: "Pawar, Amit"
X-Patchwork-Id: 11468
From: "Pawar, Amit"
To: "H.J. Lu"
CC: "libc-alpha@sourceware.org"
Subject: RE: [PATCH x86_64] Update memcpy, mempcpy and memmove selection order for Excavator CPU BZ #19583
Date: Tue, 22 Mar 2016 11:08:28 +0000

> It was done based on the assumption that an AVX-enabled machine has fast
> AVX unaligned loads.  If that isn't true for AMD CPUs, we can enable it
> for all Intel AVX CPUs and you can set it for AMD CPUs properly.

Memcpy still needs to be fixed, otherwise the SSE2_Unaligned version is
selected.
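To make the problem concrete, here is a rough, self-contained C sketch of the
selection order the current memcpy.S IFUNC selector implements (the struct and
flag names are only stand-ins for the real cpu_features bits, not glibc code).
It shows why Excavator, which sets Fast_Unaligned_Load but not
AVX_Fast_Unaligned_Load, ends up with the SSE2_Unaligned variant:

#include <stdio.h>

/* Stand-in for the cpu_features bits consulted by memcpy.S; not the
   real glibc layout.  */
struct features
{
  int avx_fast_unaligned_load;
  int fast_unaligned_load;
  int ssse3;
  int fast_copy_backward;
};

/* Same check order as the current IFUNC selector in memcpy.S.  */
static const char *
select_memcpy (const struct features *f)
{
  if (f->avx_fast_unaligned_load)
    return "__memcpy_avx_unaligned";
  if (f->fast_unaligned_load)
    return "__memcpy_sse2_unaligned";	/* Excavator lands here today.  */
  if (!f->ssse3)
    return "__memcpy_sse2";
  if (f->fast_copy_backward)
    return "__memcpy_ssse3_back";
  return "__memcpy_ssse3";
}

int
main (void)
{
  /* Excavator: Fast_Unaligned_Load set, AVX_Fast_Unaligned_Load clear.  */
  struct features excavator = { 0, 1, 1, 1 };
  printf ("%s\n", select_memcpy (&excavator));	/* __memcpy_sse2_unaligned */
  return 0;
}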
Is it OK to fix it in the following way? If not, please suggest an
alternative.

--Amit

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 1787716..e5c7184 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -159,9 +159,17 @@ init_cpu_features (struct cpu_features *cpu_features)
       if (family == 0x15)
 	{
 	  /* "Excavator"   */
+#if index_arch_Fast_Unaligned_Load != index_arch_Prefer_Fast_Copy_Backward
+# error index_arch_Fast_Unaligned_Load != index_arch_Prefer_Fast_Copy_Backward
+#endif
+#if index_arch_Fast_Unaligned_Load != index_arch_Fast_Copy_Backward
+# error index_arch_Fast_Unaligned_Load != index_arch_Fast_Copy_Backward
+#endif
 	  if (model >= 0x60 && model <= 0x7f)
 	    cpu_features->feature[index_arch_Fast_Unaligned_Load]
-	      |= bit_arch_Fast_Unaligned_Load;
+	      |= (bit_arch_Fast_Unaligned_Load
+		  | bit_arch_Fast_Copy_Backward
+		  | bit_arch_Prefer_Fast_Copy_Backward);
 	}
     }
   else
diff --git a/sysdeps/x86/cpu-features.h b/sysdeps/x86/cpu-features.h
index 0624a92..9750f2f 100644
--- a/sysdeps/x86/cpu-features.h
+++ b/sysdeps/x86/cpu-features.h
@@ -35,6 +35,7 @@
 #define bit_arch_I686			(1 << 15)
 #define bit_arch_Prefer_MAP_32BIT_EXEC	(1 << 16)
 #define bit_arch_Prefer_No_VZEROUPPER	(1 << 17)
+#define bit_arch_Prefer_Fast_Copy_Backward (1 << 18)
 
 /* CPUID Feature flags.  */
 
@@ -101,6 +102,7 @@
 # define index_arch_I686		FEATURE_INDEX_1*FEATURE_SIZE
 # define index_arch_Prefer_MAP_32BIT_EXEC FEATURE_INDEX_1*FEATURE_SIZE
 # define index_arch_Prefer_No_VZEROUPPER FEATURE_INDEX_1*FEATURE_SIZE
+# define index_arch_Prefer_Fast_Copy_Backward FEATURE_INDEX_1*FEATURE_SIZE
 
 
 # if defined (_LIBC) && !IS_IN (nonlib)
@@ -259,6 +261,7 @@ extern const struct cpu_features *__get_cpu_features (void)
 # define index_arch_I686		FEATURE_INDEX_1
 # define index_arch_Prefer_MAP_32BIT_EXEC FEATURE_INDEX_1
 # define index_arch_Prefer_No_VZEROUPPER FEATURE_INDEX_1
+# define index_arch_Prefer_Fast_Copy_Backward FEATURE_INDEX_1
 
 #endif	/* !__ASSEMBLER__ */
diff --git a/sysdeps/x86_64/multiarch/memcpy.S b/sysdeps/x86_64/multiarch/memcpy.S
index 8882590..6fad5cb 100644
--- a/sysdeps/x86_64/multiarch/memcpy.S
+++ b/sysdeps/x86_64/multiarch/memcpy.S
@@ -40,18 +40,20 @@ ENTRY(__new_memcpy)
 #endif
 1:	lea	__memcpy_avx_unaligned(%rip), %RAX_LP
 	HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
+	jnz	3f
+	HAS_ARCH_FEATURE (Prefer_Fast_Copy_Backward)
 	jnz	2f
 	lea	__memcpy_sse2_unaligned(%rip), %RAX_LP
 	HAS_ARCH_FEATURE (Fast_Unaligned_Load)
-	jnz	2f
-	lea	__memcpy_sse2(%rip), %RAX_LP
+	jnz	3f
+2:	lea	__memcpy_sse2(%rip), %RAX_LP
 	HAS_CPU_FEATURE (SSSE3)
-	jz	2f
+	jz	3f
 	lea	__memcpy_ssse3_back(%rip), %RAX_LP
 	HAS_ARCH_FEATURE (Fast_Copy_Backward)
-	jnz	2f
+	jnz	3f
 	lea	__memcpy_ssse3(%rip), %RAX_LP
-2:	ret
+3:	ret
 	END(__new_memcpy)
 
 # undef ENTRY
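For comparison, the same kind of stand-alone sketch for the order after the
proposed memcpy.S change (again only illustrative; it assumes the new
Prefer_Fast_Copy_Backward bit is set for Excavator as in the cpu-features.c
hunk above). With the new bit, the selector skips the SSE2_Unaligned check and
falls through to the SSSE3 selection, picking __memcpy_ssse3_back:

#include <stdio.h>

/* Stand-in flags again, plus the proposed Prefer_Fast_Copy_Backward bit.  */
struct features
{
  int avx_fast_unaligned_load;
  int prefer_fast_copy_backward;
  int fast_unaligned_load;
  int ssse3;
  int fast_copy_backward;
};

/* Check order after the memcpy.S change: the new bit routes the CPU
   past the SSE2_Unaligned check and into the SSSE3 selection.  */
static const char *
select_memcpy_patched (const struct features *f)
{
  if (f->avx_fast_unaligned_load)
    return "__memcpy_avx_unaligned";
  if (!f->prefer_fast_copy_backward && f->fast_unaligned_load)
    return "__memcpy_sse2_unaligned";
  if (!f->ssse3)
    return "__memcpy_sse2";
  if (f->fast_copy_backward)
    return "__memcpy_ssse3_back";	/* Excavator with the new bit.  */
  return "__memcpy_ssse3";
}

int
main (void)
{
  /* Excavator after the cpu-features.c change above.  */
  struct features excavator = { 0, 1, 1, 1, 1 };
  printf ("%s\n", select_memcpy_patched (&excavator));	/* __memcpy_ssse3_back */
  return 0;
}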