From patchwork Tue Feb 10 03:49:00 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Allan McRae
X-Patchwork-Id: 5018
X-Patchwork-Delegate: allan@archlinux.org
From: Allan McRae
To: libc-alpha@sourceware.org
Subject: [PATCH] Fix __memcpy_chk on non-SSE2 CPUs
Date: Tue, 10 Feb 2015 13:49:00 +1000
Message-Id: <1423540140-24973-1-git-send-email-allan@archlinux.org>

From: Evangelos Foutras

In commit 8b4416d, the 1: jump label in __mempcpy_chk was accidentally
moved onto the SSE2 testl, past the leal that loads the default
__mempcpy_chk_ia32 address.  This resulted in failures of mempcpy on
CPUs without SSE2.

---
2015-02-10  Evangelos Foutras

	[BZ #17949]
	* sysdeps/i386/i686/multiarch/mempcpy_chk.S: Fix position of
	jump label.

 sysdeps/i386/i686/multiarch/mempcpy_chk.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/sysdeps/i386/i686/multiarch/mempcpy_chk.S b/sysdeps/i386/i686/multiarch/mempcpy_chk.S
index 207b648..b6fa202 100644
--- a/sysdeps/i386/i686/multiarch/mempcpy_chk.S
+++ b/sysdeps/i386/i686/multiarch/mempcpy_chk.S
@@ -36,8 +36,8 @@ ENTRY(__mempcpy_chk)
 	cmpl	$0, KIND_OFFSET+__cpu_features@GOTOFF(%ebx)
 	jne	1f
 	call	__init_cpu_features
-	leal	__mempcpy_chk_ia32@GOTOFF(%ebx), %eax
-1:	testl	$bit_SSE2, CPUID_OFFSET+index_SSE2+__cpu_features@GOTOFF(%ebx)
+1:	leal	__mempcpy_chk_ia32@GOTOFF(%ebx), %eax
+	testl	$bit_SSE2, CPUID_OFFSET+index_SSE2+__cpu_features@GOTOFF(%ebx)
 	jz	2f
 	leal	__mempcpy_chk_sse2_unaligned@GOTOFF(%ebx), %eax
 	testl	$bit_Fast_Unaligned_Load, FEATURE_OFFSET+index_Fast_Unaligned_Load+__cpu_features@GOTOFF(%ebx)
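
For readers not fluent in the i386 dispatch stubs, here is a rough,
self-contained C sketch of the selection logic above.  This is not glibc
code; the feature flags and helper names are invented for illustration.
It only shows why the position of the 1: label matters: the jne taken
when __cpu_features is already initialized must still pass through the
instruction that picks the ia32 fallback.

#include <stdio.h>

/* Illustrative stand-ins for the __cpu_features state tested in the
   assembly above; the names and values here are assumptions.  */
static int features_initialized = 0;
static int cpu_has_sse2 = 0;            /* e.g. an i586 without SSE2 */

static void init_cpu_features (void)    /* stands in for __init_cpu_features */
{
  features_initialized = 1;
}

static const char *
select_mempcpy_chk (void)
{
  const char *impl;

  if (!features_initialized)            /* cmpl $0, KIND_OFFSET...; jne 1f */
    init_cpu_features ();

  /* 1: with the label here (as after this patch) the ia32 fallback is
     selected on every path.  Before the fix the label sat on the SSE2
     test below, so the jne above skipped this assignment and a
     non-SSE2 CPU returned whatever happened to be in %eax.  */
  impl = "__mempcpy_chk_ia32";

  if (cpu_has_sse2)                     /* testl $bit_SSE2...; jz 2f */
    {
      impl = "__mempcpy_chk_sse2_unaligned";
      /* The Fast_Unaligned_Load test follows here in the real routine.  */
    }

  /* 2: */
  return impl;
}

int
main (void)
{
  printf ("selected: %s\n", select_mempcpy_chk ());
  return 0;
}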