From patchwork Tue May 23 12:15:53 2023
From: Paul Pluzhnikov <ppluzhnikov@google.com>
Date: Tue, 23 May 2023 12:15:53 +0000
Subject: [PATCH] Fix misspellings in sysdeps/powerpc -- BZ 25337
To: libc-alpha@sourceware.org
Cc: Paul Pluzhnikov <ppluzhnikov@google.com>
Message-ID: <20230523121553.4094427-1-ppluzhnikov@google.com>
X-Patchwork-Id: 69888
X-Mailer: git-send-email 2.40.1.698.g37aff9b760-goog

All fixes are in comments, so the binaries should be identical
before/after this commit, but I can't verify this.

Reviewed-by: Rajalakshmi Srinivasaraghavan
---
 sysdeps/powerpc/atomic-machine.h                       |  2 +-
 sysdeps/powerpc/bits/setjmp.h                          |  2 +-
 sysdeps/powerpc/powerpc32/405/memcpy.S                 |  4 ++--
 sysdeps/powerpc/powerpc32/405/memset.S                 | 10 +++++-----
 sysdeps/powerpc/powerpc32/476/memset.S                 | 10 +++++-----
 .../powerpc32/power4/multiarch/strncase-power7.c       |  2 +-
 .../powerpc32/power4/multiarch/strncase_l-power7.c     |  2 +-
 sysdeps/powerpc/powerpc64/configure.ac                 |  2 +-
 .../le/fpu/multiarch/float128-ifunc-redirect-macros.h  |  2 +-
 .../powerpc64/le/fpu/multiarch/float128-ifunc.h        |  2 +-
 .../powerpc64/le/fpu/multiarch/float128_private.h      |  2 +-
 sysdeps/powerpc/powerpc64/power7/memmove.S             |  2 +-
 sysdeps/powerpc/powerpc64/power7/strcmp.S              |  2 +-
 sysdeps/powerpc/powerpc64/power7/strncpy.S             |  2 +-
 sysdeps/powerpc/powerpc64/power7/strrchr.S             |  2 +-
 sysdeps/powerpc/powerpc64/power8/strcasestr.S          |  4 ++--
 sysdeps/powerpc/powerpc64/power8/strcmp.S              |  2 +-
 sysdeps/powerpc/powerpc64/power8/strlen.S              |  2 +-
 sysdeps/powerpc/powerpc64/power8/strncmp.S             |  2 +-
 sysdeps/powerpc/powerpc64/power8/strncpy.S             |  4 ++--
 sysdeps/powerpc/powerpc64/power8/strnlen.S             |  2 +-
 sysdeps/powerpc/powerpc64/power8/strrchr.S             |  2 +-
 sysdeps/powerpc/powerpc64/setjmp-bug21895.c            |  2 +-
 23 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/sysdeps/powerpc/atomic-machine.h b/sysdeps/powerpc/atomic-machine.h
index aae467cc50..80369e0f94 100644
--- a/sysdeps/powerpc/atomic-machine.h
+++ b/sysdeps/powerpc/atomic-machine.h
@@ -18,7 +18,7 @@
 
 /*
  * Never include sysdeps/powerpc/atomic-machine.h directly.
- * Alway use include/atomic.h which will include either
+ * Always use include/atomic.h which will include either
  * sysdeps/powerpc/powerpc32/atomic-machine.h
  * or
  * sysdeps/powerpc/powerpc64/atomic-machine.h
diff --git a/sysdeps/powerpc/bits/setjmp.h b/sysdeps/powerpc/bits/setjmp.h
index ac92616ec9..201208a53a 100644
--- a/sysdeps/powerpc/bits/setjmp.h
+++ b/sysdeps/powerpc/bits/setjmp.h
@@ -32,7 +32,7 @@
 
 /* The current powerpc 32-bit Altivec ABI specifies for SVR4 ABI and EABI
    the vrsave must be at byte 248 & v20 at byte 256.  So we must pad this
-   correctly on 32 bit.  It also insists that vecregs are only gauranteed
+   correctly on 32 bit.  It also insists that vecregs are only guaranteed
    4 byte alignment so we need to use vperm in the setjmp/longjmp routines.
    We have to version the code because members like int __mask_was_saved
    in the jmp_buf will move as jmp_buf is now larger than 248 bytes.  We
diff --git a/sysdeps/powerpc/powerpc32/405/memcpy.S b/sysdeps/powerpc/powerpc32/405/memcpy.S
index a2d0df0e32..b5db693142 100644
--- a/sysdeps/powerpc/powerpc32/405/memcpy.S
+++ b/sysdeps/powerpc/powerpc32/405/memcpy.S
@@ -26,10 +26,10 @@
    r5:byte count
 
    Save return address in r0.
-   If destinationn and source are unaligned and copy count is greater than 256
+   If destination and source are unaligned and copy count is greater than 256
    then copy 0-3 bytes to make destination aligned.
    If 32 or more bytes to copy we use 32 byte copy loop.
-   Finaly we copy 0-31 extra bytes. */
+   Finally we copy 0-31 extra bytes. */
 
 EALIGN (memcpy, 5, 0)
 /* Check if bytes to copy are greater than 256 and if
diff --git a/sysdeps/powerpc/powerpc32/405/memset.S b/sysdeps/powerpc/powerpc32/405/memset.S
index 6c574ed79e..8ddad2274c 100644
--- a/sysdeps/powerpc/powerpc32/405/memset.S
+++ b/sysdeps/powerpc/powerpc32/405/memset.S
@@ -27,13 +27,13 @@
    r12:temp return address
 
    Save return address in r12
-   If destinationn is unaligned and count is greater tha 255 bytes
+   If destination is unaligned and count is greater than 255 bytes
    set 0-3 bytes to make destination aligned
-   If count is greater tha 255 bytes and setting zero to memory
-   use dbcz to set memeory when we can
-   otherwsie do the follwoing
+   If count is greater than 255 bytes and setting zero to memory
+   use dbcz to set memory when we can
+   otherwise do the following
    If 16 or more words to set we use 16 word copy loop.
-   Finaly we set 0-15 extra bytes with string store. */
+   Finally we set 0-15 extra bytes with string store. */
 
 EALIGN (memset, 5, 0)
 	rlwinm	r11,r4,0,24,31
diff --git a/sysdeps/powerpc/powerpc32/476/memset.S b/sysdeps/powerpc/powerpc32/476/memset.S
index 527291e1b9..29b0feaccc 100644
--- a/sysdeps/powerpc/powerpc32/476/memset.S
+++ b/sysdeps/powerpc/powerpc32/476/memset.S
@@ -27,13 +27,13 @@
    r12:temp return address
 
    Save return address in r12
-   If destinationn is unaligned and count is greater tha 255 bytes
+   If destination is unaligned and count is greater than 255 bytes
    set 0-3 bytes to make destination aligned
-   If count is greater tha 255 bytes and setting zero to memory
-   use dbcz to set memeory when we can
-   otherwsie do the follwoing
+   If count is greater than 255 bytes and setting zero to memory
+   use dbcz to set memory when we can
+   otherwise do the following
    If 16 or more words to set we use 16 word copy loop.
-   Finaly we set 0-15 extra bytes with string store. */
+   Finally we set 0-15 extra bytes with string store. */
 
 EALIGN (memset, 5, 0)
 	rlwinm	r11,r4,0,24,31
diff --git a/sysdeps/powerpc/powerpc32/power4/multiarch/strncase-power7.c b/sysdeps/powerpc/powerpc32/power4/multiarch/strncase-power7.c
index 4c144ac620..d5602fca6a 100644
--- a/sysdeps/powerpc/powerpc32/power4/multiarch/strncase-power7.c
+++ b/sysdeps/powerpc/powerpc32/power4/multiarch/strncase-power7.c
@@ -1,4 +1,4 @@
-/* Optimized strcasecmp_l implememtation for POWER7.
+/* Optimized strcasecmp_l implementation for POWER7.
    Copyright (C) 2013-2023 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
diff --git a/sysdeps/powerpc/powerpc32/power4/multiarch/strncase_l-power7.c b/sysdeps/powerpc/powerpc32/power4/multiarch/strncase_l-power7.c
index fb668a1f34..477b8e72cf 100644
--- a/sysdeps/powerpc/powerpc32/power4/multiarch/strncase_l-power7.c
+++ b/sysdeps/powerpc/powerpc32/power4/multiarch/strncase_l-power7.c
@@ -1,4 +1,4 @@
-/* Optimized strcasecmp_l implememtation for POWER7.
+/* Optimized strcasecmp_l implementation for POWER7.
    Copyright (C) 2013-2023 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
diff --git a/sysdeps/powerpc/powerpc64/configure.ac b/sysdeps/powerpc/powerpc64/configure.ac
index 111a0ae4b3..575745af3e 100644
--- a/sysdeps/powerpc/powerpc64/configure.ac
+++ b/sysdeps/powerpc/powerpc64/configure.ac
@@ -27,7 +27,7 @@ fi
 # We check if compiler supports @notoc generation since there is no
 # gain by enabling it if it will be optimized away by the linker.
 # It also helps linkers that might not optimize it and end up
-# generating stubs with ISA 3.1 instruction even targetting older ISA.
+# generating stubs with ISA 3.1 instruction even targeting older ISA.
 AC_CACHE_CHECK([if the compiler supports @notoc],
 	       libc_cv_ppc64_notoc, [dnl
 cat > conftest.c <<EOF
diff --git a/sysdeps/powerpc/powerpc64/le/fpu/multiarch/float128-ifunc.h b/sysdeps/powerpc/powerpc64/le/fpu/multiarch/float128-ifunc.h
--- a/sysdeps/powerpc/powerpc64/le/fpu/multiarch/float128-ifunc.h
+++ b/sysdeps/powerpc/powerpc64/le/fpu/multiarch/float128-ifunc.h
-/* Declare these now.  These prototyes are not included
+/* Declare these now.  These prototypes are not included
    in any header. */
 extern __typeof (cosf128) __ieee754_cosf128;
 extern __typeof (asinhf128) __ieee754_asinhf128;
diff --git a/sysdeps/powerpc/powerpc64/power7/memmove.S b/sysdeps/powerpc/powerpc64/power7/memmove.S
index 6988eff18f..e9a9cae6a4 100644
--- a/sysdeps/powerpc/powerpc64/power7/memmove.S
+++ b/sysdeps/powerpc/powerpc64/power7/memmove.S
@@ -425,7 +425,7 @@ L(end_unaligned_loop):
 	/* Return original DST pointer.  */
 	blr
 
-	/* Start to memcpy backward implementation: the algorith first check if
+	/* Start to memcpy backward implementation: the algorithm first check if
 	   src and dest have the same alignment and if it does align both to 16
 	   bytes and copy using VSX instructions.
 	   If does not, align dest to 16 bytes and use VMX (altivec) instruction
diff --git a/sysdeps/powerpc/powerpc64/power7/strcmp.S b/sysdeps/powerpc/powerpc64/power7/strcmp.S
index c1c2a6f6b3..bd41639c5d 100644
--- a/sysdeps/powerpc/powerpc64/power7/strcmp.S
+++ b/sysdeps/powerpc/powerpc64/power7/strcmp.S
@@ -17,7 +17,7 @@
    <https://www.gnu.org/licenses/>.  */
 
 /* The optimization is achieved here through cmpb instruction.
-   8byte aligned strings are processed with double word comparision
+   8byte aligned strings are processed with double word comparison
    and unaligned strings are handled effectively with loop unrolling
    technique */
diff --git a/sysdeps/powerpc/powerpc64/power7/strncpy.S b/sysdeps/powerpc/powerpc64/power7/strncpy.S
index eec0c41ccb..8d55a0cbcc 100644
--- a/sysdeps/powerpc/powerpc64/power7/strncpy.S
+++ b/sysdeps/powerpc/powerpc64/power7/strncpy.S
@@ -479,7 +479,7 @@ L(storebyte2):
 	rldicl	r6, r3, 0, 61	/* Recalculate padding */
 	mr	r7, r6
 
-	/* src is algined */
+	/* src is aligned */
 L(srcaligndstunalign):
 	mr	r9, r3
 	mr	r6, r7
diff --git a/sysdeps/powerpc/powerpc64/power7/strrchr.S b/sysdeps/powerpc/powerpc64/power7/strrchr.S
index 7f730c8d5e..accff65f04 100644
--- a/sysdeps/powerpc/powerpc64/power7/strrchr.S
+++ b/sysdeps/powerpc/powerpc64/power7/strrchr.S
@@ -31,7 +31,7 @@ ENTRY_TOCLESS (STRRCHR)
 	clrrdi	r8,r3,3	      /* Align the address to doubleword boundary. */
 	cmpdi	cr7,r4,0
 	ld	r12,0(r8)     /* Load doubleword from memory. */
-	li	r9,0	      /* used to store last occurence */
+	li	r9,0	      /* used to store last occurrence */
 	li	r0,0	      /* Doubleword with null chars to use
 				 with cmpb. */
diff --git a/sysdeps/powerpc/powerpc64/power8/strcasestr.S b/sysdeps/powerpc/powerpc64/power8/strcasestr.S
index 1d1eeceef7..2e88481abd 100644
--- a/sysdeps/powerpc/powerpc64/power8/strcasestr.S
+++ b/sysdeps/powerpc/powerpc64/power8/strcasestr.S
@@ -137,7 +137,7 @@ ENTRY (STRCASESTR, 4)
 	beq	cr7, L(skipcheck)
 	cmpw	cr7, r3, r29
 	ble	cr7, L(firstpos)
-	/* Move r3 to the first occurence.  */
+	/* Move r3 to the first occurrence.  */
 L(skipcheck):
 	mr	r3, r29
 L(firstpos):
@@ -448,7 +448,7 @@ L(loop1):
 	beq	cr7, L(skipcheck1)
 	cmpw	cr7, r3, r29
 	ble	cr7, L(nextpos)
-	/* Move r3 to first occurence.  */
+	/* Move r3 to first occurrence.  */
L(skipcheck1):
 	mr	r3, r29
L(nextpos):
diff --git a/sysdeps/powerpc/powerpc64/power8/strcmp.S b/sysdeps/powerpc/powerpc64/power8/strcmp.S
index 4b1cde92ae..4b36723c84 100644
--- a/sysdeps/powerpc/powerpc64/power8/strcmp.S
+++ b/sysdeps/powerpc/powerpc64/power8/strcmp.S
@@ -207,7 +207,7 @@ L(check_source2_byte_loop):
 	bdnz	L(check_source2_byte_loop)
 
 	/* If source2 is unaligned to doubleword, the code needs to check
-	   on each interation if the unaligned doubleword access will cross
+	   on each iteration if the unaligned doubleword access will cross
 	   a 4k page boundary.  */
 	.align 5
L(loop_unaligned):
diff --git a/sysdeps/powerpc/powerpc64/power8/strlen.S b/sysdeps/powerpc/powerpc64/power8/strlen.S
index 4e1d884cc0..33a3e6af27 100644
--- a/sysdeps/powerpc/powerpc64/power8/strlen.S
+++ b/sysdeps/powerpc/powerpc64/power8/strlen.S
@@ -65,7 +65,7 @@ ENTRY_TOCLESS (STRLEN, 4)
 
L(align64):
 	/* Proceed to the old (POWER7) implementation, checking two doublewords
-	   per iteraction.  For the first 56 bytes, we will just check for null
+	   per iteration.  For the first 56 bytes, we will just check for null
 	   characters.  After that, we will also check if we are 64-byte aligned
 	   so we can jump to the vectorized implementation.  We will unroll
 	   these loops to avoid excessive branching.  */
diff --git a/sysdeps/powerpc/powerpc64/power8/strncmp.S b/sysdeps/powerpc/powerpc64/power8/strncmp.S
index b30f970c66..65d0db49f4 100644
--- a/sysdeps/powerpc/powerpc64/power8/strncmp.S
+++ b/sysdeps/powerpc/powerpc64/power8/strncmp.S
@@ -101,7 +101,7 @@ L(align_8b):
 	b	L(loop_ne_align_1)
 
 	/* If source2 is unaligned to doubleword, the code needs to check
-	   on each interation if the unaligned doubleword access will cross
+	   on each iteration if the unaligned doubleword access will cross
 	   a 4k page boundary.  */
 	.align 4
L(loop_ne_align_0):
diff --git a/sysdeps/powerpc/powerpc64/power8/strncpy.S b/sysdeps/powerpc/powerpc64/power8/strncpy.S
index 79a3d5aac3..9cfa08ef95 100644
--- a/sysdeps/powerpc/powerpc64/power8/strncpy.S
+++ b/sysdeps/powerpc/powerpc64/power8/strncpy.S
@@ -144,7 +144,7 @@ L(short_path_2):
 	.align 4
L(short_path_loop):
 	/* At this point, the induction variable, r5, as well as the pointers
-	   to dest and src (r9 and r4, respectivelly) have been updated.
+	   to dest and src (r9 and r4, respectively) have been updated.
 
 	   Note: The registers r7 and r10 are induction variables derived from
 	   r5.  They are used to determine if the total number of writes has
@@ -351,7 +351,7 @@ L(pagecross):
 	cmpdi	cr7,r9,0
 	bne	cr7,L(short_path_prepare_2)
 
-	/* No null byte found in the 32 bytes readed and length not reached,
+	/* No null byte found in the 32 bytes read and length not reached,
 	   read source again using unaligned loads and store them.  */
 	ld	r9,0(r4)
 	addi	r29,r3,16
diff --git a/sysdeps/powerpc/powerpc64/power8/strnlen.S b/sysdeps/powerpc/powerpc64/power8/strnlen.S
index a8495afad5..5bc62d6fbb 100644
--- a/sysdeps/powerpc/powerpc64/power8/strnlen.S
+++ b/sysdeps/powerpc/powerpc64/power8/strnlen.S
@@ -166,7 +166,7 @@ L(loop_64B):
 	vminub	v6,v3,v4
 	vminub	v7,v5,v6
 	vcmpequb.	v7,v7,v0	/* Check for null bytes.  */
-	addi	r5,r5,64	/* Add pointer to next iteraction.  */
+	addi	r5,r5,64	/* Add pointer to next iteration.  */
 	bne	cr6,L(found_64B)	/* If found null bytes.  */
 	bdnz	L(loop_64B)	/* Continue the loop if count > 0.  */
diff --git a/sysdeps/powerpc/powerpc64/power8/strrchr.S b/sysdeps/powerpc/powerpc64/power8/strrchr.S
index 62b4d493e7..dad2eb11b8 100644
--- a/sysdeps/powerpc/powerpc64/power8/strrchr.S
+++ b/sysdeps/powerpc/powerpc64/power8/strrchr.S
@@ -74,7 +74,7 @@ ENTRY_TOCLESS (STRRCHR)
 	clrrdi	r8,r3,3	      /* Align the address to doubleword boundary.  */
 	cmpdi	cr7,r4,0
 	ld	r12,0(r8)     /* Load doubleword from memory.  */
-	li	r9,0	      /* Used to store last occurence.  */
+	li	r9,0	      /* Used to store last occurrence.  */
 	li	r0,0	      /* Doubleword with null chars to use
 				 with cmpb.  */
diff --git a/sysdeps/powerpc/powerpc64/setjmp-bug21895.c b/sysdeps/powerpc/powerpc64/setjmp-bug21895.c
index 31eab6c422..d15541c9dd 100644
--- a/sysdeps/powerpc/powerpc64/setjmp-bug21895.c
+++ b/sysdeps/powerpc/powerpc64/setjmp-bug21895.c
@@ -20,7 +20,7 @@
 #include
 #include
 
-/* Copy r1 adress to a local variable.  */
+/* Copy r1 address to a local variable.  */
 #define GET_STACK_POINTER(sp) \
   ({ \
     asm volatile ("mr %0, 1\n\t" \
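
For anyone wanting to check the "binaries should be identical" claim
above, one possible sketch (not part of the patch; the build trees
"build-before"/"build-after" and the libc.so target are assumptions
for illustration): build the sources twice with the same compiler and
flags, strip the sections that may legitimately differ, and compare.

  # Comment-only edits shift source line numbers, so drop debug info;
  # the GNU build ID hashes the whole file, so drop it as well.
  for b in before after; do
    objcopy --strip-debug --remove-section=.note.gnu.build-id \
            "build-$b/libc.so" "libc-$b.cmp"
  done
  cmp libc-before.cmp libc-after.cmp && echo identical || echo differ

If the stripped images compare equal, the commit changed no generated
code.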