Message ID: <1636176325-17121-1-git-send-email-apinski@marvell.com>
State: New
Headers:
To: <gcc-patches@gcc.gnu.org>
Subject: [PATCH] Fix PR target/103100 -mstrict-align and memset on not aligned buffers
Date: Fri, 5 Nov 2021 22:25:25 -0700
From: apinski--- via Gcc-patches <gcc-patches@gcc.gnu.org>
Reply-To: apinski@marvell.com
Cc: Andrew Pinski <apinski@marvell.com>
Series: Fix PR target/103100 -mstrict-align and memset on not aligned buffers
Commit Message
Li, Pan2 via Gcc-patches
Nov. 6, 2021, 5:25 a.m. UTC
From: Andrew Pinski <apinski@marvell.com>
The problem here is with -mstrict-align: aarch64_expand_setmem needs
to check the alignment of the mode to make sure we can use it for
doing the stores.
gcc/ChangeLog:
PR target/103100
* config/aarch64/aarch64.c (aarch64_expand_setmem):
Add check for alignment of the mode if STRICT_ALIGNMENT is true.
---
gcc/config/aarch64/aarch64.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
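The idea behind the patch can be modeled outside of GCC's RTL machinery. The sketch below (plain C, with hypothetical names — `pick_store_bits` and the width table stand in for the FOR_EACH_MODE_IN_CLASS loop over QImode..DImode) shows the selection rule: take the widest store that fits within MIN (n, copy_limit), and, when strict alignment is in force, is no wider than the destination's known alignment.

```c
#include <assert.h>

/* Hypothetical model of the mode-selection loop in aarch64_expand_setmem.
   Candidate store widths in bits, narrowest to widest (QImode..DImode).  */
static const unsigned mode_bits[] = { 8, 16, 32, 64 };

/* Pick the widest store width (in bits) that fits in MIN (n, copy_limit)
   and, under strict alignment, does not exceed the destination's known
   alignment.  Mirrors the patched condition: the last mode in the class
   that passes the test wins, so the result is the widest legal width.  */
unsigned
pick_store_bits (unsigned n_bits, unsigned copy_limit_bits,
                 int strict_align, unsigned dst_align_bits)
{
  unsigned limit = n_bits < copy_limit_bits ? n_bits : copy_limit_bits;
  unsigned best = 0;
  for (unsigned i = 0; i < sizeof mode_bits / sizeof mode_bits[0]; i++)
    if (mode_bits[i] <= limit
        && (!strict_align || dst_align_bits >= mode_bits[i]))
      best = mode_bits[i];
  return best;
}
```

With strict alignment and a byte-aligned destination this degrades to byte stores, which is exactly the behavior the bug report asks for; without -mstrict-align the widest fitting mode is still chosen.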
Comments
apinski--- via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> From: Andrew Pinski <apinski@marvell.com>
>
> The problem here is with -mstrict-align: aarch64_expand_setmem needs
> to check the alignment of the mode to make sure we can use it for
> doing the stores.
>
> gcc/ChangeLog:
>
> 	PR target/103100
> 	* config/aarch64/aarch64.c (aarch64_expand_setmem):
> 	Add check for alignment of the mode if STRICT_ALIGNMENT is true.
> ---
>  gcc/config/aarch64/aarch64.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
> index fdf05505846..2c00583e12c 100644
> --- a/gcc/config/aarch64/aarch64.c
> +++ b/gcc/config/aarch64/aarch64.c
> @@ -23738,7 +23738,9 @@ aarch64_expand_setmem (rtx *operands)
>  	 over writing.  */
>        opt_scalar_int_mode mode_iter;
>        FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
> -	if (GET_MODE_BITSIZE (mode_iter.require ()) <= MIN (n, copy_limit))
> +	if (GET_MODE_BITSIZE (mode_iter.require ()) <= MIN (n, copy_limit)
> +	    && (!STRICT_ALIGNMENT
> +		|| MEM_ALIGN (dst) >= GET_MODE_ALIGNMENT (mode_iter.require ())))

Sorry for the slow review.  I think instead we should keep track of the
alignment of the start byte.  This will be MEM_ALIGN for the first
iteration but could decrease after writing some bytes.

The net effect should be the same in practice.  It just seems more robust.

Thanks,
Richard

> 	  cur_mode = mode_iter.require ();
>
>       gcc_assert (cur_mode != BLKmode);
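Richard's point — that the alignment of the start byte can decrease as bytes are written — can be illustrated with a small helper. This is a sketch under the assumption that alignments are powers of two; `align_at` is a hypothetical name, not a GCC function. The alignment known at `dst + offset` is the smaller of the initial MEM_ALIGN and the largest power of two dividing the offset.

```c
#include <assert.h>

/* Known alignment (in bytes) of dst + offset, given the initial
   MEM_ALIGN of dst.  "offset & -offset" isolates the lowest set bit,
   i.e. the largest power of two dividing a nonzero offset.  */
unsigned
align_at (unsigned initial_align, unsigned offset)
{
  unsigned off_align = offset ? (offset & -offset) : initial_align;
  return off_align < initial_align ? off_align : initial_align;
}
```

For example, a 16-byte-aligned destination is still only 4-byte aligned at offset 4, so after emitting a 4-byte store the widest safe strict-alignment store shrinks accordingly — which is why tracking the running alignment is more robust than testing MEM_ALIGN once.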
On Wed, Nov 17, 2021 at 1:39 AM Richard Sandiford via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> apinski--- via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> > From: Andrew Pinski <apinski@marvell.com>
> >
> > The problem here is with -mstrict-align: aarch64_expand_setmem needs
> > to check the alignment of the mode to make sure we can use it for
> > doing the stores.
> >
> > gcc/ChangeLog:
> >
> > 	PR target/103100
> > 	* config/aarch64/aarch64.c (aarch64_expand_setmem):
> > 	Add check for alignment of the mode if STRICT_ALIGNMENT is true.
> > ---
> >  gcc/config/aarch64/aarch64.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
> > index fdf05505846..2c00583e12c 100644
> > --- a/gcc/config/aarch64/aarch64.c
> > +++ b/gcc/config/aarch64/aarch64.c
> > @@ -23738,7 +23738,9 @@ aarch64_expand_setmem (rtx *operands)
> >  	 over writing.  */
> >        opt_scalar_int_mode mode_iter;
> >        FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
> > -	if (GET_MODE_BITSIZE (mode_iter.require ()) <= MIN (n, copy_limit))
> > +	if (GET_MODE_BITSIZE (mode_iter.require ()) <= MIN (n, copy_limit)
> > +	    && (!STRICT_ALIGNMENT
> > +		|| MEM_ALIGN (dst) >= GET_MODE_ALIGNMENT (mode_iter.require ())))
>
> Sorry for the slow review.  I think instead we should keep track of the
> alignment of the start byte.  This will be MEM_ALIGN for the first
> iteration but could decrease after writing some bytes.
>
> The net effect should be the same in practice.  It just seems more robust.

So looking into this loop further, I think it really needs a rewrite :).
Currently it is not a greedy loop; instead it iterates for each copy it
does and loops over the modes each time too.  Let me rewrite the loop so
it is better.

Thanks,
Andrew

> Thanks,
> Richard
>
> > 	  cur_mode = mode_iter.require ();
> >
> >       gcc_assert (cur_mode != BLKmode);
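The greedy rewrite Andrew describes can be sketched in plain C. This is a model, not the eventual GCC code: all names here (`count_stores`, `cur_align`) are hypothetical, and widths are in bytes with 8 bytes (DImode) as the widest candidate. At each step it emits the widest store that fits the remaining size and, under strict alignment, the running alignment of the write position, then advances.

```c
#include <assert.h>

/* Running alignment of dst + offset (bytes), from the initial MEM_ALIGN.  */
static unsigned
cur_align (unsigned initial_align, unsigned offset)
{
  unsigned off_align = offset ? (offset & -offset) : initial_align;
  return off_align < initial_align ? off_align : initial_align;
}

/* Greedy-loop sketch: repeatedly pick the widest permitted store and
   advance.  Returns the number of stores that would be emitted; the real
   expander would emit an insn per iteration instead of counting.  */
unsigned
count_stores (unsigned n_bytes, unsigned align, int strict)
{
  unsigned offset = 0, stores = 0;
  while (n_bytes > 0)
    {
      unsigned w = 8;   /* widest candidate: 8 bytes (DImode) */
      while (w > n_bytes || (strict && cur_align (align, offset) < w))
	w >>= 1;        /* halve until the store is legal */
      offset += w;
      n_bytes -= w;
      stores++;
    }
  return stores;
}
```

Unlike the current structure, the mode search is folded into a single forward pass, and the alignment check naturally uses the position being written rather than only the original MEM_ALIGN — combining both review points.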
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index fdf05505846..2c00583e12c 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -23738,7 +23738,9 @@ aarch64_expand_setmem (rtx *operands)
 	 over writing.  */
       opt_scalar_int_mode mode_iter;
       FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
-	if (GET_MODE_BITSIZE (mode_iter.require ()) <= MIN (n, copy_limit))
+	if (GET_MODE_BITSIZE (mode_iter.require ()) <= MIN (n, copy_limit)
+	    && (!STRICT_ALIGNMENT
+		|| MEM_ALIGN (dst) >= GET_MODE_ALIGNMENT (mode_iter.require ())))
 	  cur_mode = mode_iter.require ();
 
       gcc_assert (cur_mode != BLKmode);
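For context, the kind of code that exposes the bug is a memset through a pointer whose alignment is less than the store width the expander would otherwise pick. This is a hypothetical reproducer in the spirit of PR target/103100 (not the testcase from the PR): with -mstrict-align, expanding the memset with doubleword stores on the under-aligned packed member would fault on hardware that enforces alignment.

```c
#include <string.h>
#include <assert.h>

/* The packed attribute drops the alignment of x to 1 byte, so the
   memset destination may be misaligned for wide stores.  */
struct __attribute__((packed)) s
{
  char c;
  int x[4];
};

void
clear_x (struct s *p)
{
  memset (p->x, 0, sizeof p->x);
}
```

On a strict-alignment target the expander must fall back to narrower (ultimately byte) stores here, which is what the alignment check in the patch enforces.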