[aarch64] Use wzr/xzr for assigning vector element to 0

Message ID CAAgBjM=J8Vye=RPBw1sWQnUzxfC1C2UPT_vc+_jmXOeYJG-YGQ@mail.gmail.com
State Under Review
Series [aarch64] Use wzr/xzr for assigning vector element to 0

Commit Message

Prathamesh Kulkarni Jan. 17, 2023, 10:46 a.m. UTC
  Hi Richard,
For the following (contrived) test:

int32x4_t foo(int32x4_t v)
{
  v[3] = 0;
  return v;
}

-O2 code-gen:
foo:
        fmov    s1, wzr
        ins     v0.s[3], v1.s[0]
        ret

I suppose we can instead emit the following code-gen ?
foo:
     ins v0.s[3], wzr
     ret

combine produces:
Failed to match this instruction:
(set (reg:V4SI 95 [ v ])
    (vec_merge:V4SI (const_vector:V4SI [
                (const_int 0 [0]) repeated x4
            ])
        (reg:V4SI 97)
        (const_int 8 [0x8])))
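
For reference, the (const_int 8) here is the vec_merge selector mask: bit i
of the mask chooses lane i from the zero vector, so a mask of 8 (0b1000)
means lane 3.  A minimal standalone sketch of that mapping, using
__builtin_ctz as a stand-in for the exact_log2 call in the pattern below
(mask_to_lane is just an illustrative helper, not part of the patch):

#include <assert.h>

static int mask_to_lane (unsigned mask)
{
  /* Single-bit mask expected, mirroring the exact_log2 use in the pattern.  */
  assert (mask != 0 && (mask & (mask - 1)) == 0);
  return __builtin_ctz (mask);
}

int main (void)
{
  assert (mask_to_lane (8) == 3);  /* the v[3] = 0 case above */
  assert (mask_to_lane (2) == 1);  /* v[1] = 0, as in the later tests */
  return 0;
}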

So, I wrote the following pattern to match the above insn:
(define_insn "aarch64_simd_vec_set_zero<mode>"
  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
        (vec_merge:VALL_F16
            (match_operand:VALL_F16 1 "const_dup0_operand" "w")
            (match_operand:VALL_F16 3 "register_operand" "0")
            (match_operand:SI 2 "immediate_operand" "i")))]
  "TARGET_SIMD"
  {
    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
    return "ins\\t%0.<Vetype>[%p2], wzr";
  }
)

which now matches the above insn produced by combine.
However, in the reload dump, it creates a new insn assigning
(const_vector (const_int 0)) to a register, which results in:
(insn 19 8 13 2 (set (reg:V4SI 33 v1 [99])
        (const_vector:V4SI [
                (const_int 0 [0]) repeated x4
            ])) "wzr-test.c":8:1 1269 {*aarch64_simd_movv4si}
     (nil))
(insn 13 19 14 2 (set (reg/i:V4SI 32 v0)
        (vec_merge:V4SI (reg:V4SI 33 v1 [99])
            (reg:V4SI 32 v0 [97])
            (const_int 8 [0x8]))) "wzr-test.c":8:1 1808
{aarch64_simd_vec_set_zerov4si}
     (nil))

and eventually the code-gen:
foo:
        movi    v1.4s, 0
        ins     v0.s[3], wzr
        ret

To get rid of the redundant assignment of 0 to v1, I tried splitting the
above pattern, as in the attached patch. This emits the desired code-gen:
foo:
        ins     v0.s[3], wzr
        ret

However, I am not sure if this is the right approach. Could you suggest
whether it'd be possible to get rid of UNSPEC_SETZERO in the patch?

Thanks,
Prathamesh
  

Comments

Richard Sandiford Jan. 17, 2023, 12:59 p.m. UTC | #1
Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> Hi Richard,
> For the following (contrived) test:
>
> void foo(int32x4_t v)
> {
>   v[3] = 0;
>   return v;
> }
>
> -O2 code-gen:
> foo:
>         fmov    s1, wzr
>         ins     v0.s[3], v1.s[0]
>         ret
>
> I suppose we can instead emit the following code-gen ?
> foo:
>      ins v0.s[3], wzr
>      ret
>
> combine produces:
> Failed to match this instruction:
> (set (reg:V4SI 95 [ v ])
>     (vec_merge:V4SI (const_vector:V4SI [
>                 (const_int 0 [0]) repeated x4
>             ])
>         (reg:V4SI 97)
>         (const_int 8 [0x8])))
>
> So, I wrote the following pattern to match the above insn:
> (define_insn "aarch64_simd_vec_set_zero<mode>"
>   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>         (vec_merge:VALL_F16
>             (match_operand:VALL_F16 1 "const_dup0_operand" "w")
>             (match_operand:VALL_F16 3 "register_operand" "0")
>             (match_operand:SI 2 "immediate_operand" "i")))]
>   "TARGET_SIMD"
>   {
>     int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
>     operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
>     return "ins\\t%0.<Vetype>[%p2], wzr";
>   }
> )
>
> which now matches the above insn produced by combine.
> However, in reload dump, it creates a new insn for assigning
> register to (const_vector (const_int 0)),
> which results in:
> (insn 19 8 13 2 (set (reg:V4SI 33 v1 [99])
>         (const_vector:V4SI [
>                 (const_int 0 [0]) repeated x4
>             ])) "wzr-test.c":8:1 1269 {*aarch64_simd_movv4si}
>      (nil))
> (insn 13 19 14 2 (set (reg/i:V4SI 32 v0)
>         (vec_merge:V4SI (reg:V4SI 33 v1 [99])
>             (reg:V4SI 32 v0 [97])
>             (const_int 8 [0x8]))) "wzr-test.c":8:1 1808
> {aarch64_simd_vec_set_zerov4si}
>      (nil))
>
> and eventually the code-gen:
> foo:
>         movi    v1.4s, 0
>         ins     v0.s[3], wzr
>         ret
>
> To get rid of redundant assignment of 0 to v1, I tried to split the
> above pattern
> as in the attached patch. This works to emit code-gen:
> foo:
>         ins     v0.s[3], wzr
>         ret
>
> However, I am not sure if this is the right approach. Could you suggest,
> if it'd be possible to get rid of UNSPEC_SETZERO in the patch ?

The problem is with the "w" constraint on operand 1, which tells LRA
to force the zero into an FPR.  It should work if you remove the
constraint.

Also, I think you'll need to use <vwcore>zr for the zero, so that
it uses xzr for 64-bit elements.
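
For instance, a 64-bit-element case where this matters (a sketch, not taken
from the patch; the function name f is illustrative, and the expected output
assumes the pattern emits <vwcore>zr):

#include <arm_neon.h>

int64x2_t f (int64x2_t v)
{
  /* With <vwcore>zr this should become "ins v0.d[1], xzr"; a hard-coded
     wzr would be wrong for DI lanes.  */
  v[1] = 0;
  return v;
}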

I think this and the existing patterns ought to test
exact_log2 (INTVAL (operands[2])) >= 0 in the insn condition,
since there's no guarantee that RTL optimisations won't form
vec_merges that have other masks.

Thanks,
Richard
  
Prathamesh Kulkarni Jan. 18, 2023, 10:47 a.m. UTC | #2
On Tue, 17 Jan 2023 at 18:29, Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> > Hi Richard,
> > For the following (contrived) test:
> >
> > void foo(int32x4_t v)
> > {
> >   v[3] = 0;
> >   return v;
> > }
> >
> > -O2 code-gen:
> > foo:
> >         fmov    s1, wzr
> >         ins     v0.s[3], v1.s[0]
> >         ret
> >
> > I suppose we can instead emit the following code-gen ?
> > foo:
> >      ins v0.s[3], wzr
> >      ret
> >
> > combine produces:
> > Failed to match this instruction:
> > (set (reg:V4SI 95 [ v ])
> >     (vec_merge:V4SI (const_vector:V4SI [
> >                 (const_int 0 [0]) repeated x4
> >             ])
> >         (reg:V4SI 97)
> >         (const_int 8 [0x8])))
> >
> > So, I wrote the following pattern to match the above insn:
> > (define_insn "aarch64_simd_vec_set_zero<mode>"
> >   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >         (vec_merge:VALL_F16
> >             (match_operand:VALL_F16 1 "const_dup0_operand" "w")
> >             (match_operand:VALL_F16 3 "register_operand" "0")
> >             (match_operand:SI 2 "immediate_operand" "i")))]
> >   "TARGET_SIMD"
> >   {
> >     int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> >     operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> >     return "ins\\t%0.<Vetype>[%p2], wzr";
> >   }
> > )
> >
> > which now matches the above insn produced by combine.
> > However, in reload dump, it creates a new insn for assigning
> > register to (const_vector (const_int 0)),
> > which results in:
> > (insn 19 8 13 2 (set (reg:V4SI 33 v1 [99])
> >         (const_vector:V4SI [
> >                 (const_int 0 [0]) repeated x4
> >             ])) "wzr-test.c":8:1 1269 {*aarch64_simd_movv4si}
> >      (nil))
> > (insn 13 19 14 2 (set (reg/i:V4SI 32 v0)
> >         (vec_merge:V4SI (reg:V4SI 33 v1 [99])
> >             (reg:V4SI 32 v0 [97])
> >             (const_int 8 [0x8]))) "wzr-test.c":8:1 1808
> > {aarch64_simd_vec_set_zerov4si}
> >      (nil))
> >
> > and eventually the code-gen:
> > foo:
> >         movi    v1.4s, 0
> >         ins     v0.s[3], wzr
> >         ret
> >
> > To get rid of redundant assignment of 0 to v1, I tried to split the
> > above pattern
> > as in the attached patch. This works to emit code-gen:
> > foo:
> >         ins     v0.s[3], wzr
> >         ret
> >
> > However, I am not sure if this is the right approach. Could you suggest,
> > if it'd be possible to get rid of UNSPEC_SETZERO in the patch ?
>
> The problem is with the "w" constraint on operand 1, which tells LRA
> to force the zero into an FPR.  It should work if you remove the
> constraint.
Ah indeed, sorry about that, changing the constraint works.
Does the attached patch look OK after bootstrap+test?
Since we're in stage-4, would it be OK to commit now, or should it be queued for stage-1?

Thanks,
Prathamesh


>
> Also, I think you'll need to use <vwcore>zr for the zero, so that
> it uses xzr for 64-bit elements.
>
> I think this and the existing patterns ought to test
> exact_log2 (INTVAL (operands[2])) >= 0 in the insn condition,
> since there's no guarantee that RTL optimisations won't form
> vec_merges that have other masks.
>
> Thanks,
> Richard
[aarch64] Use wzr/xzr for assigning 0 to vector element.

gcc/ChangeLog:
	* config/aarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
	New pattern.
	* config/aarch64/predicates.md (const_dup0_operand): New.

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 104088f67d2..8e54ee4e886 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -1083,6 +1083,20 @@
   [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
 )
 
+(define_insn "aarch64_simd_vec_set_zero<mode>"
+  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
+	(vec_merge:VALL_F16
+	    (match_operand:VALL_F16 1 "const_dup0_operand" "i")
+	    (match_operand:VALL_F16 3 "register_operand" "0")
+	    (match_operand:SI 2 "immediate_operand" "i")))]
+  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
+  {
+    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
+    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
+    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
+  }
+)
+
 (define_insn "@aarch64_simd_vec_copy_lane<mode>"
   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
 	(vec_merge:VALL_F16
diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
index ff7f73d3f30..901fa1bd7f9 100644
--- a/gcc/config/aarch64/predicates.md
+++ b/gcc/config/aarch64/predicates.md
@@ -49,6 +49,13 @@
   return CONST_INT_P (op) && IN_RANGE (INTVAL (op), 1, 3);
 })
 
+(define_predicate "const_dup0_operand"
+  (match_code "const_vector")
+{
+  op = unwrap_const_vec_duplicate (op);
+  return CONST_INT_P (op) && rtx_equal_p (op, const0_rtx);
+})
+
 (define_predicate "subreg_lowpart_operator"
   (ior (match_code "truncate")
        (and (match_code "subreg")
  
Richard Sandiford Jan. 18, 2023, 2:29 p.m. UTC | #3
Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> On Tue, 17 Jan 2023 at 18:29, Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>>
>> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
>> > Hi Richard,
>> > For the following (contrived) test:
>> >
>> > void foo(int32x4_t v)
>> > {
>> >   v[3] = 0;
>> >   return v;
>> > }
>> >
>> > -O2 code-gen:
>> > foo:
>> >         fmov    s1, wzr
>> >         ins     v0.s[3], v1.s[0]
>> >         ret
>> >
>> > I suppose we can instead emit the following code-gen ?
>> > foo:
>> >      ins v0.s[3], wzr
>> >      ret
>> >
>> > combine produces:
>> > Failed to match this instruction:
>> > (set (reg:V4SI 95 [ v ])
>> >     (vec_merge:V4SI (const_vector:V4SI [
>> >                 (const_int 0 [0]) repeated x4
>> >             ])
>> >         (reg:V4SI 97)
>> >         (const_int 8 [0x8])))
>> >
>> > So, I wrote the following pattern to match the above insn:
>> > (define_insn "aarch64_simd_vec_set_zero<mode>"
>> >   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>> >         (vec_merge:VALL_F16
>> >             (match_operand:VALL_F16 1 "const_dup0_operand" "w")
>> >             (match_operand:VALL_F16 3 "register_operand" "0")
>> >             (match_operand:SI 2 "immediate_operand" "i")))]
>> >   "TARGET_SIMD"
>> >   {
>> >     int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
>> >     operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
>> >     return "ins\\t%0.<Vetype>[%p2], wzr";
>> >   }
>> > )
>> >
>> > which now matches the above insn produced by combine.
>> > However, in reload dump, it creates a new insn for assigning
>> > register to (const_vector (const_int 0)),
>> > which results in:
>> > (insn 19 8 13 2 (set (reg:V4SI 33 v1 [99])
>> >         (const_vector:V4SI [
>> >                 (const_int 0 [0]) repeated x4
>> >             ])) "wzr-test.c":8:1 1269 {*aarch64_simd_movv4si}
>> >      (nil))
>> > (insn 13 19 14 2 (set (reg/i:V4SI 32 v0)
>> >         (vec_merge:V4SI (reg:V4SI 33 v1 [99])
>> >             (reg:V4SI 32 v0 [97])
>> >             (const_int 8 [0x8]))) "wzr-test.c":8:1 1808
>> > {aarch64_simd_vec_set_zerov4si}
>> >      (nil))
>> >
>> > and eventually the code-gen:
>> > foo:
>> >         movi    v1.4s, 0
>> >         ins     v0.s[3], wzr
>> >         ret
>> >
>> > To get rid of redundant assignment of 0 to v1, I tried to split the
>> > above pattern
>> > as in the attached patch. This works to emit code-gen:
>> > foo:
>> >         ins     v0.s[3], wzr
>> >         ret
>> >
>> > However, I am not sure if this is the right approach. Could you suggest,
>> > if it'd be possible to get rid of UNSPEC_SETZERO in the patch ?
>>
>> The problem is with the "w" constraint on operand 1, which tells LRA
>> to force the zero into an FPR.  It should work if you remove the
>> constraint.
> Ah indeed, sorry about that, changing the constrained works.

"i" isn't right though, because that's for scalar integers.
There's no need for any constraint here -- the predicate does
all of the work.

> Does the attached patch look OK after bootstrap+test ?
> Since we're in stage-4, shall it be OK to commit now, or queue it for stage-1 ?

It needs tests as well. :-)

Also:

> Thanks,
> Prathamesh
>
>
>>
>> Also, I think you'll need to use <vwcore>zr for the zero, so that
>> it uses xzr for 64-bit elements.
>>
>> I think this and the existing patterns ought to test
>> exact_log2 (INTVAL (operands[2])) >= 0 in the insn condition,
>> since there's no guarantee that RTL optimisations won't form
>> vec_merges that have other masks.
>>
>> Thanks,
>> Richard
>
> [aarch64] Use wzr/xzr for assigning 0 to vector element.
>
> gcc/ChangeLog:
> 	* config/aaarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
> 	New pattern.
> 	* config/aarch64/predicates.md (const_dup0_operand): New.
>
> diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> index 104088f67d2..8e54ee4e886 100644
> --- a/gcc/config/aarch64/aarch64-simd.md
> +++ b/gcc/config/aarch64/aarch64-simd.md
> @@ -1083,6 +1083,20 @@
>    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
>  )
>  
> +(define_insn "aarch64_simd_vec_set_zero<mode>"
> +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> +	(vec_merge:VALL_F16
> +	    (match_operand:VALL_F16 1 "const_dup0_operand" "i")
> +	    (match_operand:VALL_F16 3 "register_operand" "0")
> +	    (match_operand:SI 2 "immediate_operand" "i")))]
> +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
> +  {
> +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
> +  }
> +)
> +
>  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
>    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>  	(vec_merge:VALL_F16
> diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
> index ff7f73d3f30..901fa1bd7f9 100644
> --- a/gcc/config/aarch64/predicates.md
> +++ b/gcc/config/aarch64/predicates.md
> @@ -49,6 +49,13 @@
>    return CONST_INT_P (op) && IN_RANGE (INTVAL (op), 1, 3);
>  })
>  
> +(define_predicate "const_dup0_operand"
> +  (match_code "const_vector")
> +{
> +  op = unwrap_const_vec_duplicate (op);
> +  return CONST_INT_P (op) && rtx_equal_p (op, const0_rtx);
> +})
> +

We already have aarch64_simd_imm_zero for this.  aarch64_simd_imm_zero
is actually more general, because it works for floating-point modes too.

I think the tests should cover all modes included in VALL_F16, since
that should have picked up this and the xzr thing.
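
For instance, one of the floating-point modes that the more general
predicate handles, written in the style of such a test (a sketch only; the
function name f is illustrative and the expected little-endian output is in
the comment):

#include <arm_neon.h>

float32x4_t f (float32x4_t v)
{
  v[1] = 0;   /* expected -O2 output: ins v0.s[1], wzr; ret */
  return v;
}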

Thanks,
Richard

>  (define_predicate "subreg_lowpart_operator"
>    (ior (match_code "truncate")
>         (and (match_code "subreg")
  
Prathamesh Kulkarni Jan. 19, 2023, 12:07 p.m. UTC | #4
On Wed, 18 Jan 2023 at 19:59, Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> > On Tue, 17 Jan 2023 at 18:29, Richard Sandiford
> > <richard.sandiford@arm.com> wrote:
> >>
> >> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> >> > Hi Richard,
> >> > For the following (contrived) test:
> >> >
> >> > void foo(int32x4_t v)
> >> > {
> >> >   v[3] = 0;
> >> >   return v;
> >> > }
> >> >
> >> > -O2 code-gen:
> >> > foo:
> >> >         fmov    s1, wzr
> >> >         ins     v0.s[3], v1.s[0]
> >> >         ret
> >> >
> >> > I suppose we can instead emit the following code-gen ?
> >> > foo:
> >> >      ins v0.s[3], wzr
> >> >      ret
> >> >
> >> > combine produces:
> >> > Failed to match this instruction:
> >> > (set (reg:V4SI 95 [ v ])
> >> >     (vec_merge:V4SI (const_vector:V4SI [
> >> >                 (const_int 0 [0]) repeated x4
> >> >             ])
> >> >         (reg:V4SI 97)
> >> >         (const_int 8 [0x8])))
> >> >
> >> > So, I wrote the following pattern to match the above insn:
> >> > (define_insn "aarch64_simd_vec_set_zero<mode>"
> >> >   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >> >         (vec_merge:VALL_F16
> >> >             (match_operand:VALL_F16 1 "const_dup0_operand" "w")
> >> >             (match_operand:VALL_F16 3 "register_operand" "0")
> >> >             (match_operand:SI 2 "immediate_operand" "i")))]
> >> >   "TARGET_SIMD"
> >> >   {
> >> >     int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> >> >     operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> >> >     return "ins\\t%0.<Vetype>[%p2], wzr";
> >> >   }
> >> > )
> >> >
> >> > which now matches the above insn produced by combine.
> >> > However, in reload dump, it creates a new insn for assigning
> >> > register to (const_vector (const_int 0)),
> >> > which results in:
> >> > (insn 19 8 13 2 (set (reg:V4SI 33 v1 [99])
> >> >         (const_vector:V4SI [
> >> >                 (const_int 0 [0]) repeated x4
> >> >             ])) "wzr-test.c":8:1 1269 {*aarch64_simd_movv4si}
> >> >      (nil))
> >> > (insn 13 19 14 2 (set (reg/i:V4SI 32 v0)
> >> >         (vec_merge:V4SI (reg:V4SI 33 v1 [99])
> >> >             (reg:V4SI 32 v0 [97])
> >> >             (const_int 8 [0x8]))) "wzr-test.c":8:1 1808
> >> > {aarch64_simd_vec_set_zerov4si}
> >> >      (nil))
> >> >
> >> > and eventually the code-gen:
> >> > foo:
> >> >         movi    v1.4s, 0
> >> >         ins     v0.s[3], wzr
> >> >         ret
> >> >
> >> > To get rid of redundant assignment of 0 to v1, I tried to split the
> >> > above pattern
> >> > as in the attached patch. This works to emit code-gen:
> >> > foo:
> >> >         ins     v0.s[3], wzr
> >> >         ret
> >> >
> >> > However, I am not sure if this is the right approach. Could you suggest,
> >> > if it'd be possible to get rid of UNSPEC_SETZERO in the patch ?
> >>
> >> The problem is with the "w" constraint on operand 1, which tells LRA
> >> to force the zero into an FPR.  It should work if you remove the
> >> constraint.
> > Ah indeed, sorry about that, changing the constrained works.
>
> "i" isn't right though, because that's for scalar integers.
> There's no need for any constraint here -- the predicate does
> all of the work.
>
> > Does the attached patch look OK after bootstrap+test ?
> > Since we're in stage-4, shall it be OK to commit now, or queue it for stage-1 ?
>
> It needs tests as well. :-)
>
> Also:
>
> > Thanks,
> > Prathamesh
> >
> >
> >>
> >> Also, I think you'll need to use <vwcore>zr for the zero, so that
> >> it uses xzr for 64-bit elements.
> >>
> >> I think this and the existing patterns ought to test
> >> exact_log2 (INTVAL (operands[2])) >= 0 in the insn condition,
> >> since there's no guarantee that RTL optimisations won't form
> >> vec_merges that have other masks.
> >>
> >> Thanks,
> >> Richard
> >
> > [aarch64] Use wzr/xzr for assigning 0 to vector element.
> >
> > gcc/ChangeLog:
> >       * config/aaarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
> >       New pattern.
> >       * config/aarch64/predicates.md (const_dup0_operand): New.
> >
> > diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> > index 104088f67d2..8e54ee4e886 100644
> > --- a/gcc/config/aarch64/aarch64-simd.md
> > +++ b/gcc/config/aarch64/aarch64-simd.md
> > @@ -1083,6 +1083,20 @@
> >    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
> >  )
> >
> > +(define_insn "aarch64_simd_vec_set_zero<mode>"
> > +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> > +     (vec_merge:VALL_F16
> > +         (match_operand:VALL_F16 1 "const_dup0_operand" "i")
> > +         (match_operand:VALL_F16 3 "register_operand" "0")
> > +         (match_operand:SI 2 "immediate_operand" "i")))]
> > +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
> > +  {
> > +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> > +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> > +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
> > +  }
> > +)
> > +
> >  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
> >    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >       (vec_merge:VALL_F16
> > diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
> > index ff7f73d3f30..901fa1bd7f9 100644
> > --- a/gcc/config/aarch64/predicates.md
> > +++ b/gcc/config/aarch64/predicates.md
> > @@ -49,6 +49,13 @@
> >    return CONST_INT_P (op) && IN_RANGE (INTVAL (op), 1, 3);
> >  })
> >
> > +(define_predicate "const_dup0_operand"
> > +  (match_code "const_vector")
> > +{
> > +  op = unwrap_const_vec_duplicate (op);
> > +  return CONST_INT_P (op) && rtx_equal_p (op, const0_rtx);
> > +})
> > +
>
> We already have aarch64_simd_imm_zero for this.  aarch64_simd_imm_zero
> is actually more general, because it works for floating-point modes too.
>
> I think the tests should cover all modes included in VALL_F16, since
> that should have picked up this and the xzr thing.
Hi Richard,
Thanks for the suggestions. Does the attached patch look OK?
I am not sure how to test v4bf and v8bf, since the compiler seems to
refuse conversions to/from bfloat16_t.

Thanks,
Prathamesh

>
> Thanks,
> Richard
>
> >  (define_predicate "subreg_lowpart_operator"
> >    (ior (match_code "truncate")
> >         (and (match_code "subreg")
[aarch64] Use wzr/xzr for assigning 0 to vector element.

gcc/ChangeLog:
	* config/aarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
	New pattern.

gcc/testsuite/ChangeLog:
	* gcc.target/aarch64/vec-set-zero.c: New test.

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 7f212bf37cd..7428e74beaf 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -1083,6 +1083,20 @@
   [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
 )
 
+(define_insn "aarch64_simd_vec_set_zero<mode>"
+  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
+	(vec_merge:VALL_F16
+	    (match_operand:VALL_F16 1 "aarch64_simd_imm_zero" "")
+	    (match_operand:VALL_F16 3 "register_operand" "0")
+	    (match_operand:SI 2 "immediate_operand" "i")))]
+  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
+  {
+    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
+    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
+    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
+  }
+)
+
 (define_insn "@aarch64_simd_vec_copy_lane<mode>"
   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
 	(vec_merge:VALL_F16
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
new file mode 100644
index 00000000000..c260cc9e445
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
@@ -0,0 +1,32 @@
+/* { dg-do compile } */
+/* { dg-options "-O2" } */
+
+#include "arm_neon.h"
+
+#define FOO(type) \
+type f_##type(type v) \
+{ \
+  v[1] = 0; \
+  return v; \
+}
+
+FOO(int8x8_t)
+FOO(int16x4_t)
+FOO(int32x2_t)
+
+FOO(int8x16_t)
+FOO(int16x8_t)
+FOO(int32x4_t)
+FOO(int64x2_t)
+
+FOO(float16x4_t)
+FOO(float32x2_t)
+
+FOO(float16x8_t)
+FOO(float32x4_t)
+FOO(float64x2_t)
+
+/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.b\\\[\[1\]\\\], wzr" 2 } } */
+/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.h\\\[\[1\]\\\], wzr" 4 } } */
+/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.s\\\[\[1\]\\\], wzr" 4 } } */
+/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.d\\\[\[1\]\\\], xzr" 2 } } */
  
Richard Sandiford Jan. 23, 2023, 4:56 p.m. UTC | #5
Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> On Wed, 18 Jan 2023 at 19:59, Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>>
>> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
>> > On Tue, 17 Jan 2023 at 18:29, Richard Sandiford
>> > <richard.sandiford@arm.com> wrote:
>> >>
>> >> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
>> >> > Hi Richard,
>> >> > For the following (contrived) test:
>> >> >
>> >> > void foo(int32x4_t v)
>> >> > {
>> >> >   v[3] = 0;
>> >> >   return v;
>> >> > }
>> >> >
>> >> > -O2 code-gen:
>> >> > foo:
>> >> >         fmov    s1, wzr
>> >> >         ins     v0.s[3], v1.s[0]
>> >> >         ret
>> >> >
>> >> > I suppose we can instead emit the following code-gen ?
>> >> > foo:
>> >> >      ins v0.s[3], wzr
>> >> >      ret
>> >> >
>> >> > combine produces:
>> >> > Failed to match this instruction:
>> >> > (set (reg:V4SI 95 [ v ])
>> >> >     (vec_merge:V4SI (const_vector:V4SI [
>> >> >                 (const_int 0 [0]) repeated x4
>> >> >             ])
>> >> >         (reg:V4SI 97)
>> >> >         (const_int 8 [0x8])))
>> >> >
>> >> > So, I wrote the following pattern to match the above insn:
>> >> > (define_insn "aarch64_simd_vec_set_zero<mode>"
>> >> >   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>> >> >         (vec_merge:VALL_F16
>> >> >             (match_operand:VALL_F16 1 "const_dup0_operand" "w")
>> >> >             (match_operand:VALL_F16 3 "register_operand" "0")
>> >> >             (match_operand:SI 2 "immediate_operand" "i")))]
>> >> >   "TARGET_SIMD"
>> >> >   {
>> >> >     int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
>> >> >     operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
>> >> >     return "ins\\t%0.<Vetype>[%p2], wzr";
>> >> >   }
>> >> > )
>> >> >
>> >> > which now matches the above insn produced by combine.
>> >> > However, in reload dump, it creates a new insn for assigning
>> >> > register to (const_vector (const_int 0)),
>> >> > which results in:
>> >> > (insn 19 8 13 2 (set (reg:V4SI 33 v1 [99])
>> >> >         (const_vector:V4SI [
>> >> >                 (const_int 0 [0]) repeated x4
>> >> >             ])) "wzr-test.c":8:1 1269 {*aarch64_simd_movv4si}
>> >> >      (nil))
>> >> > (insn 13 19 14 2 (set (reg/i:V4SI 32 v0)
>> >> >         (vec_merge:V4SI (reg:V4SI 33 v1 [99])
>> >> >             (reg:V4SI 32 v0 [97])
>> >> >             (const_int 8 [0x8]))) "wzr-test.c":8:1 1808
>> >> > {aarch64_simd_vec_set_zerov4si}
>> >> >      (nil))
>> >> >
>> >> > and eventually the code-gen:
>> >> > foo:
>> >> >         movi    v1.4s, 0
>> >> >         ins     v0.s[3], wzr
>> >> >         ret
>> >> >
>> >> > To get rid of redundant assignment of 0 to v1, I tried to split the
>> >> > above pattern
>> >> > as in the attached patch. This works to emit code-gen:
>> >> > foo:
>> >> >         ins     v0.s[3], wzr
>> >> >         ret
>> >> >
>> >> > However, I am not sure if this is the right approach. Could you suggest,
>> >> > if it'd be possible to get rid of UNSPEC_SETZERO in the patch ?
>> >>
>> >> The problem is with the "w" constraint on operand 1, which tells LRA
>> >> to force the zero into an FPR.  It should work if you remove the
>> >> constraint.
>> > Ah indeed, sorry about that, changing the constrained works.
>>
>> "i" isn't right though, because that's for scalar integers.
>> There's no need for any constraint here -- the predicate does
>> all of the work.
>>
>> > Does the attached patch look OK after bootstrap+test ?
>> > Since we're in stage-4, shall it be OK to commit now, or queue it for stage-1 ?
>>
>> It needs tests as well. :-)
>>
>> Also:
>>
>> > Thanks,
>> > Prathamesh
>> >
>> >
>> >>
>> >> Also, I think you'll need to use <vwcore>zr for the zero, so that
>> >> it uses xzr for 64-bit elements.
>> >>
>> >> I think this and the existing patterns ought to test
>> >> exact_log2 (INTVAL (operands[2])) >= 0 in the insn condition,
>> >> since there's no guarantee that RTL optimisations won't form
>> >> vec_merges that have other masks.
>> >>
>> >> Thanks,
>> >> Richard
>> >
>> > [aarch64] Use wzr/xzr for assigning 0 to vector element.
>> >
>> > gcc/ChangeLog:
>> >       * config/aaarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
>> >       New pattern.
>> >       * config/aarch64/predicates.md (const_dup0_operand): New.
>> >
>> > diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
>> > index 104088f67d2..8e54ee4e886 100644
>> > --- a/gcc/config/aarch64/aarch64-simd.md
>> > +++ b/gcc/config/aarch64/aarch64-simd.md
>> > @@ -1083,6 +1083,20 @@
>> >    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
>> >  )
>> >
>> > +(define_insn "aarch64_simd_vec_set_zero<mode>"
>> > +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>> > +     (vec_merge:VALL_F16
>> > +         (match_operand:VALL_F16 1 "const_dup0_operand" "i")
>> > +         (match_operand:VALL_F16 3 "register_operand" "0")
>> > +         (match_operand:SI 2 "immediate_operand" "i")))]
>> > +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
>> > +  {
>> > +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
>> > +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
>> > +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
>> > +  }
>> > +)
>> > +
>> >  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
>> >    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>> >       (vec_merge:VALL_F16
>> > diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
>> > index ff7f73d3f30..901fa1bd7f9 100644
>> > --- a/gcc/config/aarch64/predicates.md
>> > +++ b/gcc/config/aarch64/predicates.md
>> > @@ -49,6 +49,13 @@
>> >    return CONST_INT_P (op) && IN_RANGE (INTVAL (op), 1, 3);
>> >  })
>> >
>> > +(define_predicate "const_dup0_operand"
>> > +  (match_code "const_vector")
>> > +{
>> > +  op = unwrap_const_vec_duplicate (op);
>> > +  return CONST_INT_P (op) && rtx_equal_p (op, const0_rtx);
>> > +})
>> > +
>>
>> We already have aarch64_simd_imm_zero for this.  aarch64_simd_imm_zero
>> is actually more general, because it works for floating-point modes too.
>>
>> I think the tests should cover all modes included in VALL_F16, since
>> that should have picked up this and the xzr thing.
> Hi Richard,
> Thanks for the suggestions. Does the attached patch look OK ?
> I am not sure how to test for v4bf and v8bf since it seems the compiler
> refuses conversions to/from bfloat16_t ?
>
> Thanks,
> Prathamesh
>
>>
>> Thanks,
>> Richard
>>
>> >  (define_predicate "subreg_lowpart_operator"
>> >    (ior (match_code "truncate")
>> >         (and (match_code "subreg")
>
> [aarch64] Use wzr/xzr for assigning 0 to vector element.
>
> gcc/ChangeLog:
> 	* config/aaarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
> 	New pattern.
>
> gcc/testsuite/ChangeLog:
> 	* gcc.target/aarch64/vec-set-zero.c: New test.
>
> diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> index 7f212bf37cd..7428e74beaf 100644
> --- a/gcc/config/aarch64/aarch64-simd.md
> +++ b/gcc/config/aarch64/aarch64-simd.md
> @@ -1083,6 +1083,20 @@
>    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
>  )
>  
> +(define_insn "aarch64_simd_vec_set_zero<mode>"
> +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> +	(vec_merge:VALL_F16
> +	    (match_operand:VALL_F16 1 "aarch64_simd_imm_zero" "")
> +	    (match_operand:VALL_F16 3 "register_operand" "0")
> +	    (match_operand:SI 2 "immediate_operand" "i")))]
> +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
> +  {
> +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
> +  }
> +)
> +
>  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
>    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>  	(vec_merge:VALL_F16
> diff --git a/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
> new file mode 100644
> index 00000000000..c260cc9e445
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
> @@ -0,0 +1,32 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2" } */
> +
> +#include "arm_neon.h"
> +
> +#define FOO(type) \
> +type f_##type(type v) \
> +{ \
> +  v[1] = 0; \
> +  return v; \
> +}
> +
> +FOO(int8x8_t)
> +FOO(int16x4_t)
> +FOO(int32x2_t)
> +
> +FOO(int8x16_t)
> +FOO(int16x8_t)
> +FOO(int32x4_t)
> +FOO(int64x2_t)
> +
> +FOO(float16x4_t)
> +FOO(float32x2_t)
> +
> +FOO(float16x8_t)
> +FOO(float32x4_t)
> +FOO(float64x2_t)
> +
> +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.b\\\[\[1\]\\\], wzr" 2 } } */
> +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.h\\\[\[1\]\\\], wzr" 4 } } */
> +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.s\\\[\[1\]\\\], wzr" 4 } } */
> +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.d\\\[\[1\]\\\], xzr" 2 } } */

Can you test big-endian too?  I'd expect it to use different INS indices.

It might be worth quoting the regexps with {...} rather than "...",
to reduce the number of backslashes needed.

Thanks,
Richard
  
Prathamesh Kulkarni Jan. 25, 2023, 11:56 a.m. UTC | #6
On Mon, 23 Jan 2023 at 22:26, Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> > On Wed, 18 Jan 2023 at 19:59, Richard Sandiford
> > <richard.sandiford@arm.com> wrote:
> >>
> >> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> >> > On Tue, 17 Jan 2023 at 18:29, Richard Sandiford
> >> > <richard.sandiford@arm.com> wrote:
> >> >>
> >> >> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> >> >> > Hi Richard,
> >> >> > For the following (contrived) test:
> >> >> >
> >> >> > void foo(int32x4_t v)
> >> >> > {
> >> >> >   v[3] = 0;
> >> >> >   return v;
> >> >> > }
> >> >> >
> >> >> > -O2 code-gen:
> >> >> > foo:
> >> >> >         fmov    s1, wzr
> >> >> >         ins     v0.s[3], v1.s[0]
> >> >> >         ret
> >> >> >
> >> >> > I suppose we can instead emit the following code-gen ?
> >> >> > foo:
> >> >> >      ins v0.s[3], wzr
> >> >> >      ret
> >> >> >
> >> >> > combine produces:
> >> >> > Failed to match this instruction:
> >> >> > (set (reg:V4SI 95 [ v ])
> >> >> >     (vec_merge:V4SI (const_vector:V4SI [
> >> >> >                 (const_int 0 [0]) repeated x4
> >> >> >             ])
> >> >> >         (reg:V4SI 97)
> >> >> >         (const_int 8 [0x8])))
> >> >> >
> >> >> > So, I wrote the following pattern to match the above insn:
> >> >> > (define_insn "aarch64_simd_vec_set_zero<mode>"
> >> >> >   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >> >> >         (vec_merge:VALL_F16
> >> >> >             (match_operand:VALL_F16 1 "const_dup0_operand" "w")
> >> >> >             (match_operand:VALL_F16 3 "register_operand" "0")
> >> >> >             (match_operand:SI 2 "immediate_operand" "i")))]
> >> >> >   "TARGET_SIMD"
> >> >> >   {
> >> >> >     int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> >> >> >     operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> >> >> >     return "ins\\t%0.<Vetype>[%p2], wzr";
> >> >> >   }
> >> >> > )
> >> >> >
> >> >> > which now matches the above insn produced by combine.
> >> >> > However, in reload dump, it creates a new insn for assigning
> >> >> > register to (const_vector (const_int 0)),
> >> >> > which results in:
> >> >> > (insn 19 8 13 2 (set (reg:V4SI 33 v1 [99])
> >> >> >         (const_vector:V4SI [
> >> >> >                 (const_int 0 [0]) repeated x4
> >> >> >             ])) "wzr-test.c":8:1 1269 {*aarch64_simd_movv4si}
> >> >> >      (nil))
> >> >> > (insn 13 19 14 2 (set (reg/i:V4SI 32 v0)
> >> >> >         (vec_merge:V4SI (reg:V4SI 33 v1 [99])
> >> >> >             (reg:V4SI 32 v0 [97])
> >> >> >             (const_int 8 [0x8]))) "wzr-test.c":8:1 1808
> >> >> > {aarch64_simd_vec_set_zerov4si}
> >> >> >      (nil))
> >> >> >
> >> >> > and eventually the code-gen:
> >> >> > foo:
> >> >> >         movi    v1.4s, 0
> >> >> >         ins     v0.s[3], wzr
> >> >> >         ret
> >> >> >
> >> >> > To get rid of redundant assignment of 0 to v1, I tried to split the
> >> >> > above pattern
> >> >> > as in the attached patch. This works to emit code-gen:
> >> >> > foo:
> >> >> >         ins     v0.s[3], wzr
> >> >> >         ret
> >> >> >
> >> >> > However, I am not sure if this is the right approach. Could you suggest,
> >> >> > if it'd be possible to get rid of UNSPEC_SETZERO in the patch ?
> >> >>
> >> >> The problem is with the "w" constraint on operand 1, which tells LRA
> >> >> to force the zero into an FPR.  It should work if you remove the
> >> >> constraint.
> >> > Ah indeed, sorry about that, changing the constrained works.
> >>
> >> "i" isn't right though, because that's for scalar integers.
> >> There's no need for any constraint here -- the predicate does
> >> all of the work.
> >>
> >> > Does the attached patch look OK after bootstrap+test ?
> >> > Since we're in stage-4, shall it be OK to commit now, or queue it for stage-1 ?
> >>
> >> It needs tests as well. :-)
> >>
> >> Also:
> >>
> >> > Thanks,
> >> > Prathamesh
> >> >
> >> >
> >> >>
> >> >> Also, I think you'll need to use <vwcore>zr for the zero, so that
> >> >> it uses xzr for 64-bit elements.
> >> >>
> >> >> I think this and the existing patterns ought to test
> >> >> exact_log2 (INTVAL (operands[2])) >= 0 in the insn condition,
> >> >> since there's no guarantee that RTL optimisations won't form
> >> >> vec_merges that have other masks.
> >> >>
> >> >> Thanks,
> >> >> Richard
> >> >
> >> > [aarch64] Use wzr/xzr for assigning 0 to vector element.
> >> >
> >> > gcc/ChangeLog:
> >> >       * config/aaarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
> >> >       New pattern.
> >> >       * config/aarch64/predicates.md (const_dup0_operand): New.
> >> >
> >> > diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> >> > index 104088f67d2..8e54ee4e886 100644
> >> > --- a/gcc/config/aarch64/aarch64-simd.md
> >> > +++ b/gcc/config/aarch64/aarch64-simd.md
> >> > @@ -1083,6 +1083,20 @@
> >> >    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
> >> >  )
> >> >
> >> > +(define_insn "aarch64_simd_vec_set_zero<mode>"
> >> > +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >> > +     (vec_merge:VALL_F16
> >> > +         (match_operand:VALL_F16 1 "const_dup0_operand" "i")
> >> > +         (match_operand:VALL_F16 3 "register_operand" "0")
> >> > +         (match_operand:SI 2 "immediate_operand" "i")))]
> >> > +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
> >> > +  {
> >> > +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> >> > +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> >> > +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
> >> > +  }
> >> > +)
> >> > +
> >> >  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
> >> >    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >> >       (vec_merge:VALL_F16
> >> > diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
> >> > index ff7f73d3f30..901fa1bd7f9 100644
> >> > --- a/gcc/config/aarch64/predicates.md
> >> > +++ b/gcc/config/aarch64/predicates.md
> >> > @@ -49,6 +49,13 @@
> >> >    return CONST_INT_P (op) && IN_RANGE (INTVAL (op), 1, 3);
> >> >  })
> >> >
> >> > +(define_predicate "const_dup0_operand"
> >> > +  (match_code "const_vector")
> >> > +{
> >> > +  op = unwrap_const_vec_duplicate (op);
> >> > +  return CONST_INT_P (op) && rtx_equal_p (op, const0_rtx);
> >> > +})
> >> > +
> >>
> >> We already have aarch64_simd_imm_zero for this.  aarch64_simd_imm_zero
> >> is actually more general, because it works for floating-point modes too.
> >>
> >> I think the tests should cover all modes included in VALL_F16, since
> >> that should have picked up this and the xzr thing.
> > Hi Richard,
> > Thanks for the suggestions. Does the attached patch look OK ?
> > I am not sure how to test for v4bf and v8bf since it seems the compiler
> > refuses conversions to/from bfloat16_t ?
> >
> > Thanks,
> > Prathamesh
> >
> >>
> >> Thanks,
> >> Richard
> >>
> >> >  (define_predicate "subreg_lowpart_operator"
> >> >    (ior (match_code "truncate")
> >> >         (and (match_code "subreg")
> >
> > [aarch64] Use wzr/xzr for assigning 0 to vector element.
> >
> > gcc/ChangeLog:
> >       * config/aaarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
> >       New pattern.
> >
> > gcc/testsuite/ChangeLog:
> >       * gcc.target/aarch64/vec-set-zero.c: New test.
> >
> > diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> > index 7f212bf37cd..7428e74beaf 100644
> > --- a/gcc/config/aarch64/aarch64-simd.md
> > +++ b/gcc/config/aarch64/aarch64-simd.md
> > @@ -1083,6 +1083,20 @@
> >    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
> >  )
> >
> > +(define_insn "aarch64_simd_vec_set_zero<mode>"
> > +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> > +     (vec_merge:VALL_F16
> > +         (match_operand:VALL_F16 1 "aarch64_simd_imm_zero" "")
> > +         (match_operand:VALL_F16 3 "register_operand" "0")
> > +         (match_operand:SI 2 "immediate_operand" "i")))]
> > +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
> > +  {
> > +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> > +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> > +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
> > +  }
> > +)
> > +
> >  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
> >    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >       (vec_merge:VALL_F16
> > diff --git a/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
> > new file mode 100644
> > index 00000000000..c260cc9e445
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
> > @@ -0,0 +1,32 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-O2" } */
> > +
> > +#include "arm_neon.h"
> > +
> > +#define FOO(type) \
> > +type f_##type(type v) \
> > +{ \
> > +  v[1] = 0; \
> > +  return v; \
> > +}
> > +
> > +FOO(int8x8_t)
> > +FOO(int16x4_t)
> > +FOO(int32x2_t)
> > +
> > +FOO(int8x16_t)
> > +FOO(int16x8_t)
> > +FOO(int32x4_t)
> > +FOO(int64x2_t)
> > +
> > +FOO(float16x4_t)
> > +FOO(float32x2_t)
> > +
> > +FOO(float16x8_t)
> > +FOO(float32x4_t)
> > +FOO(float64x2_t)
> > +
> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.b\\\[\[1\]\\\], wzr" 2 } } */
> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.h\\\[\[1\]\\\], wzr" 4 } } */
> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.s\\\[\[1\]\\\], wzr" 4 } } */
> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.d\\\[\[1\]\\\], xzr" 2 } } */
>
> Can you test big-endian too?  I'd expect it to use different INS indices.
Ah indeed, thanks for pointing that out.
>
> It might be worth quoting the regexps with {...} rather than "...",
> to reduce the number of backslashes needed.
Does the attached patch look OK ?

Thanks,
Prathamesh
>
> Thanks,
> Richard
[aarch64] Use wzr/xzr for assigning 0 to vector element.

gcc/ChangeLog:
	* config/aarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
	New pattern.

gcc/testsuite/ChangeLog:
	* gcc.target/aarch64/vec-set-zero.c: New test.

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 7f212bf37cd..7428e74beaf 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -1083,6 +1083,20 @@
   [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
 )
 
+(define_insn "aarch64_simd_vec_set_zero<mode>"
+  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
+	(vec_merge:VALL_F16
+	    (match_operand:VALL_F16 1 "aarch64_simd_imm_zero" "")
+	    (match_operand:VALL_F16 3 "register_operand" "0")
+	    (match_operand:SI 2 "immediate_operand" "i")))]
+  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
+  {
+    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
+    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
+    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
+  }
+)
+
 (define_insn "@aarch64_simd_vec_copy_lane<mode>"
   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
 	(vec_merge:VALL_F16
diff --git a/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
new file mode 100644
index 00000000000..b34b902cf27
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
@@ -0,0 +1,40 @@
+/* { dg-do compile } */
+/* { dg-options "-O2" } */
+
+#include "arm_neon.h"
+
+#define FOO(type) \
+type f_##type(type v) \
+{ \
+  v[1] = 0; \
+  return v; \
+}
+
+FOO(int8x8_t)
+FOO(int16x4_t)
+FOO(int32x2_t)
+
+FOO(int8x16_t)
+FOO(int16x8_t)
+FOO(int32x4_t)
+FOO(int64x2_t)
+
+FOO(float16x4_t)
+FOO(float32x2_t)
+
+FOO(float16x8_t)
+FOO(float32x4_t)
+FOO(float64x2_t)
+
+/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.b\[1\], wzr} 2 { target aarch64_little_endian } } } */
+/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.h\[1\], wzr} 4 { target aarch64_little_endian } } } */
+/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.s\[1\], wzr} 4 { target aarch64_little_endian } } } */
+/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.d\[1\], xzr} 2 { target aarch64_little_endian } } } */
+
+/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.b\[6\], wzr} 1 { target aarch64_big_endian } } } */
+/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.b\[14\], wzr} 1 { target aarch64_big_endian } } } */
+/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.h\[2\], wzr} 2 { target aarch64_big_endian } } } */
+/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.h\[6\], wzr} 2 { target aarch64_big_endian } } } */
+/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.s\[0\], wzr} 2 { target aarch64_big_endian } } } */
+/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.s\[2\], wzr} 2 { target aarch64_big_endian } } } */
+/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.d\[0\], xzr} 2 { target aarch64_big_endian } } } */
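
The big-endian lane indices above follow from ENDIAN_LANE_N reversing the
lane number on big-endian targets (nunits - 1 - n, as aarch64 defines it).
A minimal standalone sketch checking a few of the values used in the scans
(an illustration, not part of the patch; endian_lane_n is a stand-in for
the ENDIAN_LANE_N macro):

#include <assert.h>

static int endian_lane_n (int nunits, int n, int big_endian)
{
  return big_endian ? nunits - 1 - n : n;
}

int main (void)
{
  /* v[1] = 0 for each type in the test, big-endian architectural lanes.  */
  assert (endian_lane_n (8, 1, 1) == 6);    /* int8x8_t   -> b[6]  */
  assert (endian_lane_n (16, 1, 1) == 14);  /* int8x16_t  -> b[14] */
  assert (endian_lane_n (4, 1, 1) == 2);    /* int16x4_t  -> h[2]  */
  assert (endian_lane_n (2, 1, 1) == 0);    /* int32x2_t  -> s[0]  */
  return 0;
}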
  
Richard Sandiford Jan. 31, 2023, 6:21 a.m. UTC | #7
Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> On Mon, 23 Jan 2023 at 22:26, Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>>
>> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
>> > On Wed, 18 Jan 2023 at 19:59, Richard Sandiford
>> > <richard.sandiford@arm.com> wrote:
>> >>
>> >> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
>> >> > On Tue, 17 Jan 2023 at 18:29, Richard Sandiford
>> >> > <richard.sandiford@arm.com> wrote:
>> >> >>
>> >> >> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
>> >> >> > Hi Richard,
>> >> >> > For the following (contrived) test:
>> >> >> >
>> >> >> > void foo(int32x4_t v)
>> >> >> > {
>> >> >> >   v[3] = 0;
>> >> >> >   return v;
>> >> >> > }
>> >> >> >
>> >> >> > -O2 code-gen:
>> >> >> > foo:
>> >> >> >         fmov    s1, wzr
>> >> >> >         ins     v0.s[3], v1.s[0]
>> >> >> >         ret
>> >> >> >
>> >> >> > I suppose we can instead emit the following code-gen ?
>> >> >> > foo:
>> >> >> >      ins v0.s[3], wzr
>> >> >> >      ret
>> >> >> >
>> >> >> > combine produces:
>> >> >> > Failed to match this instruction:
>> >> >> > (set (reg:V4SI 95 [ v ])
>> >> >> >     (vec_merge:V4SI (const_vector:V4SI [
>> >> >> >                 (const_int 0 [0]) repeated x4
>> >> >> >             ])
>> >> >> >         (reg:V4SI 97)
>> >> >> >         (const_int 8 [0x8])))
>> >> >> >
>> >> >> > So, I wrote the following pattern to match the above insn:
>> >> >> > (define_insn "aarch64_simd_vec_set_zero<mode>"
>> >> >> >   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>> >> >> >         (vec_merge:VALL_F16
>> >> >> >             (match_operand:VALL_F16 1 "const_dup0_operand" "w")
>> >> >> >             (match_operand:VALL_F16 3 "register_operand" "0")
>> >> >> >             (match_operand:SI 2 "immediate_operand" "i")))]
>> >> >> >   "TARGET_SIMD"
>> >> >> >   {
>> >> >> >     int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
>> >> >> >     operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
>> >> >> >     return "ins\\t%0.<Vetype>[%p2], wzr";
>> >> >> >   }
>> >> >> > )
>> >> >> >
>> >> >> > which now matches the above insn produced by combine.
>> >> >> > However, in reload dump, it creates a new insn for assigning
>> >> >> > register to (const_vector (const_int 0)),
>> >> >> > which results in:
>> >> >> > (insn 19 8 13 2 (set (reg:V4SI 33 v1 [99])
>> >> >> >         (const_vector:V4SI [
>> >> >> >                 (const_int 0 [0]) repeated x4
>> >> >> >             ])) "wzr-test.c":8:1 1269 {*aarch64_simd_movv4si}
>> >> >> >      (nil))
>> >> >> > (insn 13 19 14 2 (set (reg/i:V4SI 32 v0)
>> >> >> >         (vec_merge:V4SI (reg:V4SI 33 v1 [99])
>> >> >> >             (reg:V4SI 32 v0 [97])
>> >> >> >             (const_int 8 [0x8]))) "wzr-test.c":8:1 1808
>> >> >> > {aarch64_simd_vec_set_zerov4si}
>> >> >> >      (nil))
>> >> >> >
>> >> >> > and eventually the code-gen:
>> >> >> > foo:
>> >> >> >         movi    v1.4s, 0
>> >> >> >         ins     v0.s[3], wzr
>> >> >> >         ret
>> >> >> >
>> >> >> > To get rid of redundant assignment of 0 to v1, I tried to split the
>> >> >> > above pattern
>> >> >> > as in the attached patch. This works to emit code-gen:
>> >> >> > foo:
>> >> >> >         ins     v0.s[3], wzr
>> >> >> >         ret
>> >> >> >
>> >> >> > However, I am not sure if this is the right approach. Could you suggest,
>> >> >> > if it'd be possible to get rid of UNSPEC_SETZERO in the patch ?
>> >> >>
>> >> >> The problem is with the "w" constraint on operand 1, which tells LRA
>> >> >> to force the zero into an FPR.  It should work if you remove the
>> >> >> constraint.
>> >> > Ah indeed, sorry about that, changing the constrained works.
>> >>
>> >> "i" isn't right though, because that's for scalar integers.
>> >> There's no need for any constraint here -- the predicate does
>> >> all of the work.
>> >>
>> >> > Does the attached patch look OK after bootstrap+test ?
>> >> > Since we're in stage-4, shall it be OK to commit now, or queue it for stage-1 ?
>> >>
>> >> It needs tests as well. :-)
>> >>
>> >> Also:
>> >>
>> >> > Thanks,
>> >> > Prathamesh
>> >> >
>> >> >
>> >> >>
>> >> >> Also, I think you'll need to use <vwcore>zr for the zero, so that
>> >> >> it uses xzr for 64-bit elements.
>> >> >>
>> >> >> I think this and the existing patterns ought to test
>> >> >> exact_log2 (INTVAL (operands[2])) >= 0 in the insn condition,
>> >> >> since there's no guarantee that RTL optimisations won't form
>> >> >> vec_merges that have other masks.
>> >> >>
>> >> >> Thanks,
>> >> >> Richard
>> >> >
>> >> > [aarch64] Use wzr/xzr for assigning 0 to vector element.
>> >> >
>> >> > gcc/ChangeLog:
>> >> >       * config/aaarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
>> >> >       New pattern.
>> >> >       * config/aarch64/predicates.md (const_dup0_operand): New.
>> >> >
>> >> > diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
>> >> > index 104088f67d2..8e54ee4e886 100644
>> >> > --- a/gcc/config/aarch64/aarch64-simd.md
>> >> > +++ b/gcc/config/aarch64/aarch64-simd.md
>> >> > @@ -1083,6 +1083,20 @@
>> >> >    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
>> >> >  )
>> >> >
>> >> > +(define_insn "aarch64_simd_vec_set_zero<mode>"
>> >> > +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>> >> > +     (vec_merge:VALL_F16
>> >> > +         (match_operand:VALL_F16 1 "const_dup0_operand" "i")
>> >> > +         (match_operand:VALL_F16 3 "register_operand" "0")
>> >> > +         (match_operand:SI 2 "immediate_operand" "i")))]
>> >> > +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
>> >> > +  {
>> >> > +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
>> >> > +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
>> >> > +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
>> >> > +  }
>> >> > +)
>> >> > +
>> >> >  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
>> >> >    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>> >> >       (vec_merge:VALL_F16
>> >> > diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
>> >> > index ff7f73d3f30..901fa1bd7f9 100644
>> >> > --- a/gcc/config/aarch64/predicates.md
>> >> > +++ b/gcc/config/aarch64/predicates.md
>> >> > @@ -49,6 +49,13 @@
>> >> >    return CONST_INT_P (op) && IN_RANGE (INTVAL (op), 1, 3);
>> >> >  })
>> >> >
>> >> > +(define_predicate "const_dup0_operand"
>> >> > +  (match_code "const_vector")
>> >> > +{
>> >> > +  op = unwrap_const_vec_duplicate (op);
>> >> > +  return CONST_INT_P (op) && rtx_equal_p (op, const0_rtx);
>> >> > +})
>> >> > +
>> >>
>> >> We already have aarch64_simd_imm_zero for this.  aarch64_simd_imm_zero
>> >> is actually more general, because it works for floating-point modes too.
>> >>
>> >> I think the tests should cover all modes included in VALL_F16, since
>> >> that should have picked up this and the xzr thing.
>> > Hi Richard,
>> > Thanks for the suggestions. Does the attached patch look OK ?
>> > I am not sure how to test for v4bf and v8bf since it seems the compiler
>> > refuses conversions to/from bfloat16_t ?
>> >
>> > Thanks,
>> > Prathamesh
>> >
>> >>
>> >> Thanks,
>> >> Richard
>> >>
>> >> >  (define_predicate "subreg_lowpart_operator"
>> >> >    (ior (match_code "truncate")
>> >> >         (and (match_code "subreg")
>> >
>> > [aarch64] Use wzr/xzr for assigning 0 to vector element.
>> >
>> > gcc/ChangeLog:
>> >       * config/aaarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
>> >       New pattern.
>> >
>> > gcc/testsuite/ChangeLog:
>> >       * gcc.target/aarch64/vec-set-zero.c: New test.
>> >
>> > diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
>> > index 7f212bf37cd..7428e74beaf 100644
>> > --- a/gcc/config/aarch64/aarch64-simd.md
>> > +++ b/gcc/config/aarch64/aarch64-simd.md
>> > @@ -1083,6 +1083,20 @@
>> >    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
>> >  )
>> >
>> > +(define_insn "aarch64_simd_vec_set_zero<mode>"
>> > +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>> > +     (vec_merge:VALL_F16
>> > +         (match_operand:VALL_F16 1 "aarch64_simd_imm_zero" "")
>> > +         (match_operand:VALL_F16 3 "register_operand" "0")
>> > +         (match_operand:SI 2 "immediate_operand" "i")))]
>> > +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
>> > +  {
>> > +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
>> > +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
>> > +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
>> > +  }
>> > +)
>> > +
>> >  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
>> >    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>> >       (vec_merge:VALL_F16
>> > diff --git a/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
>> > new file mode 100644
>> > index 00000000000..c260cc9e445
>> > --- /dev/null
>> > +++ b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
>> > @@ -0,0 +1,32 @@
>> > +/* { dg-do compile } */
>> > +/* { dg-options "-O2" } */
>> > +
>> > +#include "arm_neon.h"
>> > +
>> > +#define FOO(type) \
>> > +type f_##type(type v) \
>> > +{ \
>> > +  v[1] = 0; \
>> > +  return v; \
>> > +}
>> > +
>> > +FOO(int8x8_t)
>> > +FOO(int16x4_t)
>> > +FOO(int32x2_t)
>> > +
>> > +FOO(int8x16_t)
>> > +FOO(int16x8_t)
>> > +FOO(int32x4_t)
>> > +FOO(int64x2_t)
>> > +
>> > +FOO(float16x4_t)
>> > +FOO(float32x2_t)
>> > +
>> > +FOO(float16x8_t)
>> > +FOO(float32x4_t)
>> > +FOO(float64x2_t)
>> > +
>> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.b\\\[\[1\]\\\], wzr" 2 } } */
>> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.h\\\[\[1\]\\\], wzr" 4 } } */
>> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.s\\\[\[1\]\\\], wzr" 4 } } */
>> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.d\\\[\[1\]\\\], xzr" 2 } } */
>>
>> Can you test big-endian too?  I'd expect it to use different INS indices.
> Ah indeed, thanks for pointing that out.
>>
>> It might be worth quoting the regexps with {...} rather than "...",
>> to reduce the number of backslashes needed.
> Does the attached patch look OK?

Yeah, OK for GCC 14, thanks.

Richard

>
> Thanks,
> Prathamesh
>>
>> Thanks,
>> Richard
>
> [aarch64] Use wzr/xzr for assigning 0 to vector element.
>
> gcc/ChangeLog:
> 	* config/aarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
> 	New pattern.
>
> gcc/testsuite/ChangeLog:
> 	* gcc.target/aarch64/vec-set-zero.c: New test.
>
> diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> index 7f212bf37cd..7428e74beaf 100644
> --- a/gcc/config/aarch64/aarch64-simd.md
> +++ b/gcc/config/aarch64/aarch64-simd.md
> @@ -1083,6 +1083,20 @@
>    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
>  )
>  
> +(define_insn "aarch64_simd_vec_set_zero<mode>"
> +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> +	(vec_merge:VALL_F16
> +	    (match_operand:VALL_F16 1 "aarch64_simd_imm_zero" "")
> +	    (match_operand:VALL_F16 3 "register_operand" "0")
> +	    (match_operand:SI 2 "immediate_operand" "i")))]
> +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
> +  {
> +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
> +  }
> +)
> +
>  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
>    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
>  	(vec_merge:VALL_F16
> diff --git a/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
> new file mode 100644
> index 00000000000..b34b902cf27
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
> @@ -0,0 +1,40 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2" } */
> +
> +#include "arm_neon.h"
> +
> +#define FOO(type) \
> +type f_##type(type v) \
> +{ \
> +  v[1] = 0; \
> +  return v; \
> +}
> +
> +FOO(int8x8_t)
> +FOO(int16x4_t)
> +FOO(int32x2_t)
> +
> +FOO(int8x16_t)
> +FOO(int16x8_t)
> +FOO(int32x4_t)
> +FOO(int64x2_t)
> +
> +FOO(float16x4_t)
> +FOO(float32x2_t)
> +
> +FOO(float16x8_t)
> +FOO(float32x4_t)
> +FOO(float64x2_t)
> +
> +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.b\[1\], wzr} 2 { target aarch64_little_endian } } } */
> +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.h\[1\], wzr} 4 { target aarch64_little_endian } } } */
> +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.s\[1\], wzr} 4 { target aarch64_little_endian } } } */
> +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.d\[1\], xzr} 2 { target aarch64_little_endian } } } */
> +
> +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.b\[6\], wzr} 1 { target aarch64_big_endian } } } */
> +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.b\[14\], wzr} 1 { target aarch64_big_endian } } } */
> +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.h\[2\], wzr} 2 { target aarch64_big_endian } } } */
> +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.h\[6\], wzr} 2 { target aarch64_big_endian } } } */
> +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.s\[0\], wzr} 2 { target aarch64_big_endian } } } */
> +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.s\[2\], wzr} 2 { target aarch64_big_endian } } } */
> +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.d\[0\], xzr} 2 { target aarch64_big_endian } } } */
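
For reference, the big-endian lane numbers in the scans above follow from reversing the lane index within the vector.  A minimal sketch of the assumed mapping (mirroring what the pattern's use of ENDIAN_LANE_N suggests, not taken verbatim from GCC):

/* Assumed lane mapping: big-endian reverses the architectural lane index.  */
static inline int
endian_lane (int nunits, int lane, int big_endian_p)
{
  return big_endian_p ? nunits - 1 - lane : lane;
}

/* v[1] = 0 on int8x8_t   -> lane 8-1-1  = 6   (ins v0.b[6],  wzr)
   v[1] = 0 on int8x16_t  -> lane 16-1-1 = 14  (ins v0.b[14], wzr)
   v[1] = 0 on int64x2_t  -> lane 2-1-1  = 0   (ins v0.d[0],  xzr)  */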
  
Prathamesh Kulkarni April 19, 2023, 8:42 a.m. UTC | #8
On Tue, 31 Jan 2023 at 11:51, Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> > On Mon, 23 Jan 2023 at 22:26, Richard Sandiford
> > <richard.sandiford@arm.com> wrote:
> >>
> >> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> >> > On Wed, 18 Jan 2023 at 19:59, Richard Sandiford
> >> > <richard.sandiford@arm.com> wrote:
> >> >>
> >> >> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> >> >> > On Tue, 17 Jan 2023 at 18:29, Richard Sandiford
> >> >> > <richard.sandiford@arm.com> wrote:
> >> >> >>
> >> >> >> Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org> writes:
> >> >> >> > Hi Richard,
> >> >> >> > For the following (contrived) test:
> >> >> >> >
> >> >> >> > void foo(int32x4_t v)
> >> >> >> > {
> >> >> >> >   v[3] = 0;
> >> >> >> >   return v;
> >> >> >> > }
> >> >> >> >
> >> >> >> > -O2 code-gen:
> >> >> >> > foo:
> >> >> >> >         fmov    s1, wzr
> >> >> >> >         ins     v0.s[3], v1.s[0]
> >> >> >> >         ret
> >> >> >> >
> >> >> >> > I suppose we can instead emit the following code-gen ?
> >> >> >> > foo:
> >> >> >> >      ins v0.s[3], wzr
> >> >> >> >      ret
> >> >> >> >
> >> >> >> > combine produces:
> >> >> >> > Failed to match this instruction:
> >> >> >> > (set (reg:V4SI 95 [ v ])
> >> >> >> >     (vec_merge:V4SI (const_vector:V4SI [
> >> >> >> >                 (const_int 0 [0]) repeated x4
> >> >> >> >             ])
> >> >> >> >         (reg:V4SI 97)
> >> >> >> >         (const_int 8 [0x8])))
> >> >> >> >
> >> >> >> > So, I wrote the following pattern to match the above insn:
> >> >> >> > (define_insn "aarch64_simd_vec_set_zero<mode>"
> >> >> >> >   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >> >> >> >         (vec_merge:VALL_F16
> >> >> >> >             (match_operand:VALL_F16 1 "const_dup0_operand" "w")
> >> >> >> >             (match_operand:VALL_F16 3 "register_operand" "0")
> >> >> >> >             (match_operand:SI 2 "immediate_operand" "i")))]
> >> >> >> >   "TARGET_SIMD"
> >> >> >> >   {
> >> >> >> >     int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> >> >> >> >     operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> >> >> >> >     return "ins\\t%0.<Vetype>[%p2], wzr";
> >> >> >> >   }
> >> >> >> > )
> >> >> >> >
> >> >> >> > which now matches the above insn produced by combine.
> >> >> >> > However, in reload dump, it creates a new insn for assigning
> >> >> >> > register to (const_vector (const_int 0)),
> >> >> >> > which results in:
> >> >> >> > (insn 19 8 13 2 (set (reg:V4SI 33 v1 [99])
> >> >> >> >         (const_vector:V4SI [
> >> >> >> >                 (const_int 0 [0]) repeated x4
> >> >> >> >             ])) "wzr-test.c":8:1 1269 {*aarch64_simd_movv4si}
> >> >> >> >      (nil))
> >> >> >> > (insn 13 19 14 2 (set (reg/i:V4SI 32 v0)
> >> >> >> >         (vec_merge:V4SI (reg:V4SI 33 v1 [99])
> >> >> >> >             (reg:V4SI 32 v0 [97])
> >> >> >> >             (const_int 8 [0x8]))) "wzr-test.c":8:1 1808
> >> >> >> > {aarch64_simd_vec_set_zerov4si}
> >> >> >> >      (nil))
> >> >> >> >
> >> >> >> > and eventually the code-gen:
> >> >> >> > foo:
> >> >> >> >         movi    v1.4s, 0
> >> >> >> >         ins     v0.s[3], wzr
> >> >> >> >         ret
> >> >> >> >
> >> >> >> > To get rid of redundant assignment of 0 to v1, I tried to split the
> >> >> >> > above pattern
> >> >> >> > as in the attached patch. This works to emit code-gen:
> >> >> >> > foo:
> >> >> >> >         ins     v0.s[3], wzr
> >> >> >> >         ret
> >> >> >> >
> >> >> >> > However, I am not sure if this is the right approach. Could you suggest,
> >> >> >> > if it'd be possible to get rid of UNSPEC_SETZERO in the patch?
> >> >> >>
> >> >> >> The problem is with the "w" constraint on operand 1, which tells LRA
> >> >> >> to force the zero into an FPR.  It should work if you remove the
> >> >> >> constraint.
> >> >> > Ah indeed, sorry about that, changing the constraint works.
> >> >>
> >> >> "i" isn't right though, because that's for scalar integers.
> >> >> There's no need for any constraint here -- the predicate does
> >> >> all of the work.
> >> >>
> >> >> > Does the attached patch look OK after bootstrap+test?
> >> >> > Since we're in stage-4, is it OK to commit now, or should it be queued for stage-1?
> >> >>
> >> >> It needs tests as well. :-)
> >> >>
> >> >> Also:
> >> >>
> >> >> > Thanks,
> >> >> > Prathamesh
> >> >> >
> >> >> >
> >> >> >>
> >> >> >> Also, I think you'll need to use <vwcore>zr for the zero, so that
> >> >> >> it uses xzr for 64-bit elements.
> >> >> >>
> >> >> >> I think this and the existing patterns ought to test
> >> >> >> exact_log2 (INTVAL (operands[2])) >= 0 in the insn condition,
> >> >> >> since there's no guarantee that RTL optimisations won't form
> >> >> >> vec_merges that have other masks.
> >> >> >>
> >> >> >> Thanks,
> >> >> >> Richard
> >> >> >
> >> >> > [aarch64] Use wzr/xzr for assigning 0 to vector element.
> >> >> >
> >> >> > gcc/ChangeLog:
> >> >> >       * config/aarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
> >> >> >       New pattern.
> >> >> >       * config/aarch64/predicates.md (const_dup0_operand): New.
> >> >> >
> >> >> > diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> >> >> > index 104088f67d2..8e54ee4e886 100644
> >> >> > --- a/gcc/config/aarch64/aarch64-simd.md
> >> >> > +++ b/gcc/config/aarch64/aarch64-simd.md
> >> >> > @@ -1083,6 +1083,20 @@
> >> >> >    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
> >> >> >  )
> >> >> >
> >> >> > +(define_insn "aarch64_simd_vec_set_zero<mode>"
> >> >> > +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >> >> > +     (vec_merge:VALL_F16
> >> >> > +         (match_operand:VALL_F16 1 "const_dup0_operand" "i")
> >> >> > +         (match_operand:VALL_F16 3 "register_operand" "0")
> >> >> > +         (match_operand:SI 2 "immediate_operand" "i")))]
> >> >> > +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
> >> >> > +  {
> >> >> > +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> >> >> > +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> >> >> > +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
> >> >> > +  }
> >> >> > +)
> >> >> > +
> >> >> >  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
> >> >> >    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >> >> >       (vec_merge:VALL_F16
> >> >> > diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
> >> >> > index ff7f73d3f30..901fa1bd7f9 100644
> >> >> > --- a/gcc/config/aarch64/predicates.md
> >> >> > +++ b/gcc/config/aarch64/predicates.md
> >> >> > @@ -49,6 +49,13 @@
> >> >> >    return CONST_INT_P (op) && IN_RANGE (INTVAL (op), 1, 3);
> >> >> >  })
> >> >> >
> >> >> > +(define_predicate "const_dup0_operand"
> >> >> > +  (match_code "const_vector")
> >> >> > +{
> >> >> > +  op = unwrap_const_vec_duplicate (op);
> >> >> > +  return CONST_INT_P (op) && rtx_equal_p (op, const0_rtx);
> >> >> > +})
> >> >> > +
> >> >>
> >> >> We already have aarch64_simd_imm_zero for this.  aarch64_simd_imm_zero
> >> >> is actually more general, because it works for floating-point modes too.
> >> >>
> >> >> I think the tests should cover all modes included in VALL_F16, since
> >> >> that should have picked up this and the xzr thing.
> >> > Hi Richard,
> >> > Thanks for the suggestions. Does the attached patch look OK?
> >> > I am not sure how to test for v4bf and v8bf since it seems the compiler
> >> > refuses conversions to/from bfloat16_t?
> >> >
> >> > Thanks,
> >> > Prathamesh
> >> >
> >> >>
> >> >> Thanks,
> >> >> Richard
> >> >>
> >> >> >  (define_predicate "subreg_lowpart_operator"
> >> >> >    (ior (match_code "truncate")
> >> >> >         (and (match_code "subreg")
> >> >
> >> > [aarch64] Use wzr/xzr for assigning 0 to vector element.
> >> >
> >> > gcc/ChangeLog:
> >> >       * config/aarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
> >> >       New pattern.
> >> >
> >> > gcc/testsuite/ChangeLog:
> >> >       * gcc.target/aarch64/vec-set-zero.c: New test.
> >> >
> >> > diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> >> > index 7f212bf37cd..7428e74beaf 100644
> >> > --- a/gcc/config/aarch64/aarch64-simd.md
> >> > +++ b/gcc/config/aarch64/aarch64-simd.md
> >> > @@ -1083,6 +1083,20 @@
> >> >    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
> >> >  )
> >> >
> >> > +(define_insn "aarch64_simd_vec_set_zero<mode>"
> >> > +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >> > +     (vec_merge:VALL_F16
> >> > +         (match_operand:VALL_F16 1 "aarch64_simd_imm_zero" "")
> >> > +         (match_operand:VALL_F16 3 "register_operand" "0")
> >> > +         (match_operand:SI 2 "immediate_operand" "i")))]
> >> > +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
> >> > +  {
> >> > +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> >> > +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> >> > +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
> >> > +  }
> >> > +)
> >> > +
> >> >  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
> >> >    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >> >       (vec_merge:VALL_F16
> >> > diff --git a/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
> >> > new file mode 100644
> >> > index 00000000000..c260cc9e445
> >> > --- /dev/null
> >> > +++ b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
> >> > @@ -0,0 +1,32 @@
> >> > +/* { dg-do compile } */
> >> > +/* { dg-options "-O2" } */
> >> > +
> >> > +#include "arm_neon.h"
> >> > +
> >> > +#define FOO(type) \
> >> > +type f_##type(type v) \
> >> > +{ \
> >> > +  v[1] = 0; \
> >> > +  return v; \
> >> > +}
> >> > +
> >> > +FOO(int8x8_t)
> >> > +FOO(int16x4_t)
> >> > +FOO(int32x2_t)
> >> > +
> >> > +FOO(int8x16_t)
> >> > +FOO(int16x8_t)
> >> > +FOO(int32x4_t)
> >> > +FOO(int64x2_t)
> >> > +
> >> > +FOO(float16x4_t)
> >> > +FOO(float32x2_t)
> >> > +
> >> > +FOO(float16x8_t)
> >> > +FOO(float32x4_t)
> >> > +FOO(float64x2_t)
> >> > +
> >> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.b\\\[\[1\]\\\], wzr" 2 } } */
> >> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.h\\\[\[1\]\\\], wzr" 4 } } */
> >> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.s\\\[\[1\]\\\], wzr" 4 } } */
> >> > +/* { dg-final { scan-assembler-times "ins\\tv\[0-9\]+\.d\\\[\[1\]\\\], xzr" 2 } } */
> >>
> >> Can you test big-endian too?  I'd expect it to use different INS indices.
> > Ah indeed, thanks for pointing that out.
> >>
> >> It might be worth quoting the regexps with {...} rather than "...",
> >> to reduce the number of backslashes needed.
> > Does the attached patch look OK?
>
> Yeah, OK for GCC 14, thanks.
Thanks, committed after verifying bootstrap+test passes on aarch64-linux-gnu in:
https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=2c7bf8036dfe2f603f1c135dabf6415d8d28051b

Thanks,
Prathamesh
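
For completeness, a small worked example of how the vec_merge mask maps to a lane -- a sketch, assuming the mask has exactly one bit set, which is what the pattern's exact_log2 condition enforces:

#include <stdio.h>

int
main (void)
{
  unsigned mask = 8;                  /* the (const_int 8 [0x8]) combine produced, i.e. 1 << 3.  */
  int lane = __builtin_ctz (mask);    /* same as exact_log2 for a power of two.  */
  printf ("mask %u selects lane %d\n", mask, lane);   /* lane 3, i.e. v[3].  */
  return 0;
}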
>
> Richard
>
> >
> > Thanks,
> > Prathamesh
> >>
> >> Thanks,
> >> Richard
> >
> > [aarch64] Use wzr/xzr for assigning 0 to vector element.
> >
> > gcc/ChangeLog:
> >       * config/aarch64/aarch64-simd.md (aarch64_simd_vec_set_zero<mode>):
> >       New pattern.
> >
> > gcc/testsuite/ChangeLog:
> >       * gcc.target/aarch64/vec-set-zero.c: New test.
> >
> > diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> > index 7f212bf37cd..7428e74beaf 100644
> > --- a/gcc/config/aarch64/aarch64-simd.md
> > +++ b/gcc/config/aarch64/aarch64-simd.md
> > @@ -1083,6 +1083,20 @@
> >    [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
> >  )
> >
> > +(define_insn "aarch64_simd_vec_set_zero<mode>"
> > +  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> > +     (vec_merge:VALL_F16
> > +         (match_operand:VALL_F16 1 "aarch64_simd_imm_zero" "")
> > +         (match_operand:VALL_F16 3 "register_operand" "0")
> > +         (match_operand:SI 2 "immediate_operand" "i")))]
> > +  "TARGET_SIMD && exact_log2 (INTVAL (operands[2])) >= 0"
> > +  {
> > +    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
> > +    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
> > +    return "ins\\t%0.<Vetype>[%p2], <vwcore>zr";
> > +  }
> > +)
> > +
> >  (define_insn "@aarch64_simd_vec_copy_lane<mode>"
> >    [(set (match_operand:VALL_F16 0 "register_operand" "=w")
> >       (vec_merge:VALL_F16
> > diff --git a/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
> > new file mode 100644
> > index 00000000000..b34b902cf27
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/aarch64/vec-set-zero.c
> > @@ -0,0 +1,40 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-O2" } */
> > +
> > +#include "arm_neon.h"
> > +
> > +#define FOO(type) \
> > +type f_##type(type v) \
> > +{ \
> > +  v[1] = 0; \
> > +  return v; \
> > +}
> > +
> > +FOO(int8x8_t)
> > +FOO(int16x4_t)
> > +FOO(int32x2_t)
> > +
> > +FOO(int8x16_t)
> > +FOO(int16x8_t)
> > +FOO(int32x4_t)
> > +FOO(int64x2_t)
> > +
> > +FOO(float16x4_t)
> > +FOO(float32x2_t)
> > +
> > +FOO(float16x8_t)
> > +FOO(float32x4_t)
> > +FOO(float64x2_t)
> > +
> > +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.b\[1\], wzr} 2 { target aarch64_little_endian } } } */
> > +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.h\[1\], wzr} 4 { target aarch64_little_endian } } } */
> > +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.s\[1\], wzr} 4 { target aarch64_little_endian } } } */
> > +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.d\[1\], xzr} 2 { target aarch64_little_endian } } } */
> > +
> > +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.b\[6\], wzr} 1 { target aarch64_big_endian } } } */
> > +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.b\[14\], wzr} 1 { target aarch64_big_endian } } } */
> > +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.h\[2\], wzr} 2 { target aarch64_big_endian } } } */
> > +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.h\[6\], wzr} 2 { target aarch64_big_endian } } } */
> > +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.s\[0\], wzr} 2 { target aarch64_big_endian } } } */
> > +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.s\[2\], wzr} 2 { target aarch64_big_endian } } } */
> > +/* { dg-final { scan-assembler-times {ins\tv[0-9]+\.d\[0\], xzr} 2 { target aarch64_big_endian } } } */
  

Patch

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 104088f67d2..5130f46c0da 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -1083,6 +1083,39 @@ 
   [(set_attr "type" "neon_ins<q>, neon_from_gp<q>, neon_load1_one_lane<q>")]
 )
 
+(define_insn "aarch64_simd_set_zero<mode>"
+  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
+	(unspec:VALL_F16 [(match_operand:VALL_F16 1 "register_operand" "0")
+			  (match_operand:SI 2 "immediate_operand" "i")]
+			 UNSPEC_SETZERO))]
+  "TARGET_SIMD"
+  {
+    if (GET_MODE_INNER (<MODE>mode) == DImode)
+      return "ins\\t%0.<Vetype>[%p2], xzr";
+    return "ins\\t%0.<Vetype>[%p2], wzr";
+  }
+  [(set_attr "type" "neon_ins<q>")]
+)
+
+(define_insn_and_split "aarch64_simd_vec_set_zero<mode>"
+  [(set (match_operand:VALL_F16 0 "register_operand" "=w")
+	(vec_merge:VALL_F16
+	    (match_operand:VALL_F16 1 "const_dup0_operand" "w")
+	    (match_operand:VALL_F16 3 "register_operand" "0")
+	    (match_operand:SI 2 "immediate_operand" "i")))]
+  "TARGET_SIMD"
+  "#"
+  "&& 1"
+  [(const_int 0)]
+  {
+    int elt = ENDIAN_LANE_N (<nunits>, exact_log2 (INTVAL (operands[2])));
+    operands[2] = GEN_INT ((HOST_WIDE_INT) 1 << elt);
+    emit_insn (gen_aarch64_simd_set_zero<mode> (operands[0], operands[3], operands[2]));
+    DONE;
+  }
+  [(set_attr "type" "neon_ins<q>")]
+)
+
 (define_insn "@aarch64_simd_vec_copy_lane<mode>"
   [(set (match_operand:VALL_F16 0 "register_operand" "=w")
 	(vec_merge:VALL_F16
diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
index 5b26443e5b6..8064841ebb4 100644
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -839,6 +839,7 @@ 
     UNSPEC_FCMUL_CONJ	; Used in aarch64-simd.md.
     UNSPEC_FCMLA_CONJ	; Used in aarch64-simd.md.
     UNSPEC_FCMLA180_CONJ	; Used in aarch64-simd.md.
+    UNSPEC_SETZERO	; Used in aarch64-simd.md.
     UNSPEC_ASRD		; Used in aarch64-sve.md.
     UNSPEC_ADCLB	; Used in aarch64-sve2.md.
     UNSPEC_ADCLT	; Used in aarch64-sve2.md.
diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
index ff7f73d3f30..901fa1bd7f9 100644
--- a/gcc/config/aarch64/predicates.md
+++ b/gcc/config/aarch64/predicates.md
@@ -49,6 +49,13 @@ 
   return CONST_INT_P (op) && IN_RANGE (INTVAL (op), 1, 3);
 })
 
+(define_predicate "const_dup0_operand"
+  (match_code "const_vector")
+{
+  op = unwrap_const_vec_duplicate (op);
+  return CONST_INT_P (op) && rtx_equal_p (op, const0_rtx);
+})
+
 (define_predicate "subreg_lowpart_operator"
   (ior (match_code "truncate")
        (and (match_code "subreg")
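
As a usage sketch (hypothetical test, not part of the attached patch) of the xzr case the review brought up -- 64-bit elements should use xzr rather than wzr; the lane number below assumes little-endian:

#include <arm_neon.h>

/* Hypothetical example; expected to be caught by the committed
   aarch64_simd_vec_set_zero<mode> pattern at -O2.  */
float64x2_t
zero_lane1 (float64x2_t v)
{
  v[1] = 0;    /* becomes a vec_merge of a zero vector with mask 1 << 1.  */
  return v;
}

/* Code-gen this should produce (what the testsuite scans for on little-endian):
     ins     v0.d[1], xzr
     ret  */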