[v2] RISC-V: Fix wrong LMUL when only implicit zve32f.

Message ID 20250204072917.65037-1-monk.chiang@sifive.com
State Superseded
Delegated to: Robin Dapp
Series [v2] RISC-V: Fix wrong LMUL when only implicit zve32f.

Checks

Context Check Description
linaro-tcwg-bot/tcwg_gcc_build--master-arm success Build passed
linaro-tcwg-bot/tcwg_gcc_check--master-arm success Test passed
linaro-tcwg-bot/tcwg_gcc_build--master-aarch64 success Build passed
linaro-tcwg-bot/tcwg_gcc_check--master-aarch64 success Test passed
rivoscibot/toolchain-ci-rivos-apply-patch success Patch applied
rivoscibot/toolchain-ci-rivos-lint warning Lint failed
rivoscibot/toolchain-ci-rivos-build--newlib-rv64gcv-lp64d-multilib success Build passed
rivoscibot/toolchain-ci-rivos-build--linux-rv64gc_zba_zbb_zbc_zbs-lp64d-multilib success Build passed
rivoscibot/toolchain-ci-rivos-build--linux-rv64gcv-lp64d-multilib success Build passed
rivoscibot/toolchain-ci-rivos-test success Testing passed

Commit Message

Monk Chiang Feb. 4, 2025, 7:29 a.m. UTC
  According to Section 3.4.2, Vector Register Grouping, of the RISC-V
Vector Specification, the rule for LMUL is LMUL >= SEW/ELEN.
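For example, with Zve32x (ELEN=32) and SEW=32 this means LMUL >= 32/32 = 1,
i.e. fractional LMULs are not legal for 32-bit elements on such a target.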

gcc/ChangeLog:
	* config/riscv/riscv-v.cc (get_vlmul): Restrict LMUL according to
	the rule LMUL >= SEW/ELEN.
	* config/riscv/riscv-vector-builtins-types.def: Use
	RVV_REQUIRE_ELEN_64 to check the LMUL number.
	* config/riscv/riscv-vector-switch.def: Likewise.

gcc/testsuite/ChangeLog:
	* gcc.target/riscv/rvv/autovec/pr111391-2.c: Update test.
	* gcc.target/riscv/rvv/base/abi-14.c: Update test.
	* gcc.target/riscv/rvv/base/abi-16.c: Update test.
	* gcc.target/riscv/rvv/base/abi-18.c: Update test.
	* gcc.target/riscv/rvv/base/vsetvl_zve32f.c: New test.
---
 gcc/config/riscv/riscv-v.cc                   |   8 +-
 .../riscv/riscv-vector-builtins-types.def     | 322 +++++++++---------
 gcc/config/riscv/riscv-vector-switch.def      |  84 ++---
 .../gcc.target/riscv/rvv/autovec/pr111391-2.c |   2 +-
 .../gcc.target/riscv/rvv/base/abi-14.c        |  84 ++---
 .../gcc.target/riscv/rvv/base/abi-16.c        |  98 +++---
 .../gcc.target/riscv/rvv/base/abi-18.c        | 112 +++---
 .../gcc.target/riscv/rvv/base/vsetvl_zve32f.c |  73 ++++
 8 files changed, 429 insertions(+), 354 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/vsetvl_zve32f.c
  

Comments

Robin Dapp Feb. 4, 2025, 12:01 p.m. UTC | #1
Hi Monk,

could you detail the issue/patch a bit?  Are we generally violating
LMUL >= SEW/ELEN with zve32f (zve32x as well then)?  And what's
"implicit zve32f"?  In the test case it's specified explicitly.

> According to Section 3.4.2, Vector Register Grouping, in the RISC-V
> Vector Specification, the rule for LMUL is LMUL >= SEW/ELEN

> +	  /* Follow rule LMUL >= SEW / ELEN.  */
> +	  int elen = TARGET_VECTOR_ELEN_64 ? 1 : 2;
>  	  int factor = TARGET_MIN_VLEN / size;
>  	  if (inner_size == 8)
> -	    factor = MIN (factor, 8);
> +	    factor = MIN (factor, 8 / elen);
>  	  else if (inner_size == 16)
> -	    factor = MIN (factor, 4);
> +	    factor = MIN (factor, 4 / elen);
>  	  else if (inner_size == 32)
> -	    factor = MIN (factor, 2);
> +	    factor = MIN (factor, 2 / elen);
>  	  else if (inner_size == 64)
>  	    factor = MIN (factor, 1);
>  	  else

As far as I understand it the minimum LMUL rule applies to the minimum SEW = 8
and the ELEN of the implementation.  An LMUL = 1/8 is invalid for a VLEN = 32
because that would mean we'd only have 4 bits per element.

The spec says:

For standard vector extensions with ELEN=32, fractional LMULs of 1/2 and 1/4
must be supported. For standard vector extensions with ELEN=64, fractional
LMULs of 1/2, 1/4, and 1/8 must be supported.

So the problem is we assume a "sane" implementation that would implement
LMUL=1/8 whenever VLEN > 32 but that's too optimistic?

Then the problem would be that we're using TARGET_MIN_VLEN rather than ELEN
here and there are implementations that could technically support LMUL = 1/8
but don't?

This sounds a bit like vector unaligned access all over again...
So we'd want a "sane" uarch flag that keeps the current MIN_VLEN behavior
but we'd need to make LMUL = 1/4 the minimum by default.  This only applies
to LMUL = 1/8, though, and not all the other cases.

> +/* { dg-options "-march=rv32imafc_zve32f_zvl128b -mabi=ilp32 -O2" } */

> +/* { dg-final { scan-assembler {vsetivli\s+zero,\s*2,\s*e32,\s*m1,\s*t[au],\s*m[au]} } } */
> +/* { dg-final { scan-assembler {vsetivli\s+zero,\s*4,\s*e32,\s*m1,\s*t[au],\s*m[au]} } } */

From what I can tell the test case uses a V2SImode, so 64 bits.
When VLEN=128 (zvl128b), isn't the correct LMUL mf2 rather than m1?
In particular, how would the same LMUL for AVL=2 and AVL=4 and the same data
type be correct?

Maybe it would help to add a run test?  A PR might be useful as well
to track things as we're late in the release cycle.
  
Monk Chiang Feb. 5, 2025, 6:07 a.m. UTC | #2
Hi Robin,
Sorry, I should have simplified the problem by presenting it in terms of
Zve32x, because Zve32f implies Zve32x.
As the specification states, the requirement is to support LMUL ≥ SEW/ELEN.
Regarding the implementation, I followed this rule to fix the problem.
In this link: https://godbolt.org/z/j59oTW371, there is a vsetivli
zero,2,e32,mf2,ta,ma.
Here, SEW=32, and Zve32x has ELEN=32, which makes LMUL=1/2 illegal.

According to the rule LMUL ≥ SEW/ELEN => LMUL ≥ 32 / 32 => LMUL ≥ 1.
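
To illustrate, here is a self-contained sketch of the clamp this patch
applies in get_vlmul (hypothetical helper names, not GCC internals):

#include <stdio.h>

/* Largest legal denominator N of a fractional LMUL 1/N for element size
   INNER_SIZE under the rule LMUL >= SEW / ELEN, mirroring the patch's
   TARGET_VECTOR_ELEN_64 ? 1 : 2 divisor.  */
static int
max_fract_denominator (int inner_size, int elen)
{
  int div = (elen == 64) ? 1 : 2;
  switch (inner_size)
    {
    case 8:  return 8 / div;
    case 16: return 4 / div;
    case 32: return 2 / div;
    default: return 1;  /* SEW=64 never gets a fractional LMUL.  */
    }
}

int
main (void)
{
  /* Zve32x: ELEN=32, SEW=32 -> denominator 1, i.e. no fractional LMUL.  */
  printf ("ELEN=32 SEW=32 -> 1/%d\n", max_fract_denominator (32, 32));
  /* Zve64x: ELEN=64, SEW=8 -> denominator 8, i.e. mf8 stays legal.  */
  printf ("ELEN=64 SEW=8  -> 1/%d\n", max_fract_denominator (8, 64));
  return 0;
}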


On Tue, Feb 4, 2025 at 8:01 PM Robin Dapp <rdapp.gcc@gmail.com> wrote:

> Hi Monk,
>
> could you detail the issue/patch a bit?  Are we generally violating
> LMUL >= SEW/ELEN with zve32f (zve32x as well then)?  And what's
> "implicit zve32f"?  In the test case it's specified explicitly.
>
> > According to Section 3.4.2, Vector Register Grouping, in the RISC-V
> > Vector Specification, the rule for LMUL is LMUL >= SEW/ELEN
>
> > +       /* Follow rule LMUL >= SEW / ELEN.  */
> > +       int elen = TARGET_VECTOR_ELEN_64 ? 1 : 2;
> >         int factor = TARGET_MIN_VLEN / size;
> >         if (inner_size == 8)
> > -         factor = MIN (factor, 8);
> > +         factor = MIN (factor, 8 / elen);
> >         else if (inner_size == 16)
> > -         factor = MIN (factor, 4);
> > +         factor = MIN (factor, 4 / elen);
> >         else if (inner_size == 32)
> > -         factor = MIN (factor, 2);
> > +         factor = MIN (factor, 2 / elen);
> >         else if (inner_size == 64)
> >           factor = MIN (factor, 1);
> >         else
>
> As far as I understand it the minimum LMUL rule applies to the minimum
> SEW = 8 and the ELEN of the implementation.  An LMUL = 1/8 is invalid
> for a VLEN = 32 because that would mean we'd only have 4 bits per
> element.
>
> The spec says:
>
> For standard vector extensions with ELEN=32, fractional LMULs of 1/2
> and 1/4 must be supported. For standard vector extensions with ELEN=64,
> fractional LMULs of 1/2, 1/4, and 1/8 must be supported.
>
> So the problem is we assume a "sane" implementation that would implement
> LMUL=1/8 whenever VLEN > 32 but that's too optimistic?
>
> Then the problem would be that we're using TARGET_MIN_VLEN rather than
> ELEN here and there are implementations that could technically support
> LMUL = 1/8 but don't?
>

  Yes, technically supported, but the hardware may raise an illegal
instruction exception on the vsetvli instruction.
  I think we should use ELEN.  Our current implementations are designed with
Zve64x in mind.


>
> This sounds a bit like vector unaligned access all over again...
> So we'd want a "sane" uarch flag that keeps the current MIN_VLEN
> behavior but we'd need to make LMUL = 1/4 the minimum by default.  This
> only applies to LMUL = 1/8, though, and not all the other cases.
>
> > +/* { dg-options "-march=rv32imafc_zve32f_zvl128b -mabi=ilp32 -O2" } */
>
> > +/* { dg-final { scan-assembler {vsetivli\s+zero,\s*2,\s*e32,\s*m1,\s*t[au],\s*m[au]} } } */
> > +/* { dg-final { scan-assembler {vsetivli\s+zero,\s*4,\s*e32,\s*m1,\s*t[au],\s*m[au]} } } */
>
> From what I can tell the test case uses a V2SImode, so 64 bits.
> When VLEN=128 (zvl128b), isn't the correct LMUL mf2 rather than m1?

> In particular, how would the same LMUL for AVL=2 and AVL=4 and the same data
> type be correct?
>
  That's right. The case just allocates more space, but storing 2 and 4
elements remains the same.


>
> Maybe it would help to add a run test?  A PR might be useful as well
> to track things as we're late in the release cycle.
>

   The test is from ./gcc.dg/tree-ssa/pr80898-2.c; I will add a run test.
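
A hypothetical skeleton for such a run test (the body below is illustrative
only, not the actual pr80898-2.c contents):

/* { dg-do run } */
/* { dg-options "-march=rv32imafc_zve32f_zvl128b -mabi=ilp32 -O2" } */

int a[4], b[4];

int
main (void)
{
  /* The loop should vectorize with a legal LMUL and, unlike a
     scan-assembler test, must also execute correctly.  */
  for (int i = 0; i < 4; i++)
    a[i] = b[i] + 1;
  for (int i = 0; i < 4; i++)
    if (a[i] != 1)
      __builtin_abort ();
  return 0;
}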

>
> --
> Regards
>  Robin
>
>
  
Robin Dapp Feb. 5, 2025, 8:18 a.m. UTC | #3
> Hi Robin,
> Sorry, I should have simplified the problem by presenting it in terms of
> Zve32x, because Zve32f implies Zve32x.
> As the specification states, the requirement is to support LMUL ≥ SEW/ELEN.
> Regarding the implementation,

But the spec requirement mentions SEW_min, not SEW?

"In general, the requirement is to support LMUL ≥ SEWMIN/ELEN, where SEWMIN is
the narrowest supported SEW value and ELEN is the widest supported SEW value"

Further it states:
"For standard vector extensions with ELEN=32, fractional LMULs of 1/2 and 1/4
must be supported."

> I followed this rule to fix the problem.
> In this link: https://godbolt.org/z/j59oTW371, there is a vsetivli
> zero,2,e32,mf2,ta,ma.
> Here, SEW=32, and Zve32x has ELEN=32, which makes LMUL=1/2 illegal.
>
> According to the rule LMUL ≥ SEW/ELEN => LMUL ≥ 32 / 32 => LMUL ≥  1.

As you're specifying VLEN=128 (with zvl128b), we enable the respective modes,
i.e. V2SI and V4SI.  With VLEN=32 those wouldn't be available.
RVVMF2SI already has the requirement TARGET_MIN_VLEN > 32, so it wouldn't be
chosen either.

If LMUL < 1 were illegal for all zve32 we couldn't vectorize anything
that doesn't fit a full vector?  That can't be correct and would severely
limit us.
I think the only necessary change is to make sure we're not emitting
LMUL = 1/8 for ELEN=32/zve32.  That should be a far less invasive change.
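
A minimal sketch of that less invasive variant (an assumption on my part:
only the inner_size == 8 case in get_vlmul can produce LMUL = 1/8, so
clamping it to mf4 without ELEN=64 would suffice):

	  if (inner_size == 8)
	    factor = MIN (factor, TARGET_VECTOR_ELEN_64 ? 8 : 4);

with the other cases left as they are.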

>> In particular, how would the same LMUL for AVL=2 and AVL=4 and the same data
>> type be correct?
>
> That's right. The case just allocates more space, but storing 2 and 4
> elements remains the same.

Even if a V2SI with LMUL=1 on VLEN=128 doesn't lead to a SIGILL right away,
it would surely modify the overlap constraints and such.  To me that doesn't
look right.
  
Monk Chiang Feb. 7, 2025, 8:13 a.m. UTC | #4
Hi Robin,
Thanks for your comment. I think your point is correct, especially the part
about SEW_min.
I will revise this patch again.

On Wed, Feb 5, 2025 at 4:18 PM Robin Dapp <rdapp.gcc@gmail.com> wrote:

> > Hi Robin,
> > Sorry, I should have simplified the problem by presenting it in terms of
> > Zve32x, because Zve32f implies Zve32x.
> > As the specification states, the requirement is to support LMUL ≥
> > SEW/ELEN.
> > Regarding the implementation,
>
> But the spec requirement mentions SEW_min, not SEW?
>
> "In general, the requirement is to support LMUL ≥ SEWMIN/ELEN, where
> SEWMIN is
> the narrowest supported SEW value and ELEN is the widest supported SEW
> value"
>
> Further it states:
> "For standard vector extensions with ELEN=32, fractional LMULs of 1/2 and
> 1/4
> must be supported."
>
> > I followed this rule to fix the problem.
> > In this link: https://godbolt.org/z/j59oTW371, there is a vsetivli
> > zero,2,e32,mf2,ta,ma.
> > Here, SEW=32, and Zve32x has ELEN=32, which makes LMUL=1/2 illegal.
> >
> > According to the rule LMUL ≥ SEW/ELEN => LMUL ≥ 32 / 32 => LMUL ≥ 1.
>
> As you're specifying VLEN=128 (with zvl128b), we enable the respective
> modes, i.e. V2SI and V4SI.  With VLEN=32 those wouldn't be available.
> RVVMF2SI already has the requirement TARGET_MIN_VLEN > 32, so it wouldn't
> be chosen either.
>
> If LMUL < 1 were illegal for all zve32 we couldn't vectorize anything
> that doesn't fit a full vector?  That can't be correct and would severely
> limit us.
>
> I think the only necessary change is to make sure we're not emitting
> LMUL = 1/8 for ELEN=32/zve32.  That should be a far less invasive change.
>
> >> In particular, how would the same LMUL for AVL=2 and AVL=4 and the
> >> same data type be correct?
> >
> > That's right. The case just allocates more space, but storing 2 and 4
> > elements remains the same.
>
> Even if a V2SI with LMUL=1 on VLEN=128 doesn't lead to a SIGILL right
> away, it would surely modify the overlap constraints and such.  To me that
> doesn't look right.
>
> --
> Regards
>  Robin
>
>
  

Patch

diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 9847439ca77..24f3127e71d 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1730,13 +1730,15 @@  get_vlmul (machine_mode mode)
       int inner_size = GET_MODE_BITSIZE (GET_MODE_INNER (mode));
       if (size < TARGET_MIN_VLEN)
 	{
+	  /* Follow rule LMUL >= SEW / ELEN.  */
+	  int elen = TARGET_VECTOR_ELEN_64 ? 1 : 2;
 	  int factor = TARGET_MIN_VLEN / size;
 	  if (inner_size == 8)
-	    factor = MIN (factor, 8);
+	    factor = MIN (factor, 8 / elen);
 	  else if (inner_size == 16)
-	    factor = MIN (factor, 4);
+	    factor = MIN (factor, 4 / elen);
 	  else if (inner_size == 32)
-	    factor = MIN (factor, 2);
+	    factor = MIN (factor, 2 / elen);
 	  else if (inner_size == 64)
 	    factor = MIN (factor, 1);
 	  else
diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
index 6b98b93dfb6..857b63758a0 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -369,20 +369,20 @@  along with GCC; see the file COPYING3. If not see
 #define DEF_RVV_XFQF_OPS(TYPE, REQUIRE)
 #endif
 
-DEF_RVV_I_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_I_OPS (vint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint8mf4_t, 0)
 DEF_RVV_I_OPS (vint8mf2_t, 0)
 DEF_RVV_I_OPS (vint8m1_t, 0)
 DEF_RVV_I_OPS (vint8m2_t, 0)
 DEF_RVV_I_OPS (vint8m4_t, 0)
 DEF_RVV_I_OPS (vint8m8_t, 0)
-DEF_RVV_I_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_I_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint16mf2_t, 0)
 DEF_RVV_I_OPS (vint16m1_t, 0)
 DEF_RVV_I_OPS (vint16m2_t, 0)
 DEF_RVV_I_OPS (vint16m4_t, 0)
 DEF_RVV_I_OPS (vint16m8_t, 0)
-DEF_RVV_I_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_I_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint32m1_t, 0)
 DEF_RVV_I_OPS (vint32m2_t, 0)
 DEF_RVV_I_OPS (vint32m4_t, 0)
@@ -392,20 +392,20 @@  DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint8mf4_t, 0)
 DEF_RVV_U_OPS (vuint8mf2_t, 0)
 DEF_RVV_U_OPS (vuint8m1_t, 0)
 DEF_RVV_U_OPS (vuint8m2_t, 0)
 DEF_RVV_U_OPS (vuint8m4_t, 0)
 DEF_RVV_U_OPS (vuint8m8_t, 0)
-DEF_RVV_U_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_U_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint16mf2_t, 0)
 DEF_RVV_U_OPS (vuint16m1_t, 0)
 DEF_RVV_U_OPS (vuint16m2_t, 0)
 DEF_RVV_U_OPS (vuint16m4_t, 0)
 DEF_RVV_U_OPS (vuint16m8_t, 0)
-DEF_RVV_U_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_U_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint32m1_t, 0)
 DEF_RVV_U_OPS (vuint32m2_t, 0)
 DEF_RVV_U_OPS (vuint32m4_t, 0)
@@ -415,21 +415,21 @@  DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_F_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_F_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_F_OPS (vbfloat16mf2_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_F_OPS (vbfloat16m1_t,  RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_F_OPS (vbfloat16m2_t,  RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_F_OPS (vbfloat16m4_t,  RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_F_OPS (vbfloat16m8_t,  RVV_REQUIRE_ELEN_BF_16)
 
-DEF_RVV_F_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_F_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_F_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_F_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_F_OPS (vfloat16m2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_F_OPS (vfloat16m4_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_F_OPS (vfloat16m8_t, RVV_REQUIRE_ELEN_FP_16)
 
-DEF_RVV_F_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_F_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_F_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_F_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_F_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
@@ -439,7 +439,7 @@  DEF_RVV_F_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_F_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_F_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64)
 
-DEF_RVV_B_OPS (vbool64_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_B_OPS (vbool64_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_B_OPS (vbool32_t, 0)
 DEF_RVV_B_OPS (vbool16_t, 0)
 DEF_RVV_B_OPS (vbool8_t, 0)
@@ -447,13 +447,13 @@  DEF_RVV_B_OPS (vbool4_t, 0)
 DEF_RVV_B_OPS (vbool2_t, 0)
 DEF_RVV_B_OPS (vbool1_t, 0)
 
-DEF_RVV_WEXTI_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WEXTI_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WEXTI_OPS (vint16mf2_t, 0)
 DEF_RVV_WEXTI_OPS (vint16m1_t, 0)
 DEF_RVV_WEXTI_OPS (vint16m2_t, 0)
 DEF_RVV_WEXTI_OPS (vint16m4_t, 0)
 DEF_RVV_WEXTI_OPS (vint16m8_t, 0)
-DEF_RVV_WEXTI_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WEXTI_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WEXTI_OPS (vint32m1_t, 0)
 DEF_RVV_WEXTI_OPS (vint32m2_t, 0)
 DEF_RVV_WEXTI_OPS (vint32m4_t, 0)
@@ -463,7 +463,7 @@  DEF_RVV_WEXTI_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WEXTI_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WEXTI_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_QEXTI_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_QEXTI_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_QEXTI_OPS (vint32m1_t, 0)
 DEF_RVV_QEXTI_OPS (vint32m2_t, 0)
 DEF_RVV_QEXTI_OPS (vint32m4_t, 0)
@@ -478,13 +478,13 @@  DEF_RVV_OEXTI_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_OEXTI_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_OEXTI_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_WEXTU_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WEXTU_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WEXTU_OPS (vuint16mf2_t, 0)
 DEF_RVV_WEXTU_OPS (vuint16m1_t, 0)
 DEF_RVV_WEXTU_OPS (vuint16m2_t, 0)
 DEF_RVV_WEXTU_OPS (vuint16m4_t, 0)
 DEF_RVV_WEXTU_OPS (vuint16m8_t, 0)
-DEF_RVV_WEXTU_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WEXTU_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WEXTU_OPS (vuint32m1_t, 0)
 DEF_RVV_WEXTU_OPS (vuint32m2_t, 0)
 DEF_RVV_WEXTU_OPS (vuint32m4_t, 0)
@@ -494,7 +494,7 @@  DEF_RVV_WEXTU_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WEXTU_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WEXTU_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_QEXTU_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_QEXTU_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_QEXTU_OPS (vuint32m1_t, 0)
 DEF_RVV_QEXTU_OPS (vuint32m2_t, 0)
 DEF_RVV_QEXTU_OPS (vuint32m4_t, 0)
@@ -509,20 +509,20 @@  DEF_RVV_OEXTU_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_OEXTU_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_OEXTU_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_FULL_V_I_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_FULL_V_I_OPS (vint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_FULL_V_I_OPS (vint8mf4_t, 0)
 DEF_RVV_FULL_V_I_OPS (vint8mf2_t, 0)
 DEF_RVV_FULL_V_I_OPS (vint8m1_t, 0)
 DEF_RVV_FULL_V_I_OPS (vint8m2_t, 0)
 DEF_RVV_FULL_V_I_OPS (vint8m4_t, 0)
 DEF_RVV_FULL_V_I_OPS (vint8m8_t, 0)
-DEF_RVV_FULL_V_I_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_FULL_V_I_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_FULL_V_I_OPS (vint16mf2_t, 0)
 DEF_RVV_FULL_V_I_OPS (vint16m1_t, 0)
 DEF_RVV_FULL_V_I_OPS (vint16m2_t, 0)
 DEF_RVV_FULL_V_I_OPS (vint16m4_t, 0)
 DEF_RVV_FULL_V_I_OPS (vint16m8_t, 0)
-DEF_RVV_FULL_V_I_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_FULL_V_I_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_FULL_V_I_OPS (vint32m1_t, 0)
 DEF_RVV_FULL_V_I_OPS (vint32m2_t, 0)
 DEF_RVV_FULL_V_I_OPS (vint32m4_t, 0)
@@ -532,20 +532,20 @@  DEF_RVV_FULL_V_I_OPS (vint64m2_t, RVV_REQUIRE_FULL_V)
 DEF_RVV_FULL_V_I_OPS (vint64m4_t, RVV_REQUIRE_FULL_V)
 DEF_RVV_FULL_V_I_OPS (vint64m8_t, RVV_REQUIRE_FULL_V)
 
-DEF_RVV_FULL_V_U_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_FULL_V_U_OPS (vuint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_FULL_V_U_OPS (vuint8mf4_t, 0)
 DEF_RVV_FULL_V_U_OPS (vuint8mf2_t, 0)
 DEF_RVV_FULL_V_U_OPS (vuint8m1_t, 0)
 DEF_RVV_FULL_V_U_OPS (vuint8m2_t, 0)
 DEF_RVV_FULL_V_U_OPS (vuint8m4_t, 0)
 DEF_RVV_FULL_V_U_OPS (vuint8m8_t, 0)
-DEF_RVV_FULL_V_U_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_FULL_V_U_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_FULL_V_U_OPS (vuint16mf2_t, 0)
 DEF_RVV_FULL_V_U_OPS (vuint16m1_t, 0)
 DEF_RVV_FULL_V_U_OPS (vuint16m2_t, 0)
 DEF_RVV_FULL_V_U_OPS (vuint16m4_t, 0)
 DEF_RVV_FULL_V_U_OPS (vuint16m8_t, 0)
-DEF_RVV_FULL_V_U_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_FULL_V_U_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_FULL_V_U_OPS (vuint32m1_t, 0)
 DEF_RVV_FULL_V_U_OPS (vuint32m2_t, 0)
 DEF_RVV_FULL_V_U_OPS (vuint32m4_t, 0)
@@ -555,7 +555,7 @@  DEF_RVV_FULL_V_U_OPS (vuint64m2_t, RVV_REQUIRE_FULL_V)
 DEF_RVV_FULL_V_U_OPS (vuint64m4_t, RVV_REQUIRE_FULL_V)
 DEF_RVV_FULL_V_U_OPS (vuint64m8_t, RVV_REQUIRE_FULL_V)
 
-DEF_RVV_WEXTF_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WEXTF_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_WEXTF_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_WEXTF_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_WEXTF_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_16)
@@ -566,14 +566,14 @@  DEF_RVV_WEXTF_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_WEXTF_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_WEXTF_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64)
 
-DEF_RVV_CONVERT_I_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_CONVERT_I_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_CONVERT_I_OPS (vint16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_CONVERT_I_OPS (vint16m1_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_CONVERT_I_OPS (vint16m2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_CONVERT_I_OPS (vint16m4_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_CONVERT_I_OPS (vint16m8_t, RVV_REQUIRE_ELEN_FP_16)
 
-DEF_RVV_CONVERT_I_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_CONVERT_I_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_CONVERT_I_OPS (vint32m1_t, 0)
 DEF_RVV_CONVERT_I_OPS (vint32m2_t, 0)
 DEF_RVV_CONVERT_I_OPS (vint32m4_t, 0)
@@ -583,14 +583,14 @@  DEF_RVV_CONVERT_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_CONVERT_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_CONVERT_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_CONVERT_U_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_CONVERT_U_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_CONVERT_U_OPS (vuint16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_CONVERT_U_OPS (vuint16m1_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_CONVERT_U_OPS (vuint16m2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_CONVERT_U_OPS (vuint16m4_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_CONVERT_U_OPS (vuint16m8_t, RVV_REQUIRE_ELEN_FP_16)
 
-DEF_RVV_CONVERT_U_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_CONVERT_U_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_CONVERT_U_OPS (vuint32m1_t, 0)
 DEF_RVV_CONVERT_U_OPS (vuint32m2_t, 0)
 DEF_RVV_CONVERT_U_OPS (vuint32m4_t, 0)
@@ -600,7 +600,7 @@  DEF_RVV_CONVERT_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_CONVERT_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_CONVERT_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_WCONVERT_I_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WCONVERT_I_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_WCONVERT_I_OPS (vint32m1_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_WCONVERT_I_OPS (vint32m2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_WCONVERT_I_OPS (vint32m4_t, RVV_REQUIRE_ELEN_FP_16)
@@ -611,7 +611,7 @@  DEF_RVV_WCONVERT_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64
 DEF_RVV_WCONVERT_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_WCONVERT_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_WCONVERT_U_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WCONVERT_U_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_WCONVERT_U_OPS (vuint32m1_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_WCONVERT_U_OPS (vuint32m2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_WCONVERT_U_OPS (vuint32m4_t, RVV_REQUIRE_ELEN_FP_16)
@@ -622,7 +622,7 @@  DEF_RVV_WCONVERT_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_6
 DEF_RVV_WCONVERT_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_WCONVERT_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_WCONVERT_F_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WCONVERT_F_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_WCONVERT_F_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_WCONVERT_F_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_WCONVERT_F_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
@@ -633,76 +633,76 @@  DEF_RVV_WCONVERT_F_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_WCONVERT_F_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_WCONVERT_F_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64)
 
-DEF_RVV_F32_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_F32_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_F32_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_F32_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_F32_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_F32_OPS (vfloat32m8_t, RVV_REQUIRE_ELEN_FP_32)
 
-DEF_RVV_WI_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WI_OPS (vint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WI_OPS (vint8mf4_t, 0)
 DEF_RVV_WI_OPS (vint8mf2_t, 0)
 DEF_RVV_WI_OPS (vint8m1_t, 0)
 DEF_RVV_WI_OPS (vint8m2_t, 0)
 DEF_RVV_WI_OPS (vint8m4_t, 0)
 DEF_RVV_WI_OPS (vint8m8_t, 0)
-DEF_RVV_WI_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WI_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WI_OPS (vint16mf2_t, 0)
 DEF_RVV_WI_OPS (vint16m1_t, 0)
 DEF_RVV_WI_OPS (vint16m2_t, 0)
 DEF_RVV_WI_OPS (vint16m4_t, 0)
 DEF_RVV_WI_OPS (vint16m8_t, 0)
-DEF_RVV_WI_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WI_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WI_OPS (vint32m1_t, 0)
 DEF_RVV_WI_OPS (vint32m2_t, 0)
 DEF_RVV_WI_OPS (vint32m4_t, 0)
 DEF_RVV_WI_OPS (vint32m8_t, 0)
 
-DEF_RVV_WU_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WU_OPS (vuint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WU_OPS (vuint8mf4_t, 0)
 DEF_RVV_WU_OPS (vuint8mf2_t, 0)
 DEF_RVV_WU_OPS (vuint8m1_t, 0)
 DEF_RVV_WU_OPS (vuint8m2_t, 0)
 DEF_RVV_WU_OPS (vuint8m4_t, 0)
 DEF_RVV_WU_OPS (vuint8m8_t, 0)
-DEF_RVV_WU_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WU_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WU_OPS (vuint16mf2_t, 0)
 DEF_RVV_WU_OPS (vuint16m1_t, 0)
 DEF_RVV_WU_OPS (vuint16m2_t, 0)
 DEF_RVV_WU_OPS (vuint16m4_t, 0)
 DEF_RVV_WU_OPS (vuint16m8_t, 0)
-DEF_RVV_WU_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WU_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_WU_OPS (vuint32m1_t, 0)
 DEF_RVV_WU_OPS (vuint32m2_t, 0)
 DEF_RVV_WU_OPS (vuint32m4_t, 0)
 DEF_RVV_WU_OPS (vuint32m8_t, 0)
 
-DEF_RVV_WF_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WF_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_WF_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_WF_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_WF_OPS (vfloat16m2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_WF_OPS (vfloat16m4_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_WF_OPS (vfloat16m8_t, RVV_REQUIRE_ELEN_FP_16)
 
-DEF_RVV_WF_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_WF_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_WF_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_WF_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_WF_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_WF_OPS (vfloat32m8_t, RVV_REQUIRE_ELEN_FP_32)
 
-DEF_RVV_EI16_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EI16_OPS (vint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vint8mf4_t, 0)
 DEF_RVV_EI16_OPS (vint8mf2_t, 0)
 DEF_RVV_EI16_OPS (vint8m1_t, 0)
 DEF_RVV_EI16_OPS (vint8m2_t, 0)
 DEF_RVV_EI16_OPS (vint8m4_t, 0)
-DEF_RVV_EI16_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EI16_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vint16mf2_t, 0)
 DEF_RVV_EI16_OPS (vint16m1_t, 0)
 DEF_RVV_EI16_OPS (vint16m2_t, 0)
 DEF_RVV_EI16_OPS (vint16m4_t, 0)
 DEF_RVV_EI16_OPS (vint16m8_t, 0)
-DEF_RVV_EI16_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EI16_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vint32m1_t, 0)
 DEF_RVV_EI16_OPS (vint32m2_t, 0)
 DEF_RVV_EI16_OPS (vint32m4_t, 0)
@@ -711,19 +711,19 @@  DEF_RVV_EI16_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
-DEF_RVV_EI16_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EI16_OPS (vuint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vuint8mf4_t, 0)
 DEF_RVV_EI16_OPS (vuint8mf2_t, 0)
 DEF_RVV_EI16_OPS (vuint8m1_t, 0)
 DEF_RVV_EI16_OPS (vuint8m2_t, 0)
 DEF_RVV_EI16_OPS (vuint8m4_t, 0)
-DEF_RVV_EI16_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EI16_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vuint16mf2_t, 0)
 DEF_RVV_EI16_OPS (vuint16m1_t, 0)
 DEF_RVV_EI16_OPS (vuint16m2_t, 0)
 DEF_RVV_EI16_OPS (vuint16m4_t, 0)
 DEF_RVV_EI16_OPS (vuint16m8_t, 0)
-DEF_RVV_EI16_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EI16_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vuint32m1_t, 0)
 DEF_RVV_EI16_OPS (vuint32m2_t, 0)
 DEF_RVV_EI16_OPS (vuint32m4_t, 0)
@@ -733,14 +733,14 @@  DEF_RVV_EI16_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_EI16_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EI16_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_EI16_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_EI16_OPS (vfloat16m2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_EI16_OPS (vfloat16m4_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_EI16_OPS (vfloat16m8_t, RVV_REQUIRE_ELEN_FP_16)
 
-DEF_RVV_EI16_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EI16_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_EI16_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_EI16_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_EI16_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
@@ -751,13 +751,13 @@  DEF_RVV_EI16_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_EI16_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_EI16_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64)
 
-DEF_RVV_EEW8_INTERPRET_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EEW8_INTERPRET_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EEW8_INTERPRET_OPS (vint16mf2_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vint16m1_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vint16m2_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vint16m4_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vint16m8_t, 0)
-DEF_RVV_EEW8_INTERPRET_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EEW8_INTERPRET_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EEW8_INTERPRET_OPS (vint32m1_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vint32m2_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vint32m4_t, 0)
@@ -766,13 +766,13 @@  DEF_RVV_EEW8_INTERPRET_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EEW8_INTERPRET_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EEW8_INTERPRET_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EEW8_INTERPRET_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
-DEF_RVV_EEW8_INTERPRET_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EEW8_INTERPRET_OPS (vuint16mf2_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vuint16m1_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vuint16m2_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vuint16m4_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vuint16m8_t, 0)
-DEF_RVV_EEW8_INTERPRET_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EEW8_INTERPRET_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EEW8_INTERPRET_OPS (vuint32m1_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vuint32m2_t, 0)
 DEF_RVV_EEW8_INTERPRET_OPS (vuint32m4_t, 0)
@@ -788,7 +788,7 @@  DEF_RVV_EEW16_INTERPRET_OPS (vint8m1_t, 0)
 DEF_RVV_EEW16_INTERPRET_OPS (vint8m2_t, 0)
 DEF_RVV_EEW16_INTERPRET_OPS (vint8m4_t, 0)
 DEF_RVV_EEW16_INTERPRET_OPS (vint8m8_t, 0)
-DEF_RVV_EEW16_INTERPRET_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EEW16_INTERPRET_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EEW16_INTERPRET_OPS (vint32m1_t, 0)
 DEF_RVV_EEW16_INTERPRET_OPS (vint32m2_t, 0)
 DEF_RVV_EEW16_INTERPRET_OPS (vint32m4_t, 0)
@@ -803,7 +803,7 @@  DEF_RVV_EEW16_INTERPRET_OPS (vuint8m1_t, 0)
 DEF_RVV_EEW16_INTERPRET_OPS (vuint8m2_t, 0)
 DEF_RVV_EEW16_INTERPRET_OPS (vuint8m4_t, 0)
 DEF_RVV_EEW16_INTERPRET_OPS (vuint8m8_t, 0)
-DEF_RVV_EEW16_INTERPRET_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_EEW16_INTERPRET_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_EEW16_INTERPRET_OPS (vuint32m1_t, 0)
 DEF_RVV_EEW16_INTERPRET_OPS (vuint32m2_t, 0)
 DEF_RVV_EEW16_INTERPRET_OPS (vuint32m4_t, 0)
@@ -994,53 +994,53 @@  DEF_RVV_UNSIGNED_EEW64_LMUL1_INTERPRET_OPS(vbool16_t, 0)
 DEF_RVV_UNSIGNED_EEW64_LMUL1_INTERPRET_OPS(vbool32_t, 0)
 DEF_RVV_UNSIGNED_EEW64_LMUL1_INTERPRET_OPS(vbool64_t, RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_X2_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint8mf4_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint8mf2_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint8m1_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint8m2_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint8m4_t, 0)
-DEF_RVV_X2_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint16mf2_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint16m1_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint16m2_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint16m4_t, 0)
-DEF_RVV_X2_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint32m1_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint32m2_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint32m4_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
-DEF_RVV_X2_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint8mf4_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint8mf2_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint8m1_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint8m2_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint8m4_t, 0)
-DEF_RVV_X2_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint16mf2_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint16m1_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint16m2_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint16m4_t, 0)
-DEF_RVV_X2_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint32m1_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint32m2_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint32m4_t, 0)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
-DEF_RVV_X2_VLMUL_EXT_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vbfloat16mf2_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_X2_VLMUL_EXT_OPS (vbfloat16m1_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_X2_VLMUL_EXT_OPS (vbfloat16m2_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_X2_VLMUL_EXT_OPS (vbfloat16m4_t, RVV_REQUIRE_ELEN_BF_16)
-DEF_RVV_X2_VLMUL_EXT_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_X2_VLMUL_EXT_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_X2_VLMUL_EXT_OPS (vfloat16m2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_X2_VLMUL_EXT_OPS (vfloat16m4_t, RVV_REQUIRE_ELEN_FP_16)
-DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
@@ -1048,107 +1048,107 @@  DEF_RVV_X2_VLMUL_EXT_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_X2_VLMUL_EXT_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
 
-DEF_RVV_X4_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vint8mf4_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vint8mf2_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vint8m1_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vint8m2_t, 0)
-DEF_RVV_X4_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vint16mf2_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vint16m1_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vint16m2_t, 0)
-DEF_RVV_X4_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vint32m1_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vint32m2_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
-DEF_RVV_X4_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vuint8mf4_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vuint8mf2_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vuint8m1_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vuint8m2_t, 0)
-DEF_RVV_X4_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vuint16mf2_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vuint16m1_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vuint16m2_t, 0)
-DEF_RVV_X4_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vuint32m1_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vuint32m2_t, 0)
 DEF_RVV_X4_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
-DEF_RVV_X4_VLMUL_EXT_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vbfloat16mf2_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_X4_VLMUL_EXT_OPS (vbfloat16m1_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_X4_VLMUL_EXT_OPS (vbfloat16m2_t, RVV_REQUIRE_ELEN_BF_16)
-DEF_RVV_X4_VLMUL_EXT_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_X4_VLMUL_EXT_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_X4_VLMUL_EXT_OPS (vfloat16m2_t, RVV_REQUIRE_ELEN_FP_16)
-DEF_RVV_X4_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X4_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_X4_VLMUL_EXT_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_X4_VLMUL_EXT_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_X4_VLMUL_EXT_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64)
 
-DEF_RVV_X8_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X8_VLMUL_EXT_OPS (vint8mf4_t, 0)
 DEF_RVV_X8_VLMUL_EXT_OPS (vint8mf2_t, 0)
 DEF_RVV_X8_VLMUL_EXT_OPS (vint8m1_t, 0)
-DEF_RVV_X8_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X8_VLMUL_EXT_OPS (vint16mf2_t, 0)
 DEF_RVV_X8_VLMUL_EXT_OPS (vint16m1_t, 0)
-DEF_RVV_X8_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X8_VLMUL_EXT_OPS (vint32m1_t, 0)
 DEF_RVV_X8_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
-DEF_RVV_X8_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X8_VLMUL_EXT_OPS (vuint8mf4_t, 0)
 DEF_RVV_X8_VLMUL_EXT_OPS (vuint8mf2_t, 0)
 DEF_RVV_X8_VLMUL_EXT_OPS (vuint8m1_t, 0)
-DEF_RVV_X8_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X8_VLMUL_EXT_OPS (vuint16mf2_t, 0)
 DEF_RVV_X8_VLMUL_EXT_OPS (vuint16m1_t, 0)
-DEF_RVV_X8_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X8_VLMUL_EXT_OPS (vuint32m1_t, 0)
 DEF_RVV_X8_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
-DEF_RVV_X8_VLMUL_EXT_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_X8_VLMUL_EXT_OPS (vbfloat16mf2_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_X8_VLMUL_EXT_OPS (vbfloat16m1_t, RVV_REQUIRE_ELEN_BF_16)
-DEF_RVV_X8_VLMUL_EXT_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_X8_VLMUL_EXT_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_X8_VLMUL_EXT_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
-DEF_RVV_X8_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X8_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_X8_VLMUL_EXT_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_X8_VLMUL_EXT_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64)
 
-DEF_RVV_X16_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X16_VLMUL_EXT_OPS (vint8mf4_t, 0)
 DEF_RVV_X16_VLMUL_EXT_OPS (vint8mf2_t, 0)
-DEF_RVV_X16_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X16_VLMUL_EXT_OPS (vint16mf2_t, 0)
-DEF_RVV_X16_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_X16_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X16_VLMUL_EXT_OPS (vuint8mf4_t, 0)
 DEF_RVV_X16_VLMUL_EXT_OPS (vuint8mf2_t, 0)
-DEF_RVV_X16_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X16_VLMUL_EXT_OPS (vuint16mf2_t, 0)
-DEF_RVV_X16_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_X16_VLMUL_EXT_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_X16_VLMUL_EXT_OPS (vbfloat16mf2_t, RVV_REQUIRE_ELEN_BF_16)
-DEF_RVV_X16_VLMUL_EXT_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_X16_VLMUL_EXT_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
-DEF_RVV_X16_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X16_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_X32_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X32_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X32_VLMUL_EXT_OPS (vint8mf4_t, 0)
-DEF_RVV_X32_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_X32_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X32_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_X32_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_X32_VLMUL_EXT_OPS (vuint8mf4_t, 0)
-DEF_RVV_X32_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_X32_VLMUL_EXT_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_X32_VLMUL_EXT_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X32_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_X32_VLMUL_EXT_OPS (vbfloat16mf4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_X32_VLMUL_EXT_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 
-DEF_RVV_X64_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_X64_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_X64_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_X64_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ELEN_64)
 
 DEF_RVV_LMUL1_OPS (vint8m1_t, 0)
 DEF_RVV_LMUL1_OPS (vint16m1_t, 0)
@@ -1189,20 +1189,20 @@  DEF_RVV_LMUL4_OPS (vbfloat16m4_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_LMUL4_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_LMUL4_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64)
 
-DEF_RVV_TUPLE_OPS (vint8mf8x2_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint8mf8x2_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint8mf8x3_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint8mf8x3_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint8mf8x4_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint8mf8x4_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint8mf8x5_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint8mf8x5_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint8mf8x6_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint8mf8x6_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint8mf8x7_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint8mf8x7_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint8mf8x8_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint8mf8x8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x3_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x3_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x5_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x5_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x6_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x6_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x7_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x7_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint8mf8x8_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint8mf8x8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_TUPLE_OPS (vint8mf4x2_t, 0)
 DEF_RVV_TUPLE_OPS (vuint8mf4x2_t, 0)
 DEF_RVV_TUPLE_OPS (vint8mf4x3_t, 0)
@@ -1253,20 +1253,20 @@  DEF_RVV_TUPLE_OPS (vint8m2x4_t, 0)
 DEF_RVV_TUPLE_OPS (vuint8m2x4_t, 0)
 DEF_RVV_TUPLE_OPS (vint8m4x2_t, 0)
 DEF_RVV_TUPLE_OPS (vuint8m4x2_t, 0)
-DEF_RVV_TUPLE_OPS (vint16mf4x2_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint16mf4x2_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint16mf4x3_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint16mf4x3_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint16mf4x4_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint16mf4x4_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint16mf4x5_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint16mf4x5_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint16mf4x6_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint16mf4x6_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint16mf4x7_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint16mf4x7_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint16mf4x8_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint16mf4x8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x3_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x3_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x5_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x5_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x6_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x6_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x7_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x7_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint16mf4x8_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint16mf4x8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_TUPLE_OPS (vint16mf2x2_t, 0)
 DEF_RVV_TUPLE_OPS (vuint16mf2x2_t, 0)
 DEF_RVV_TUPLE_OPS (vint16mf2x3_t, 0)
@@ -1303,20 +1303,20 @@  DEF_RVV_TUPLE_OPS (vint16m2x4_t, 0)
 DEF_RVV_TUPLE_OPS (vuint16m2x4_t, 0)
 DEF_RVV_TUPLE_OPS (vint16m4x2_t, 0)
 DEF_RVV_TUPLE_OPS (vuint16m4x2_t, 0)
-DEF_RVV_TUPLE_OPS (vint32mf2x2_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint32mf2x2_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint32mf2x3_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint32mf2x3_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint32mf2x4_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint32mf2x4_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint32mf2x5_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint32mf2x5_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint32mf2x6_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint32mf2x6_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint32mf2x7_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint32mf2x7_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vint32mf2x8_t, RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vuint32mf2x8_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x3_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x3_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x5_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x5_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x6_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x6_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x7_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x7_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vint32mf2x8_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vuint32mf2x8_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_TUPLE_OPS (vint32m1x2_t, 0)
 DEF_RVV_TUPLE_OPS (vuint32m1x2_t, 0)
 DEF_RVV_TUPLE_OPS (vint32m1x3_t, 0)
@@ -1361,13 +1361,13 @@  DEF_RVV_TUPLE_OPS (vint64m2x4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_TUPLE_OPS (vuint64m2x4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_TUPLE_OPS (vint64m4x2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_TUPLE_OPS (vuint64m4x2_t, RVV_REQUIRE_ELEN_64)
-DEF_RVV_TUPLE_OPS (vbfloat16mf4x2_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vbfloat16mf4x3_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vbfloat16mf4x4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vbfloat16mf4x5_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vbfloat16mf4x6_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vbfloat16mf4x7_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vbfloat16mf4x8_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vbfloat16mf4x2_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vbfloat16mf4x3_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vbfloat16mf4x4_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vbfloat16mf4x5_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vbfloat16mf4x6_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vbfloat16mf4x7_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vbfloat16mf4x8_t, RVV_REQUIRE_ELEN_BF_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_TUPLE_OPS (vbfloat16mf2x2_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_TUPLE_OPS (vbfloat16mf2x3_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_TUPLE_OPS (vbfloat16mf2x4_t, RVV_REQUIRE_ELEN_BF_16)
@@ -1386,13 +1386,13 @@  DEF_RVV_TUPLE_OPS (vbfloat16m2x2_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_TUPLE_OPS (vbfloat16m2x3_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_TUPLE_OPS (vbfloat16m2x4_t, RVV_REQUIRE_ELEN_BF_16)
 DEF_RVV_TUPLE_OPS (vbfloat16m4x2_t, RVV_REQUIRE_ELEN_BF_16)
-DEF_RVV_TUPLE_OPS (vfloat16mf4x2_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat16mf4x3_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat16mf4x4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat16mf4x5_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat16mf4x6_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat16mf4x7_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat16mf4x8_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vfloat16mf4x2_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat16mf4x3_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat16mf4x4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat16mf4x5_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat16mf4x6_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat16mf4x7_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat16mf4x8_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_TUPLE_OPS (vfloat16mf2x2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_TUPLE_OPS (vfloat16mf2x3_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_TUPLE_OPS (vfloat16mf2x4_t, RVV_REQUIRE_ELEN_FP_16)
@@ -1411,13 +1411,13 @@  DEF_RVV_TUPLE_OPS (vfloat16m2x2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_TUPLE_OPS (vfloat16m2x3_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_TUPLE_OPS (vfloat16m2x4_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_TUPLE_OPS (vfloat16m4x2_t, RVV_REQUIRE_ELEN_FP_16)
-DEF_RVV_TUPLE_OPS (vfloat32mf2x2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat32mf2x3_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat32mf2x4_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat32mf2x5_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat32mf2x6_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat32mf2x7_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
-DEF_RVV_TUPLE_OPS (vfloat32mf2x8_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x3_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x4_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x5_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x6_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x7_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
+DEF_RVV_TUPLE_OPS (vfloat32mf2x8_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ELEN_64)
 DEF_RVV_TUPLE_OPS (vfloat32m1x2_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_TUPLE_OPS (vfloat32m1x3_t, RVV_REQUIRE_ELEN_FP_32)
 DEF_RVV_TUPLE_OPS (vfloat32m1x4_t, RVV_REQUIRE_ELEN_FP_32)
@@ -1441,7 +1441,7 @@  DEF_RVV_TUPLE_OPS (vfloat64m2x3_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_TUPLE_OPS (vfloat64m2x4_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_TUPLE_OPS (vfloat64m4x2_t, RVV_REQUIRE_ELEN_FP_64)
 
-DEF_RVV_CRYPTO_SEW32_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64)
+DEF_RVV_CRYPTO_SEW32_OPS (vuint32mf2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_CRYPTO_SEW32_OPS (vuint32m1_t, 0)
 DEF_RVV_CRYPTO_SEW32_OPS (vuint32m2_t, 0)
 DEF_RVV_CRYPTO_SEW32_OPS (vuint32m4_t, 0)
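[Editor's note: a quick way to see the effect of the requirement change above. After this patch the mf2 SEW-32 tuple types key on ELEN rather than on VLEN alone. The probe below is my own sketch, not part of the patch; it should compile on an ELEN=64 target such as -march=rv64gcv and be rejected with "unknown type name" under -march=rv32gc_zve32f_zvl64b, in line with the abi-16.c expectations further down.

/* Sketch only: probes a tuple type now gated by RVV_REQUIRE_ELEN_64.
   Expected to compile when ELEN=64 (e.g. -march=rv64gcv) and to be
   rejected as an unknown type when only zve32f is enabled, even
   with zvl64b raising VLEN to 64.  */
#include <riscv_vector.h>

vuint32mf2x2_t
passthrough (vuint32mf2x2_t x)
{
  return x;
}
]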
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 23744d076f9..1b0d61940a6 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -64,13 +64,13 @@  Encode the ratio of SEW/LMUL into the mask types.
   |BI   |RVVM1BI|RVVMF2BI|RVVMF4BI|RVVMF8BI|RVVMF16BI|RVVMF32BI|RVVMF64BI|  */
 
 /* Return 'REQUIREMENT' for machine_mode 'MODE'.
-   For example: 'MODE' = RVVMF64BImode needs TARGET_MIN_VLEN > 32.  */
+   For example: 'MODE' = RVVMF64BImode needs TARGET_VECTOR_ELEN_64.  */
 #ifndef ENTRY
 #define ENTRY(MODE, REQUIREMENT, VLMUL, RATIO)
 #endif
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F8, 64)
+ENTRY (RVVMF64BI, TARGET_VECTOR_ELEN_64, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F8, 64)
 ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F4, 32)
 ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2 , 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
@@ -85,7 +85,7 @@  ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
 ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
 ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
+ENTRY (RVVMF8QI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
@@ -93,7 +93,7 @@  ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
 ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
+ENTRY (RVVMF4HI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_BF_16.  */
 ENTRY (RVVM8BF, TARGET_VECTOR_ELEN_BF_16, LMUL_8, 2)
@@ -109,21 +109,21 @@  ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
 ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -152,61 +152,61 @@  ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
 TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
 TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
 TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
 TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
 TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
 TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
 TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
 TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
 TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
 TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
 TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
 TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
 TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
 TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8BF, TARGET_VECTOR_ELEN_BF_16, RVVM1BF, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x8BF, TARGET_VECTOR_ELEN_BF_16, RVVMF2BF, 8, LMUL_F2, 32)
@@ -236,67 +236,67 @@  TUPLE_ENTRY (RVVMF4x2BF, TARGET_VECTOR_ELEN_BF_16 && TARGET_MIN_VLEN > 32, RVVMF
 
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
 TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_VECTOR_ELEN_64 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
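[Editor's note: the common thread in the switch-table changes above is the ratio column. Every ENTRY/TUPLE_ENTRY whose SEW/LMUL ratio is 64 is now keyed on TARGET_VECTOR_ELEN_64 instead of TARGET_MIN_VLEN > 32. Rearranging LMUL >= SEW/ELEN as SEW/LMUL <= ELEN makes the pattern mechanical; here is a standalone sketch of mine (not GCC code) of the rule the table encodes.

/* A mode with SEW/LMUL ratio R is only available when R <= ELEN,
   i.e. LMUL >= SEW/ELEN rearranged.  */
#include <stdio.h>

static int
mode_available (int ratio, int elen)
{
  return ratio <= elen;
}

int
main (void)
{
  /* Ratio-64 modes such as RVVMF2SI need ELEN=64.  */
  printf ("ratio 64, ELEN 32 -> %d\n", mode_available (64, 32)); /* 0 */
  printf ("ratio 64, ELEN 64 -> %d\n", mode_available (64, 64)); /* 1 */
  /* Ratio-32 modes such as RVVM1SI stay legal on zve32*.  */
  printf ("ratio 32, ELEN 32 -> %d\n", mode_available (32, 32)); /* 1 */
  return 0;
}
]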
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr111391-2.c b/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr111391-2.c
index 1f170c962e1..32db3a68fd3 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr111391-2.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr111391-2.c
@@ -3,7 +3,7 @@ 
 
 #include "pr111391-1.c"
 
-/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*2,\s*e32,\s*mf2,\s*t[au],\s*m[au]} 1 } } */
+/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*2,\s*e32,\s*m1,\s*t[au],\s*m[au]} 1 } } */
 /* { dg-final { scan-assembler-times {vmv\.x\.s} 2 } } */
 /* { dg-final { scan-assembler-times {vslidedown.vi\s+v[0-9]+,\s*v[0-9]+,\s*1} 1 } } */
 /* { dg-final { scan-assembler-times {slli\s+[a-x0-9]+,[a-x0-9]+,32} 1 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-14.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-14.c
index 163152ae923..222d8c233ab 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-14.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-14.c
@@ -1,20 +1,20 @@ 
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc_zve32x_zvl64b -mabi=ilp32d" } */
 
-void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;}
-void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;}
-void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;}
-void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;}
-void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;}
-void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;} 
-void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;}
-void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;}
-void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;}
-void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;}
-void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;}
-void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;}
-void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;}
-void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;}
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x2_t'} } */
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x2_t'} } */
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x3_t'} } */
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x3_t'} } */
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x4_t'} } */
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x4_t'} } */
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x5_t'} } */
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x5_t'} } */
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x6_t'} } */
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x6_t'} } */
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x7_t'} } */
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x7_t'} } */
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x8_t'} } */
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x8_t'} } */
 void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;}
 void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;}
 void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;}
@@ -65,20 +65,20 @@  void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;}
 void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;}
 void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;}
 void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;}
-void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;}
-void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;}
-void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;}
-void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;}
-void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;}
-void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;}
-void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;}
-void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;}
-void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;}
-void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;}
-void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;}
-void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;}
-void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;}
-void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;}
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x2_t'} } */
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x2_t'} } */
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x3_t'} } */
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x3_t'} } */
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x4_t'} } */
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x4_t'} } */
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x5_t'} } */
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x5_t'} } */
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x6_t'} } */
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x6_t'} } */
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x7_t'} } */
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x7_t'} } */
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x8_t'} } */
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x8_t'} } */
 void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;}
 void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;}
 void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;}
@@ -115,20 +115,20 @@  void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;}
 void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;}
 void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;}
 void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;}
-void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;}
-void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;}
-void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;}
-void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;}
-void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;}
-void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;}
-void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;}
-void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;}
-void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;}
-void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;}
-void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;}
-void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;}
-void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;}
-void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;}
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x2_t'} } */
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x2_t'} } */
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x3_t'} } */
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x3_t'} } */
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x4_t'} } */
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x4_t'} } */
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x5_t'} } */
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x5_t'} } */
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x6_t'} } */
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x6_t'} } */
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x7_t'} } */
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x7_t'} } */
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x8_t'} } */
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x8_t'} } */
 void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;}
 void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;}
 void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-16.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-16.c
index 9e962a70acf..2762b7a0e30 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-16.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-16.c
@@ -1,20 +1,20 @@ 
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc_zve32f_zvl64b -mabi=ilp32d" } */
 
-void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;}
-void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;}
-void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;}
-void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;}
-void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;}
-void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;} 
-void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;}
-void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;}
-void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;}
-void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;}
-void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;}
-void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;}
-void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;}
-void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;}
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x2_t'} } */
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x2_t'} } */
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x3_t'} } */
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x3_t'} } */
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x4_t'} } */
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x4_t'} } */
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x5_t'} } */
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x5_t'} } */
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x6_t'} } */
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x6_t'} } */
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x7_t'} } */
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x7_t'} } */
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x8_t'} } */
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x8_t'} } */
 void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;}
 void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;}
 void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;}
@@ -65,20 +65,20 @@  void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;}
 void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;}
 void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;}
 void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;}
-void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;}
-void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;}
-void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;}
-void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;}
-void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;}
-void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;}
-void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;}
-void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;}
-void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;}
-void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;}
-void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;}
-void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;}
-void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;}
-void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;}
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x2_t'} } */
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x2_t'} } */
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x3_t'} } */
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x3_t'} } */
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x4_t'} } */
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x4_t'} } */
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x5_t'} } */
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x5_t'} } */
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x6_t'} } */
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x6_t'} } */
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x7_t'} } */
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x7_t'} } */
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x8_t'} } */
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x8_t'} } */
 void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;}
 void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;}
 void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;}
@@ -115,20 +115,20 @@  void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;}
 void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;}
 void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;}
 void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;}
-void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;}
-void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;}
-void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;}
-void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;}
-void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;}
-void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;}
-void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;}
-void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;}
-void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;}
-void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;}
-void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;}
-void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;}
-void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;}
-void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;}
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x2_t'} } */
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x2_t'} } */
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x3_t'} } */
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x3_t'} } */
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x4_t'} } */
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x4_t'} } */
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x5_t'} } */
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x5_t'} } */
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x6_t'} } */
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x6_t'} } */
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x7_t'} } */
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x7_t'} } */
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x8_t'} } */
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x8_t'} } */
 void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;}
 void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;}
 void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;}
@@ -179,13 +179,13 @@  void f___rvv_float16m1_t () {__rvv_float16m1_t t;} /* { dg-error {unknown type n
 void f___rvv_float16m2_t () {__rvv_float16m2_t t;} /* { dg-error {unknown type name '__rvv_float16m2_t'} } */
 void f___rvv_float16m4_t () {__rvv_float16m4_t t;} /* { dg-error {unknown type name '__rvv_float16m4_t'} } */
 void f___rvv_float16m8_t () {__rvv_float16m8_t t;} /* { dg-error {unknown type name '__rvv_float16m8_t'} } */
-void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;}
-void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;}
-void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;}
-void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;}
-void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;}
-void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;}
-void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;}
+void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x2_t'} } */
+void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x3_t'} } */
+void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x4_t'} } */
+void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x5_t'} } */
+void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x6_t'} } */
+void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x7_t'} } */
+void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x8_t'} } */
 void f___rvv_float32m1x2_t () {__rvv_float32m1x2_t t;}
 void f___rvv_float32m1x3_t () {__rvv_float32m1x3_t t;}
 void f___rvv_float32m1x4_t () {__rvv_float32m1x4_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-18.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-18.c
index 402e8f6ba22..95b760fa4e6 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-18.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-18.c
@@ -1,20 +1,20 @@ 
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc_zve32x_zvl64b_zvfhmin -mabi=ilp32d" } */
 
-void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;}
-void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;}
-void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;}
-void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;}
-void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;}
-void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;} 
-void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;}
-void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;}
-void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;}
-void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;}
-void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;}
-void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;}
-void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;}
-void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;}
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x2_t'} } */
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x2_t'} } */
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x3_t'} } */
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x3_t'} } */
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x4_t'} } */
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x4_t'} } */
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x5_t'} } */
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x5_t'} } */
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x6_t'} } */
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x6_t'} } */
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x7_t'} } */
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x7_t'} } */
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x8_t'} } */
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x8_t'} } */
 void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;}
 void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;}
 void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;}
@@ -65,20 +65,20 @@  void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;}
 void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;}
 void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;}
 void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;}
-void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;}
-void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;}
-void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;}
-void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;}
-void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;}
-void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;}
-void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;}
-void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;}
-void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;}
-void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;}
-void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;}
-void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;}
-void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;}
-void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;}
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x2_t'} } */
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x2_t'} } */
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x3_t'} } */
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x3_t'} } */
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x4_t'} } */
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x4_t'} } */
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x5_t'} } */
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x5_t'} } */
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x6_t'} } */
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x6_t'} } */
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x7_t'} } */
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x7_t'} } */
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x8_t'} } */
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x8_t'} } */
 void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;}
 void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;}
 void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;}
@@ -115,20 +115,20 @@  void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;}
 void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;}
 void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;}
 void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;}
-void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;}
-void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;}
-void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;}
-void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;}
-void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;}
-void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;}
-void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;}
-void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;}
-void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;}
-void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;}
-void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;}
-void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;}
-void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;}
-void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;}
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x2_t'} } */
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x2_t'} } */
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x3_t'} } */
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x3_t'} } */
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x4_t'} } */
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x4_t'} } */
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x5_t'} } */
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x5_t'} } */
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x6_t'} } */
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x6_t'} } */
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x7_t'} } */
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x7_t'} } */
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x8_t'} } */
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x8_t'} } */
 void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;}
 void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;}
 void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;}
@@ -173,13 +173,13 @@  void f___rvv_int64m2x4_t () {__rvv_int64m2x4_t t;} /* { dg-error {unknown type n
 void f___rvv_uint64m2x4_t () {__rvv_uint64m2x4_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x4_t'} } */
 void f___rvv_int64m4x2_t () {__rvv_int64m4x2_t t;} /* { dg-error {unknown type name '__rvv_int64m4x2_t'} } */
 void f___rvv_uint64m4x2_t () {__rvv_uint64m4x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m4x2_t'} } */
-void f___rvv_float16mf4x2_t () {__rvv_float16mf4x2_t t;}
-void f___rvv_float16mf4x3_t () {__rvv_float16mf4x3_t t;}
-void f___rvv_float16mf4x4_t () {__rvv_float16mf4x4_t t;}
-void f___rvv_float16mf4x5_t () {__rvv_float16mf4x5_t t;}
-void f___rvv_float16mf4x6_t () {__rvv_float16mf4x6_t t;}
-void f___rvv_float16mf4x7_t () {__rvv_float16mf4x7_t t;}
-void f___rvv_float16mf4x8_t () {__rvv_float16mf4x8_t t;}
+void f___rvv_float16mf4x2_t () {__rvv_float16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_float16mf4x2_t'} } */
+void f___rvv_float16mf4x3_t () {__rvv_float16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_float16mf4x3_t'} } */
+void f___rvv_float16mf4x4_t () {__rvv_float16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_float16mf4x4_t'} } */
+void f___rvv_float16mf4x5_t () {__rvv_float16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_float16mf4x5_t'} } */
+void f___rvv_float16mf4x6_t () {__rvv_float16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_float16mf4x6_t'} } */
+void f___rvv_float16mf4x7_t () {__rvv_float16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_float16mf4x7_t'} } */
+void f___rvv_float16mf4x8_t () {__rvv_float16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_float16mf4x8_t'} } */
 void f___rvv_float16mf2x2_t () {__rvv_float16mf2x2_t t;}
 void f___rvv_float16mf2x3_t () {__rvv_float16mf2x3_t t;}
 void f___rvv_float16mf2x4_t () {__rvv_float16mf2x4_t t;}
@@ -198,13 +198,13 @@  void f___rvv_float16m2x2_t () {__rvv_float16m2x2_t t;}
 void f___rvv_float16m2x3_t () {__rvv_float16m2x3_t t;}
 void f___rvv_float16m2x4_t () {__rvv_float16m2x4_t t;}
 void f___rvv_float16m4x2_t () {__rvv_float16m4x2_t t;}
-void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;}
-void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;}
-void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;}
-void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;}
-void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;}
-void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;}
-void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;}
+void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x2_t'} } */
+void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x3_t'} } */
+void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x4_t'} } */
+void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x5_t'} } */
+void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x6_t'} } */
+void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x7_t'} } */
+void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x8_t'} } */
 void f___rvv_float32m1x2_t () {__rvv_float32m1x2_t t;}
 void f___rvv_float32m1x3_t () {__rvv_float32m1x3_t t;}
 void f___rvv_float32m1x4_t () {__rvv_float32m1x4_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vsetvl_zve32f.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vsetvl_zve32f.c
new file mode 100644
index 00000000000..f6899c3bc2f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vsetvl_zve32f.c
@@ -0,0 +1,73 @@ 
+/* { dg-do compile } */
+/* { dg-options "-march=rv32imafc_zve32f_zvl128b -mabi=ilp32 -O2" } */
+
+struct S0
+{
+  unsigned a : 15;
+  int b;
+  int c;
+};
+
+struct S1
+{
+  struct S0 s0;
+  int e;
+};
+
+struct Z
+{
+  char c;
+  int z;
+} __attribute__((packed));
+
+union U
+{
+  struct S1 s1;
+  struct Z z;
+};
+
+int __attribute__((noinline, noclone))
+return_zero (void)
+{
+  return 0;
+}
+
+volatile union U gu;
+struct S0 gs;
+
+void __attribute__((noinline, noclone))
+check_outcome ()
+{
+  if (gs.a != 6
+      || gs.b != 80000)
+    __builtin_abort ();
+}
+
+int
+main (int argc, char *argv[])
+{
+  union U u;
+  struct S1 m;
+  struct S0 l;
+
+  if (return_zero ())
+    u.z.z = 20000;
+  else
+    {
+      u.s1.s0.a = 6;
+      u.s1.s0.b = 80000;
+      u.s1.e = 2;
+
+      m = u.s1;
+      m.s0.c = 0;
+      l = m.s0;
+      gs = l;
+    }
+
+  gu = u;
+  check_outcome ();
+  return 0;
+}
+
+/* { dg-final { scan-assembler {vsetivli\s+zero,\s*2,\s*e32,\s*m1,\s*t[au],\s*m[au]} } } */
+/* { dg-final { scan-assembler {vsetivli\s+zero,\s*4,\s*e32,\s*m1,\s*t[au],\s*m[au]} } } */
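[Editor's note: a rough cross-check of the two scan-assembler operands, my own arithmetic rather than anything stated in the patch. On ilp32 the structure copies move 16 bytes (union U / struct S1) and 12 bytes (struct S0), i.e. four e32 elements plus a 2-element sub-copy, and with ELEN=32 the smallest legal LMUL at e32 is m1. So both vsetivli instructions must say m1, where a zve64 target at zvl128b could legally have used mf2 for the 2-element case.

/* Layout check for the sizes assumed above (prints 12 and 16 for
   any ABI with 4-byte int, including ilp32).  */
#include <stdio.h>

struct S0 { unsigned a : 15; int b; int c; };
struct S1 { struct S0 s0; int e; };

int
main (void)
{
  printf ("sizeof (struct S0) = %zu\n", sizeof (struct S0));
  printf ("sizeof (struct S1) = %zu\n", sizeof (struct S1));
  return 0;
}
]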