Fix non-grouped SLP load/store accounting in alignment peeling

Message ID 20240508094548.3153638AA251@sourceware.org
State New
Series Fix non-grouped SLP load/store accounting in alignment peeling

Checks

Context                                         Check    Description
linaro-tcwg-bot/tcwg_gcc_build--master-aarch64  success  Testing passed
linaro-tcwg-bot/tcwg_gcc_check--master-aarch64  warning  Patch is already merged
linaro-tcwg-bot/tcwg_gcc_build--master-arm      warning  Patch is already merged

Commit Message

Richard Biener May 8, 2024, 9:45 a.m. UTC
When we have a non-grouped access we bogusly multiply by zero, since
DR_GROUP_SIZE is zero for such an access.  This shows up most with
single-lane SLP but also happens with the multi-lane splat case.
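
To illustrate the accounting problem, here is a minimal standalone
sketch of the before/after logic.  The plain variables below stand in
for the real vectorizer accessors LOOP_VINFO_VECT_FACTOR, STMT_SLP_TYPE,
STMT_VINFO_GROUPED_ACCESS and DR_GROUP_SIZE; this is illustrative only,
not vectorizer code:

  /* sketch.cc -- models the old and the fixed accounting.  */
  #include <cstdio>

  int
  main ()
  {
    unsigned vf = 4;       /* stands in for LOOP_VINFO_VECT_FACTOR */
    bool slp = true;       /* stands in for STMT_SLP_TYPE */
    bool grouped = false;  /* stands in for STMT_VINFO_GROUPED_ACCESS */
    unsigned dr_group_size = grouped ? 2 : 0;  /* DR_GROUP_SIZE, 0 here */

    /* Old accounting: multiplies by zero for a non-grouped SLP access.  */
    unsigned old_nscalars = slp ? vf * dr_group_size : vf;

    /* Fixed accounting: only use DR_GROUP_SIZE when the access is
       actually grouped.  */
    unsigned group_size = 1;
    if (slp && grouped)
      group_size = dr_group_size;
    unsigned new_nscalars = vf * group_size;

    printf ("old: %u  new: %u\n", old_nscalars, new_nscalars);
    /* Prints: old: 0  new: 4 */
    return 0;
  }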

Re-bootstrap & regtest running on x86_64-unknown-linux-gnu.

I've run into this latent bug on the force-slp branch.

Richard.

	* tree-vect-data-refs.cc (vect_enhance_data_refs_alignment):
	Properly guard DR_GROUP_SIZE access with STMT_VINFO_GROUPED_ACCESS.
---
 gcc/tree-vect-data-refs.cc | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
  

Patch

diff --git a/gcc/tree-vect-data-refs.cc b/gcc/tree-vect-data-refs.cc
index c531079d3bb..ae237407672 100644
--- a/gcc/tree-vect-data-refs.cc
+++ b/gcc/tree-vect-data-refs.cc
@@ -2290,8 +2290,11 @@  vect_enhance_data_refs_alignment (loop_vec_info loop_vinfo)
               if (unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
 		{
 		  poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
-		  nscalars = (STMT_SLP_TYPE (stmt_info)
-			      ? vf * DR_GROUP_SIZE (stmt_info) : vf);
+		  unsigned group_size = 1;
+		  if (STMT_SLP_TYPE (stmt_info)
+		      && STMT_VINFO_GROUPED_ACCESS (stmt_info))
+		    group_size = DR_GROUP_SIZE (stmt_info);
+		  nscalars = vf * group_size;
 		}
 
 	      /* Save info about DR in the hash table.  Also include peeling
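
For illustration, a loop of the following shape, with one non-grouped
load and one non-grouped store per iteration, is the kind of code that
can take this path once it is vectorized as single-lane SLP.  This is a
hypothetical example for exposition, not a testcase from the patch:

  /* Hypothetical example: each iteration does one non-grouped load
     (src[i]) and one non-grouped store (dst[i]); with single-lane SLP
     the old accounting computed nscalars as vf * 0 for these.  */
  void
  foo (int *restrict dst, int *restrict src, int n)
  {
    for (int i = 0; i < n; i++)
      dst[i] = src[i] + 1;
  }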