From patchwork Thu Apr 6 14:42:20 2023
X-Patchwork-Id: 67463
From: juzhe.zhong@rivai.ai
To: gcc-patches@gcc.gnu.org
Cc: kito.cheng@gmail.com, palmer@dabbelt.com, richard.sandiford@arm.com,
 rguenther@suse.de, jeffreyalaw@gmail.com, Juzhe-Zhong
Subject: [PATCH 1/3] VECT: Add WHILE_LEN pattern to support decrement IV
 manipulation for loop vectorizer.
Date: Thu, 6 Apr 2023 22:42:20 +0800
Message-Id: <20230406144222.316395-2-juzhe.zhong@rivai.ai>
In-Reply-To: <20230406144222.316395-1-juzhe.zhong@rivai.ai>
References: <20230406144222.316395-1-juzhe.zhong@rivai.ai>

From: Juzhe-Zhong

This patch adds the WHILE_LEN pattern. It is inspired by the RVV ISA's
simple "vvaddint32.s" example:
https://github.com/riscv/riscv-v-spec/blob/master/example/vvaddint32.s
More details are in the "vect_set_loop_controls_by_while_len"
implementation and its comments.
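To make the decrement-IV idea concrete before the example below, here is a
minimal C model of the loop shape this pattern enables (my sketch, not part
of the patch; "while_len" and "VF" are stand-ins for .WHILE_LEN and the
vectorization factor, and the inner scalar loop stands in for a
length-controlled vector operation):

    #include <stddef.h>

    #define VF 4 /* assumed compile-time vectorization factor */

    /* Models .WHILE_LEN: the number of elements to process this iteration,
       i.e. MIN (remaining, VF).  */
    static size_t
    while_len (size_t remaining, size_t vf)
    {
      return remaining < vf ? remaining : vf;
    }

    static void
    copy (int *dest, const int *src, size_t n)
    {
      size_t remaining = n; /* decrement IV, starts at the total count */
      size_t done = 0;
      while (remaining != 0)
        {
          size_t len = while_len (remaining, VF); /* _22 = .WHILE_LEN (ivtmp_20, vf) */
          for (size_t i = 0; i < len; ++i)        /* length-controlled vector body */
            dest[done + i] = src[done + i];
          done += len;
          remaining -= len; /* ivtmp_21 = ivtmp_20 - _22; cannot underflow */
        }
    }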
Consider the following case:

#define N 16
int src[N];
int dest[N];

void
foo (int n)
{
  for (int i = 0; i < n; i++)
    dest[i] = src[i];
}

Compiled with -march=rv64gcv -O3 --param riscv-autovec-preference=scalable
-fno-vect-cost-model -fno-tree-loop-distribute-patterns:

foo:
	ble	a0,zero,.L1
	lui	a4,%hi(.LANCHOR0)
	addi	a4,a4,%lo(.LANCHOR0)
	addi	a3,a4,64
	csrr	a2,vlenb
.L3:
	vsetvli	a5,a0,e32,m1,ta,ma
	vle32.v	v1,0(a4)
	sub	a0,a0,a5
	vse32.v	v1,0(a3)
	add	a4,a4,a2
	add	a3,a3,a2
	bne	a0,zero,.L3
.L1:
	ret

We also support multiple rgroups for SLP. More testcases are in
gcc/testsuite/gcc.target/riscv/rvv/autovec.

gcc/ChangeLog:

	* doc/md.texi: Add while_len support.
	* internal-fn.cc (while_len_direct): Ditto.
	(expand_while_len_optab_fn): Ditto.
	(direct_while_len_optab_supported_p): Ditto.
	* internal-fn.def (WHILE_LEN): Ditto.
	* optabs.def (OPTAB_D): Ditto.
	* tree-ssa-loop-manip.cc (create_iv): Ditto.
	* tree-ssa-loop-manip.h (create_iv): Ditto.
	* tree-vect-loop-manip.cc (vect_set_loop_controls_by_while_len): New
	function.
	(vect_set_loop_condition_partial_vectors): Add while_len support.
	* tree-vect-loop.cc (vect_get_loop_len): Ditto.
	* tree-vect-stmts.cc (vectorizable_store): Ditto.
	(vectorizable_load): Ditto.
	* tree-vectorizer.h (vect_get_loop_len): Ditto.

---
 gcc/doc/md.texi             |  14 +++
 gcc/internal-fn.cc          |  29 ++++++
 gcc/internal-fn.def         |   1 +
 gcc/optabs.def              |   1 +
 gcc/tree-ssa-loop-manip.cc  |   4 +-
 gcc/tree-ssa-loop-manip.h   |   2 +-
 gcc/tree-vect-loop-manip.cc | 186 ++++++++++++++++++++++++++++++++++--
 gcc/tree-vect-loop.cc       |  35 +++++--
 gcc/tree-vect-stmts.cc      |   9 +-
 gcc/tree-vectorizer.h       |   4 +-
 10 files changed, 264 insertions(+), 21 deletions(-)

diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi
index 8e3113599fd..72178ab014c 100644
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -4965,6 +4965,20 @@
 for (i = 1; i < operand3; i++)
   operand0[i] = operand0[i - 1] && (operand1 + i < operand2);
 @end smallexample
 
+@cindex @code{while_len@var{m}@var{n}} instruction pattern
+@item @code{while_len@var{m}@var{n}}
+Set operand 0 to the number of elements to be processed in the current
+iteration.  Operand 1 is the number of elements left to process and
+operand 2 is the vectorization factor.  The operation is equivalent to:
+
+@smallexample
+operand0 = MIN (operand1, operand2);
+@end smallexample
+
+Operand 2 can be a const_poly_int or poly_int related to the vector mode
+size.  Some targets, such as RISC-V, have a standalone instruction that
+computes MIN (n, MODE SIZE), which saves the use of a general-purpose
+register.
+
 @cindex @code{check_raw_ptrs@var{m}} instruction pattern
 @item @samp{check_raw_ptrs@var{m}}
 Check whether, given two pointers @var{a} and @var{b} and a length @var{len},

diff --git a/gcc/internal-fn.cc b/gcc/internal-fn.cc
index 6e81dc05e0e..5f44def90d3 100644
--- a/gcc/internal-fn.cc
+++ b/gcc/internal-fn.cc
@@ -127,6 +127,7 @@
 #define cond_binary_direct { 1, 1, true }
 #define cond_ternary_direct { 1, 1, true }
 #define while_direct { 0, 2, false }
+#define while_len_direct { 0, 0, false }
 #define fold_extract_direct { 2, 2, false }
 #define fold_left_direct { 1, 1, false }
 #define mask_fold_left_direct { 1, 1, false }
@@ -3702,6 +3703,33 @@ expand_while_optab_fn (internal_fn, gcall *stmt, convert_optab optab)
   emit_move_insn (lhs_rtx, ops[0].value);
 }
 
+/* Expand WHILE_LEN call STMT using optab OPTAB.
*/ +static void +expand_while_len_optab_fn (internal_fn, gcall *stmt, convert_optab optab) +{ + expand_operand ops[3]; + tree rhs_type[2]; + + tree lhs = gimple_call_lhs (stmt); + tree lhs_type = TREE_TYPE (lhs); + rtx lhs_rtx = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE); + create_output_operand (&ops[0], lhs_rtx, TYPE_MODE (lhs_type)); + + for (unsigned int i = 0; i < gimple_call_num_args (stmt); ++i) + { + tree rhs = gimple_call_arg (stmt, i); + rhs_type[i] = TREE_TYPE (rhs); + rtx rhs_rtx = expand_normal (rhs); + create_input_operand (&ops[i + 1], rhs_rtx, TYPE_MODE (rhs_type[i])); + } + + insn_code icode = direct_optab_handler (optab, TYPE_MODE (rhs_type[0])); + + expand_insn (icode, 3, ops); + if (!rtx_equal_p (lhs_rtx, ops[0].value)) + emit_move_insn (lhs_rtx, ops[0].value); +} + /* Expand a call to a convert-like optab using the operands in STMT. FN has a single output operand and NARGS input operands. */ @@ -3843,6 +3871,7 @@ multi_vector_optab_supported_p (convert_optab optab, tree_pair types, #define direct_scatter_store_optab_supported_p convert_optab_supported_p #define direct_len_store_optab_supported_p direct_optab_supported_p #define direct_while_optab_supported_p convert_optab_supported_p +#define direct_while_len_optab_supported_p direct_optab_supported_p #define direct_fold_extract_optab_supported_p direct_optab_supported_p #define direct_fold_left_optab_supported_p direct_optab_supported_p #define direct_mask_fold_left_optab_supported_p direct_optab_supported_p diff --git a/gcc/internal-fn.def b/gcc/internal-fn.def index 7fe742c2ae7..3a933abff5d 100644 --- a/gcc/internal-fn.def +++ b/gcc/internal-fn.def @@ -153,6 +153,7 @@ DEF_INTERNAL_OPTAB_FN (VEC_SET, 0, vec_set, vec_set) DEF_INTERNAL_OPTAB_FN (LEN_STORE, 0, len_store, len_store) DEF_INTERNAL_OPTAB_FN (WHILE_ULT, ECF_CONST | ECF_NOTHROW, while_ult, while) +DEF_INTERNAL_OPTAB_FN (WHILE_LEN, ECF_CONST | ECF_NOTHROW, while_len, while_len) DEF_INTERNAL_OPTAB_FN (CHECK_RAW_PTRS, ECF_CONST | ECF_NOTHROW, check_raw_ptrs, check_ptrs) DEF_INTERNAL_OPTAB_FN (CHECK_WAR_PTRS, ECF_CONST | ECF_NOTHROW, diff --git a/gcc/optabs.def b/gcc/optabs.def index 695f5911b30..f5938bd2c24 100644 --- a/gcc/optabs.def +++ b/gcc/optabs.def @@ -476,3 +476,4 @@ OPTAB_DC (vec_series_optab, "vec_series$a", VEC_SERIES) OPTAB_D (vec_shl_insert_optab, "vec_shl_insert_$a") OPTAB_D (len_load_optab, "len_load_$a") OPTAB_D (len_store_optab, "len_store_$a") +OPTAB_D (while_len_optab, "while_len$a") diff --git a/gcc/tree-ssa-loop-manip.cc b/gcc/tree-ssa-loop-manip.cc index 09acc1c94cc..cdbf280e249 100644 --- a/gcc/tree-ssa-loop-manip.cc +++ b/gcc/tree-ssa-loop-manip.cc @@ -59,14 +59,14 @@ static bitmap_obstack loop_renamer_obstack; void create_iv (tree base, tree step, tree var, class loop *loop, gimple_stmt_iterator *incr_pos, bool after, - tree *var_before, tree *var_after) + tree *var_before, tree *var_after, enum tree_code code) { gassign *stmt; gphi *phi; tree initial, step1; gimple_seq stmts; tree vb, va; - enum tree_code incr_op = PLUS_EXPR; + enum tree_code incr_op = code; edge pe = loop_preheader_edge (loop); if (var != NULL_TREE) diff --git a/gcc/tree-ssa-loop-manip.h b/gcc/tree-ssa-loop-manip.h index d49273a3987..da755320a3a 100644 --- a/gcc/tree-ssa-loop-manip.h +++ b/gcc/tree-ssa-loop-manip.h @@ -23,7 +23,7 @@ along with GCC; see the file COPYING3. 
If not see
 typedef void (*transform_callback)(class loop *, void *);
 
 extern void create_iv (tree, tree, tree, class loop *, gimple_stmt_iterator *,
-		       bool, tree *, tree *);
+		       bool, tree *, tree *, enum tree_code = PLUS_EXPR);
 extern void rewrite_into_loop_closed_ssa (bitmap, unsigned);
 extern void verify_loop_closed_ssa (bool, class loop * = NULL);

diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
index f60fa50e8f4..f3cd6c51d2e 100644
--- a/gcc/tree-vect-loop-manip.cc
+++ b/gcc/tree-vect-loop-manip.cc
@@ -682,6 +682,173 @@ vect_set_loop_controls_directly (class loop *loop, loop_vec_info loop_vinfo,
   return next_ctrl;
 }
 
+/* Helper for vect_set_loop_condition_partial_vectors.  Generate definitions
+   for all the rgroup controls in RGC and return a control that is nonzero
+   when the loop needs to iterate.  Add any new preheader statements to
+   PREHEADER_SEQ.  Use LOOP_COND_GSI to insert code before the exit gcond.
+
+   RGC belongs to loop LOOP.  The loop originally iterated NITERS
+   times and has been vectorized according to LOOP_VINFO.
+
+   Unlike vect_set_loop_controls_directly, which iterates a 0-based IV
+   up to TEST_LIMIT - bias, vect_set_loop_controls_by_while_len starts
+   at IV = TEST_LIMIT - bias and keeps subtracting from the IV the
+   length computed by the IFN_WHILE_LEN pattern.
+
+   Note: the cost of the code generated by this function is modeled
+   by vect_estimate_min_profitable_iters, so changes here may need
+   corresponding changes there.
+
+   1. Single rgroup; the Gimple IR should be:
+
+	_19 = (unsigned long) n_5(D);
+	...
+
+	:
+	...
+	# ivtmp_20 = PHI
+	...
+	_22 = .WHILE_LEN (ivtmp_20, vf);
+	...
+	vector statement (use _22);
+	...
+	ivtmp_21 = ivtmp_20 - _22;
+	...
+	if (ivtmp_21 != 0)
+	  goto ; [75.00%]
+	else
+	  goto ; [25.00%]
+
+	return;
+
+   Note: IFN_WHILE_LEN guarantees that "ivtmp_21 = ivtmp_20 - _22" never
+   underflows below zero.
+
+   2. Multiple rgroups; the Gimple IR should be:
+
+	_70 = (unsigned long) bnd.7_52;
+	_71 = _70 * 2;
+	_72 = MAX_EXPR <_71, 4>;
+	_73 = _72 + 18446744073709551612;
+	...
+
+	:
+	...
+	# ivtmp_74 = PHI
+	# ivtmp_77 = PHI
+	_76 = .WHILE_LEN (ivtmp_74, vf * nitems_per_ctrl);
+	_79 = .WHILE_LEN (ivtmp_77, vf * nitems_per_ctrl);
+	...
+	vector statement (use _79);
+	...
+	vector statement (use _76);
+	...
+	_65 = _79 / 2;
+	vector statement (use _65);
+	...
+	_68 = _76 / 2;
+	vector statement (use _68);
+	...
+	ivtmp_78 = ivtmp_77 - _79;
+	ivtmp_75 = ivtmp_74 - _76;
+	...
+	if (ivtmp_78 != 0)
+	  goto ; [75.00%]
+	else
+	  goto ; [25.00%]
+
+	return;
+*/
+
+static tree
+vect_set_loop_controls_by_while_len (class loop *loop, loop_vec_info loop_vinfo,
+				     gimple_seq *preheader_seq,
+				     gimple_seq *header_seq,
+				     rgroup_controls *rgc, tree niters)
+{
+  tree compare_type = LOOP_VINFO_RGROUP_COMPARE_TYPE (loop_vinfo);
+  tree iv_type = LOOP_VINFO_RGROUP_IV_TYPE (loop_vinfo);
+  /* We do not allow the masked approach in WHILE_LEN.  */
+  gcc_assert (!LOOP_VINFO_FULLY_MASKED_P (loop_vinfo));
+
+  tree ctrl_type = rgc->type;
+  unsigned int nitems_per_iter = rgc->max_nscalars_per_iter * rgc->factor;
+  poly_uint64 nitems_per_ctrl = TYPE_VECTOR_SUBPARTS (ctrl_type) * rgc->factor;
+  poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
+
+  /* Calculate the maximum number of item values that the rgroup
+     handles in total and the number that it handles for each
+     iteration of the vector loop.
*/
+  tree nitems_total = niters;
+  if (nitems_per_iter != 1)
+    {
+      /* We checked before setting LOOP_VINFO_USING_PARTIAL_VECTORS_P that
+	 these multiplications don't overflow.  */
+      tree compare_factor = build_int_cst (compare_type, nitems_per_iter);
+      nitems_total = gimple_build (preheader_seq, MULT_EXPR, compare_type,
+				   nitems_total, compare_factor);
+    }
+
+  /* Convert the comparison value to the IV type (either a no-op or
+     a promotion).  */
+  nitems_total = gimple_convert (preheader_seq, iv_type, nitems_total);
+
+  /* Create an induction variable that counts the number of items
+     processed.  */
+  tree index_before_incr, index_after_incr;
+  gimple_stmt_iterator incr_gsi;
+  bool insert_after;
+  standard_iv_increment_position (loop, &incr_gsi, &insert_after);
+
+  /* Test the decremented IV, which will never underflow below zero
+     since we have IFN_WHILE_LEN to guarantee that.  */
+  tree test_limit = nitems_total;
+
+  /* Provide a definition of each control in the group.  */
+  tree ctrl;
+  unsigned int i;
+  FOR_EACH_VEC_ELT_REVERSE (rgc->controls, i, ctrl)
+    {
+      /* Previous controls will cover BIAS items.  This control covers the
+	 next batch.  */
+      poly_uint64 bias = nitems_per_ctrl * i;
+      tree bias_tree = build_int_cst (iv_type, bias);
+
+      /* Rather than have a new IV that starts at TEST_LIMIT and goes down to
+	 BIAS, prefer to use the same TEST_LIMIT - BIAS based IV for each
+	 control and adjust the bound down by BIAS.  */
+      tree this_test_limit = test_limit;
+      if (i != 0)
+	{
+	  this_test_limit = gimple_build (preheader_seq, MAX_EXPR, iv_type,
+					  this_test_limit, bias_tree);
+	  this_test_limit = gimple_build (preheader_seq, MINUS_EXPR, iv_type,
+					  this_test_limit, bias_tree);
+	}
+
+      /* Create the decrement IV.  */
+      create_iv (this_test_limit, ctrl, NULL_TREE, loop, &incr_gsi,
+		 insert_after, &index_before_incr, &index_after_incr,
+		 MINUS_EXPR);
+
+      poly_uint64 final_vf = vf * nitems_per_iter;
+      tree vf_step = build_int_cst (iv_type, final_vf);
+      tree res_len = gimple_build (header_seq, IFN_WHILE_LEN, iv_type,
+				   index_before_incr, vf_step);
+      gassign *assign = gimple_build_assign (ctrl, res_len);
+      gimple_seq_add_stmt (header_seq, assign);
+    }
+
+  return index_after_incr;
+}
+
 /* Set up the iteration condition and rgroup controls for LOOP, given
    that LOOP_VINFO_USING_PARTIAL_VECTORS_P is true for the vectorized
    loop.  LOOP_VINFO describes the vectorization of LOOP.  NITERS is
@@ -703,6 +870,7 @@ vect_set_loop_condition_partial_vectors (class loop *loop,
   bool use_masks_p = LOOP_VINFO_FULLY_MASKED_P (loop_vinfo);
 
   tree compare_type = LOOP_VINFO_RGROUP_COMPARE_TYPE (loop_vinfo);
+  tree iv_type = LOOP_VINFO_RGROUP_IV_TYPE (loop_vinfo);
   unsigned int compare_precision = TYPE_PRECISION (compare_type);
   tree orig_niters = niters;
@@ -757,12 +925,18 @@ vect_set_loop_condition_partial_vectors (class loop *loop,
       bool might_wrap_p = vect_rgroup_iv_might_wrap_p (loop_vinfo, rgc);
 
       /* Set up all controls for this group.  */
-      test_ctrl = vect_set_loop_controls_directly (loop, loop_vinfo,
-						   &preheader_seq,
-						   &header_seq,
-						   loop_cond_gsi, rgc,
-						   niters, niters_skip,
-						   might_wrap_p);
+      if (direct_internal_fn_supported_p (IFN_WHILE_LEN, iv_type,
+					  OPTIMIZE_FOR_SPEED))
+	test_ctrl
+	  = vect_set_loop_controls_by_while_len (loop, loop_vinfo,
+						 &preheader_seq, &header_seq,
+						 rgc, niters);
+      else
+	test_ctrl
+	  = vect_set_loop_controls_directly (loop, loop_vinfo, &preheader_seq,
+					     &header_seq, loop_cond_gsi, rgc,
+					     niters, niters_skip,
+					     might_wrap_p);
     }
 
   /* Emit all accumulated statements.
*/ diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc index 1ba9f18d73e..5bffd9a6322 100644 --- a/gcc/tree-vect-loop.cc +++ b/gcc/tree-vect-loop.cc @@ -10360,12 +10360,14 @@ vect_record_loop_len (loop_vec_info loop_vinfo, vec_loop_lens *lens, rgroup that operates on NVECTORS vectors, where 0 <= INDEX < NVECTORS. */ tree -vect_get_loop_len (loop_vec_info loop_vinfo, vec_loop_lens *lens, - unsigned int nvectors, unsigned int index) +vect_get_loop_len (gimple_stmt_iterator *gsi, loop_vec_info loop_vinfo, + vec_loop_lens *lens, unsigned int nvectors, tree vectype, + unsigned int index) { rgroup_controls *rgl = &(*lens)[nvectors - 1]; - bool use_bias_adjusted_len = - LOOP_VINFO_PARTIAL_LOAD_STORE_BIAS (loop_vinfo) != 0; + bool use_bias_adjusted_len + = LOOP_VINFO_PARTIAL_LOAD_STORE_BIAS (loop_vinfo) != 0; + tree iv_type = LOOP_VINFO_RGROUP_IV_TYPE (loop_vinfo); /* Populate the rgroup's len array, if this is the first time we've used it. */ @@ -10386,8 +10388,8 @@ vect_get_loop_len (loop_vec_info loop_vinfo, vec_loop_lens *lens, if (use_bias_adjusted_len) { gcc_assert (i == 0); - tree adjusted_len = - make_temp_ssa_name (len_type, NULL, "adjusted_loop_len"); + tree adjusted_len + = make_temp_ssa_name (len_type, NULL, "adjusted_loop_len"); SSA_NAME_DEF_STMT (adjusted_len) = gimple_build_nop (); rgl->bias_adjusted_ctrl = adjusted_len; } @@ -10396,6 +10398,27 @@ vect_get_loop_len (loop_vec_info loop_vinfo, vec_loop_lens *lens, if (use_bias_adjusted_len) return rgl->bias_adjusted_ctrl; + else if (direct_internal_fn_supported_p (IFN_WHILE_LEN, iv_type, + OPTIMIZE_FOR_SPEED)) + { + tree loop_len = rgl->controls[index]; + poly_int64 nunits1 = TYPE_VECTOR_SUBPARTS (rgl->type); + poly_int64 nunits2 = TYPE_VECTOR_SUBPARTS (vectype); + if (maybe_ne (nunits1, nunits2)) + { + /* A loop len for data type X can be reused for data type Y + if X has N times more elements than Y and if Y's elements + are N times bigger than X's. 
*/
+	      gcc_assert (multiple_p (nunits1, nunits2));
+	      unsigned int factor = exact_div (nunits1, nunits2).to_constant ();
+	      gimple_seq seq = NULL;
+	      loop_len = gimple_build (&seq, RDIV_EXPR, iv_type, loop_len,
+				       build_int_cst (iv_type, factor));
+	      if (seq)
+		gsi_insert_seq_before (gsi, seq, GSI_SAME_STMT);
+	    }
+	  return loop_len;
+	}
   else
     return rgl->controls[index];
 }

diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index efa2d0daa52..708c8a1d806 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -8653,8 +8653,9 @@ vectorizable_store (vec_info *vinfo,
	  else if (loop_lens)
	    {
	      tree final_len
-		= vect_get_loop_len (loop_vinfo, loop_lens,
-				     vec_num * ncopies, vec_num * j + i);
+		= vect_get_loop_len (gsi, loop_vinfo, loop_lens,
+				     vec_num * ncopies, vectype,
+				     vec_num * j + i);
	      tree ptr = build_int_cst (ref_type, align * BITS_PER_UNIT);
	      machine_mode vmode = TYPE_MODE (vectype);
	      opt_machine_mode new_ovmode
@@ -10009,8 +10010,8 @@ vectorizable_load (vec_info *vinfo,
	  else if (loop_lens && memory_access_type != VMAT_INVARIANT)
	    {
	      tree final_len
-		= vect_get_loop_len (loop_vinfo, loop_lens,
-				     vec_num * ncopies,
+		= vect_get_loop_len (gsi, loop_vinfo, loop_lens,
+				     vec_num * ncopies, vectype,
				     vec_num * j + i);
	      tree ptr = build_int_cst (ref_type, align * BITS_PER_UNIT);

diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h
index 9cf2fb23fe3..e5cf38caf4b 100644
--- a/gcc/tree-vectorizer.h
+++ b/gcc/tree-vectorizer.h
@@ -2293,8 +2293,8 @@ extern tree vect_get_loop_mask (gimple_stmt_iterator *, vec_loop_masks *,
				unsigned int, tree, unsigned int);
 extern void vect_record_loop_len (loop_vec_info, vec_loop_lens *, unsigned int,
				  tree, unsigned int);
-extern tree vect_get_loop_len (loop_vec_info, vec_loop_lens *, unsigned int,
-			       unsigned int);
+extern tree vect_get_loop_len (gimple_stmt_iterator *, loop_vec_info,
+			       vec_loop_lens *, unsigned int, tree,
+			       unsigned int);
 extern gimple_seq vect_gen_len (tree, tree, tree, tree);
 extern stmt_vec_info info_for_reduction (vec_info *, stmt_vec_info);
 extern bool reduction_fn_for_scalar_code (code_helper, internal_fn *);

From patchwork Thu Apr 6 14:42:21 2023
X-Patchwork-Id: 67464
From: juzhe.zhong@rivai.ai
To: gcc-patches@gcc.gnu.org
Cc: kito.cheng@gmail.com, palmer@dabbelt.com, richard.sandiford@arm.com,
 rguenther@suse.de, jeffreyalaw@gmail.com, Juzhe-Zhong
Subject: [PATCH 2/3] RISC-V: Enable basic RVV auto-vectorization and support
 WHILE_LEN/LEN_LOAD/LEN_STORE pattern
Date: Thu, 6 Apr 2023 22:42:21 +0800
Message-Id: <20230406144222.316395-3-juzhe.zhong@rivai.ai>
In-Reply-To: <20230406144222.316395-1-juzhe.zhong@rivai.ai>
References: <20230406144222.316395-1-juzhe.zhong@rivai.ai>

From: Juzhe-Zhong

gcc/ChangeLog:

	* config/riscv/riscv-opts.h (enum riscv_autovec_preference_enum): Add
	compile option for RVV auto-vectorization.
	(enum riscv_autovec_lmul_enum): Ditto.
	* config/riscv/riscv-protos.h (get_vector_mode): Remove unused global
	function.
	(preferred_simd_mode): Enable basic auto-vectorization for RVV.
	(expand_while_len): Enable while_len pattern.
	* config/riscv/riscv-v.cc (get_avl_type_rtx): Ditto.
	(autovec_use_vlmax_p): New function.
	(preferred_simd_mode): New function.
	(expand_while_len): Ditto.
	* config/riscv/riscv-vector-switch.def (ENTRY): Disable SEW = 64 for
	MIN_VLEN > 32 but EEW = 32.
	* config/riscv/riscv-vsetvl.cc (get_all_successors): New function.
	(get_all_overlap_blocks): Ditto.
	(local_eliminate_vsetvl_insn): Ditto.
	(vector_insn_info::skip_avl_compatible_p): Ditto.
	(vector_insn_info::merge): Ditto.
	(pass_vsetvl::compute_local_backward_infos): Enhance VSETVL PASS for
	RVV auto-vectorization.
	(pass_vsetvl::global_eliminate_vsetvl_p): Ditto.
	(pass_vsetvl::cleanup_insns): Ditto.
	* config/riscv/riscv-vsetvl.h: Ditto.
	* config/riscv/riscv.cc (riscv_convert_vector_bits): Add basic RVV
	auto-vectorization support.
	(riscv_preferred_simd_mode): Ditto.
	(TARGET_VECTORIZE_PREFERRED_SIMD_MODE): Ditto.
	* config/riscv/riscv.opt: Add compile option.
	* config/riscv/vector.md: Add RVV auto-vectorization.
	* config/riscv/autovec.md: New file.
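As a rough usage sketch (mine, not from the patch): the --param spellings
below come from the riscv.opt hunk in this patch, the -march string matches
the example in patch 1/3, and the testcase itself is illustrative only.

    /* Hypothetical testcase; compile with something like:
         gcc -march=rv64gcv -O3 --param riscv-autovec-preference=scalable \
             --param riscv-autovec-lmul=m2 vadd.c
       to exercise the new RVV auto-vectorization path.  */
    void
    vadd (int *__restrict a, int *__restrict b, int *__restrict c, int n)
    {
      for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
    }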
---
 gcc/config/riscv/autovec.md              |  63 +++++++
 gcc/config/riscv/riscv-opts.h            |  16 ++
 gcc/config/riscv/riscv-protos.h          |   3 +-
 gcc/config/riscv/riscv-v.cc              |  61 ++++++-
 gcc/config/riscv/riscv-vector-switch.def |  47 +++--
 gcc/config/riscv/riscv-vsetvl.cc         | 210 ++++++++++++++++++++++-
 gcc/config/riscv/riscv-vsetvl.h          |   1 +
 gcc/config/riscv/riscv.cc                |  34 +++-
 gcc/config/riscv/riscv.opt               |  40 +++++
 gcc/config/riscv/vector.md               |   6 +-
 10 files changed, 457 insertions(+), 24 deletions(-)
 create mode 100644 gcc/config/riscv/autovec.md

diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
new file mode 100644
index 00000000000..ff616d81586
--- /dev/null
+++ b/gcc/config/riscv/autovec.md
@@ -0,0 +1,63 @@
+;; Machine description for auto-vectorization using RVV for GNU compiler.
+;; Copyright (C) 2023-2023 Free Software Foundation, Inc.
+;; Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.

+;; This file is part of GCC.

+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.

+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;; GNU General Public License for more details.

+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; .

+;; =========================================================================
+;; == While_len
+;; =========================================================================

+(define_expand "while_len"
+  [(match_operand:P 0 "register_operand")
+   (match_operand:P 1 "vector_length_operand")
+   (match_operand:P 2 "")]
+  "TARGET_VECTOR"
+{
+  riscv_vector::expand_while_len (operands);
+  DONE;
+})

+;; =========================================================================
+;; == Loads/Stores
+;; =========================================================================

+;; len_load/len_store is a sub-optimal pattern for RVV auto-vectorization.
+;; We will replace it once len_maskload/len_maskstore is supported in the
+;; loop vectorizer.
+(define_expand "len_load_"
+  [(match_operand:V 0 "register_operand")
+   (match_operand:V 1 "memory_operand")
+   (match_operand 2 "vector_length_operand")
+   (match_operand 3 "const_0_operand")]
+  "TARGET_VECTOR"
+{
+  riscv_vector::emit_nonvlmax_op (code_for_pred_mov (mode), operands[0],
+				  operands[1], operands[2], mode);
+  DONE;
+})

+(define_expand "len_store_"
+  [(match_operand:V 0 "memory_operand")
+   (match_operand:V 1 "register_operand")
+   (match_operand 2 "vector_length_operand")
+   (match_operand 3 "const_0_operand")]
+  "TARGET_VECTOR"
+{
+  riscv_vector::emit_nonvlmax_op (code_for_pred_mov (mode), operands[0],
+				  operands[1], operands[2], mode);
+  DONE;
+})

diff --git a/gcc/config/riscv/riscv-opts.h b/gcc/config/riscv/riscv-opts.h
index cf0cd669be4..22b79b65de5 100644
--- a/gcc/config/riscv/riscv-opts.h
+++ b/gcc/config/riscv/riscv-opts.h
@@ -67,6 +67,22 @@ enum stack_protector_guard {
   SSP_GLOBAL    /* global canary */
 };
 
+/* RISC-V auto-vectorization preference.  */
+enum riscv_autovec_preference_enum {
+  NO_AUTOVEC,
+  RVV_SCALABLE,
+  RVV_FIXED_VLMIN,
+  RVV_FIXED_VLMAX
+};
+
+/* RISC-V auto-vectorization RVV LMUL.
*/
+enum riscv_autovec_lmul_enum {
+  RVV_M1 = 1,
+  RVV_M2 = 2,
+  RVV_M4 = 4,
+  RVV_M8 = 8
+};
+
 #define MASK_ZICSR    (1 << 0)
 #define MASK_ZIFENCEI (1 << 1)

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 4611447ddde..7db0deb4dbf 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -184,7 +184,6 @@ enum mask_policy
 enum tail_policy get_prefer_tail_policy ();
 enum mask_policy get_prefer_mask_policy ();
 rtx get_avl_type_rtx (enum avl_type);
-opt_machine_mode get_vector_mode (scalar_mode, poly_uint64);
 bool simm5_p (rtx);
 bool neg_simm5_p (rtx);
 #ifdef RTX_CODE
@@ -206,6 +205,8 @@ enum vlen_enum
 bool slide1_sew64_helper (int, machine_mode, machine_mode, machine_mode,
			  rtx *);
 rtx gen_avl_for_scalar_move (rtx);
+machine_mode preferred_simd_mode (scalar_mode);
+void expand_while_len (rtx *);
 }
 
 /* We classify builtin types into two classes:

diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index ed3c5e0756f..0e0cffaf5a4 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -43,6 +43,7 @@
 #include "optabs.h"
 #include "tm-constrs.h"
 #include "rtx-vector-builder.h"
+#include "targhooks.h"
 
 using namespace riscv_vector;
@@ -424,7 +425,7 @@ get_avl_type_rtx (enum avl_type type)
 /* Return the RVV vector mode that has NUNITS elements of mode INNER_MODE.
    This function is not only used by builtins, but also will be used by
    auto-vectorization in the future.  */
-opt_machine_mode
+static opt_machine_mode
 get_vector_mode (scalar_mode inner_mode, poly_uint64 nunits)
 {
   enum mode_class mclass;
@@ -729,4 +730,62 @@ gen_avl_for_scalar_move (rtx avl)
     }
 }
 
+/* SCALABLE means that the vector length is agnostic (run-time invariant and
+   compile-time unknown).  FIXED means that the vector length is specific
+   (compile-time known).  Both RVV_SCALABLE and RVV_FIXED_VLMAX perform
+   auto-vectorization using the VLMAX vsetvl configuration.  */
+static bool
+autovec_use_vlmax_p (void)
+{
+  return riscv_autovec_preference == RVV_SCALABLE
+	 || riscv_autovec_preference == RVV_FIXED_VLMAX;
+}
+
+/* Return the vectorization machine mode for RVV according to LMUL.  */
+machine_mode
+preferred_simd_mode (scalar_mode mode)
+{
+  if (autovec_use_vlmax_p ())
+    {
+      /* We use LMUL = 1 as the base byte size, which is
+	 BYTES_PER_RISCV_VECTOR, and riscv_autovec_lmul as the
+	 multiplication factor to calculate the NUNITS from which we get
+	 the auto-vectorization mode.  */
+      poly_uint64 nunits;
+      poly_uint64 vector_size
+	= BYTES_PER_RISCV_VECTOR * ((int) riscv_autovec_lmul);
+      poly_uint64 scalar_size = GET_MODE_SIZE (mode);
+      if (!multiple_p (vector_size, scalar_size, &nunits))
+	return word_mode;
+      machine_mode rvv_mode;
+      if (get_vector_mode (mode, nunits).exists (&rvv_mode))
+	return rvv_mode;
+    }
+  /* TODO: We will support minimum-length VLS auto-vectorization in the
+     future.  */
+  return word_mode;
+}
+
+void
+expand_while_len (rtx *ops)
+{
+  poly_int64 nunits;
+  gcc_assert (poly_int_rtx_p (ops[2], &nunits));
+  /* We arbitrarily pick QImode as the inner scalar mode to get a vector
+     mode, since vsetvl only demands the ratio.  We let the VSETVL PASS
+     optimize it.
*/ + scalar_int_mode mode = QImode; + machine_mode rvv_mode; + if (get_vector_mode (mode, nunits).exists (&rvv_mode)) + { + rtx vsetvl_rtx + = gen_no_side_effects_vsetvl_rtx (rvv_mode, ops[0], ops[1]); + emit_insn (vsetvl_rtx); + } + else + { + rtx tmp = gen_reg_rtx (Pmode); + emit_move_insn (tmp, gen_int_mode (nunits, Pmode)); + expand_binop (Pmode, umin_optab, tmp, ops[1], ops[0], true, OPTAB_LIB); + } +} + } // namespace riscv_vector diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def index bfb591773dc..f75287d9070 100644 --- a/gcc/config/riscv/riscv-vector-switch.def +++ b/gcc/config/riscv/riscv-vector-switch.def @@ -121,37 +121,43 @@ TODO: FP16 vector needs support of 'zvfh', we don't support it yet. */ /* Mask modes. Disable VNx128BI when TARGET_MIN_VLEN < 128. */ /* Mask modes. Disable VNx64BImode when TARGET_MIN_VLEN == 32. */ /* Mask modes. Disable VNx1BImode when TARGET_MIN_VLEN >= 128. */ -ENTRY (VNx128BI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 1) +ENTRY (VNx128BI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, + LMUL_8, 1) ENTRY (VNx64BI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 1, LMUL_4, 2) ENTRY (VNx32BI, true, LMUL_8, 1, LMUL_4, 2, LMUL_2, 4) ENTRY (VNx16BI, true, LMUL_4, 2, LMUL_2, 4, LMUL_1, 8) ENTRY (VNx8BI, true, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16) ENTRY (VNx4BI, true, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32) ENTRY (VNx2BI, true, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64) -ENTRY (VNx1BI, TARGET_MIN_VLEN < 128, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0) +ENTRY (VNx1BI, TARGET_MIN_VLEN < 128, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, + 0) /* SEW = 8. Disable VNx128QImode when TARGET_MIN_VLEN < 128. */ /* SEW = 8. Disable VNx64QImode when TARGET_MIN_VLEN == 32. */ /* SEW = 8. Disable VNx1QImode when TARGET_MIN_VLEN >= 128. */ -ENTRY (VNx128QI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 1) +ENTRY (VNx128QI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, + LMUL_8, 1) ENTRY (VNx64QI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 1, LMUL_4, 2) ENTRY (VNx32QI, true, LMUL_8, 1, LMUL_4, 2, LMUL_2, 4) ENTRY (VNx16QI, true, LMUL_4, 2, LMUL_2, 4, LMUL_1, 8) ENTRY (VNx8QI, true, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16) ENTRY (VNx4QI, true, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32) ENTRY (VNx2QI, true, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64) -ENTRY (VNx1QI, TARGET_MIN_VLEN < 128, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0) +ENTRY (VNx1QI, TARGET_MIN_VLEN < 128, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, + 0) /* SEW = 16. Disable VNx64HImode when TARGET_MIN_VLEN < 128. */ /* SEW = 16. Disable VNx32HImode when TARGET_MIN_VLEN == 32. */ /* SEW = 16. Disable VNx1HImode when TARGET_MIN_VLEN >= 128. */ -ENTRY (VNx64HI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 2) +ENTRY (VNx64HI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, + LMUL_8, 2) ENTRY (VNx32HI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 2, LMUL_4, 4) ENTRY (VNx16HI, true, LMUL_8, 2, LMUL_4, 4, LMUL_2, 8) ENTRY (VNx8HI, true, LMUL_4, 4, LMUL_2, 8, LMUL_1, 16) ENTRY (VNx4HI, true, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32) ENTRY (VNx2HI, true, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64) -ENTRY (VNx1HI, TARGET_MIN_VLEN < 128, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0) +ENTRY (VNx1HI, TARGET_MIN_VLEN < 128, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, + 0) /* TODO:Disable all FP16 vector, enable them when 'zvfh' is supported. 
*/ ENTRY (VNx64HF, false, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 2) @@ -167,38 +173,45 @@ ENTRY (VNx1HF, false, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0) For single-precision floating-point, we need TARGET_VECTOR_FP32 == RVV_ENABLE. */ /* SEW = 32. Disable VNx1SImode/VNx1SFmode when TARGET_MIN_VLEN >= 128. */ -ENTRY (VNx32SI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 4) +ENTRY (VNx32SI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, + LMUL_8, 4) ENTRY (VNx16SI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 4, LMUL_4, 8) ENTRY (VNx8SI, true, LMUL_8, 4, LMUL_4, 8, LMUL_2, 16) ENTRY (VNx4SI, true, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32) ENTRY (VNx2SI, true, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64) ENTRY (VNx1SI, TARGET_MIN_VLEN < 128, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0) -ENTRY (VNx32SF, TARGET_VECTOR_FP32 && (TARGET_MIN_VLEN >= 128), LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 4) +ENTRY (VNx32SF, TARGET_VECTOR_FP32 && (TARGET_MIN_VLEN >= 128), LMUL_RESERVED, + 0, LMUL_RESERVED, 0, LMUL_8, 4) ENTRY (VNx16SF, TARGET_VECTOR_FP32 && (TARGET_MIN_VLEN > 32), LMUL_RESERVED, 0, LMUL_8, 4, LMUL_4, 8) ENTRY (VNx8SF, TARGET_VECTOR_FP32, LMUL_8, 4, LMUL_4, 8, LMUL_2, 16) ENTRY (VNx4SF, TARGET_VECTOR_FP32, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32) ENTRY (VNx2SF, TARGET_VECTOR_FP32, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64) -ENTRY (VNx1SF, TARGET_VECTOR_FP32 && TARGET_MIN_VLEN < 128, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0) +ENTRY (VNx1SF, TARGET_VECTOR_FP32 && TARGET_MIN_VLEN < 128, LMUL_1, 32, LMUL_F2, + 64, LMUL_RESERVED, 0) /* SEW = 64. Disable VNx16DImode/VNx16DFmode when TARGET_MIN_VLEN < 128. */ /* SEW = 64. Enable VNx8DImode/VNx8DFmode when TARGET_MIN_VLEN > 32. For double-precision floating-point, we need TARGET_VECTOR_FP64 == RVV_ENABLE. */ /* SEW = 64. Disable VNx1DImode/VNx1DFmode when TARGET_MIN_VLEN >= 128. 
*/
-ENTRY (VNx16DI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 8)
-ENTRY (VNx8DI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 8, LMUL_4, 16)
-ENTRY (VNx4DI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
-ENTRY (VNx2DI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
-ENTRY (VNx1DI, TARGET_MIN_VLEN > 32 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
-
-ENTRY (VNx16DF, TARGET_VECTOR_FP64 && (TARGET_MIN_VLEN >= 128), LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 8)
+ENTRY (VNx16DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, LMUL_RESERVED,
+       0, LMUL_RESERVED, 0, LMUL_8, 8)
+ENTRY (VNx8DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_8, 8, LMUL_4, 16)
+ENTRY (VNx4DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
+ENTRY (VNx2DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+ENTRY (VNx1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0,
+       LMUL_1, 64, LMUL_RESERVED, 0)
+
+ENTRY (VNx16DF, TARGET_VECTOR_FP64 && (TARGET_MIN_VLEN >= 128), LMUL_RESERVED,
+       0, LMUL_RESERVED, 0, LMUL_8, 8)
 ENTRY (VNx8DF, TARGET_VECTOR_FP64 && (TARGET_MIN_VLEN > 32), LMUL_RESERVED, 0,
        LMUL_8, 8, LMUL_4, 16)
 ENTRY (VNx4DF, TARGET_VECTOR_FP64, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
 ENTRY (VNx2DF, TARGET_VECTOR_FP64, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
-ENTRY (VNx1DF, TARGET_VECTOR_FP64 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+ENTRY (VNx1DF, TARGET_VECTOR_FP64 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0,
+       LMUL_1, 64, LMUL_RESERVED, 0)
 
 #undef TARGET_VECTOR_FP32
 #undef TARGET_VECTOR_FP64

diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index 7e8a5376705..52b453a7660 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -532,6 +532,43 @@ get_all_predecessors (basic_block cfg_bb)
   return blocks;
 }
 
+/* Find all successor blocks of cfg_bb via a worklist traversal.  */
+static hash_set
+get_all_successors (basic_block cfg_bb)
+{
+  hash_set blocks;
+  auto_vec work_list;
+  hash_set visited_list;
+  work_list.safe_push (cfg_bb);
+
+  while (!work_list.is_empty ())
+    {
+      basic_block new_cfg_bb = work_list.pop ();
+      visited_list.add (new_cfg_bb);
+      edge e;
+      edge_iterator ei;
+      FOR_EACH_EDGE (e, ei, new_cfg_bb->succs)
+	{
+	  if (!visited_list.contains (e->dest))
+	    work_list.safe_push (e->dest);
+	  blocks.add (e->dest);
+	}
+    }
+  return blocks;
+}
+
+/* Get all blocks that appear in both sets.  */
+static hash_set
+get_all_overlap_blocks (hash_set blocks1,
+			hash_set blocks2)
+{
+  hash_set blocks;
+  for (const auto &block : blocks1)
+    if (blocks2.contains (block))
+      blocks.add (block);
+  return blocks;
+}
+
 /* Return true if there is an INSN in insns staying in the block BB.
*/
 static bool
 any_set_in_bb_p (hash_set sets, const bb_info *bb)
@@ -1054,6 +1091,51 @@ change_vsetvl_insn (const insn_info *insn, const vector_insn_info &info)
   change_insn (rinsn, new_pat);
 }
 
+static void
+local_eliminate_vsetvl_insn (const vector_insn_info &dem)
+{
+  const insn_info *insn = dem.get_insn ();
+  if (!insn || insn->is_artificial ())
+    return;
+  rtx_insn *rinsn = insn->rtl ();
+  const bb_info *bb = insn->bb ();
+  if (vsetvl_insn_p (rinsn))
+    {
+      rtx vl = get_vl (rinsn);
+      for (insn_info *i = insn->next_nondebug_insn ();
+	   real_insn_and_same_bb_p (i, bb); i = i->next_nondebug_insn ())
+	{
+	  if (i->is_call () || i->is_asm ()
+	      || find_access (i->defs (), VL_REGNUM)
+	      || find_access (i->defs (), VTYPE_REGNUM))
+	    return;
+
+	  if (has_vtype_op (i->rtl ()))
+	    {
+	      if (!vsetvl_discard_result_insn_p (PREV_INSN (i->rtl ())))
+		return;
+	      rtx avl = get_avl (i->rtl ());
+	      if (avl != vl)
+		return;
+	      set_info *def = find_access (i->uses (), REGNO (avl))->def ();
+	      if (def->insn () != insn)
+		return;
+
+	      vector_insn_info new_info;
+	      new_info.parse_insn (i);
+	      if (!new_info.skip_avl_compatible_p (dem))
+		return;
+
+	      new_info.set_avl_info (dem.get_avl_info ());
+	      new_info = dem.merge (new_info, LOCAL_MERGE);
+	      change_vsetvl_insn (insn, new_info);
+	      eliminate_insn (PREV_INSN (i->rtl ()));
+	      return;
+	    }
+	}
+    }
+}
+
 static bool
 source_equal_p (insn_info *insn1, insn_info *insn2)
 {
@@ -1984,6 +2066,19 @@ vector_insn_info::compatible_p (const vector_insn_info &other) const
   return true;
 }
 
+bool
+vector_insn_info::skip_avl_compatible_p (const vector_insn_info &other) const
+{
+  gcc_assert (valid_or_dirty_p () && other.valid_or_dirty_p ()
+	      && "Can't compare invalid demanded infos");
+  unsigned array_size = sizeof (incompatible_conds) / sizeof (demands_cond);
+  /* Bypass AVL incompatible cases.  */
+  for (unsigned i = 1; i < array_size; i++)
+    if (incompatible_conds[i].dual_incompatible_p (*this, other))
+      return false;
+  return true;
+}
+
 bool
 vector_insn_info::compatible_avl_p (const vl_vtype_info &other) const
 {
@@ -2178,7 +2273,7 @@ vector_insn_info::fuse_mask_policy (const vector_insn_info &info1,
 vector_insn_info
 vector_insn_info::merge (const vector_insn_info &merge_info,
-			 enum merge_type type = LOCAL_MERGE) const
+			 enum merge_type type) const
 {
   if (!vsetvl_insn_p (get_insn ()->rtl ()))
     gcc_assert (this->compatible_p (merge_info)
@@ -2642,6 +2737,7 @@ private:
   void pre_vsetvl (void);
 
   /* Phase 5.  */
+  bool global_eliminate_vsetvl_p (const bb_info *) const;
   void cleanup_insns (void) const;
 
   /* Phase 6.  */
@@ -2716,7 +2812,7 @@ pass_vsetvl::compute_local_backward_infos (const bb_info *bb)
	       && !reg_available_p (insn, change))
	  && change.compatible_p (info))
	{
-	  info = change.merge (info);
+	  info = change.merge (info, LOCAL_MERGE);
	  /* Fix PR109399: we should update the user vsetvl instruction
	     if there is a change in demand fusion.  */
	  if (vsetvl_insn_p (insn->rtl ()))
@@ -3990,14 +4086,124 @@ pass_vsetvl::pre_vsetvl (void)
   commit_edge_insertions ();
 }
 
+/* Eliminate a VSETVL insn that has multiple AVL sources; we don't let LCM
+   do that since it's quite complicated and may be buggy in some situations.
+*/
+bool
+pass_vsetvl::global_eliminate_vsetvl_p (const bb_info *bb) const
+{
+  const auto &dem
+    = m_vector_manager->vector_block_infos[bb->index ()].local_dem;
+  if (!dem.valid_p ())
+    return false;
+  if (dem.get_insn ()->is_artificial ())
+    return false;
+
+  insn_info *insn = dem.get_insn ();
+  if (!has_vtype_op (insn->rtl ()))
+    return false;
+
+  rtx_insn *prev_rinsn = PREV_INSN (insn->rtl ());
+  if (!prev_rinsn)
+    return false;
+  if (!vsetvl_discard_result_insn_p (prev_rinsn))
+    return false;
+
+  if (!dem.has_avl_reg ())
+    return false;
+  rtx avl = dem.get_avl ();
+  set_info *def = find_access (insn->uses (), REGNO (avl))->def ();
+  hash_set sets = get_all_sets (def, true, true, true);
+  if (sets.is_empty ())
+    return false;
+
+  sbitmap avin = m_vector_manager->vector_avin[bb->index ()];
+  if (!bitmap_empty_p (avin))
+    return false;
+
+  hash_set pred_cfg_bbs = get_all_predecessors (bb->cfg_bb ());
+  auto_vec vsetvl_infos;
+  for (const auto &set : sets)
+    {
+      if (set->insn ()->is_artificial ())
+	return false;
+      insn_info *set_insn = set->insn ();
+      if (!vsetvl_insn_p (set_insn->rtl ()))
+	return false;
+      vector_insn_info vsetvl_info;
+      vsetvl_info.parse_insn (set_insn);
+      if (!vsetvl_info.skip_avl_compatible_p (dem))
+	return false;
+
+      /* Make sure there is no other vsetvl from set_bb to bb.  */
+      hash_set succ_cfg_bbs
+	= get_all_successors (set->insn ()->bb ()->cfg_bb ());
+      hash_set overlap_cfg_bbs
+	= get_all_overlap_blocks (pred_cfg_bbs, succ_cfg_bbs);
+      for (const auto &overlap_cfg_bb : overlap_cfg_bbs)
+	{
+	  unsigned int index = overlap_cfg_bb->index;
+	  if (index == bb->index ())
+	    continue;
+	  const auto &overlap_dem
+	    = m_vector_manager->vector_block_infos[index].local_dem;
+	  /* TODO: Currently, we only optimize a user vsetvl when the
+	     overlap blocks are empty.
+
+	     We could instead check accurately that no instructions
+	     modify VL/VTYPE in the overlap blocks.  */
+	  if (!overlap_dem.empty_p ())
+	    return false;
+	}
+      vsetvl_infos.safe_push (vsetvl_info);
+    }
+
+  /* Update VTYPE for each SET vsetvl instruction.  */
+  for (const auto &vsetvl_info : vsetvl_infos)
+    {
+      vector_insn_info info = dem;
+      info.set_avl_info (vsetvl_info.get_avl_info ());
+      info = vsetvl_info.merge (info, LOCAL_MERGE);
+      insn_info *vsetvl_insn = vsetvl_info.get_insn ();
+      change_vsetvl_insn (vsetvl_insn, info);
+    }
+
+  return true;
+}
+
 void
 pass_vsetvl::cleanup_insns (void) const
 {
   for (const bb_info *bb : crtl->ssa->bbs ())
     {
+      /* Eliminate a global vsetvl:
+	   bb 0:
+	     vsetvl a5,zero,...
+	   bb 1:
+	     vsetvl a5,a6,...
+
+	   bb 2:
+	     vsetvl zero,a5.
+
+	 Eliminate the vsetvl in bb 2 when a5 comes only from
+	 bb 0 and bb 1.  */
+      const auto &local_dem
+	= m_vector_manager->vector_block_infos[bb->index ()].local_dem;
+      if (global_eliminate_vsetvl_p (bb))
+	eliminate_insn (PREV_INSN (local_dem.get_insn ()->rtl ()));
+
       for (insn_info *insn : bb->real_nondebug_insns ())
	{
	  rtx_insn *rinsn = insn->rtl ();
+	  const auto &dem = m_vector_manager->vector_insn_infos[insn->uid ()];
+	  /* Eliminate a local vsetvl:
+	       bb 0:
+		 vsetvl a5,a6,...
+		 vsetvl zero,a5.
+
+	     Eliminate the second vsetvl when a5 comes only from
+	     bb 0.
*/
+	  local_eliminate_vsetvl_insn (dem);
+
	  if (vlmax_avl_insn_p (rinsn))
	    {

diff --git a/gcc/config/riscv/riscv-vsetvl.h b/gcc/config/riscv/riscv-vsetvl.h
index d05472c86a0..d7a6c14e931 100644
--- a/gcc/config/riscv/riscv-vsetvl.h
+++ b/gcc/config/riscv/riscv-vsetvl.h
@@ -380,6 +380,7 @@ public:
   void fuse_mask_policy (const vector_insn_info &, const vector_insn_info &);
 
   bool compatible_p (const vector_insn_info &) const;
+  bool skip_avl_compatible_p (const vector_insn_info &) const;
   bool compatible_avl_p (const vl_vtype_info &) const;
   bool compatible_avl_p (const avl_info &) const;
   bool compatible_vtype_p (const vl_vtype_info &) const;

diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index b460c8a0b8b..3f68740737d 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -6217,7 +6217,15 @@ riscv_convert_vector_bits (void)
      to set RVV mode size.  The RVV machine modes size are run-time constant if
      TARGET_VECTOR is enabled.  The RVV machine modes size remains default
      compile-time constant if TARGET_VECTOR is disabled.  */
-  return TARGET_VECTOR ? poly_uint16 (1, 1) : 1;
+  if (TARGET_VECTOR)
+    {
+      if (riscv_autovec_preference == RVV_FIXED_VLMAX)
+	return (int) TARGET_MIN_VLEN / (riscv_bytes_per_vector_chunk * 8);
+      else
+	return poly_uint16 (1, 1);
+    }
+  else
+    return 1;
 }
 
 /* Implement TARGET_OPTION_OVERRIDE.  */
@@ -7076,6 +7084,27 @@ riscv_shamt_matches_mask_p (int shamt, HOST_WIDE_INT mask)
   return shamt == ctz_hwi (mask);
 }
 
+/* Implement TARGET_VECTORIZE_PREFERRED_SIMD_MODE.  */
+
+static machine_mode
+riscv_preferred_simd_mode (scalar_mode mode)
+{
+  /* We only enable auto-vectorization when TARGET_MIN_VLEN >= 128,
+     e.g. -march=rv64gcv, since the GCC loop vectorizer reports an ICE in
+     tree-vect-slp.cc:437 when we enable -march=rv64gc_zve32* and
+     -march=rv32gc_zve64x.  Since we have VNx1SImode in -march=*zve32*
+     and VNx1DImode in -march=*zve64*, they are enabled in
+     targetm.vector_mode_supported_p and the SLP vectorizer will try to
+     use them.  Currently, we can support auto-vectorization with
+     -march=rv32_zve32x_zvl128b, whereas -march=rv32_zve32x_zvl32b and
+     -march=rv32_zve32x_zvl64b are disabled.  */
+  if (TARGET_VECTOR && TARGET_MIN_VLEN >= 128)
+    return riscv_vector::preferred_simd_mode (mode);
+
+  return word_mode;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -7327,6 +7356,9 @@
 #undef TARGET_DWARF_POLY_INDETERMINATE_VALUE
 #define TARGET_DWARF_POLY_INDETERMINATE_VALUE riscv_dwarf_poly_indeterminate_value
 
+#undef TARGET_VECTORIZE_PREFERRED_SIMD_MODE
+#define TARGET_VECTORIZE_PREFERRED_SIMD_MODE riscv_preferred_simd_mode
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"

diff --git a/gcc/config/riscv/riscv.opt b/gcc/config/riscv/riscv.opt
index ff1dd4ddd4f..7d26e450be5 100644
--- a/gcc/config/riscv/riscv.opt
+++ b/gcc/config/riscv/riscv.opt
@@ -254,3 +254,43 @@
 Enum(isa_spec_class) String(20191213) Value(ISA_SPEC_CLASS_20191213)
 
 misa-spec=
 Target RejectNegative Joined Enum(isa_spec_class) Var(riscv_isa_spec) Init(TARGET_DEFAULT_ISA_SPEC)
 Set the version of RISC-V ISA spec.
+ +Enum +Name(riscv_autovec_preference) Type(enum riscv_autovec_preference_enum) +The RISC-V auto-vectorization preference: + +EnumValue +Enum(riscv_autovec_preference) String(none) Value(NO_AUTOVEC) + +EnumValue +Enum(riscv_autovec_preference) String(scalable) Value(RVV_SCALABLE) + +EnumValue +Enum(riscv_autovec_preference) String(fixed-vlmin) Value(RVV_FIXED_VLMIN) + +EnumValue +Enum(riscv_autovec_preference) String(fixed-vlmax) Value(RVV_FIXED_VLMAX) + +-param=riscv-autovec-preference= +Target RejectNegative Joined Enum(riscv_autovec_preference) Var(riscv_autovec_preference) Init(NO_AUTOVEC) +-param=riscv-autovec-preference= Set the preference of auto-vectorization in RISC-V port. + +Enum +Name(riscv_autovec_lmul) Type(enum riscv_autovec_lmul_enum) +The RVV possible LMUL: + +EnumValue +Enum(riscv_autovec_lmul) String(m1) Value(RVV_M1) + +EnumValue +Enum(riscv_autovec_lmul) String(m2) Value(RVV_M2) + +EnumValue +Enum(riscv_autovec_lmul) String(m4) Value(RVV_M4) + +EnumValue +Enum(riscv_autovec_lmul) String(m8) Value(RVV_M8) + +-param=riscv-autovec-lmul= +Target RejectNegative Joined Enum(riscv_autovec_lmul) Var(riscv_autovec_lmul) Init(RVV_M1) +-param=riscv-autovec-lmul= Set the RVV LMUL of auto-vectorization in RISC-V port. diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md index 27bdacc35af..9151a4c9891 100644 --- a/gcc/config/riscv/vector.md +++ b/gcc/config/riscv/vector.md @@ -23,7 +23,7 @@ ;; This file include : ;; ;; - Intrinsics (https://github.com/riscv/rvv-intrinsic-doc) -;; - Auto-vectorization (TBD) +;; - Auto-vectorization (autovec.md) ;; - Combine optimization (TBD) (include "vector-iterators.md") @@ -2015,7 +2015,7 @@ riscv_vector::neg_simm5_p (operands[4]), [] (rtx *operands, rtx boardcast_scalar) { emit_insn (gen_pred_sub (operands[0], operands[1], - operands[2], operands[3], boardcast_scalar, operands[5], + operands[2], boardcast_scalar, operands[3], operands[5], operands[6], operands[7], operands[8])); })) DONE; @@ -7688,3 +7688,5 @@ "vleff.v\t%0,%3%p1" [(set_attr "type" "vldff") (set_attr "mode" "")]) + +(include "autovec.md")
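As a closing illustration for the vect_get_loop_len change in patch 1/3,
here is a hedged C model of its length-reuse rule: a loop length computed
for vector type X is reused for type Y by dividing by the exact subparts
factor, mirroring the RDIV_EXPR in that hunk. This is my sketch; all names
are invented for illustration.

    #include <assert.h>

    /* loop_len was computed for a vector type with nunits_x subparts; reuse
       it for a type with nunits_y subparts, where nunits_x is a multiple of
       nunits_y (the gcc_assert (multiple_p ...) / exact_div pair in the
       patch).  */
    static unsigned
    reuse_loop_len (unsigned loop_len, unsigned nunits_x, unsigned nunits_y)
    {
      assert (nunits_x % nunits_y == 0); /* models multiple_p/exact_div */
      unsigned factor = nunits_x / nunits_y;
      return loop_len / factor;          /* models the RDIV_EXPR by FACTOR */
    }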