From patchwork Wed Jan 11 11:37:40 2023
X-Patchwork-Submitter: Thomas Schwinge
X-Patchwork-Id: 62938
From: Thomas Schwinge
To: Tom de Vries ,
Subject: [PING] nvptx: Make 'nvptx_uniform_warp_check' fit for non-full-warp execution (was: [committed][nvptx] Add uniform_warp_check insn)
In-Reply-To: <87a63ofrpf.fsf@euler.schwinge.homeip.net>
References: <20220201183125.GA4286@delia.home> <87a63ofrpf.fsf@euler.schwinge.homeip.net>
Date: Wed, 11 Jan 2023 12:37:40 +0100
Message-ID: <87tu0xl2t7.fsf@euler.schwinge.homeip.net>

Hi!

Ping.


Grüße
 Thomas


On 2022-12-15T19:27:08+0100, I wrote:
> Hi Tom!
>
> First "a bit" of context; skip to "the proposed patch" if you'd like to
> see just that.
>
>
> On 2022-02-01T19:31:27+0100, Tom de Vries via Gcc-patches wrote:
>> On a GT 1030, with driver version 470.94 and -mptx=3.1 I run into:
>> ...
>> FAIL: libgomp.oacc-c/../libgomp.oacc-c-c++-common/parallel-dims.c \
>>   -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none \
>>   -O2 execution test
>> ...
>> which minimizes to the same test-case as listed in commit "[nvptx]
>> Update default ptx isa to 6.3".
>>
>> The problem is again that the first diverging branch is not handled as such in
>> SASS, which causes problems with a subsequent shfl insn, but given that we
>> have -mptx=3.1 we can't use the bar.warp.sync insn.
>>
>> Given that the default is now -mptx=6.3, and consequently -mptx=3.1 is of a
>> lesser importance, implement the next best thing: abort when detecting
>> non-convergence using this insn:
>> ...
>>   {  .reg.b32 act;
>>      vote.ballot.b32 act,1;
>>      .reg.pred uni;
>>      setp.eq.b32 uni,act,0xffffffff;
>>      @ !uni trap;
>>      @ !uni exit;
>>   }
>> ...
>>
>> Interestingly, the effect of this is that rather than aborting, the test-case
>> now passes.
>
> (I suppose this "nudges" the PTX -> SASS compiler in the right
> direction?)
>
>
> For avoidance of doubt, my following discussion is not about the specific
> (first) use of 'nvptx_uniform_warp_check' introduced here in this
> commit r12-6971-gf32f74c2e8cef5fe37af6d4e8d7e8f6b4c8ae9a8
> "[nvptx] Add uniform_warp_check insn":
>
>> --- a/gcc/config/nvptx/nvptx.cc
>> +++ b/gcc/config/nvptx/nvptx.cc
>> @@ -4631,15 +4631,29 @@ nvptx_single (unsigned mask, basic_block from, basic_block to)
>>    if (tail_branch)
>>      {
>>        label_insn = emit_label_before (label, before);
>> -      if (TARGET_PTX_6_0 && mode == GOMP_DIM_VECTOR)
>> -        warp_sync = emit_insn_after (gen_nvptx_warpsync (), label_insn);
>> +      if (mode == GOMP_DIM_VECTOR)
>> +        {
>> +          if (TARGET_PTX_6_0)
>> +            warp_sync = emit_insn_after (gen_nvptx_warpsync (),
>> +                                         label_insn);
>> +          else
>> +            warp_sync = emit_insn_after (gen_nvptx_uniform_warp_check (),
>> +                                         label_insn);
>> +        }
>>        before = label_insn;
>>      }
>>    else
>>      {
>>        label_insn = emit_label_after (label, tail);
>> -      if (TARGET_PTX_6_0 && mode == GOMP_DIM_VECTOR)
>> -        warp_sync = emit_insn_after (gen_nvptx_warpsync (), label_insn);
>> +      if (mode == GOMP_DIM_VECTOR)
>> +        {
>> +          if (TARGET_PTX_6_0)
>> +            warp_sync = emit_insn_after (gen_nvptx_warpsync (),
>> +                                         label_insn);
>> +          else
>> +            warp_sync = emit_insn_after (gen_nvptx_uniform_warp_check (),
>> +                                         label_insn);
>> +        }
>>        if ((mode == GOMP_DIM_VECTOR || mode == GOMP_DIM_WORKER)
>>            && CALL_P (tail) && find_reg_note (tail, REG_NORETURN, NULL))
>>          emit_insn_after (gen_exit (), label_insn);
>
> Later, other uses have been added, for example in OpenMP '-muniform-simt'
> code generation.
>
> My following discussion is about the implementation of
> 'nvptx_uniform_warp_check', originally introduced as follows:
>
>> --- a/gcc/config/nvptx/nvptx.md
>> +++ b/gcc/config/nvptx/nvptx.md
>> @@ -57,6 +57,7 @@ (define_c_enum "unspecv" [
>>    UNSPECV_XCHG
>>    UNSPECV_BARSYNC
>>    UNSPECV_WARPSYNC
>> +  UNSPECV_UNIFORM_WARP_CHECK
>>    UNSPECV_MEMBAR
>>    UNSPECV_MEMBAR_CTA
>>    UNSPECV_MEMBAR_GL
>> @@ -1985,6 +1986,23 @@ (define_insn "nvptx_warpsync"
>>    "\\tbar.warp.sync\\t0xffffffff;"
>>    [(set_attr "predicable" "false")])
>>
>> +(define_insn "nvptx_uniform_warp_check"
>> +  [(unspec_volatile [(const_int 0)] UNSPECV_UNIFORM_WARP_CHECK)]
>> +  ""
>> +  {
>> +    output_asm_insn ("{", NULL);
>> +    output_asm_insn ("\\t" ".reg.b32" "\\t" "act;", NULL);
>> +    output_asm_insn ("\\t" "vote.ballot.b32" "\\t" "act,1;", NULL);
>> +    output_asm_insn ("\\t" ".reg.pred" "\\t" "uni;", NULL);
>> +    output_asm_insn ("\\t" "setp.eq.b32" "\\t" "uni,act,0xffffffff;",
>> +                     NULL);
>> +    output_asm_insn ("@ !uni\\t" "trap;", NULL);
>> +    output_asm_insn ("@ !uni\\t" "exit;", NULL);
>> +    output_asm_insn ("}", NULL);
>> +    return "";
>> +  }
>> +  [(set_attr "predicable" "false")])
>
> Later adjusted, but the fundamental idea is still the same.
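
As a rough CUDA-level illustration only (not what the compiler emits; the
function name is made up, and the legacy, unsynchronized '__ballot'
intrinsic, deprecated in newer CUDA, stands in for the plain
'vote.ballot.b32' used here), the check amounts to:

    /* Sketch of the 'nvptx_uniform_warp_check' logic; illustrative only.  */
    __device__ void
    uniform_warp_check_sketch (void)
    {
      /* Like 'vote.ballot.b32 act, 1': one bit set per lane that executes
         this instruction together with us.  */
      unsigned act = __ballot (1);
      /* If that is not the full-warp mask, the warp has diverged where it
         must not have: abort, like '@ !uni trap;' / '@ !uni exit;' above.  */
      if (act != 0xffffffffU)
        {
          asm ("trap;");
          asm ("exit;");
        }
    }
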
>
>
> Via temporarily disabling 'nvptx_uniform_warp_check':
>
>     (define_insn "nvptx_uniform_warp_check"
>       [(unspec_volatile [(const_int 0)] UNSPECV_UNIFORM_WARP_CHECK)]
>       ""
>       {
>     +#if 0
>         const char *insns[] = {
>           "{",
>           "\\t" ".reg.b32" "\\t" "%%r_act;",
>           "%.\\t" "vote.ballot.b32" "\\t" "%%r_act,1;",
>           "\\t" ".reg.pred" "\\t" "%%r_do_abort;",
>           "\\t" "mov.pred" "\\t" "%%r_do_abort,0;",
>           "%.\\t" "setp.ne.b32" "\\t" "%%r_do_abort,%%r_act,"
>                   "0xffffffff;",
>           "@ %%r_do_abort\\t" "trap;",
>           "@ %%r_do_abort\\t" "exit;",
>           "}",
>           NULL
>         };
>         for (const char **p = &insns[0]; *p != NULL; p++)
>           output_asm_insn (*p, NULL);
>     +#endif
>         return "";
>       })
>
> ..., I've first tested/confirmed the problem that it was originally
> solving.  Testing with:
>
>     $ nvidia-smi
>     [...]
>     | NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
>     [...]
>     |   0  Quadro P1000        [...]
>
> For 'check-gcc' with '--target_board=nvptx-none-run/-mptx=3.1 nvptx.exp',
> this (obviously) regresses:
>
>     PASS: gcc.target/nvptx/uniform-simt-2.c (test for excess errors)
>     PASS: gcc.target/nvptx/uniform-simt-2.c scan-assembler-times @%r[0-9]*\tatom.global.cas 1
>     PASS: gcc.target/nvptx/uniform-simt-2.c scan-assembler-times shfl.idx.b32 1
>     [-PASS:-]{+FAIL:+} gcc.target/nvptx/uniform-simt-2.c scan-assembler-times vote.ballot.b32 1
>
> For 'check-target-libgomp' with
> '--target_board=unix/-foffload-options=nvptx-none=-mptx=3.1', there are
> no obvious regressions for any OpenMP test cases.
>
> For example, for the test case 'libgomp.c/pr104783-2.c' of
> commit a624388b9546b066250be8baa118b7d50c403c25
> "[nvptx] Add warp sync at simt exit", 'nvptx_uniform_warp_check' is not
> applicable per se: this is about an issue with sm_70+ Independent Thread
> Scheduling, which is applicable only 'if (TARGET_PTX_6_0)', and in that
> case, we emit 'nvptx_warpsync', not 'nvptx_uniform_warp_check'.
>
> For other OpenMP test cases (which I've not analyzed in detail), we're
> maybe simply lucky that 'nvptx_uniform_warp_check' is not relevant
> (... at least in this testing configuration).  (For avoidance of doubt, I
> have no reason to believe that there's any problem with the
> PR104783 "[nvptx, openmp] Hang/abort with atomic update in simd construct",
> PR104916 "[nvptx] Handle Independent Thread Scheduling for sm_70+ with -muniform-simt",
> "[nvptx] Use nvptx_warpsync / nvptx_uniform_warp_check for -muniform-simt",
> or other such code changes; mentioning this just for completeness.)
>
> ..., but as regards OpenACC test cases, this still regresses several:
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/parallel-dims.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
>
> (That's the one cited in the commit log of
> commit r12-6971-gf32f74c2e8cef5fe37af6d4e8d7e8f6b4c8ae9a8
> "[nvptx] Add uniform_warp_check insn".)
>
>     [-PASS:-]{+WARNING: program timed out.+}
>     {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-10.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
>
>     [-PASS:-]{+WARNING: program timed out.+}
>     {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-10.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
>
>     [-PASS:-]{+WARNING: program timed out.+}
>     {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-4.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-4.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
>
>     [-PASS:-]{+WARNING: program timed out.+}
>     {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-5.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-5.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
>
>     [-PASS:-]{+WARNING: program timed out.+}
>     {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-6.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-6.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
>
>     [-PASS:-]{+WARNING: program timed out.+}
>     {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-7.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-7.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
>
>     [-PASS:-]{+WARNING: program timed out.+}
>     {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-64-1.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
>
>     [-PASS:-]{+WARNING: program timed out.+}
>     {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-64-3.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vred2d-128.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
>
> Same for C++.
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-c++/ref-1.C -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
>
>     [-PASS:-]{+WARNING: program timed out.+}
>     {+FAIL:+} libgomp.oacc-fortran/gemm-2.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
>
>     [-PASS:-]{+WARNING: program timed out.+}
>     {+FAIL:+} libgomp.oacc-fortran/gemm.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -g execution test
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -Os execution test
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O1 execution test
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -g execution test
>
>     [-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -Os execution test
>
> So that's "good": plenty of evidence that 'nvptx_uniform_warp_check' is
> necessary and working.
>
>
> Now, "the proposed patch".  I'd like to make 'nvptx_uniform_warp_check'
> fit for non-full-warp execution.  For example, to be able to execute such
> code in single-threaded 'cuLaunchKernel' for execution of global
> constructors/destructors, where those may, for example, call into nvptx
> target libraries compiled with '-mgomp' (thus, '-muniform-simt').
>
> OK to push (after proper testing, and with TODO markers adjusted/removed)
> the attached
> "nvptx: Make 'nvptx_uniform_warp_check' fit for non-full-warp execution"?
>
>
> Grüße
>  Thomas


-----------------
Siemens Electronic Design Automation GmbH; Anschrift: Arnulfstraße 201, 80634 München; Gesellschaft mit beschränkter Haftung; Geschäftsführer: Thomas Heurung, Frank Thürauf; Sitz der Gesellschaft: München; Registergericht München, HRB 106955

From 1d8df3b793fc43dd23b2679d4a31b761e6ac799c Mon Sep 17 00:00:00 2001
From: Thomas Schwinge
Date: Mon, 12 Dec 2022 22:05:37 +0100
Subject: [PATCH] nvptx: Make 'nvptx_uniform_warp_check' fit for non-full-warp execution

For example, this allows for '-muniform-simt' code to be executed
single-threaded, which currently fails (device-side 'trap'), as the
0xffffffff mask isn't correct if not all 32 threads of a warp are active.

The same issue/fix, I suppose but have not verified, would apply if we were
to allow for OpenACC 'vector_length' smaller than 32, for example for
OpenACC 'serial'.

We use 'nvptx_uniform_warp_check' only for PTX ISA version less than 6.0.
Otherwise we're using 'nvptx_warpsync', which emits
'bar.warp.sync 0xffffffff', which evidently appears to do the right thing.
(I've tested '-muniform-simt' code executing single-threaded.)

gcc/
        * config/nvptx/nvptx.md (nvptx_uniform_warp_check): Make fit for
        non-full-warp execution.

gcc/testsuite/
        * gcc.target/nvptx/nvptx.exp
        (check_effective_target_default_ptx_isa_version_at_least_6_0): New.
        * gcc.target/nvptx/uniform-simt-5.c: New.

libgomp/
        * plugin/plugin-nvptx.c (nvptx_exec): Assert what we know about
        'blockDimX'.
---
 gcc/config/nvptx/nvptx.md                 | 16 ++++++++++-
 gcc/testsuite/gcc.target/nvptx/nvptx.exp  |  5 ++++
 .../gcc.target/nvptx/uniform-simt-5.c     | 28 +++++++++++++++++++
 libgomp/plugin/plugin-nvptx.c             |  3 ++
 4 files changed, 51 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.target/nvptx/uniform-simt-5.c

diff --git a/gcc/config/nvptx/nvptx.md b/gcc/config/nvptx/nvptx.md
index 8ed685027b5f..8a1bb630a0a7 100644
--- a/gcc/config/nvptx/nvptx.md
+++ b/gcc/config/nvptx/nvptx.md
@@ -2282,10 +2282,24 @@
       "{",
       "\\t" ".reg.b32" "\\t" "%%r_act;",
       "%.\\t" "vote.ballot.b32" "\\t" "%%r_act,1;",
+      /* For '%r_exp', we essentially need 'activemask.b32', but that is "Introduced in PTX ISA version 6.2", and this code here is used only 'if (!TARGET_PTX_6_0)'.  Thus, emulate it.
+         TODO Is that actually correct?  Wouldn't 'activemask.b32' rather replace our 'vote.ballot.b32' given that it registers the *currently active threads*?  */
+      /* Compute the "membermask" of all threads of the warp that are expected to be converged here.
+         For OpenACC, '%ntid.x' is 'vector_length', which per 'nvptx_goacc_validate_dims' always is a multiple of 32.
+         For OpenMP, '%ntid.x' always is 32.
+         Thus, this typically is 0xffffffff, but it additionally is correct for the case that not all 32 threads of the warp have been launched.
+         This assumes that lane IDs are assigned in ascending order.  */
+      //TODO Can we rely on '1 << 32 == 0', and '0 - 1 = 0xffffffff'?
+      //TODO https://developer.nvidia.com/blog/using-cuda-warp-level-primitives/
+      //TODO https://stackoverflow.com/questions/54055195/activemask-vs-ballot-sync
+      "\\t" ".reg.b32" "\\t" "%%r_exp;",
+      "%.\\t" "mov.b32" "\\t" "%%r_exp, %%ntid.x;",
+      "%.\\t" "shl.b32" "\\t" "%%r_exp, 1, %%r_exp;",
+      "%.\\t" "sub.u32" "\\t" "%%r_exp, %%r_exp, 1;",
       "\\t" ".reg.pred" "\\t" "%%r_do_abort;",
       "\\t" "mov.pred" "\\t" "%%r_do_abort,0;",
       "%.\\t" "setp.ne.b32" "\\t" "%%r_do_abort,%%r_act,"
-                "0xffffffff;",
+                "%%r_exp;",
       "@ %%r_do_abort\\t" "trap;",
       "@ %%r_do_abort\\t" "exit;",
       "}",
diff --git a/gcc/testsuite/gcc.target/nvptx/nvptx.exp b/gcc/testsuite/gcc.target/nvptx/nvptx.exp
index e9622ae7aaa8..17e03daeb7e0 100644
--- a/gcc/testsuite/gcc.target/nvptx/nvptx.exp
+++ b/gcc/testsuite/gcc.target/nvptx/nvptx.exp
@@ -49,6 +49,11 @@ proc check_effective_target_default_ptx_isa_version_at_least { major minor } {
     return $res
 }
 
+# Return 1 if code by default compiles for at least PTX ISA version 6.0.
+proc check_effective_target_default_ptx_isa_version_at_least_6_0 { } {
+    return [check_effective_target_default_ptx_isa_version_at_least 6 0]
+}
+
 # Return 1 if code with PTX ISA version major.minor or higher can be run.
 proc check_effective_target_runtime_ptx_isa_version_at_least { major minor } {
     set name runtime_ptx_isa_version_${major}_${minor}
diff --git a/gcc/testsuite/gcc.target/nvptx/uniform-simt-5.c b/gcc/testsuite/gcc.target/nvptx/uniform-simt-5.c
new file mode 100644
index 000000000000..b2f78198db21
--- /dev/null
+++ b/gcc/testsuite/gcc.target/nvptx/uniform-simt-5.c
@@ -0,0 +1,28 @@
+/* Verify that '-muniform-simt' code may be executed single-threaded.
+
+   { dg-do run }
+   { dg-options {-save-temps -O2 -muniform-simt} } */
+
+enum memmodel
+{
+  MEMMODEL_RELAXED = 0
+};
+
+unsigned long long int v64;
+unsigned long long int *p64 = &v64;
+
+int
+main()
+{
+  /* Trigger uniform-SIMT processing.  */
+  __atomic_fetch_add (p64, v64, MEMMODEL_RELAXED);
+
+  return 0;
+}
+
+/* Per 'omp_simt_exit':
+   - 'nvptx_warpsync'
+   { dg-final { scan-assembler-times {bar\.warp\.sync\t0xffffffff;} 1 { target default_ptx_isa_version_at_least_6_0 } } }
+   - 'nvptx_uniform_warp_check'
+   { dg-final { scan-assembler-times {vote\.ballot\.b32\t%r_act,1;} 1 { target { ! default_ptx_isa_version_at_least_6_0 } } } }
+*/
diff --git a/libgomp/plugin/plugin-nvptx.c b/libgomp/plugin/plugin-nvptx.c
index 4f4c25a90baf..5f8aed56c8b1 100644
--- a/libgomp/plugin/plugin-nvptx.c
+++ b/libgomp/plugin/plugin-nvptx.c
@@ -984,6 +984,9 @@ nvptx_exec (void (*fn), size_t mapnum, void **hostaddrs, void **devaddrs,
                               api_info);
     }
 
+  /* Per 'nvptx_goacc_validate_dims'.  */
+  assert (dims[GOMP_DIM_VECTOR] % warp_size == 0);
+
   kargs[0] = &dp;
   CUDA_CALL_ASSERT (cuLaunchKernel, function,
                     dims[GOMP_DIM_GANG], 1, 1,
-- 
2.35.1
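
As for the '1 << 32' TODO in the nvptx.md hunk above: the intended
membermask can also be computed without relying on that behavior at all,
by special-casing the full warp (in C, shifting a 32-bit value by 32 is
undefined).  A hypothetical helper, not part of the patch, just to
illustrate the intended values:

    /* Membermask of the threads expected to be converged, for a launch
       with blockDim.x == ntid_x, 1 <= ntid_x <= 32.  Illustrative only.  */
    static unsigned
    expected_membermask (unsigned ntid_x)
    {
      return ntid_x >= 32 ? 0xffffffffU : ((1U << ntid_x) - 1U);
    }

For the single-threaded 'cuLaunchKernel' case discussed above this yields
0x1; for a full warp it yields 0xffffffff, the mask that the unmodified
insn hard-codes.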