Message ID | 20230207132802.223510-5-lancelot.six@amd.com |
---|---|
State | Committed |
Commit | 39f6d7c6b06b06a0372284f30c86417a8c0d6ba5 |
Headers |
Subject: [PATCH 4/4] gdb/testsuite: allow_hipcc_tests tests the hipcc compiler
From: Lancelot SIX <lancelot.six@amd.com>
To: gdb-patches@sourceware.org
CC: lsix@lancelotsix.com
Date: Tue, 7 Feb 2023 13:28:02 +0000
Message-ID: <20230207132802.223510-5-lancelot.six@amd.com>
In-Reply-To: <20230207132802.223510-1-lancelot.six@amd.com> |
Series | Fix gdb.rocm/simple.exp on hosts without ROCm |
Commit Message
Lancelot SIX
Feb. 7, 2023, 1:28 p.m. UTC
Update allow_hipcc_tests so all gdb.rocm tests are skipped if we do not have a working hipcc compiler available.

To achieve this, adjust gdb_simple_compile to ensure that the hip program is saved in a ".cpp" file before calling hipcc, otherwise compilation will fail.

One thing to note is that it is possible to have hipcc installed with a CUDA backend. Compiling with this backend will successfully produce an application, but GDB cannot debug it (at least not the offload part). In the context of the gdb.rocm tests, we want to detect such a situation, where gdb_simple_compile would give a false positive. To achieve this, this patch checks that there is at least one AMDGPU device available and that hipcc can compile for this target (or these targets). Detecting the device is done using the rocm_agent_enumerator tool, which is installed with all ROCm installations (it is used by hipcc to identify targets when none is specified on the command line).

This patch also makes the allow_hipcc_tests proc a cached proc.
---
 gdb/testsuite/lib/gdb.exp  |  4 +++
 gdb/testsuite/lib/rocm.exp | 69 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 72 insertions(+), 1 deletion(-)
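The target-detection logic described above (honour the HCC_AMDGPU_TARGET environment variable first, otherwise run rocm_agent_enumerator and drop the gfx000 host agent) can be sketched outside of Tcl. The following Python is illustrative only; the `run_enumerator` hook is a hypothetical seam added for testability, not something the real proc has — the real code shells out via `remote_exec`:

```python
import os
import subprocess


def hcc_amdgpu_targets(run_enumerator=None):
    """Sketch of the patch's GPU-target detection (illustrative Python)."""
    # 1. Honour HCC_AMDGPU_TARGET, the same env var hipcc reads.
    env = os.environ.get("HCC_AMDGPU_TARGET")
    if env:
        return env.split(",")

    # 2. Otherwise ask rocm_agent_enumerator, located via ROCM_PATH if set.
    tool = "rocm_agent_enumerator"
    rocm_path = os.environ.get("ROCM_PATH")
    if rocm_path:
        tool = os.path.join(rocm_path, "bin", tool)

    if run_enumerator is None:
        def run_enumerator(t):
            out = subprocess.run([t], capture_output=True, text=True,
                                 check=True)
            return out.stdout.split()

    try:
        agents = run_enumerator(tool)
    except (FileNotFoundError, subprocess.CalledProcessError):
        # Tool missing or failing: return an empty list and let the
        # caller decide whether that is an error.
        return []

    # 3. Drop gfx000, which denotes the host CPU rather than a GPU.
    return [a for a in agents if a != "gfx000"]
```

Setting `HCC_AMDGPU_TARGET=gfx906,gfx90a` short-circuits the enumerator, exactly as hipcc itself behaves.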
Comments
Hi,

Some of this patch is based on original work from Pedro in https://github.com/ROCm-Developer-Tools/ROCgdb/commit/031a44b9c6ec2c381030a919b837e5dfc255e688. I just realized that I forgot to give credit where due. I'll add the following tag:

Co-Authored-By: Pedro Alves <pedro@palves.net>

Best,
Lancelot.

On 07/02/2023 13:28, Lancelot SIX wrote:
> [...]
On 2/7/23 08:28, Lancelot SIX via Gdb-patches wrote:
> [...]
> ---
>  gdb/testsuite/lib/gdb.exp  |  4 +++
>  gdb/testsuite/lib/rocm.exp | 69 +++++++++++++++++++++++++++++++++++++-
>  2 files changed, 72 insertions(+), 1 deletion(-)
>
> [...]
>
> +    set targets [list]
> +    foreach target [lindex $result 1] {
> +        # Ignore gfx000 which is the host CPU.
> +        if { $target ne "gfx000" } {
> +            lappend targets $target
> +        }
> +    }
> +
> +    return $targets

I typically don't have ROCM_PATH set, and I don't add /opt/rocm/bin to my PATH, so rocm_agent_enumerator will not be found by default here. However, gdb_find_hipcc does fall back on /opt/rocm/bin, so it does find my hipcc. I think we should use similar strategies for both cases. I don't mind having to set ROCM_PATH or adding /opt/rocm/bin to my PATH if needed, as long as we're consistent about it.

> [...]
>
> +    set flags [list hip additional_flags=--offload-arch=[join $targets ","]]
> +    if {![gdb_simple_compile hipprobe {
> [...]
> +    } executable $flags]} {
> +        return 0
> +    }

So, this last part ensures we don't have a "CUDA hipcc false positive"? It would be good to put a comment above it to indicate the precise intent.

Simon
On 07/02/2023 14:12, Simon Marchi wrote:
> [...]
>
> I typically don't have ROCM_PATH set, and I don't add /opt/rocm/bin to
> my PATH, so rocm_agent_enumerator will not be found by default here.
> However, gdb_find_hipcc does fall back on /opt/rocm/bin, so it does find
> my hipcc. I think we should use similar strategies for both cases. I
> don't mind having to set ROCM_PATH or adding /opt/rocm/bin to my PATH if
> needed, as long as we're consistent about it.

When installing the pre-built packages, at least on Ubuntu, there should be symlinks from /usr/bin to /etc/alternatives to /opt/rocm-$VERSION/bin. I expect that when those tools start to be packaged by the various distributions, they will be available in PATH.

As I am not a big fan of the hard-coded /opt/rocm prefix, especially in the context of upstream, I would prefer to update gdb_find_hipcc. Would something like this be OK? It searches for hipcc in dejagnu's tool_root_dir, then in $::env(ROCM_PATH) (if set), and falls back to "hipcc" if none of the above worked, so the compiler is searched in PATH.

From b8638cafabd36bc9316591da3c8326e10277372a Mon Sep 17 00:00:00 2001
From: Lancelot SIX <lancelot.six@amd.com>
Date: Tue, 7 Feb 2023 15:13:47 +0000
Subject: [PATCH] gdb/testsuite: look for hipcc in env(ROCM_PATH)

If the hipcc compiler cannot be found in dejagnu's tool_root_dir, look
for it in $::env(ROCM_PATH) (if set). If hipcc is still not found, fall
back to "hipcc" so the compiler will be searched in the PATH.

This removes the fallback to the hard-coded "/opt/rocm/bin" prefix.
This change is done so ROCm tools are searched in a uniform manner.
---
 gdb/testsuite/lib/future.exp | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/gdb/testsuite/lib/future.exp b/gdb/testsuite/lib/future.exp
index 5720d3837d5..2e8315bbfe1 100644
--- a/gdb/testsuite/lib/future.exp
+++ b/gdb/testsuite/lib/future.exp
@@ -125,8 +125,11 @@ proc gdb_find_hipcc {} {
     global tool_root_dir
     if {![is_remote host]} {
         set hipcc [lookfor_file $tool_root_dir hipcc]
+        if {$hipcc == "" && [info exists ::env(ROCM_PATH)]} {
+            set hipcc [lookfor_file $::env(ROCM_PATH)/bin hipcc]
+        }
         if {$hipcc == ""} {
-            set hipcc [lookfor_file /opt/rocm/bin hipcc]
+            set hipcc hipcc
         }
     } else {
         set hipcc ""
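The search order this follow-up implements (dejagnu's tool_root_dir, then $ROCM_PATH/bin, then the bare name resolved via PATH) can be sketched as follows. This Python is illustrative only; the `exists` parameter is a hypothetical injection point for testing, standing in for DejaGnu's `lookfor_file`:

```python
import os


def find_hipcc(tool_root_dir, rocm_path=None, exists=os.path.isfile):
    """Sketch of the proposed gdb_find_hipcc search order."""
    # 1. Prefer a hipcc shipped alongside the test harness.
    candidate = os.path.join(tool_root_dir, "hipcc")
    if exists(candidate):
        return candidate
    # 2. Otherwise honour ROCM_PATH, mirroring hcc_amdgpu_targets.
    if rocm_path:
        candidate = os.path.join(rocm_path, "bin", "hipcc")
        if exists(candidate):
            return candidate
    # 3. Fall back to the bare name so the shell resolves it via PATH.
    return "hipcc"
```

The point of the change is consistency: both hipcc and rocm_agent_enumerator are then located the same way, with no hard-coded /opt/rocm prefix.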
diff --git a/gdb/testsuite/lib/gdb.exp b/gdb/testsuite/lib/gdb.exp
index faa0ac05a9a..6333728f71e 100644
--- a/gdb/testsuite/lib/gdb.exp
+++ b/gdb/testsuite/lib/gdb.exp
@@ -4581,6 +4581,10 @@ proc gdb_simple_compile {name code {type object} {compile_flags {}} {object obj}
             set ext "go"
             break
         }
+        if { "$flag" eq "hip" } {
+            set ext "cpp"
+            break
+        }
     }
     set src [standard_temp_file $name-[pid].$ext]
     set obj [standard_temp_file $name-[pid].$postfix]
diff --git a/gdb/testsuite/lib/rocm.exp b/gdb/testsuite/lib/rocm.exp
index b5b59748c27..06d5b3988b8 100644
--- a/gdb/testsuite/lib/rocm.exp
+++ b/gdb/testsuite/lib/rocm.exp
@@ -15,7 +15,51 @@
 #
 # Support library for testing ROCm (AMD GPU) GDB features.
 
-proc allow_hipcc_tests { } {
+# Get the list of gpu targets to compile for.
+#
+# If HCC_AMDGPU_TARGET is set in the environment, use it.  Otherwise,
+# try reading it from the system using the rocm_agent_enumerator
+# utility.
+
+proc hcc_amdgpu_targets {} {
+    # Look for HCC_AMDGPU_TARGET (same env var hipcc uses).  If
+    # that fails, try using rocm_agent_enumerator (again, same as
+    # hipcc does).
+    if {[info exists ::env(HCC_AMDGPU_TARGET)]} {
+        return [split $::env(HCC_AMDGPU_TARGET) ","]
+    }
+
+    set rocm_agent_enumerator "rocm_agent_enumerator"
+
+    # If available, use ROCM_PATH to locate rocm_agent_enumerator.
+    if { [info exists ::env(ROCM_PATH)] } {
+        set rocm_agent_enumerator \
+            "$::env(ROCM_PATH)/bin/rocm_agent_enumerator"
+    }
+
+    # If we fail to locate the rocm_agent_enumerator, just return an empty
+    # list of targets and let the caller decide if this should be an error.
+    if { [which $rocm_agent_enumerator] == 0 } {
+        return [list]
+    }
+
+    set result [remote_exec host $rocm_agent_enumerator]
+    if { [lindex $result 0] != 0 } {
+        error "rocm_agent_enumerator failed"
+    }
+
+    set targets [list]
+    foreach target [lindex $result 1] {
+        # Ignore gfx000 which is the host CPU.
+        if { $target ne "gfx000" } {
+            lappend targets $target
+        }
+    }
+
+    return $targets
+}
+
+gdb_caching_proc allow_hipcc_tests {
     # Only the native target supports ROCm debugging.  E.g., when
     # testing against GDBserver, there's no point in running the ROCm
     # tests.
@@ -29,6 +73,29 @@ proc allow_hipcc_tests { } {
         return 0
     }
 
+    # Check we have a working hipcc compiler available.
+    set targets [hcc_amdgpu_targets]
+    if { [llength $targets] == 0} {
+        return 0
+    }
+
+    set flags [list hip additional_flags=--offload-arch=[join $targets ","]]
+    if {![gdb_simple_compile hipprobe {
+        #include <hip/hip_runtime.h>
+        __global__ void
+        kern () {}
+
+        int
+        main ()
+        {
+          kern<<<1, 1>>> ();
+          hipDeviceSynchronize ();
+          return 0;
+        }
+    } executable $flags]} {
+        return 0
+    }
+
     return 1
 }
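The gate the patch builds in allow_hipcc_tests can be summarized in a few lines: no detected AMDGPU target means skip, and a failed probe compile means skip. A Python sketch follows, with the native-target check omitted; both parameters are hypothetical seams standing in for hcc_amdgpu_targets and gdb_simple_compile:

```python
def allow_hipcc_tests(targets, try_compile):
    """Illustrative sketch of the gating logic in the patch above.

    targets     -- result of the target-detection step (may be empty).
    try_compile -- stand-in for gdb_simple_compile: given compile flags,
                   returns True iff the HIP probe program builds.
    """
    # No AMDGPU device detected: skip.  This also guards against a
    # CUDA-backed hipcc that would compile the probe successfully but
    # produce a binary GDB cannot debug.
    if not targets:
        return False
    # Ask the compiler to build the probe for every detected target.
    flags = ["hip", "additional_flags=--offload-arch=" + ",".join(targets)]
    return bool(try_compile(flags))
```

Because the real proc is a gdb_caching_proc, this check runs once per test run, not once per gdb.rocm test.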