[v2,02/10] Add Aarch64 SVE Linux headers

Message ID 9FBBFBF2-9363-49AA-8BC3-20E4E0AFBFED@arm.com
State New, archived

Commit Message

Alan Hayward June 8, 2018, 2:13 p.m. UTC
(Moved review to correct thread)
Thanks for the reviews.

> On 7 Jun 2018, at 21:18, Simon Marchi <simon.marchi@ericsson.com> wrote:
> 
> Hi Alan,
> 
> Just some quick comments.
> 
> I get this when building on x86-64 with --enable-targets=all:

Hmm.. I had lost that flag from my build script. I re-added it, and
reproduced the issues.

> 
>  CXX    aarch64-tdep.o
> In file included from /home/emaisin/src/binutils-gdb/gdb/nat/aarch64-sve-linux-ptrace.h:29:0,
>                 from /home/emaisin/src/binutils-gdb/gdb/aarch64-tdep.c:61:
> /home/emaisin/src/binutils-gdb/gdb/nat/aarch64-linux-sigcontext.h:19:22: error: field ‘head’ has incomplete type ‘_aarch64_ctx’
>  struct _aarch64_ctx head;
>                      ^
> /home/emaisin/src/binutils-gdb/gdb/nat/aarch64-linux-sigcontext.h:19:9: note: forward declaration of ‘struct _aarch64_ctx’
>  struct _aarch64_ctx head;
>         ^
> 
> First, we should not include "nat/aarch64-sve-linux-ptrace.h" (a file that only makes
> sense when building on AArch64) in aarch64-tdep.c, a file built on all architecture
> when including the support for AArch64 debugging.  It looks like aarch64-tdep.c
> needs sve_vq_from_vl.  Maybe that definition could be moved to arch/, which can be
> included in aarch64-tdep.c.
> 

I had put it in there because I wanted to try to make it a complete block
copied from Linux. The issue makes sense, so I’ve updated the patch to restore
sve_vq_from_vl/sve_vl_from_vq to arch/aarch64.h and removed them from
nat/aarch64-linux-sigcontext.h.
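
For reference, since these macros are what the include shuffle is about: they
just convert between a vector length in bytes (VL) and a count of 128-bit
quadwords (VQ). A minimal sketch following the kernel's uapi definitions
(illustrative, not the literal patch text):

  /* Bytes in one 128-bit SVE quadword.  */
  #define SVE_VQ_BYTES 16

  /* Number of quadwords in a vector of VL bytes.  */
  #define sve_vq_from_vl(vl) ((vl) / SVE_VQ_BYTES)

  /* Vector length in bytes for VQ quadwords.  */
  #define sve_vl_from_vq(vq) ((vq) * SVE_VQ_BYTES)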


> Then, is the _aarch64_ctx structure guaranteed to be defined on older AArch64 kernels
> or should we include it too?


_aarch64_ctx is part of the standard aarch64 signal handling. A quick git blame
dates it to 2012, which is roughly as old as aarch64 support itself. So it
should always be defined.
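
For context, the structure is tiny; a sketch of its kernel definition (from
the uapi sigcontext.h, shown here for illustration only):

  struct _aarch64_ctx
  {
    __u32 magic;   /* Record type, e.g. FPSIMD_MAGIC or SVE_MAGIC.  */
    __u32 size;    /* Size of the record, including this header.  */
  };

Every record in the signal frame's extra context starts with this header, so
consumers can recognise records by magic and skip them by size, which is how
new record types such as the SVE one were added compatibly.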


Updated patch below. Checked it builds (with other sve patches) on:
X86 all-targets
Aarch64 Linux 4.10 (pre sve headers) ubuntu 16.04
Aarch64 Linux 4.15 (with sve headers) ubuntu 18.04

Are you ok with the new version?
  

Comments

Simon Marchi June 8, 2018, 2:37 p.m. UTC | #1
On 2018-06-08 10:13, Alan Hayward wrote:
> (Moved review to correct thread)
> Thanks for the reviews.
> 
>> On 7 Jun 2018, at 21:18, Simon Marchi <simon.marchi@ericsson.com> 
>> wrote:
>> 
>> Hi Alan,
>> 
>> Just some quick comments.
>> 
>> I get this when building on x86-64 with --enable-targets=all:
> 
> Hmm.. I had lost that flag from my build script. I Re-added it, and
> reproduced the issues.
> 
>> 
>>  CXX    aarch64-tdep.o
>> In file included from 
>> /home/emaisin/src/binutils-gdb/gdb/nat/aarch64-sve-linux-ptrace.h:29:0,
>>                 from 
>> /home/emaisin/src/binutils-gdb/gdb/aarch64-tdep.c:61:
>> /home/emaisin/src/binutils-gdb/gdb/nat/aarch64-linux-sigcontext.h:19:22: 
>> error: field ‘head’ has incomplete type ‘_aarch64_ctx’
>>  struct _aarch64_ctx head;
>>                      ^
>> /home/emaisin/src/binutils-gdb/gdb/nat/aarch64-linux-sigcontext.h:19:9: 
>> note: forward declaration of ‘struct _aarch64_ctx’
>>  struct _aarch64_ctx head;
>>         ^
>> 
>> First, we should not include "nat/aarch64-sve-linux-ptrace.h" (a file 
>> that only makes
>> sense when building on AArch64) in aarch64-tdep.c, a file built on all 
>> architecture
>> when including the support for AArch64 debugging.  It looks like 
>> aarch64-tdep.c
>> needs sve_vq_from_vl.  Maybe that definition could be moved to arch/, 
>> which can be
>> included in aarch64-tdep.c.
>> 
> 
> I had put it in there because I wanted to try and make it a complete 
> block
> copied from Linux. The issue makes sense, so I’ve updated to restore
> sve_vq_from_vl/sve_vl_from_vq back to arch/aarch64.h and removed it 
> from
> nat/aarch64-linux-sigcontext.h
> 
> 
>> Then, is the _aarch64_ctx structure guaranteed to be defined on older 
>> AArch64 kernels
>> or should we include it too?
> 
> 
> _aarch64_ctx is part of the standard aarch64 signal handling. A quick
> git blame gives
> me 2012 - which is roughly the age of aarch64. So, it should always be 
> defined.
> 
> 
> Updated patch below. Checked it builds (with other sve patches) on:
> X86 all-targets
> Aarch64 Linux 4.10 (pre sve headers) ubuntu 16.04
> Aarch64 Linux 4.15 (with sve headers) ubuntu 18.04
> 
> Are you ok with the new version?

The code looks good to me, thanks.  I am still unsure about the 
licensing side of it, let me ask the FSF people about it, I'll come back 
to you when it's done.  I hope it won't take too long!

Simon
  
Simon Marchi June 8, 2018, 3:23 p.m. UTC | #2
On 2018-06-08 10:37, Simon Marchi wrote:
> The code looks good to me, thanks.  I am still unsure about the
> licensing side of it, let me ask the FSF people about it, I'll come
> back to you when it's done.  I hope it won't take too long!

Hi Alan,

After discussion with other maintainers, it was suggested to avoid 
involving the legal staff if we want to resolve this anytime soon.

Since ARM already holds the copyright to these header files anyway (they 
were all written by ARM people), you may be able to submit that code as 
regular FSF-assigned code, without changing the status of the kernel 
copy.  But nobody here is a lawyer, so nobody wants to say for sure :).

Maybe it's ok after all if we don't include these headers (at least for 
now), and require that GDB for native AArch64 is built against the 
headers of a >= 4.15 kernel?  They can always be included later, but it 
would avoid delaying the inclusion of the feature, since you want to 
have it before we branch 8.2.

Simon
  
Alan Hayward June 8, 2018, 3:27 p.m. UTC | #3
> On 8 Jun 2018, at 15:37, Simon Marchi <simon.marchi@polymtl.ca> wrote:
> 
> On 2018-06-08 10:13, Alan Hayward wrote:
>> (Moved review to correct thread)
>> Thanks for the reviews.
>> 
>>> On 7 Jun 2018, at 21:18, Simon Marchi <simon.marchi@ericsson.com> wrote:
>>> 
>>> Hi Alan,
>>> 
>>> Just some quick comments.
>>> 
>>> I get this when building on x86-64 with --enable-targets=all:
>> 
>> Hmm.. I had lost that flag from my build script. I Re-added it, and
>> reproduced the issues.
>> 
>>> CXX    aarch64-tdep.o
>>> In file included from /home/emaisin/src/binutils-gdb/gdb/nat/aarch64-sve-linux-ptrace.h:29:0,
>>>                from /home/emaisin/src/binutils-gdb/gdb/aarch64-tdep.c:61:
>>> /home/emaisin/src/binutils-gdb/gdb/nat/aarch64-linux-sigcontext.h:19:22: error: field ‘head’ has incomplete type ‘_aarch64_ctx’
>>> struct _aarch64_ctx head;
>>>                     ^
>>> /home/emaisin/src/binutils-gdb/gdb/nat/aarch64-linux-sigcontext.h:19:9: note: forward declaration of ‘struct _aarch64_ctx’
>>> struct _aarch64_ctx head;
>>>        ^
>>> 
>>> First, we should not include "nat/aarch64-sve-linux-ptrace.h" (a file that only makes
>>> sense when building on AArch64) in aarch64-tdep.c, a file built on all architecture
>>> when including the support for AArch64 debugging.  It looks like aarch64-tdep.c
>>> needs sve_vq_from_vl.  Maybe that definition could be moved to arch/, which can be
>>> included in aarch64-tdep.c.
>> 
>> I had put it in there because I wanted to try and make it a complete block
>> copied from Linux. The issue makes sense, so I’ve updated to restore
>> sve_vq_from_vl/sve_vl_from_vq back to arch/aarch64.h and removed it from
>> nat/aarch64-linux-sigcontext.h
>> 
>>> Then, is the _aarch64_ctx structure guaranteed to be defined on older AArch64 kernels
>>> or should we include it too?
>> 
>> _aarch64_ctx is part of the standard aarch64 signal handling. A quick
>> git blame gives
>> me 2012 - which is roughly the age of aarch64. So, it should always be defined.
>> 
>> Updated patch below. Checked it builds (with other sve patches) on:
>> X86 all-targets
>> Aarch64 Linux 4.10 (pre sve headers) ubuntu 16.04
>> Aarch64 Linux 4.15 (with sve headers) ubuntu 18.04
>> 
>> Are you ok with the new version?
> 
> The code looks good to me, thanks.  I am still unsure about the licensing side of it, let me ask the FSF people about it, I'll come back to you when it's done.  I hope it won't take too long!

Ok, thanks for chasing that up. Happy to be cc’ed (or not) on any email.

This patch (I think) only blocks 5/10 and 9/10 in the series. The rest should be
ok to still go in (once reviewed).


Alan.
  
Alan Hayward June 12, 2018, 2:37 p.m. UTC | #4
> On 8 Jun 2018, at 16:23, Simon Marchi <simon.marchi@polymtl.ca> wrote:
> 
> On 2018-06-08 10:37, Simon Marchi wrote:
>> The code looks good to me, thanks.  I am still unsure about the
>> licensing side of it, let me ask the FSF people about it, I'll come
>> back to you when it's done.  I hope it won't take too long!
> 
> Hi Alan,
> 
> After discussion with other maintainers, it was suggested to avoid involving the legal staff if we want to resolve this anytime soon.
> 
> Since ARM already holds the copyright to these header files anyway (they were all written by ARM people), you may be able to submit that code as regular FSF-assigned code, without changing the status of the kernel copy.  But nobody here is a lawyer, so nobody wants to say for sure :).
> 
> Maybe it's ok after all if we don't include these headers (at least for now), and require that GDB for native AArch64 is built against the headers of a >= 4.15 kernel?  They can always be included later, but it would avoid delaying the inclusion of the feature, since you want to have it before we branch 8.2.

Sorry, I did miss this one (I think I sent my reply to the previous
one more or less the same time you sent this).

If I commit this, (I think) this is going to cause buildbot to break
for the aarch64 builds.
(Out of interest - I’ve heard people say they tested on buildbot. Are
there some instructions for doing that? I can try it out.)

I suspect updating buildbot is also not a quick fix.

If all that’s not ok (I suspect not), I’ll have a quick word with the
more legally aware people on my side and see if there is any opinion.


Alan.
  
Pedro Alves June 12, 2018, 2:43 p.m. UTC | #5
On 06/12/2018 03:37 PM, Alan Hayward wrote:
> 
> 
>> On 8 Jun 2018, at 16:23, Simon Marchi <simon.marchi@polymtl.ca> wrote:
>>
>> On 2018-06-08 10:37, Simon Marchi wrote:
>>> The code looks good to me, thanks.  I am still unsure about the
>>> licensing side of it, let me ask the FSF people about it, I'll come
>>> back to you when it's done.  I hope it won't take too long!
>>
>> Hi Alan,
>>
>> After discussion with other maintainers, it was suggested to avoid involving the legal staff if we want to resolve this anytime soon.
>>
>> Since ARM already holds the copyright to these header files anyway (they were all written by ARM people), you may be able to submit that code as regular FSF-assigned code, without changing the status of the kernel copy.  But nobody here is a lawyer, so nobody wants to say for sure :).
>>
>> Maybe it's ok after all if we don't include these headers (at least for now), and require that GDB for native AArch64 is built against the headers of a >= 4.15 kernel?  They can always be included later, but it would avoid delaying the inclusion of the feature, since you want to have it before we branch 8.2.
>>
> 
> Sorry, I did miss this one (I think I sent my reply to the previous
> one more or less the same time you sent this).
> 
> If I commit this, 

What's "this"?

How about we add a configure check to check if the system headers support
the needed SVE bits, and guard the native gdb SVE bits with
HAVE_AARCH64_SVE or something like that?

> (I think) this is going to cause buildbot to break
> for the aarch64 builds.
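
As a rough illustration of the idea (a sketch, not the actual patch): a
configure-time probe could try to compile something like the snippet below
against the system headers, and define HAVE_AARCH64_SVE only on success.
It assumes the SVE definitions live in asm/ptrace.h and asm/sigcontext.h,
as they do in 4.15+ kernels:

  /* Hypothetical configure test: compiles only when the installed
     kernel headers provide the SVE ptrace/sigcontext definitions.  */
  #include <asm/ptrace.h>
  #include <asm/sigcontext.h>

  int
  main (void)
  {
    int off = SVE_PT_REGS_OFFSET;   /* From asm/ptrace.h.  */
    int magic = SVE_MAGIC;          /* From asm/sigcontext.h.  */
    return off == magic;            /* Only compilation matters.  */
  }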

Thanks,
Pedro Alves
  
Simon Marchi June 12, 2018, 2:51 p.m. UTC | #6
On 2018-06-12 10:37, Alan Hayward wrote:
> Sorry, I did miss this one (I think I sent my reply to the previous
> one more or less the same time you sent this).
> 
> If I commit this, (I think) this is going to cause buildbot to break
> for the aarch64 builds.
> (Out of interest - I’ve heard people say they tested on buildbot. Are
> there some instructions for doing that? I can try it out.)

Hmm you're right.  Though maybe we can have additional 
commands/configure options specific to the aarch64 builders?  They could 
download a kernel tarball, install the headers somewhere (that doesn't 
take long, no need to build the kernel) and point to them.  Sergio, 
would that be possible/a good idea?

See this for the Buildbot, in particular the try jobs:

https://sourceware.org/gdb/wiki/BuildBot

> I suspect updating buildbot is also not a quick fix.

I don't think updating the kernel on the AArch64 machines is an option, 
but we have control over what the buildbot does.

> If all that’s not ok (I suspect not), I’ll have a quick word with the
> more legal aware people on my side, see if there is any opinion.

Ok, thanks.

Simon
  
Simon Marchi June 12, 2018, 3:06 p.m. UTC | #7
On 2018-06-12 10:43, Pedro Alves wrote:
> What's "this" ?

From what I understand, "this" is the suggestion I made in my previous
mail, require the user to build against the headers of a recent kernel 
(that provide the SVE macros), and not provide a stop-gap copy in the 
GDB tree.  It would break the buildbot, because they have an old kernel 
that doesn't provide the SVE macros the GDB code uses (e.g. 
SVE_PT_REGS_OFFSET).

> How about we add a configure check to check if the system headers 
> support
> the needed SVE bits, and guard the native gdb SVE bits with
> HAVE_AARCH64_SVE or something like that?

I think that would be a good compromise.  By default, building on a 
machine with an older kernel would exclude SVE support.  But it would be 
possible to add it by pointing to the headers of a recent kernel.  So 
when building on a machine with an older kernel...

- ... without any special flags, you don't get SVE support.
- ... with just --enable-sve, you get a configure error.
- ... with --enable-sve and CFLAGS/CXXFLAGS pointing to headers of a 
kernel w/ SVE macros, you get SVE support.

Does that make sense?

Simon
  
Alan Hayward June 12, 2018, 3:09 p.m. UTC | #8
> On 12 Jun 2018, at 15:43, Pedro Alves <palves@redhat.com> wrote:
> 
> On 06/12/2018 03:37 PM, Alan Hayward wrote:
>>> On 8 Jun 2018, at 16:23, Simon Marchi <simon.marchi@polymtl.ca> wrote:
>>> 
>>> On 2018-06-08 10:37, Simon Marchi wrote:
>>>> The code looks good to me, thanks.  I am still unsure about the
>>>> licensing side of it, let me ask the FSF people about it, I'll come
>>>> back to you when it's done.  I hope it won't take too long!
>>> 
>>> Hi Alan,
>>> 
>>> After discussion with other maintainers, it was suggested to avoid involving the legal staff if we want to resolve this anytime soon.
>>> 
>>> Since ARM already holds the copyright to these header files anyway (they were all written by ARM people), you may be able to submit that code as regular FSF-assigned code, without changing the status of the kernel copy.  But nobody here is a lawyer, so nobody wants to say for sure :).
>>> 
>>> Maybe it's ok after all if we don't include these headers (at least for now), and require that GDB for native AArch64 is built against the headers of a >= 4.15 kernel?  They can always be included later, but it would avoid delaying the inclusion of the feature, since you want to have it before we branch 8.2.
>> 
>> Sorry, I did miss this one (I think I sent my reply to the previous
>> one more or less the same time you sent this).
>> 
>> If I commit this, 
> 
> What's "this" ?

Should have said “if I commit 4/10 without 2/10 it’s going to cause
buildbot to break”.

> How about we add a configure check to check if the system headers support
> the needed SVE bits, and guard the native gdb SVE bits with
> HAVE_AARCH64_SVE or something like that?

Excellent idea. I can have a look at doing this - should be fairly quick to do.

In the meantime, let’s keep the other discussion going.


Alan.
  
Pedro Alves June 12, 2018, 3:11 p.m. UTC | #9
On 06/12/2018 04:06 PM, Simon Marchi wrote:
> On 2018-06-12 10:43, Pedro Alves wrote:
>> What's "this" ?
> 
> From what I understand, "this" is the suggestion I made in my previous mail, require the user to build against the headers of a recent kernel (that provide the SVE macros), and not provide a stop-gap copy in the GDB tree.  It would break the buildbot, because they have an old kernel that doesn't provide the SVE macros the GDB code uses (e.g. SVE_PT_REGS_OFFSET).

OK, that was not what I had suggested the other day (which was to detect SVE
support at configure time), so I got confused.

> 
>> How about we add a configure check to check if the system headers support
>> the needed SVE bits, and guard the native gdb SVE bits with
>> HAVE_AARCH64_SVE or something like that?
> 
> I think that would be a good compromise.  By default, building on a machine with an older kernel would exclude SVE support.  But it would be possible to add it by pointing to the headers of a recent kernel.  So when building on a machine with an older kernel...
> 
> - ... without any special flags, you don't get SVE support.
> - ... with just --enable-sve, you get a configure error.
> - ... with --enable-sve and CFLAGS/CXXFLAGS pointing to headers of a kernel w/ SVE macros, you get SVE support.
> 
> Does that make sense?

Yes.  Not sure an --enable-sve switch is necessary (compared to just having
headers vs not having headers), but I'd be fine with having one.

Thanks,
Pedro Alves
  
Simon Marchi June 12, 2018, 3:21 p.m. UTC | #10
On 2018-06-12 11:11, Pedro Alves wrote:
> On 06/12/2018 04:06 PM, Simon Marchi wrote:
>> I think that would be a good compromise.  By default, building on a 
>> machine with an older kernel would exclude SVE support.  But it would 
>> be possible to add it by pointing to the headers of a recent kernel.  
>> So when building on a machine with an older kernel...
>> 
>> - ... without any special flags, you don't get SVE support.
>> - ... with just --enable-sve, you get a configure error.
>> - ... with --enable-sve and CFLAGS/CXXFLAGS pointing to headers of a 
>> kernel w/ SVE macros, you get SVE support.
>> 
>> Does that make sense?
> Yes.  Not sure an --enable-sve switch is necessary (compared to just 
> having
> headers vs not having headers), but I'd be fine with having one.

I think it is useful if you want to make sure your build will have the 
support:

- auto/not specified: include the support if the prerequisites are 
available
- enable: include the support, error at configure if prerequisites are 
missing
- disable: don't include the support

Otherwise, just a typo in your include path can result in a build 
without the feature you want, and you only discover it later, that's 
annoying.

Simon
  
Sergio Durigan Junior June 12, 2018, 4:34 p.m. UTC | #11
On Tuesday, June 12 2018, Simon Marchi wrote:

> On 2018-06-12 10:37, Alan Hayward wrote:
>> Sorry, I did miss this one (I think I sent my reply to the previous
>> one more or less the same time you sent this).
>>
>> If I commit this, (I think) this is going to cause buildbot to break
>> for the aarch64 builds.
>> (Out of interest - I’ve heard people say they tested on buildbot. Are
>> there some instructions for doing that? I can try it out.)
>
> Hmm you're right.  Though maybe we can have additional
> commands/configure options specific to the aarch64 builders?  They
> could download a kernel tarball, install the headers somewhere (that
> doesn't take long, no need to build the kernel) and point to them.
> Sergio, would that be possible/a good idea?

I'm not sure.  For starters, the Aarch64 builders have kinda been
forgotten since Yao stopped contributing regularly to GDB (he is the
maintainer of the machines behind the builders).  So the very first
thing we'd need to do is to put the builders in a good shape again
(they're currently with 273 pending builds in the queue!).  This is
something that's been on my TODO list for a while now, and I was going
to ask Alan (or anyone from ARM) if they're not interested in taking
over the maintenance of these machines.

Then, I think the best approach for the SVE builds would be to manually
download a Linux kernel, put the sources somewhere, and then I could
configure a specific builder to build GDB with the SVE headers.
  
Alan Hayward June 12, 2018, 5:51 p.m. UTC | #12
> On 12 Jun 2018, at 17:34, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
> 
> On Tuesday, June 12 2018, Simon Marchi wrote:
> 
>> On 2018-06-12 10:37, Alan Hayward wrote:
>>> Sorry, I did miss this one (I think I sent my reply to the previous
>>> one more or less the same time you sent this).
>>> 
>>> If I commit this, (I think) this is going to cause buildbot to break
>>> for the aarch64 builds.
>>> (Out of interest - I’ve heard people say they tested on buildbot. Are
>>> there some instructions for doing that? I can try it out.)
>> 
>> Hmm you're right.  Though maybe we can have additional
>> commands/configure options specific to the aarch64 builders?  They
>> could download a kernel tarball, install the headers somewhere (that
>> doesn't take long, no need to build the kernel) and point to them.
>> Sergio, would that be possible/a good idea?
> 
> I'm not sure.  For starters, the Aarch64 builders have kinda been
> forgotten since Yao stopped contributing regularly to GDB (he is the
> maintainer of the machines behind the builders).  So the very first
> thing we'd need to do is to put the builders in a good shape again
> (they're currently with 273 pending builds in the queue!).  This is
> something that's been on my TODO list for a while now, and I was going
> to ask Alan (or anyone from ARM) if they're not interested in taking
> over the maintenance of these machines.

Looking after the aarch64 boxes does sound like a job for an Arm person.
I guess it’ll be fairly important to get those queues cleared _before_
8.2 is released. I can certainly take a look at the pending builds in
the next few weeks.
Once I’ve got the sve stuff cleared I wanted to take a step back and see
what general things needed doing for aarch64 (I’ve also got a couple of
lingering non-gdb things to wrap up too, so that’s going to eat into my
time).

> Then, I think the best approach for the SVE builds would be to manually
> download a Linux kernel, put the sources somewhere, and then I could
> configure a specific builder to build GDB with the SVE headers.

Given the current queues, I suspect we’d not get this done before the 8.2
branch.

I’m thinking configure check of Pedro’s sounds the first step, then once
the aarch64 build queues have been cleared, get some sve builds added.

The SVE headers are in Ubuntu 18.04 - so “all” that’s needed is to do a
dist upgrade on them (I suspect there are probably lots of reasons why
that can’t be done!) 



Alan.
  
Sergio Durigan Junior June 12, 2018, 8:29 p.m. UTC | #13
On Tuesday, June 12 2018, Alan Hayward wrote:

>> On 12 Jun 2018, at 17:34, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>> 
>> On Tuesday, June 12 2018, Simon Marchi wrote:
>> 
>>> On 2018-06-12 10:37, Alan Hayward wrote:
>>>> Sorry, I did miss this one (I think I sent my reply to the previous
>>>> one more or less the same time you sent this).
>>>> 
>>>> If I commit this, (I think) this is going to cause buildbot to break
>>>> for the aarch64 builds.
>>>> (Out of interest - I’ve heard people say they tested on buildbot. Are
>>>> there some instructions for doing that? I can try it out.)
>>> 
>>> Hmm you're right.  Though maybe we can have additional
>>> commands/configure options specific to the aarch64 builders?  They
>>> could download a kernel tarball, install the headers somewhere (that
>>> doesn't take long, no need to build the kernel) and point to them.
>>> Sergio, would that be possible/a good idea?
>> 
>> I'm not sure.  For starters, the Aarch64 builders have kinda been
>> forgotten since Yao stopped contributing regularly to GDB (he is the
>> maintainer of the machines behind the builders).  So the very first
>> thing we'd need to do is to put the builders in a good shape again
>> (they're currently with 273 pending builds in the queue!).  This is
>> something that's been on my TODO list for a while now, and I was going
>> to ask Alan (or anyone from ARM) if they're not interested in taking
>> over the maintenance of these machines.
>
> Looking after the aarch64 boxes does sound like a job for an Arm person.
> I guess it’ll be fairly important to get those queues cleared _before_
> 8.2 is released. I can certainly take a look at the pending builds in
> the next few weeks.

Yeah, what I do in these cases is cancel all of the pending builds,
i.e., start fresh.

I'm glad you're interested in taking care of the machines.  TBH, I was
even considering disabling them for now, since at least one machine has
been offline for a long time, and the other has this giant queue.  I'm
not sure if you're going to use the same machines as Yao was using
(IIRC, he was using machines from the GCC Compile Farm).

>> Then, I think the best approach for the SVE builds would be to manually
>> download a Linux kernel, put the sources somewhere, and then I could
>> configure a specific builder to build GDB with the SVE headers.
>> 
>
> Given the current queues, I suspect we’d not get this done before the 8.2
> branch.

I wouldn't count on that.

> I’m thinking configure check of Pedro’s sounds the first step, then once
> the aarch64 build queues have been cleared, get some sve builds added.
>
> The SVE headers are in Ubuntu 18.04 - so “all” that’s needed is to do a
> dist upgrade on them (I suspect there are probably lots of reasons why
> that can’t be done!) 

Yeah, I honestly don't know :-/.  If you're planning to continue using
the GCC Farm machine, then I think the best option would be to contact
the admins and ask them.

Feel free to contact me in private if you need help sorting this out.

Cheers,
  
Ramana Radhakrishnan June 15, 2018, 9:45 a.m. UTC | #14
On 12/06/2018 21:29, Sergio Durigan Junior wrote:
> On Tuesday, June 12 2018, Alan Hayward wrote:
> 
>>> On 12 Jun 2018, at 17:34, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>>>
>>> On Tuesday, June 12 2018, Simon Marchi wrote:
>>>
>>>> On 2018-06-12 10:37, Alan Hayward wrote:
>>>>> Sorry, I did miss this one (I think I sent my reply to the previous
>>>>> one more or less the same time you sent this).
>>>>>
>>>>> If I commit this, (I think) this is going to cause buildbot to break
>>>>> for the aarch64 builds.
>>>>> (Out of interest - I’ve heard people say they tested on buildbot. Are
>>>>> there some instructions for doing that? I can try it out.)
>>>>
>>>> Hmm you're right.  Though maybe we can have additional
>>>> commands/configure options specific to the aarch64 builders?  They
>>>> could download a kernel tarball, install the headers somewhere (that
>>>> doesn't take long, no need to build the kernel) and point to them.
>>>> Sergio, would that be possible/a good idea?
>>>
>>> I'm not sure.  For starters, the Aarch64 builders have kinda been
>>> forgotten since Yao stopped contributing regularly to GDB (he is the
>>> maintainer of the machines behind the builders).  So the very first
>>> thing we'd need to do is to put the builders in a good shape again
>>> (they're currently with 273 pending builds in the queue!).  This is
>>> something that's been on my TODO list for a while now, and I was going
>>> to ask Alan (or anyone from ARM) if they're not interested in taking
>>> over the maintenance of these machines.
>>
>> Looking after the aarch64 boxes does sound like a job for an Arm person.
>> I guess it’ll be fairly important to get those queues cleared _before_
>> 8.2 is released. I can certainly take a look at the pending builds in
>> the next few weeks.
> 
> Yeah, what I do in these cases is cancel all of the pending builds,
> i.e., start fresh.
> 
> I'm glad you're interested in taking care of the machines.  TBH, I was
> even considering disabling them for now, since at least one machine has
> been offline for a long time, and the other has this giant queue.  I'm
> not sure if you're going to use the same machines as Yao was using
> (IIRC, he was using machines from the GCC Compile Farm).
> 
>>> Then, I think the best approach for the SVE builds would be to manually
>>> download a Linux kernel, put the sources somewhere, and then I could
>>> configure a specific builder to build GDB with the SVE headers.
>>>
>>
>> Given the current queues, I suspect we’d not get this done before the 8.2
>> branch.
> 
> I wouldn't count on that.
> 
>> I’m thinking configure check of Pedro’s sounds the first step, then once
>> the aarch64 build queues have been cleared, get some sve builds added.
>>
>> The SVE headers are in Ubuntu 18.04 - so “all” that’s needed is to do a
>> dist upgrade on them (I suspect there are probably lots of reasons why
>> that can’t be done!)
> 
> Yeah, I honestly don't know :-/.  If you're planning to continue using
> the GCC Farm machine, then I think the best option would be to contact
> the admins and ask them.
> 

So I arranged for those machines in the compile farm and I believe I 
have super user privileges on them. Updating to 18.04 may be an option 
but something I don't want to do remotely and in a rush.

Alan, please reach out to me if you need any help on the compile farm 
machines, I can help out.

Ramana

> Feel free to contact me in private if you need help sorting this out.
> 
> Cheers,
>
  
Alan Hayward June 15, 2018, 5:14 p.m. UTC | #15
> On 15 Jun 2018, at 10:45, Ramana Radhakrishnan <ramana.radhakrishnan@foss.arm.com> wrote:
> 
> On 12/06/2018 21:29, Sergio Durigan Junior wrote:
>> On Tuesday, June 12 2018, Alan Hayward wrote:
>>>> On 12 Jun 2018, at 17:34, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>>>> 
>>>> On Tuesday, June 12 2018, Simon Marchi wrote:
>>>> 
>>>>> On 2018-06-12 10:37, Alan Hayward wrote:
>>>>>> Sorry, I did miss this one (I think I sent my reply to the previous
>>>>>> one more or less the same time you sent this).
>>>>>> 
>>>>>> If I commit this, (I think) this is going to cause buildbot to break
>>>>>> for the aarch64 builds.
>>>>>> (Out of interest - I’ve heard people say they tested on buildbot. Are
>>>>>> there some instructions for doing that? I can try it out.)
>>>>> 
>>>>> Hmm you're right.  Though maybe we can have additional
>>>>> commands/configure options specific to the aarch64 builders?  They
>>>>> could download a kernel tarball, install the headers somewhere (that
>>>>> doesn't take long, no need to build the kernel) and point to them.
>>>>> Sergio, would that be possible/a good idea?
>>>> 
>>>> I'm not sure.  For starters, the Aarch64 builders have kinda been
>>>> forgotten since Yao stopped contributing regularly to GDB (he is the
>>>> maintainer of the machines behind the builders).  So the very first
>>>> thing we'd need to do is to put the builders in a good shape again
>>>> (they're currently with 273 pending builds in the queue!).  This is
>>>> something that's been on my TODO list for a while now, and I was going
>>>> to ask Alan (or anyone from ARM) if they're not interested in taking
>>>> over the maintenance of these machines.
>>> 
>>> Looking after the aarch64 boxes does sound like a job for an Arm person.
>>> I guess it’ll be fairly important to get those queues cleared _before_
>>> 8.2 is released. I can certainly take a look at the pending builds in
>>> the next few weeks.
>> 
>> Yeah, what I do in these cases is cancel all of the pending builds,
>> i.e., start fresh.
>> 
>> I'm glad you're interested in taking care of the machines.  TBH, I was
>> even considering disabling them for now, since at least one machine has
>> been offline for a long time, and the other has this giant queue.  I'm
>> not sure if you're going to use the same machines as Yao was using
>> (IIRC, he was using machines from the GCC Compile Farm).
>> 
>>>> Then, I think the best approach for the SVE builds would be to manually
>>>> download a Linux kernel, put the sources somewhere, and then I could
>>>> configure a specific builder to build GDB with the SVE headers.
>>> 
>>> Given the current queues, I suspect we’d not get this done before the 8.2
>>> branch.
>> 
>> I wouldn't count on that.
>> 
>>> I’m thinking configure check of Pedro’s sounds the first step, then once
>>> the aarch64 build queues have been cleared, get some sve builds added.
>>> 
>>> The SVE headers are in Ubuntu 18.04 - so “all” that’s needed is to do a
>>> dist upgrade on them (I suspect there are probably lots of reasons why
>>> that can’t be done!)
>> 
>> Yeah, I honestly don't know :-/.  If you're planning to continue using
>> the GCC Farm machine, then I think the best option would be to contact
>> the admins and ask them.
> 
> So I arranged for those machines in the compile farm and I believe I
> have super user privileges on them. Updating to 18.04 may be an
> option but something I don't want to do remotely and in a rush.
> 
> Alan, please reach out to me if you need any help on the compile farm machines, I can help out.

Oh that’s good news (I thought they had come from Linaro).

If the headers get added to the gdb build then there’s no urgency to
update the boxes to ubuntu 18.04. However, 18.04 is the latest LTS, so
it’s possibly worth doing eventually (but not right now).


Alan.
  
Sergio Durigan Junior Sept. 20, 2018, 9:15 p.m. UTC | #16
On Friday, June 15 2018, Alan Hayward wrote:

>> On 15 Jun 2018, at 10:45, Ramana Radhakrishnan <ramana.radhakrishnan@foss.arm.com> wrote:
>> 
>> On 12/06/2018 21:29, Sergio Durigan Junior wrote:
>>> On Tuesday, June 12 2018, Alan Hayward wrote:
>>>>> On 12 Jun 2018, at 17:34, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>>>>> 
>>>>> On Tuesday, June 12 2018, Simon Marchi wrote:
>>>>> 
>>>>>> On 2018-06-12 10:37, Alan Hayward wrote:
>>>>>>> Sorry, I did miss this one (I think I sent my reply to the previous
>>>>>>> one more or less the same time you sent this).
>>>>>>> 
>>>>>>> If I commit this, (I think) this is going to cause buildbot to break
>>>>>>> for the aarch64 builds.
>>>>>>> (Out of interest - I’ve heard people say they tested on buildbot. Are
>>>>>>> there some instructions for doing that? I can try it out.)
>>>>>> 
>>>>>> Hmm you're right.  Though maybe we can have additional
>>>>>> commands/configure options specific to the aarch64 builders?  They
>>>>>> could download a kernel tarball, install the headers somewhere (that
>>>>>> doesn't take long, no need to build the kernel) and point to them.
>>>>>> Sergio, would that be possible/a good idea?
>>>>> 
>>>>> I'm not sure.  For starters, the Aarch64 builders have kinda been
>>>>> forgotten since Yao stopped contributing regularly to GDB (he is the
>>>>> maintainer of the machines behind the builders).  So the very first
>>>>> thing we'd need to do is to put the builders in a good shape again
>>>>> (they're currently with 273 pending builds in the queue!).  This is
>>>>> something that's been on my TODO list for a while now, and I was going
>>>>> to ask Alan (or anyone from ARM) if they're not interested in taking
>>>>> over the maintenance of these machines.
>>>> 
>>>> Looking after the aarch64 boxes does sound like a job for an Arm person.
>>>> I guess it’ll be fairly important to get those queues cleared _before_
>>>> 8.2 is released. I can certainly take a look at the pending builds in
>>>> the next few weeks.
>>> Yeah, what I do in these cases is cancel all of the pending builds,
>>> i.e., start fresh.
>>> I'm glad you're interested in taking care of the machines.  TBH, I was
>>> even considering disabling them for now, since at least one machine has
>>> been offline for a long time, and the other has this giant queue.  I'm
>>> not sure if you're going to use the same machines as Yao was using
>>> (IIRC, he was using machines from the GCC Compile Farm).
>>>>> Then, I think the best approach for the SVE builds would be to manually
>>>>> download a Linux kernel, put the sources somewhere, and then I could
>>>>> configure a specific builder to build GDB with the SVE headers.
>>>>> 
>>>> 
>>>> Given the current queues, I suspect we’d not get this done before the 8.2
>>>> branch.
>>> I wouldn't count on that.
>>>> I’m thinking configure check of Pedro’s sounds the first step, then once
>>>> the aarch64 build queues have been cleared, get some sve builds added.
>>>> 
>>>> The SVE headers are in Ubuntu 18.04 - so “all” that’s needed is to do a
>>>> dist upgrade on them (I suspect there are probably lots of reasons why
>>>> that can’t be done!)
>>> Yeah, I honestly don't know :-/.  If you're planning to continue using
>>> the GCC Farm machine, then I think the best option would be to contact
>>> the admins and ask them.
>> 
>> So I arranged for those machines in the compile farm and I believe I
>> have super user privileges on them. Updating to 18.04 may be an
>> option but something I don't want to do remotely and in a rush.
>> 
>> Alan, please reach out to me if you need any help on the compile farm machines, I can help out.
>> 
>
> Oh that’s good news (I thought they had come from Linaro).
>
> If the headers get added to the gdb build then there’s no urgency to
> update the boxes to ubuntu 18.04. However, 18.04 is the latest LTS, so
> it’s possibly worth doing eventually (but not right now).

Hi guys,

Just a ping to see if you have progressed on this.  I've left the AArch*
builders there, and now they're *really* behind (more than 1000 builds
in the queue), and at least one of the buildslaves is offline.

I will temporarily remove the builders now, but it would be really nice
to keep having AArch* builders in our BuildBot.

Thanks a lot,
  
Alan Hayward Sept. 24, 2018, 2:15 p.m. UTC | #17
> On 20 Sep 2018, at 22:15, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
> 
> On Friday, June 15 2018, Alan Hayward wrote:
> 
>>> On 15 Jun 2018, at 10:45, Ramana Radhakrishnan <ramana.radhakrishnan@foss.arm.com> wrote:
>>> 
>>> On 12/06/2018 21:29, Sergio Durigan Junior wrote:
>>>> On Tuesday, June 12 2018, Alan Hayward wrote:
>>>>>> On 12 Jun 2018, at 17:34, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>>>>>> 
>>>>>> On Tuesday, June 12 2018, Simon Marchi wrote:
>>>>>> 
>>>>>>> On 2018-06-12 10:37, Alan Hayward wrote:
>>>>>>>> Sorry, I did miss this one (I think I sent my reply to the previous
>>>>>>>> one more or less the same time you sent this).
>>>>>>>> 
>>>>>>>> If I commit this, (I think) this is going to cause buildbot to break
>>>>>>>> for the aarch64 builds.
>>>>>>>> (Out of interest - I’ve heard people say they tested on buildbot. Are
>>>>>>>> there some instructions for doing that? I can try it out.)
>>>>>>> 
>>>>>>> Hmm you're right.  Though maybe we can have additional
>>>>>>> commands/configure options specific to the aarch64 builders?  They
>>>>>>> could download a kernel tarball, install the headers somewhere (that
>>>>>>> doesn't take long, no need to build the kernel) and point to them.
>>>>>>> Sergio, would that be possible/a good idea?
>>>>>> 
>>>>>> I'm not sure.  For starters, the Aarch64 builders have kinda been
>>>>>> forgotten since Yao stopped contributing regularly to GDB (he is the
>>>>>> maintainer of the machines behind the builders).  So the very first
>>>>>> thing we'd need to do is to put the builders in a good shape again
>>>>>> (they're currently with 273 pending builds in the queue!).  This is
>>>>>> something that's been on my TODO list for a while now, and I was going
>>>>>> to ask Alan (or anyone from ARM) if they're not interested in taking
>>>>>> over the maintenance of these machines.
>>>>> 
>>>>> Looking after the aarch64 boxes does sound like a job for an Arm person.
>>>>> I guess it’ll be fairly important to get those queues cleared _before_
>>>>> 8.2 is released. I can certainly take a look at the pending builds in
>>>>> the next few weeks.
>>>> 
>>>> Yeah, what I do in these cases is cancel all of the pending builds,
>>>> i.e., start fresh.
>>>> 
>>>> I'm glad you're interested in taking care of the machines.  TBH, I was
>>>> even considering disabling them for now, since at least one machine has
>>>> been offline for a long time, and the other has this giant queue.  I'm
>>>> not sure if you're going to use the same machines as Yao was using
>>>> (IIRC, he was using machines from the GCC Compile Farm).
>>>> 
>>>>>> Then, I think the best approach for the SVE builds would be to manually
>>>>>> download a Linux kernel, put the sources somewhere, and then I could
>>>>>> configure a specific builder to build GDB with the SVE headers.
>>>>> 
>>>>> Given the current queues, I suspect we’d not get this done before the 8.2
>>>>> branch.
>>>> 
>>>> I wouldn't count on that.
>>>> 
>>>>> I’m thinking configure check of Pedro’s sounds the first step, then once
>>>>> the aarch64 build queues have been cleared, get some sve builds added.
>>>>> 
>>>>> The SVE headers are in Ubuntu 18.04 - so “all” that’s needed is to do a
>>>>> dist upgrade on them (I suspect there are probably lots of reasons why
>>>>> that can’t be done!)
>>>> 
>>>> Yeah, I honestly don't know :-/.  If you're planning to continue using
>>>> the GCC Farm machine, then I think the best option would be to contact
>>>> the admins and ask them.
>>> 
>>> So I arranged for those machines in the compile farm and I believe I
>>> have super user privileges on them. Updating to 18.04 may be an
>>> option but something I don't want to do remotely and in a rush.
>>> 
>>> Alan, please reach out to me if you need any help on the compile farm machines, I can help out.
>> 
>> Oh that’s good news (I thought they had come from Linaro).
>> 
>> If the headers get added to the gdb build then there’s no urgency to
>> update the boxes to ubuntu 18.04. However, 18.04 is the latest LTS, so
>> it’s possibly worth doing eventually (but not right now).
> 
> Hi guys,
> 
> Just a ping to see if you have progressed on this.  I've left the AArch*
> builders there, and now they're *really* behind (more than 1000 builds
> in the queue), and at least one of the buildslaves is offline.
> 
> I will temporarily remove the builders now, but it would be really nice
> to keep having AArch* builders in our BuildBot.
> 
> Thanks a lot,

Ramana has got some aarch64 machines up on packet.net for use in buildbot instead
of the existing machines. I think a few things just need finalising before they can
be handed over.

Once that’s done I can get buildbot set up on them. Are there some simple instructions
for getting this going?


Alan.
  
Sergio Durigan Junior Sept. 24, 2018, 2:39 p.m. UTC | #18
On Monday, September 24 2018, Alan Hayward wrote:

>> On 20 Sep 2018, at 22:15, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>> Hi guys,
>> 
>> Just a ping to see if you have progressed on this.  I've left the AArch*
>> builders there, and now they're *really* behind (more than 1000 builds
>> in the queue), and at least one of the buildslaves is offline.
>> 
>> I will temporarily remove the builders now, but it would be really nice
>> to keep having AArch* builders in our BuildBot.
>> 
>> Thanks a lot,
>
>
> Ramana has got some aarch64 machines up on packet.net for use in buildbot instead
> of the existing machines. I think a few things just need finalising before they can
> be handed over.

That's great news.  Thanks for doing that.

> Once that’s done I can get buildbot set up on them. Are there some simple instructions
> for getting this going?

There are instructions on our wiki:

  https://sourceware.org/gdb/wiki/BuildBot#How_to_add_your_buildslave

But please do let me know if you need any help.  I can take care of the
configuration on my side, so you don't have to submit a patch for the
master.cfg file (although you can if you want).

Thanks,
  
Alan Hayward Oct. 11, 2018, 9:23 a.m. UTC | #19
> On 24 Sep 2018, at 15:39, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
> 
> On Monday, September 24 2018, Alan Hayward wrote:
> 
>>> On 20 Sep 2018, at 22:15, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>>> Hi guys,
>>> 
>>> Just a ping to see if you have progressed on this.  I've left the AArch*
>>> builders there, and now they're *really* behind (more than 1000 builds
>>> in the queue), and at least one of the buildslaves is offline.
>>> 
>>> I will temporarily remove the builders now, but it would be really nice
>>> to keep having AArch* builders in our BuildBot.
>>> 
>>> Thanks a lot,
>> 
>> Ramana has got some aarch64 machines up on packet.net for use in buildbot instead
>> of the existing machines. I think a few things just need finalising before they can
>> be handed over.
> 
> That's great news.  Thanks for doing that.
> 
>> Once that’s done I can get buildbot set up on them. Are there some simple instructions
>> for getting this going?
> 
> There are instructions on our wiki:
> 
>   https://sourceware.org/gdb/wiki/BuildBot#How_to_add_your_buildslave
> 
> But please do let me know if you need any help.  I can take care of the
> configuration on my side, so you don't have to submit a patch for the
> master.cfg file (although you can if you want).

The machine is now ready for buildbot!

Aarch64, Ubuntu 16.04.5 LTS, 96 cores

I’ve set up buildbot-slave-0.8.14 in a virtualenv.
(Oddly, I had to install twisted==16.4.1, as anything newer than that caused a hang).
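
For anyone reproducing the setup, the install amounted to something like the
following (the virtualenv path here is made up):

$ virtualenv ~/buildslave-venv
$ . ~/buildslave-venv/bin/activate
$ pip install twisted==16.4.1 buildbot-slave==0.8.14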

I’ve manually checked you can build gdb and run the testsuite.

My recent experiments with the testsuite on Aarch64 show all the threaded tests
are quite racy on a fully loaded ubuntu, whereas on redhat/suse they are fairly
stable. I’m still looking into why this is. But in the short term maybe we should
restrict the number of jobs to 32 (or maybe even fewer?)

Sergio, could you please add the relevant server config.


Thanks,
Alan.
  
Sergio Durigan Junior Oct. 12, 2018, 7:06 p.m. UTC | #20
On Thursday, October 11 2018, Alan Hayward wrote:

>> On 24 Sep 2018, at 15:39, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>> 
>> On Monday, September 24 2018, Alan Hayward wrote:
>> 
>>>> On 20 Sep 2018, at 22:15, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>>>> Hi guys,
>>>> 
>>>> Just a ping to see if you have progressed on this.  I've left the AArch*
>>>> builders there, and now they're *really* behind (more than 1000 builds
>>>> in the queue), and at least one of the buildslaves is offline.
>>>> 
>>>> I will temporarily remove the builders now, but it would be really nice
>>>> to keep having AArch* builders in our BuildBot.
>>>> 
>>>> Thanks a lot,
>>> 
>>> 
>>> Ramana has got some aarch64 machines up on packet.net for use in buildbot instead
>>> of the existing machines. I think a few things just need finalising before they can
>>> be handed over.
>> 
>> That's great news.  Thanks for doing that.
>> 
>>> Once that’s done I can get buildbot set up on them. Are there some simple instructions
>>> for getting this going?
>> 
>> There are instructions on our wiki:
>> 
>>  https://sourceware.org/gdb/wiki/BuildBot#How_to_add_your_buildslave
>> 
>> But please do let me know if you need any help.  I can take care of the
>> configuration on my side, so you don't have to submit a patch for the
>> master.cfg file (although you can if you want).
>> 
>
> The machine is now ready for buildbot!
>
> Aarch64, Ubuntu 16.04.5 LTS, 96 cores

That's great news, Alan!

> I’ve setup buildbot-slave-0.8.14 in a virtualenv/
> (Oddly, I had to install twisted==16.4.1, as anything newer than that caused a hang).
>
> I’ve manually checked you can build gdb and run the testsuite.
>
> My recent experiments with the testsuite on Aarch64 show all the threaded tests
> are quite racy on a fully loaded ubuntu, whereas on redhat/suse they are fairly
> stable. I’m still looking into why this is. But, in the short-term maybe we should
> restrict the number of jobs to 32 (or maybe even fewer?)

Sure, no problem.  What do you think of 16?

> Sergio, could you please add the relevant server config.

It's a good idea to follow the instructions here:

  <https://sourceware.org/gdb/wiki/BuildBot#Buildslave_configuration>

And make sure that all of the necessary/recommended deps are installed
in the machine.  The more deps, the more tests will be performed.

You will need a password to connect to the BuildBot master.  I will send
it to you in private.

I also recommend creating at least 3 builders associated with each
slave: native, native-gdbserver, and native-extended-gdbserver.  If
you're OK with it, I'll do that.

Last question: are there any special flags needed to build GDB on the
machine?

Thanks!
  
Alan Hayward Oct. 15, 2018, 10:16 a.m. UTC | #21
> On 12 Oct 2018, at 20:06, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
> 
> On Thursday, October 11 2018, Alan Hayward wrote:
> 
>>> On 24 Sep 2018, at 15:39, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>>> 
>>> On Monday, September 24 2018, Alan Hayward wrote:
>>> 
>>>>> On 20 Sep 2018, at 22:15, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>>>>> Hi guys,
>>>>> 
>>>>> Just a ping to see if you have progressed on this.  I've left the AArch*
>>>>> builders there, and now they're *really* behind (more than 1000 builds
>>>>> in the queue), and at least one of the buildslaves is offline.
>>>>> 
>>>>> I will temporarily remove the builders now, but it would be really nice
>>>>> to keep having AArch* builders in our BuildBot.
>>>>> 
>>>>> Thanks a lot,
>>>> 
>>>> Ramana has got some aarch64 machines up on packet.net for use in buildbot instead
>>>> of the existing machines. I think a few things just need finalising before they can
>>>> be handed over.
>>> 
>>> That's great news.  Thanks for doing that.
>>> 
>>>> Once that’s done I can get buildbot set up on them. Are there some simple instructions
>>>> for getting this going?
>>> 
>>> There are instructions on our wiki:
>>> 
>>> https://sourceware.org/gdb/wiki/BuildBot#How_to_add_your_buildslave
>>> 
>>> But please do let me know if you need any help.  I can take care of the
>>> configuration on my side, so you don't have to submit a patch for the
>>> master.cfg file (although you can if you want).
>> 
>> The machine is now ready for buildbot!
>> 
>> Aarch64, Ubuntu 16.04.5 LTS, 96 cores
> 
> That's great news, Alan!
> 
>> I’ve setup buildbot-slave-0.8.14 in a virtualenv/
>> (Oddly, I had to install twisted==16.4.1, as anything newer than that caused a hang).
>> 
>> I’ve manually checked you can build gdb and run the testsuite.
>> 
>> My recent experiments with the testsuite on Aarch64 show all the threaded tests
>> are quite racy on a fully loaded ubuntu, whereas on redhat/suse they are fairly
>> stable. I’m still looking into why this is. But, in the short-term maybe we should
>> restrict the number of jobs to 32 (or maybe even fewer?)
> 
> Sure, no problem.  What do you think of 16?

I’ve been running some more tests over the weekend. At 32 I still get quite a bit of racy
behaviour, and at 16 it looks roughly the same as an x86 run.

So yes, 16 sounds good.

>> Sergio, could you please add the relevant server config.
> 
> It's a good idea to follow the instructions here:
> 
>   <https://sourceware.org/gdb/wiki/BuildBot#Buildslave_configuration>
> 
> And make sure that all of the necessary/recommended deps are installed
> in the machine.  The more deps, the more tests will be performed.

All looks good.

I’m not sure who gets access to the wiki (looks like I can’t log in).
Errors I noticed:
* There is a mention of both 0.8.14 and 0.8.12 for buildslave
* The Debian-specific instructions should probably cover Ubuntu too.

> You will need a password to connect to the BuildBot master.  I will send
> it to you in private.

Slave created.

> I also recommend creating at least 3 builders associated with each
> slave: native, native-gdbserver, and native-extended-gdbserver.  If
> you're OK with it, I'll do that.

That’s fine.


> Last question: is there any special flags needed to build GDB on the
> machine?

Nope. My usual build line is:
$ configure --enable-sim --disable-gprof --disable-gold --disable-gas
$ make


> Thanks!
> 
> -- 
> Sergio
> GPG key ID: 237A 54B1 0287 28BF 00EF  31F4 D0EB 7628 65FC 5E36
> Please send encrypted e-mail if possible
> http://sergiodj.net/
  
Sergio Durigan Junior Oct. 15, 2018, 12:42 p.m. UTC | #22
On Monday, October 15 2018, Alan Hayward wrote:

>> On 12 Oct 2018, at 20:06, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>> 
>> On Thursday, October 11 2018, Alan Hayward wrote:
>> 
>>> I’ve setup buildbot-slave-0.8.14 in a virtualenv/
>>> (Oddly, I had to install twisted==16.4.1, as anything newer than that caused a hang).
>>> 
>>> I’ve manually checked you can build gdb and run the testsuite.
>>> 
>>> My recent experiments with the testsuite on Aarch64 show all the threaded tests
>>> are quite racy on a fully loaded ubuntu, whereas on redhat/suse they are fairly
>>> stable. I’m still looking into why this is. But, in the short-term maybe we should
>>> restrict the number of jobs to 32 (or maybe even fewer?)
>> 
>> Sure, no problem.  What do you think of 16?
>
> I’ve been running some more tests over the weekend. At 32 I still get quite a bit of racy
> behaviour, and at 16 it looks roughly the same as an x86 run.
>
> So yes, 16 sounds good.

Cool, I configured the buildslave to use 16 cores.

>> 
>>> Sergio, could you please add the relevant server config.
>> 
>> It's a good idea to follow the instructions here:
>> 
>>  <https://sourceware.org/gdb/wiki/BuildBot#Buildslave_configuration>
>> 
>> And make sure that all of the necessary/recommended deps are installed
>> in the machine.  The more deps, the more tests will be performed.
>
> All looks good.
>
> I’m not sure who gets access to the wiki (looks like I can’t log in).
> Errors I noticed:
> * There is a mention of both 0.8.14 and 0.8.12 for buildslave

Fixed.

> * The Debian specific instructions should probably also be for Ubuntu too.

Fixed.

Thanks for the heads up.

>> 
>> You will need a password to connect to the BuildBot master.  I will send
>> it to you in private.
>
> Slave created.

Hm, how did you create the slave?  I don't see it connected to the
BuildBot master:

  https://gdb-build.sergiodj.net/buildslaves/ubuntu16-aarch64

>> 
>> I also recommend creating at least 3 builders associated with each
>> slave: native, native-gdbserver, and native-extended-gdbserver.  If
>> you're OK with it, I'll do that.
>> 
>
> That’s fine.

Done.

>> Last question: is there any special flags needed to build GDB on the
>> machine?
>> 
>
> Nope. My usual build line is:
> $ configure --enable-sim --disable-gprof --disable-gold --disable-gas
> $ make

That's great, it should work without modifications then.

Thanks,
  
Alan Hayward Oct. 15, 2018, 2:02 p.m. UTC | #23
>>> You will need a password to connect to the BuildBot master.  I will send
>>> it to you in private.
>> 
>> Slave created.
> 
> Hm, how did you create the slave?  I don't see it connected to the
> BuildBot master:
> 
>   https://gdb-build.sergiodj.net/buildslaves/ubuntu16-aarch64

It’s connected properly now.

I've clicked retry on the last aarch64 build (from March 2018), and that has passed.

There is also a new HEAD commit which is now running.

Anything else I need to do?
And is there anything else I should be monitoring (or will all failures just
magically go to the mailing list)?


Alan.
  
Sergio Durigan Junior Oct. 15, 2018, 3:32 p.m. UTC | #24
On Monday, October 15 2018, Alan Hayward wrote:

>> 
>>>> 
>>>> You will need a password to connect to the BuildBot master.  I will send
>>>> it to you in private.
>>> 
>>> Slave created.
>> 
>> Hm, how did you create the slave?  I don't see it connected to the
>> BuildBot master:
>> 
>>  https://gdb-build.sergiodj.net/buildslaves/ubuntu16-aarch64
>> 
>
> It’s connected properly now.

Cool.

> I've clicked retry on the last aarch64 build (from March 2018), and that has passed.
>
> There is also a new HEAD commit which is now running.
>
> Anything else I need to do?
> And is there anything else I should be monitoring (or will all fails just magically go
> to the mailing list) ?

For now, I have disabled the email notifications.  IME there's always
something that we need to tweak in the first days to make sure that the
builders are running fine.  So if you could just keep an eye on the
builds and make sure that everything is OK, I'd appreciate.

After a few days have passed, I will enable the email notifications.
Regular test failures will be sent to the gdb-testers@ ml, and breakages
will be sent to gdb-patches@.

That should be it :-).

Thanks,
  
Sergio Durigan Junior Oct. 17, 2018, 6:45 p.m. UTC | #25
On Monday, October 15 2018, I wrote:

> On Monday, October 15 2018, Alan Hayward wrote:
>
>>> 
>>>>> 
>>>>> You will need a password to connect to the BuildBot master.  I will send
>>>>> it to you in private.
>>>> 
>>>> Slave created.
>>> 
>>> Hm, how did you create the slave?  I don't see it connected to the
>>> BuildBot master:
>>> 
>>>  https://gdb-build.sergiodj.net/buildslaves/ubuntu16-aarch64
>>> 
>>
>> It’s connected properly now.
>
> Cool.
>
>> I've clicked retry on the last aarch64 build (from March 2018), and that has passed.
>>
>> There is also a new HEAD commit which is now running.
>>
>> Anything else I need to do?
>> And is there anything else I should be monitoring (or will all fails just magically go
>> to the mailing list)?
>
> For now, I have disabled the email notifications.  IME there's always
> something that we need to tweak in the first days to make sure that the
> builders are running fine.  So if you could just keep an eye on the
> builds and make sure that everything is OK, I'd appreciate.
>
> After a few days have passed, I will enable the email notifications.
> Regular test failures will be sent to the gdb-testers@ ml, and breakages
> will be sent to gdb-patches@.

For the record, I've now enabled the e-mail notifications for the
Aarch64 builders.  I've also added them to the list of Try builders, so
it's possible to submit try jobs to them.

Thanks,
  
Alan Hayward Oct. 24, 2018, 9:56 a.m. UTC | #26
> On 17 Oct 2018, at 19:45, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
> 
>> For now, I have disabled the email notifications.  IME there's always
>> something that we need to tweak in the first days to make sure that the
>> builders are running fine.  So if you could just keep an eye on the
>> builds and make sure that everything is OK, I'd appreciate.
>> 
>> After a few days have passed, I will enable the email notifications.
>> Regular test failures will be sent to the gdb-testers@ ml, and breakages
>> will be sent to gdb-patches@.
> 
> For the record, I've now enabled the e-mail notifications for the
> Aarch64 builders.  I've also added them to the list of Try builders, so
> it's possible to submit try jobs to them.
> 


Currently the AArch64 builders keep going back to failed regressions. This is
mostly due to *all* the gdb.threads and gdb.server tests being racy on AArch64
Ubuntu. Keeping it down to 16 threads has reduced the frequency, but not
removed it.
I’m currently trying to fix up as many of the AArch64 test failures across the
whole test suite as I can. I suspect fixing the threading issue is going to
take a while longer.

In the meantime, is it possible to update just the AArch64 buildbot scripts so
they don’t take the thread/server tests into account for regressions? Either
remove the tests before running, or grep them out of the results. Not sure if
that’s possible with the way it’s set up (and not sure where to look). This
would then give us confidence in the rest of the tests. They can be
reinstated when stable again.
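
To sketch the kind of filtering I mean (untested, and the input file name and
hook point in the buildbot scripts are guesses on my part; a grep -v over the
summary would do equally well):

    /* Hypothetical sketch: filter a DejaGnu gdb.sum stream, dropping all
       results from the racy gdb.threads/ and gdb.server/ directories so
       the regression analyser never sees them.  Usage would be something
       like: ./filter-racy < gdb.sum > gdb.sum.filtered  */
    #include <stdio.h>
    #include <string.h>

    int
    main (void)
    {
      char line[4096];

      while (fgets (line, sizeof (line), stdin) != NULL)
        {
          /* Keep every line that does not mention the two racy test dirs.  */
          if (strstr (line, "gdb.threads/") == NULL
              && strstr (line, "gdb.server/") == NULL)
            fputs (line, stdout);
        }

      return 0;
    }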

Alan.
  
Sergio Durigan Junior Oct. 25, 2018, 4:26 p.m. UTC | #27
On Wednesday, October 24 2018, Alan Hayward wrote:

>> On 17 Oct 2018, at 19:45, Sergio Durigan Junior <sergiodj@redhat.com> wrote:
>> 
>>> For now, I have disabled the email notifications.  IME there's always
>>> something that we need to tweak in the first days to make sure that the
>>> builders are running fine.  So if you could just keep an eye on the
>>> builds and make sure that everything is OK, I'd appreciate.
>>> 
>>> After a few days have passed, I will enable the email notifications.
>>> Regular test failures will be sent to the gdb-testers@ ml, and breakages
>>> will be sent to gdb-patches@.
>> 
>> For the record, I've now enabled the e-mail notifications for the
>> Aarch64 builders.  I've also added them to the list of Try builders, so
>> it's possible to submit try jobs to them.
>> 
>
> Currently the AArch64 builders keep going back to failed regressions. This is
> mostly due to *all* the gdb.threads and gdb.server tests being racy on AArch64
> Ubuntu. Keeping it down to 16 threads has reduced the frequency, but not
> removed it.
> I’m currently trying to fix up as many of the AArch64 test failures across the
> whole test suite as I can. I suspect fixing the threading issue is going to
> take a while longer.

TBH, all builders suffer from this problem.  They all have racy tests,
and even though I tried to implement a system to detect such tests and
exclude them from the reports that are sent to gdb-testers, they still
sneak into the reports.  That's the main (and sole?) reason why
gdb-testers is currently impossible to follow, and people don't really
read it.

> In the meantime, is it possible to update just the AArch64 buildbot scripts so
> they don’t take the thread/server tests into account for regressions? Either
> remove the tests before running, or grep them out of the results. Not sure if
> that’s possible with the way it’s set up (and not sure where to look). This
> would then give us confidence in the rest of the tests. They can be
> reinstated when stable again.

I think it can be done, but it's not so trivial, and I'm busy with other
stuff right now :-/.  Sorry about that.

I'll try to take a look at this problem during the weekend.  BTW, the
configuration files for the GDB BuildBot live at:

  https://git.sergiodj.net/gdb-buildbot.git/

Thanks,
  

Patch

diff --git a/gdb/nat/aarch64-linux-ptrace.h b/gdb/nat/aarch64-linux-ptrace.h
new file mode 100644
index 0000000000000000000000000000000000000000..1d0bf1b314038457632eef22e1c2d010d1604c93
--- /dev/null
+++ b/gdb/nat/aarch64-linux-ptrace.h
@@ -0,0 +1,150 @@ 
+/* This file contains Aarch64 Linux ptrace defines.  It is required when
+   compiling with older kernel headers.  Eventually, once the kernel headers
+   on all supported build hosts are new enough, this file should be removed.
+
+   Contents are copied directly from
+   linux/arch/arm64/include/uapi/asm/ptrace.h in Linux 4.17.
+
+   See:
+   https://github.com/torvalds/linux/blob/v4.17/arch/arm64/include/uapi/asm/ptrace.h
+*/
+
+#ifndef AARCH64_LINUX_PTRACE_H
+#define AARCH64_LINUX_PTRACE_H
+
+/* SVE/FP/SIMD state (NT_ARM_SVE) */
+
+struct user_sve_header {
+	__u32 size; /* total meaningful regset content in bytes */
+	__u32 max_size; /* maxmium possible size for this thread */
+	__u16 vl; /* current vector length */
+	__u16 max_vl; /* maximum possible vector length */
+	__u16 flags;
+	__u16 __reserved;
+};
+
+/* Definitions for user_sve_header.flags: */
+#define SVE_PT_REGS_MASK		(1 << 0)
+
+#define SVE_PT_REGS_FPSIMD		0
+#define SVE_PT_REGS_SVE			SVE_PT_REGS_MASK
+
+/*
+ * Common SVE_PT_* flags:
+ * These must be kept in sync with prctl interface in <linux/ptrace.h>
+ */
+#define SVE_PT_VL_INHERIT		(PR_SVE_VL_INHERIT >> 16)
+#define SVE_PT_VL_ONEXEC		(PR_SVE_SET_VL_ONEXEC >> 16)
+
+
+/*
+ * The remainder of the SVE state follows struct user_sve_header.  The
+ * total size of the SVE state (including header) depends on the
+ * metadata in the header:  SVE_PT_SIZE(vq, flags) gives the total size
+ * of the state in bytes, including the header.
+ *
+ * Refer to <asm/sigcontext.h> for details of how to pass the correct
+ * "vq" argument to these macros.
+ */
+
+/* Offset from the start of struct user_sve_header to the register data */
+#define SVE_PT_REGS_OFFSET					\
+	((sizeof(struct sve_context) + (SVE_VQ_BYTES - 1))	\
+		/ SVE_VQ_BYTES * SVE_VQ_BYTES)
+
+/*
+ * The register data content and layout depends on the value of the
+ * flags field.
+ */
+
+/*
+ * (flags & SVE_PT_REGS_MASK) == SVE_PT_REGS_FPSIMD case:
+ *
+ * The payload starts at offset SVE_PT_FPSIMD_OFFSET, and is of type
+ * struct user_fpsimd_state.  Additional data might be appended in the
+ * future: use SVE_PT_FPSIMD_SIZE(vq, flags) to compute the total size.
+ * SVE_PT_FPSIMD_SIZE(vq, flags) will never be less than
+ * sizeof(struct user_fpsimd_state).
+ */
+
+#define SVE_PT_FPSIMD_OFFSET		SVE_PT_REGS_OFFSET
+
+#define SVE_PT_FPSIMD_SIZE(vq, flags)	(sizeof(struct user_fpsimd_state))
+
+/*
+ * (flags & SVE_PT_REGS_MASK) == SVE_PT_REGS_SVE case:
+ *
+ * The payload starts at offset SVE_PT_SVE_OFFSET, and is of size
+ * SVE_PT_SVE_SIZE(vq, flags).
+ *
+ * Additional macros describe the contents and layout of the payload.
+ * For each, SVE_PT_SVE_x_OFFSET(args) is the start offset relative to
+ * the start of struct user_sve_header, and SVE_PT_SVE_x_SIZE(args) is
+ * the size in bytes:
+ *
+ *	x	type				description
+ *	-	----				-----------
+ *	ZREGS		\
+ *	ZREG		|
+ *	PREGS		| refer to <asm/sigcontext.h>
+ *	PREG		|
+ *	FFR		/
+ *
+ *	FPSR	uint32_t			FPSR
+ *	FPCR	uint32_t			FPCR
+ *
+ * Additional data might be appended in the future.
+ */
+
+#define SVE_PT_SVE_ZREG_SIZE(vq)	SVE_SIG_ZREG_SIZE(vq)
+#define SVE_PT_SVE_PREG_SIZE(vq)	SVE_SIG_PREG_SIZE(vq)
+#define SVE_PT_SVE_FFR_SIZE(vq)		SVE_SIG_FFR_SIZE(vq)
+#define SVE_PT_SVE_FPSR_SIZE		sizeof(__u32)
+#define SVE_PT_SVE_FPCR_SIZE		sizeof(__u32)
+
+#define __SVE_SIG_TO_PT(offset) \
+	((offset) - SVE_SIG_REGS_OFFSET + SVE_PT_REGS_OFFSET)
+
+#define SVE_PT_SVE_OFFSET		SVE_PT_REGS_OFFSET
+
+#define SVE_PT_SVE_ZREGS_OFFSET \
+	__SVE_SIG_TO_PT(SVE_SIG_ZREGS_OFFSET)
+#define SVE_PT_SVE_ZREG_OFFSET(vq, n) \
+	__SVE_SIG_TO_PT(SVE_SIG_ZREG_OFFSET(vq, n))
+#define SVE_PT_SVE_ZREGS_SIZE(vq) \
+	(SVE_PT_SVE_ZREG_OFFSET(vq, SVE_NUM_ZREGS) - SVE_PT_SVE_ZREGS_OFFSET)
+
+#define SVE_PT_SVE_PREGS_OFFSET(vq) \
+	__SVE_SIG_TO_PT(SVE_SIG_PREGS_OFFSET(vq))
+#define SVE_PT_SVE_PREG_OFFSET(vq, n) \
+	__SVE_SIG_TO_PT(SVE_SIG_PREG_OFFSET(vq, n))
+#define SVE_PT_SVE_PREGS_SIZE(vq) \
+	(SVE_PT_SVE_PREG_OFFSET(vq, SVE_NUM_PREGS) - \
+		SVE_PT_SVE_PREGS_OFFSET(vq))
+
+#define SVE_PT_SVE_FFR_OFFSET(vq) \
+	__SVE_SIG_TO_PT(SVE_SIG_FFR_OFFSET(vq))
+
+#define SVE_PT_SVE_FPSR_OFFSET(vq)				\
+	((SVE_PT_SVE_FFR_OFFSET(vq) + SVE_PT_SVE_FFR_SIZE(vq) +	\
+			(SVE_VQ_BYTES - 1))			\
+		/ SVE_VQ_BYTES * SVE_VQ_BYTES)
+#define SVE_PT_SVE_FPCR_OFFSET(vq) \
+	(SVE_PT_SVE_FPSR_OFFSET(vq) + SVE_PT_SVE_FPSR_SIZE)
+
+/*
+ * Any future extension appended after FPCR must be aligned to the next
+ * 128-bit boundary.
+ */
+
+#define SVE_PT_SVE_SIZE(vq, flags)					\
+	((SVE_PT_SVE_FPCR_OFFSET(vq) + SVE_PT_SVE_FPCR_SIZE		\
+			- SVE_PT_SVE_OFFSET + (SVE_VQ_BYTES - 1))	\
+		/ SVE_VQ_BYTES * SVE_VQ_BYTES)
+
+#define SVE_PT_SIZE(vq, flags)						\
+	 (((flags) & SVE_PT_REGS_MASK) == SVE_PT_REGS_SVE ?		\
+		  SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, flags)	\
+		: SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags))
+
+#endif /* AARCH64_LINUX_PTRACE_H */
diff --git a/gdb/nat/aarch64-linux-sigcontext.h b/gdb/nat/aarch64-linux-sigcontext.h
new file mode 100644
index 0000000000000000000000000000000000000000..7253b85cc1f28859a68293c02d87052a48aa567f
--- /dev/null
+++ b/gdb/nat/aarch64-linux-sigcontext.h
@@ -0,0 +1,127 @@ 
+/* This file contains Aarch64 Linux sigcontext defines.  It is required when
+   compiling with older kernel headers.  Eventually, once the kernel headers
+   on all supported build hosts are new enough, this file should be removed.
+
+   Contents are copied directly from
+   linux/arch/arm64/include/uapi/asm/sigcontext.h in Linux 4.17.
+
+   See:
+   https://github.com/torvalds/linux/blob/v4.17/arch/arm64/include/uapi/asm/sigcontext.h
+*/
+
+#ifndef AARCH64_LINUX_SIGCONTEXT_H
+#define AARCH64_LINUX_SIGCONTEXT_H
+
+
+#define SVE_MAGIC	0x53564501
+
+struct sve_context {
+	struct _aarch64_ctx head;
+	__u16 vl;
+	__u16 __reserved[3];
+};
+
+/*
+ * The SVE architecture leaves space for future expansion of the
+ * vector length beyond its initial architectural limit of 2048 bits
+ * (16 quadwords).
+ *
+ * See linux/Documentation/arm64/sve.txt for a description of the VL/VQ
+ * terminology.
+ */
+#define SVE_VQ_BYTES		16	/* number of bytes per quadword */
+
+#define SVE_VQ_MIN		1
+#define SVE_VQ_MAX		512
+
+#define SVE_VL_MIN		(SVE_VQ_MIN * SVE_VQ_BYTES)
+#define SVE_VL_MAX		(SVE_VQ_MAX * SVE_VQ_BYTES)
+
+#define SVE_NUM_ZREGS		32
+#define SVE_NUM_PREGS		16
+
+#define sve_vl_valid(vl) \
+	((vl) % SVE_VQ_BYTES == 0 && (vl) >= SVE_VL_MIN && (vl) <= SVE_VL_MAX)
+
+/*
+ * If the SVE registers are currently live for the thread at signal delivery,
+ * sve_context.head.size >=
+ *	SVE_SIG_CONTEXT_SIZE(sve_vq_from_vl(sve_context.vl))
+ * and the register data may be accessed using the SVE_SIG_*() macros.
+ *
+ * If sve_context.head.size <
+ *	SVE_SIG_CONTEXT_SIZE(sve_vq_from_vl(sve_context.vl)),
+ * the SVE registers were not live for the thread and no register data
+ * is included: in this case, the SVE_SIG_*() macros should not be
+ * used except for this check.
+ *
+ * The same convention applies when returning from a signal: a caller
+ * will need to remove or resize the sve_context block if it wants to
+ * make the SVE registers live when they were previously non-live or
+ * vice-versa.  This may require the the caller to allocate fresh
+ * memory and/or move other context blocks in the signal frame.
+ *
+ * Changing the vector length during signal return is not permitted:
+ * sve_context.vl must equal the thread's current vector length when
+ * doing a sigreturn.
+ *
+ *
+ * Note: for all these macros, the "vq" argument denotes the SVE
+ * vector length in quadwords (i.e., units of 128 bits).
+ *
+ * The correct way to obtain vq is to use sve_vq_from_vl(vl).  The
+ * result is valid if and only if sve_vl_valid(vl) is true.  This is
+ * guaranteed for a struct sve_context written by the kernel.
+ *
+ *
+ * Additional macros describe the contents and layout of the payload.
+ * For each, SVE_SIG_x_OFFSET(args) is the start offset relative to
+ * the start of struct sve_context, and SVE_SIG_x_SIZE(args) is the
+ * size in bytes:
+ *
+ *	x	type				description
+ *	-	----				-----------
+ *	REGS					the entire SVE context
+ *
+ *	ZREGS	__uint128_t[SVE_NUM_ZREGS][vq]	all Z-registers
+ *	ZREG	__uint128_t[vq]			individual Z-register Zn
+ *
+ *	PREGS	uint16_t[SVE_NUM_PREGS][vq]	all P-registers
+ *	PREG	uint16_t[vq]			individual P-register Pn
+ *
+ *	FFR	uint16_t[vq]			first-fault status register
+ *
+ * Additional data might be appended in the future.
+ */
+
+#define SVE_SIG_ZREG_SIZE(vq)	((__u32)(vq) * SVE_VQ_BYTES)
+#define SVE_SIG_PREG_SIZE(vq)	((__u32)(vq) * (SVE_VQ_BYTES / 8))
+#define SVE_SIG_FFR_SIZE(vq)	SVE_SIG_PREG_SIZE(vq)
+
+#define SVE_SIG_REGS_OFFSET					\
+	((sizeof(struct sve_context) + (SVE_VQ_BYTES - 1))	\
+		/ SVE_VQ_BYTES * SVE_VQ_BYTES)
+
+#define SVE_SIG_ZREGS_OFFSET	SVE_SIG_REGS_OFFSET
+#define SVE_SIG_ZREG_OFFSET(vq, n) \
+	(SVE_SIG_ZREGS_OFFSET + SVE_SIG_ZREG_SIZE(vq) * (n))
+#define SVE_SIG_ZREGS_SIZE(vq) \
+	(SVE_SIG_ZREG_OFFSET(vq, SVE_NUM_ZREGS) - SVE_SIG_ZREGS_OFFSET)
+
+#define SVE_SIG_PREGS_OFFSET(vq) \
+	(SVE_SIG_ZREGS_OFFSET + SVE_SIG_ZREGS_SIZE(vq))
+#define SVE_SIG_PREG_OFFSET(vq, n) \
+	(SVE_SIG_PREGS_OFFSET(vq) + SVE_SIG_PREG_SIZE(vq) * (n))
+#define SVE_SIG_PREGS_SIZE(vq) \
+	(SVE_SIG_PREG_OFFSET(vq, SVE_NUM_PREGS) - SVE_SIG_PREGS_OFFSET(vq))
+
+#define SVE_SIG_FFR_OFFSET(vq) \
+	(SVE_SIG_PREGS_OFFSET(vq) + SVE_SIG_PREGS_SIZE(vq))
+
+#define SVE_SIG_REGS_SIZE(vq) \
+	(SVE_SIG_FFR_OFFSET(vq) + SVE_SIG_FFR_SIZE(vq) - SVE_SIG_REGS_OFFSET)
+
+#define SVE_SIG_CONTEXT_SIZE(vq) (SVE_SIG_REGS_OFFSET + SVE_SIG_REGS_SIZE(vq))
+
+
+#endif /* AARCH64_LINUX_SIGCONTEXT_H */
diff --git a/gdb/nat/aarch64-sve-linux-ptrace.h b/gdb/nat/aarch64-sve-linux-ptrace.h
index 61f841466c8279c14322894e4cedbe3b6e39db4b..2d6f5714c0fd77cd51142500ba04dd0a70717d2d 100644
--- a/gdb/nat/aarch64-sve-linux-ptrace.h
+++ b/gdb/nat/aarch64-sve-linux-ptrace.h
@@ -20,54 +20,22 @@ 
 #ifndef AARCH64_SVE_LINUX_PTRACE_H
 #define AARCH64_SVE_LINUX_PTRACE_H

-/* Where indicated, this file contains defines and macros lifted directly from
-   the Linux kernel headers, with no modification.
-   Refer to Linux kernel documentation for details.  */
-
 #include <asm/sigcontext.h>
 #include <sys/utsname.h>
 #include <sys/ptrace.h>
 #include <asm/ptrace.h>

-/* Read VQ for the given tid using ptrace.  If SVE is not supported then zero
-   is returned (on a system that supports SVE, then VQ cannot be zero).  */
-
-uint64_t aarch64_sve_get_vq (int tid);
-
-/* Structures and defines taken from sigcontext.h.  */
-
 #ifndef SVE_SIG_ZREGS_SIZE
-
-#define SVE_VQ_BYTES		16	/* number of bytes per quadword */
-
-#define SVE_VQ_MIN		1
-#define SVE_VQ_MAX		512
-
-#define SVE_VL_MIN		(SVE_VQ_MIN * SVE_VQ_BYTES)
-#define SVE_VL_MAX		(SVE_VQ_MAX * SVE_VQ_BYTES)
-
-#define SVE_NUM_ZREGS		32
-#define SVE_NUM_PREGS		16
-
-#define sve_vl_valid(vl) \
-	((vl) % SVE_VQ_BYTES == 0 && (vl) >= SVE_VL_MIN && (vl) <= SVE_VL_MAX)
-
-#endif /* SVE_SIG_ZREGS_SIZE.  */
-
-
-/* Structures and defines taken from ptrace.h.  */
+#include "aarch64-linux-sigcontext.h"
+#endif

 #ifndef SVE_PT_SVE_ZREG_SIZE
+#include "aarch64-linux-ptrace.h"
+#endif

-struct user_sve_header {
-	__u32 size; /* total meaningful regset content in bytes */
-	__u32 max_size; /* maxmium possible size for this thread */
-	__u16 vl; /* current vector length */
-	__u16 max_vl; /* maximum possible vector length */
-	__u16 flags;
-	__u16 __reserved;
-};
+/* Read VQ for the given tid using ptrace.  If SVE is not supported then zero
+   is returned (on a system that supports SVE, VQ cannot be zero).  */

-#endif /* SVE_PT_SVE_ZREG_SIZE.  */
+uint64_t aarch64_sve_get_vq (int tid);

 #endif /* aarch64-sve-linux-ptrace.h */
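
For context on how these definitions are consumed: aarch64_sve_get_vq,
declared above, amounts to fetching the NT_ARM_SVE regset header for the
thread and converting the vector length (VL, in bytes) to quadwords (VQ).
A minimal sketch of that idea, assuming new-enough kernel headers and a
tracee already stopped under ptrace; the function name here is illustrative,
not GDB's actual implementation:

    /* Sketch only: read the SVE header for thread TID via ptrace and derive
       VQ from it.  Returns 0 when the kernel or CPU has no SVE support,
       matching the contract documented for aarch64_sve_get_vq above.  */
    #include <stdint.h>
    #include <string.h>
    #include <elf.h>            /* NT_ARM_SVE on newer systems */
    #include <sys/ptrace.h>     /* ptrace, PTRACE_GETREGSET */
    #include <sys/uio.h>        /* struct iovec */
    #include <asm/ptrace.h>     /* struct user_sve_header on 4.15+ headers */
    #include <asm/sigcontext.h> /* SVE_VQ_BYTES */

    #ifndef NT_ARM_SVE
    #define NT_ARM_SVE 0x405    /* Fallback for older elf.h.  */
    #endif

    uint64_t
    sketch_sve_get_vq (int tid)
    {
      struct user_sve_header header;
      struct iovec iov;

      memset (&header, 0, sizeof (header));
      iov.iov_base = &header;
      iov.iov_len = sizeof (header);

      /* PTRACE_GETREGSET fails (EINVAL) when NT_ARM_SVE is unsupported.  */
      if (ptrace (PTRACE_GETREGSET, tid, NT_ARM_SVE, &iov) < 0)
        return 0;

      /* VL is in bytes, VQ in 128-bit quadwords; e.g. vl == 64 gives
         vq == 4, i.e. 512-bit vectors.  */
      return header.vl / SVE_VQ_BYTES;
    }

The compat headers added by this patch exist precisely so that the struct
and macros used here are available even when <asm/ptrace.h> and
<asm/sigcontext.h> predate SVE.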