fix for aarch64 sim tbnz bug

Message ID CABXYE2W++FWtcyNq4FAroJ2+VNJOHWs7-sdHMBm=6kOwSkD5Ug@mail.gmail.com

Commit Message

Jim Wilson Dec. 2, 2016, 4:49 a.m. UTC
  Debugged another gcc testsuite failure, and found that tbnz/tbz are
broken when the bit position to test is greater than 31.  There are
two problems.  The high bit of the bit position is shifted left by the
wrong amount.  And we need to use (uint64_t)1 to get a 64-bit shift
result.

Tested with a gcc C testsuite run.  This reduces failures from 2856 to 2710.

Jim
  

Comments

Mike Frysinger Dec. 2, 2016, 9:31 a.m. UTC | #1
On Thu, Dec 1, 2016 at 11:49 PM, Jim Wilson wrote:
> Debugged another gcc testsuite failure, and found that tbnz/tbz are
> broken when the bit position to test is greater than 31.  There are
> two problems.  The high bit of the bit position is shifted left by the
> wrong amount.  And we need to use (uint64_t)1 to get a 64-bit shift
> result.
>
> Tested with a gcc C testsuite run.  This reduces failures from 2856 to 2710.

can we please start getting tests added to sim too ?  using gcc
indirectly to validate the sim is a bit un

-mike
  
Nick Clifton Dec. 2, 2016, 12:03 p.m. UTC | #2
Hi Jim,

> Tested with a gcc C testsuite run.  This reduces failures from 2856 to 2710.

Patch approved - please apply.

Just one question:

+  if (aarch64_get_reg_u64 (cpu, rt, NO_SP) & (((uint64_t) 1) << pos))

Would:

+  if (aarch64_get_reg_u64 (cpu, rt, NO_SP) & (1UL << pos))

work as well, or would this break on 32-bit hosts ?

Cheers
  Nick
  
Andreas Schwab Dec. 2, 2016, 1:34 p.m. UTC | #3
On Dec 02 2016, Nick Clifton <nickc@redhat.com> wrote:

> Would:
>
> +  if (aarch64_get_reg_u64 (cpu, rt, NO_SP) & (1UL << pos))
>
> work as well, or would this break on 32-bit hosts ?

+  if ((aarch64_get_reg_u64 (cpu, rt, NO_SP) >> pos) & 1)

would work everywhere.

Andreas.
  
Jim Wilson Dec. 2, 2016, 3:59 p.m. UTC | #4
On Fri, Dec 2, 2016 at 4:03 AM, Nick Clifton <nickc@redhat.com> wrote:
> Just one question:
> +  if (aarch64_get_reg_u64 (cpu, rt, NO_SP) & (((uint64_t) 1) << pos))
> Would:
> +  if (aarch64_get_reg_u64 (cpu, rt, NO_SP) & (1UL << pos))
> work as well, or would this break on 32-bit hosts ?

I don't think that 1UL works, since long may be only 32 bits.  It
would have to be 1ULL, but that assumes long long is 64 bits.
aarch64_get_reg_u64 is defined to return uint64_t, so casting to that
seemed the best choice to me, to keep the types consistent.

Jim
  

Patch

	sim/aarch64
	* simulator.c (tbnz, tbz): Cast 1 to uint64_t before shifting.
	(dexTestBranchImmediate): Shift high bit of pos by 5 not 4.

diff --git a/sim/aarch64/simulator.c b/sim/aarch64/simulator.c
index 4fa5dc1..34fd17d 100644
--- a/sim/aarch64/simulator.c
+++ b/sim/aarch64/simulator.c
@@ -13353,7 +13353,7 @@  tbnz (sim_cpu *cpu, uint32_t  pos, int32_t offset)
   unsigned rt = INSTR (4, 0);
 
   TRACE_DECODE (cpu, "emulated at line %d", __LINE__);
-  if (aarch64_get_reg_u64 (cpu, rt, NO_SP) & (1 << pos))
+  if (aarch64_get_reg_u64 (cpu, rt, NO_SP) & (((uint64_t) 1) << pos))
     aarch64_set_next_PC_by_offset (cpu, offset);
 }
 
@@ -13364,7 +13364,7 @@  tbz (sim_cpu *cpu, uint32_t  pos, int32_t offset)
   unsigned rt = INSTR (4, 0);
 
   TRACE_DECODE (cpu, "emulated at line %d", __LINE__);
-  if (!(aarch64_get_reg_u64 (cpu, rt, NO_SP) & (1 << pos)))
+  if (!(aarch64_get_reg_u64 (cpu, rt, NO_SP) & (((uint64_t) 1) << pos)))
     aarch64_set_next_PC_by_offset (cpu, offset);
 }
 
@@ -13407,7 +13407,7 @@  dexTestBranchImmediate (sim_cpu *cpu)
      instr[18,5]  = simm14 : signed offset counted in words
      instr[4,0]   = uimm5  */
 
-  uint32_t pos = ((INSTR (31, 31) << 4) | INSTR (23, 19));
+  uint32_t pos = ((INSTR (31, 31) << 5) | INSTR (23, 19));
   int32_t offset = simm32 (aarch64_get_instr (cpu), 18, 5) << 2;
 
   NYI_assert (30, 25, 0x1b);