From patchwork Wed Dec 14 18:52:10 2022
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 61940
From: Noah Goldstein <goldstein.w.n@gmail.com>
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: [PATCH v4] x86: Prevent SIGSEGV in memcmp-sse2 when data is concurrently modified [BZ #29863]
Date: Wed, 14 Dec 2022 10:52:10 -0800
Message-Id: <20221214185210.2930992-1-goldstein.w.n@gmail.com>
In-Reply-To: <20221214001147.2814047-1-goldstein.w.n@gmail.com>
References: <20221214001147.2814047-1-goldstein.w.n@gmail.com>
X-Mailer: git-send-email 2.34.1

In the case of INCORRECT usage of `memcmp(a, b, N)` where `a` and `b`
are concurrently modified as `memcmp` runs, there can be a SIGSEGV in
`L(ret_nonzero_vec_end_0)` because the sequential logic assumes that
`(rdx - 32 + rax)` is a positive 32-bit integer.

To be clear, this change does not mean such usage of `memcmp` is
supported. Program behaviour is undefined (UB) in the presence of data
races, and `memcmp` is used incorrectly when the values of `a` and/or
`b` are modified concurrently (a data race). This UB may manifest
itself as a SIGSEGV. That said, if the idiomatic use cases, like those
in yottadb with opportunistic concurrency control (OCC), can be allowed
to execute without a SIGSEGV at no cost to regular use cases, it is
worth minimizing harm to those existing users.

The fix replaces a 32-bit `addl %edx, %eax` with the 64-bit variant
`addq %rdx, %rax`. The one extra byte of code size from using the
64-bit instruction does not increase overall code size, as the next
target is aligned and has multiple bytes of `nop` padding before it.
Likewise, all the logic between the add and the `ret` still fits in the
same fetch block, so the cost of this change is essentially zero.

The relevant sequential logic can be seen in the following pseudo-code:

```
	/*
	 * rsi = a
	 * rdi = b
	 * rdx = len - 32
	 */
	/* cmp a[0:15] and b[0:15].  Since length is known to be [17, 32]
	   in this case, this check is also assumed to cover a[0:(31 - len)]
	   and b[0:(31 - len)].  */
	movups	(%rsi), %xmm0
	movups	(%rdi), %xmm1
	PCMPEQ	%xmm0, %xmm1
	pmovmskb %xmm1, %eax
	subl	%ecx, %eax
	jnz	L(END_NEQ)

	/* cmp a[len-16:len-1] and b[len-16:len-1].  */
	movups	16(%rsi, %rdx), %xmm0
	movups	16(%rdi, %rdx), %xmm1
	PCMPEQ	%xmm0, %xmm1
	pmovmskb %xmm1, %eax
	subl	%ecx, %eax
	jnz	L(END_NEQ2)
	ret

L(END_NEQ2):
	/* Position of the first mismatch.  */
	bsfl	%eax, %eax

	/* The sequential version is able to assume this value is a
	   positive 32-bit value because the first check covered bytes in
	   the range a[0:(31 - len)] and b[0:(31 - len)], so `eax` must be
	   greater than `31 - len`, and hence the minimum value of
	   `edx + eax` is `(len - 32) + (32 - len) >= 0`.  In the
	   concurrent case, however, `a` or `b` may have changed, so a
	   mismatch in `eax` less than or equal to `31 - len` is possible
	   (the new low bound is `16 - len`).  This can produce a negative
	   32-bit signed integer, which when zero-extended to 64 bits is a
	   random large value that is out of bounds.  */
	addl	%edx, %eax

	/* Crash here because the 32-bit negative number in `eax` zero
	   extends to an out-of-bounds 64-bit offset.  */
	movzbl	16(%rdi, %rax), %ecx
	movzbl	16(%rsi, %rax), %eax
```

The fix is quite simple: make the `addl %edx, %eax` 64-bit (i.e.
`addq %rdx, %rax`). This prevents the 32-bit zero extension, and since
`eax` is still bounded below by `16 - len`, `rdx + rax` is bounded by
`(len - 32) + (16 - len) >= -16`. Since the memory access has a fixed
offset of `16`, the resulting address must be in bounds.
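To make the zero-extension hazard concrete, here is a minimal,
standalone C model of the arithmetic above. It is an editorial sketch,
not glibc code; the concrete values (`len = 20`, a mismatch reported at
byte 0) are illustrative assumptions only reachable under a data race:

```
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  /* Hypothetical racy scenario: len = 20, so rdx = len - 32 = -12.
     A concurrent writer changed byte 0, so `bsfl` reports eax = 0
     (sequentially impossible: eax would have to exceed 31 - len).  */
  int32_t edx = 20 - 32;
  int32_t eax = 0;

  /* `addl %edx, %eax` computes a negative 32-bit sum...  */
  int32_t sum = edx + eax;	/* -12 */

  /* ...which the 64-bit addressing mode sees zero-extended, i.e. as
     an enormous, out-of-bounds offset.  */
  uint64_t zext = (uint32_t) sum;
  printf ("zero-extended: %#llx\n", (unsigned long long) zext);

  /* `addq %rdx, %rax` keeps the full sign-preserving 64-bit value,
     so base + 16 + (-12) stays inside the buffer.  */
  int64_t sext = sum;
  printf ("sign-preserved: %lld\n", (long long) sext);
  return 0;
}
```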
---
 sysdeps/x86_64/multiarch/memcmp-sse2.S | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/sysdeps/x86_64/multiarch/memcmp-sse2.S b/sysdeps/x86_64/multiarch/memcmp-sse2.S
index afd450d020..51bc9344f0 100644
--- a/sysdeps/x86_64/multiarch/memcmp-sse2.S
+++ b/sysdeps/x86_64/multiarch/memcmp-sse2.S
@@ -308,7 +308,17 @@ L(ret_nonzero_vec_end_0):
 	setg	%dl
 	leal	-1(%rdx, %rdx), %eax
 # else
-	addl	%edx, %eax
+	/* Use `addq` instead of `addl` here so that even if `rax` + `rdx`
+	   is negative, the sum is still usable as a 64-bit offset
+	   (negative 32-bit numbers zero-extend to large and often
+	   out-of-bounds 64-bit offsets).  Note that `rax` + `rdx` >= 0 is
+	   an invariant when `memcmp` is used correctly, but if the input
+	   strings `rsi`/`rdi` are concurrently modified as the function
+	   runs (there is a data race) it is possible for `rax` + `rdx`
+	   to be negative.  Given that there is virtually no extra cost
+	   to using `addq` instead of `addl`, we may as well protect the
+	   data-race case.  */
+	addq	%rdx, %rax
 	movzbl	(VEC_SIZE * -1 + SIZE_OFFSET)(%rsi, %rax), %ecx
 	movzbl	(VEC_SIZE * -1 + SIZE_OFFSET)(%rdi, %rax), %eax
 	subl	%ecx, %eax
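For context on the kind of caller this protects: below is a rough
seqlock-style sketch of the OCC pattern (hypothetical code, not taken
from yottadb) in which `memcmp` runs on data a writer may be mutating.
The version check discards any result computed from a torn read, but
the `memcmp` call itself is expected not to fault:

```
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical seqlock-style record: the writer bumps `seq` to an odd
   value before updating `data` and to an even value afterward.  */
struct record
{
  _Atomic unsigned int seq;
  char data[32];
};

/* Optimistically compare `key` against `rec->data`.  The memcmp may
   race with a writer (formally UB); a result computed from a torn
   read is thrown away and the read retried.  */
bool
occ_matches (struct record *rec, const char *key, size_t len)
{
  for (;;)
    {
      unsigned int s0 = atomic_load (&rec->seq);
      int cmp = memcmp (rec->data, key, len);	/* may observe a race */
      unsigned int s1 = atomic_load (&rec->seq);
      if (s0 == s1 && (s0 & 1) == 0)	/* snapshot was stable */
	return cmp == 0;
      /* A writer was active; retry.  */
    }
}
```

Under this pattern a spurious mismatch from the race is harmless, since
the sequence check forces a retry, while a SIGSEGV inside `memcmp` is
not recoverable.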