From patchwork Thu Jul 6 03:59:04 2017
X-Patchwork-Submitter: Ayush Mittal
X-Patchwork-Id: 21448
From: Ayush Mittal
To: libc-alpha@sourceware.org
Cc: pankaj.m@samsung.com, Ayush Mittal, Vaneet Narang
Subject: [PATCH 1/1] msort: optimizing merge sort
Date: Thu, 6 Jul 2017 09:29:04 +0530
Message-Id: <1499313544-18244-1-git-send-email-ayush.m@samsung.com>

This patch improves the performance of merge sort when most of the
elements are already sorted.  It speeds up the merge step by skipping
the per-element comparison and copying whenever the last element of the
first half already compares less than or equal to the first element of
the second half; in that case the two halves are already in order and
the merge can return immediately.  In our simulation, the optimized
merge sort was about 84% faster on input that was already sorted.
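For illustration, here is a minimal standalone sketch of the same fast
path, assuming plain int elements and illustrative helper names
(merge_ints, msort_ints) rather than glibc's actual msort_param
machinery:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Merge the two sorted runs b[0..n1) and b[n1..n1+n2) using tmp as
   scratch space.  Illustrative sketch only, not glibc code.  */
static void
merge_ints (int *b, size_t n1, size_t n2, int *tmp)
{
  /* Fast path: if the last element of the first run is already <= the
     first element of the second run, the whole range is sorted and no
     comparisons or copies are needed.  */
  if (b[n1 - 1] <= b[n1])
    return;

  size_t i = 0, j = n1, k = 0;
  while (i < n1 && j < n1 + n2)
    tmp[k++] = (b[i] <= b[j]) ? b[i++] : b[j++];
  while (i < n1)
    tmp[k++] = b[i++];
  while (j < n1 + n2)
    tmp[k++] = b[j++];
  memcpy (b, tmp, (n1 + n2) * sizeof *b);
}

static void
msort_ints (int *b, size_t n, int *tmp)
{
  if (n <= 1)
    return;
  size_t n1 = n / 2;
  msort_ints (b, n1, tmp);
  msort_ints (b + n1, n - n1, tmp);
  merge_ints (b, n1, n - n1, tmp);
}

int
main (void)
{
  int a[] = { 1, 2, 3, 4, 5, 6, 7, 8 };  /* already sorted: every merge is skipped */
  int tmp[8];
  msort_ints (a, 8, tmp);
  for (size_t i = 0; i < 8; i++)
    printf ("%d ", a[i]);
  putchar ('\n');
  return 0;
}

The check costs one extra comparison per merge on unsorted input, but
removes the entire compare-and-copy loop whenever the two halves are
already in order, which is where the speedup on presorted data comes from.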
Signed-off-by: Vaneet Narang
Signed-off-by: Ayush Mittal
---
 stdlib/msort.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/stdlib/msort.c b/stdlib/msort.c
index 02ef28b..498de09 100644
--- a/stdlib/msort.c
+++ b/stdlib/msort.c
@@ -39,7 +39,7 @@ static void msort_with_tmp (const struct msort_param *p, void *b, size_t n);
 static void
 msort_with_tmp (const struct msort_param *p, void *b, size_t n)
 {
-  char *b1, *b2;
+  char *b1, *b2, *b3;
   size_t n1, n2;
 
   if (n <= 1)
@@ -49,6 +49,7 @@ msort_with_tmp (const struct msort_param *p, void *b, size_t n)
   n2 = n - n1;
   b1 = b;
   b2 = (char *) b + (n1 * p->s);
+  b3 = (char *) b + ((n1 - 1) * p->s);
 
   msort_with_tmp (p, b1, n1);
   msort_with_tmp (p, b2, n2);
@@ -60,6 +61,8 @@ msort_with_tmp (const struct msort_param *p, void *b, size_t n)
   switch (p->var)
     {
     case 0:
+      if ((*cmp) (b3, b2, arg) <= 0)
+        return;
       while (n1 > 0 && n2 > 0)
         {
           if ((*cmp) (b1, b2, arg) <= 0)
@@ -78,6 +81,8 @@ msort_with_tmp (const struct msort_param *p, void *b, size_t n)
         }
       break;
     case 1:
+      if ((*cmp) (b3, b2, arg) <= 0)
+        return;
       while (n1 > 0 && n2 > 0)
         {
           if ((*cmp) (b1, b2, arg) <= 0)
@@ -96,6 +101,8 @@ msort_with_tmp (const struct msort_param *p, void *b, size_t n)
         }
       break;
     case 2:
+      if ((*cmp) (b3, b2, arg) <= 0)
+        return;
       while (n1 > 0 && n2 > 0)
         {
           unsigned long *tmpl = (unsigned long *) tmp;
@@ -119,6 +126,8 @@ msort_with_tmp (const struct msort_param *p, void *b, size_t n)
         }
       break;
     case 3:
+      if ((*cmp) (*(const void **) b3, *(const void **) b2, arg) <= 0)
+        return;
       while (n1 > 0 && n2 > 0)
         {
           if ((*cmp) (*(const void **) b1, *(const void **) b2, arg) <= 0)
@@ -137,6 +146,8 @@ msort_with_tmp (const struct msort_param *p, void *b, size_t n)
         }
       break;
     default:
+      if ((*cmp) (b3, b2, arg) <= 0)
+        return;
       while (n1 > 0 && n2 > 0)
         {
           if ((*cmp) (b1, b2, arg) <= 0)