I was asked to write a function that takes 3 unsorted linked lists and returns one single sorted linked list that combines all three lists. What is the best way you can think of? I don't really have restrictions on memory, but what would you do with/without memory restrictions?
One option would be to use merge sort on all three of the linked lists, then use one final merge step to merge them together into an overall sorted list.
Unlike most O(n log n) sorting algorithms, merge sort can run efficiently on linked lists. At a high level, the intuition behind merge sort on a linked list is as follows:

1. If the list has zero or one elements, it's already sorted.
2. Otherwise, split the list into two halves (for example, by walking it with slow and fast pointers).
3. Recursively sort each half, then merge the two sorted halves back together.
The merge algorithm on linked lists is really beautiful. The pseudocode works roughly like this: repeatedly compare the front elements of the two sorted lists, remove the smaller of the two, and append it to the output list; once one list is exhausted, append whatever remains of the other.
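As a concrete illustration, here is a minimal Python sketch of that merge step together with the full merge sort. The `Node` class and the `from_list`/`to_list` helpers are hypothetical stand-ins, since the question doesn't fix a language or a node layout:

```python
class Node:
    """Hypothetical singly linked list cell."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def from_list(xs):
    """Build a linked list from a Python list (demo/test helper)."""
    head = None
    for x in reversed(xs):
        head = Node(x, head)
    return head

def to_list(head):
    """Collect a linked list back into a Python list."""
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

def merge(a, b):
    """Merge two sorted lists: repeatedly take the smaller head. O(n) time."""
    dummy = tail = Node(None)
    while a and b:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b  # one list is exhausted; append the rest of the other
    return dummy.next

def merge_sort(head):
    """Split the list in half with slow/fast pointers, sort halves, merge."""
    if head is None or head.next is None:
        return head  # zero or one elements: already sorted
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None  # cut the list into two halves
    return merge(merge_sort(head), merge_sort(mid))
```

For the original problem, you could sort each of the three lists with `merge_sort` and then combine the results with two pairwise `merge` calls.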
This can be made to run in O(n) time, so the overall complexity of the merge sort is O(n log n).
Once you've sorted all three lists independently, you can apply the merge algorithm to combine the three lists into one final sorted list. Alternatively, you could consider concatenating together all three linked lists, then using a giant merge sort pass to sort all of the lists at the same time. There's no clear "right way" to do this; it's really up to you.
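If you'd rather combine all three sorted lists in a single pass rather than merging pairwise, a three-way merge can be sketched like this (again with a hypothetical `Node` cell):

```python
class Node:
    """Hypothetical singly linked list cell."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def merge3(a, b, c):
    """One-pass three-way merge of sorted lists: repeatedly detach the
    non-empty list head with the smallest value and append it to the output."""
    dummy = tail = Node(None)
    while a or b or c:
        best = None
        for lst in (a, b, c):
            if lst and (best is None or lst.value < best.value):
                best = lst
        # Advance whichever list `best` came from.
        if best is a:
            a = a.next
        elif best is b:
            b = b.next
        else:
            c = c.next
        tail.next = best
        tail = best
    tail.next = None
    return dummy.next
```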
The above algorithm runs in Θ(n log n) time. It also uses only Θ(log n) memory, since it allocates no new linked list cells and just needs space in each stack frame to store pointers to the various lists. Since the recursion depth is Θ(log n), the memory usage is Θ(log n) as well.
Another O(n log n) sort that you can implement on linked lists is a modification of quicksort. Although the linked list version of quicksort is fast (still O(n log n) expected), it isn't nearly as fast as the in-place version that works on arrays due to the lack of locality effects from array elements being stored contiguously. However, it's a very beautiful algorithm as applied to lists.
The intuition behind quicksort is as follows:

1. Choose a pivot element.
2. Partition the list into elements less than, equal to, and greater than the pivot.
3. Recursively sort the less-than and greater-than partitions.
4. Concatenate the three partitions back together.
One of the nice aspects of the linked-list version of quicksort is that the partitioning step is substantially easier than in the array case. After you've chosen a pivot (details a bit later), you can do the partitioning step by creating three empty lists for the less-than, equal-to, and greater-than elements, then doing a linear scan over the original linked list, appending/prepending each node to the linked list for the appropriate bucket.
The one challenge in getting this working is picking a good pivot element. It's well known that quicksort can degenerate to O(n²) time if the choice of pivot is bad, but it is also known that if you pick a pivot element at random, the runtime is O(n log n) with high probability. In an array this is easy (just pick a random array index), but in the linked list case it's trickier. The easiest way to do this is to pick a random number between 0 and the length of the list, then choose that element of the list in O(n) time. Alternatively, there are some pretty cool methods for picking an element at random out of a linked list; one such algorithm is described here.
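Putting the random pivot and the three-bucket partition together, a linked-list quicksort might be sketched as follows (the `Node` class is a hypothetical singly linked cell; the pivot is found by counting the nodes and then walking to a random index, as described above):

```python
import random

class Node:
    """Hypothetical singly linked list cell."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def quicksort(head):
    """Linked-list quicksort: partition into <, ==, > buckets around a
    randomly chosen pivot, recurse on the outer buckets, then concatenate."""
    if head is None or head.next is None:
        return head
    # Pick a random pivot: count the nodes, then walk to a random index (O(n)).
    n, node = 0, head
    while node:
        n, node = n + 1, node.next
    node = head
    for _ in range(random.randrange(n)):
        node = node.next
    pivot = node.value
    # Partition with one linear scan, prepending each node to its bucket.
    less = equal = greater = None
    node = head
    while node:
        nxt = node.next
        if node.value < pivot:
            node.next, less = less, node
        elif node.value == pivot:
            node.next, equal = equal, node
        else:
            node.next, greater = greater, node
        node = nxt
    # Recurse on the outer buckets (the equal bucket needs no sorting),
    # then splice less ++ equal ++ greater back together.
    less, greater = quicksort(less), quicksort(greater)
    dummy = tail = Node(None)
    for part in (less, equal, greater):
        while part:
            tail.next, tail, part = part, part, part.next
    tail.next = None
    return dummy.next
```

Because the equal-to bucket holds every copy of the pivot, this version also handles duplicate keys without degenerating.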
If you want a simpler algorithm that needs only O(1) space, you can also consider using insertion sort to sort the linked lists. While insertion sort is easier to implement, it runs in O(n²) time in the worst case (though it also has O(n) best-case behavior), so it's probably not a good choice unless you specifically want to avoid merge sort.
The idea behind the insertion sort algorithm is as follows: repeatedly remove the first node of the unsorted list and splice it into its proper position in a growing sorted list, until no unsorted nodes remain.
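A sketch of that idea in Python, using a hypothetical singly linked `Node` cell; note that it re-links existing nodes and so needs only O(1) extra space:

```python
class Node:
    """Hypothetical singly linked list cell."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insertion_sort(head):
    """Repeatedly remove the head of the unsorted list and splice it into
    its proper position in a growing sorted list; O(1) extra space."""
    sorted_head = None
    while head:
        nxt = head.next  # detach the current node before re-linking it
        if sorted_head is None or head.value <= sorted_head.value:
            head.next, sorted_head = sorted_head, head  # new front of sorted list
        else:
            cur = sorted_head
            while cur.next and cur.next.value < head.value:
                cur = cur.next
            head.next, cur.next = cur.next, head  # splice after `cur`
        head = nxt
    return sorted_head
```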
Another O(n²) sorting algorithm that can be adapted for linked lists is selection sort. This can be implemented very easily (assuming you have a doubly-linked list) by using this algorithm: repeatedly scan the unsorted list for its smallest node, unlink it (which takes O(1) work thanks to the prev pointers), and append it to the end of the output list.
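Here is one way that algorithm might look in Python, with a hypothetical doubly-linked `DNode` cell:

```python
class DNode:
    """Hypothetical doubly linked list cell."""
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def selection_sort(head):
    """Repeatedly unlink the minimum node from the remaining (unsorted) list
    and append it to the sorted output; always Θ(n²) comparisons, O(1) space."""
    result_head = result_tail = None
    while head:
        # Scan the unsorted remainder for its smallest node.
        smallest, node = head, head.next
        while node:
            if node.value < smallest.value:
                smallest = node
            node = node.next
        # Unlink it in O(1) -- the step the prev pointers make easy.
        if smallest.prev:
            smallest.prev.next = smallest.next
        else:
            head = smallest.next
        if smallest.next:
            smallest.next.prev = smallest.prev
        smallest.prev = smallest.next = None
        # Append it to the end of the output list.
        if result_tail is None:
            result_head = result_tail = smallest
        else:
            result_tail.next = smallest
            smallest.prev = result_tail
            result_tail = smallest
    return result_head
```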
This also runs in O(n²) time and uses only O(1) space, but in practice it's slower than insertion sort; in particular, it always runs in Θ(n²) time.
Depending on how the linked lists are structured, you might be able to get away with some extremely awesome hacks. In particular, if you are given doubly-linked lists, then you have space for two pointers in each of your linked list cells. Given that, you can reinterpret the meaning of those pointers to do some pretty ridiculous sorting tricks.
As a simple example, let's see how we could implement tree sort using the linked list cells. The idea is as follows. While the cells are stored in a linked list, the next and previous pointers have their original meaning. However, our goal will be to iteratively pull the linked list cells out of the linked list and reinterpret them as nodes in a binary search tree, where the next pointer means "right subtree" and the previous pointer means "left subtree." If you're allowed to do this, here's a really cool way to implement tree sort:

1. Pull each cell out of the linked list.
2. Insert it into the binary search tree using the reinterpreted pointers.
3. Once every cell has been inserted, do an in-order traversal of the tree, rewriting the pointers to thread the cells back into a sorted linked list.
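Here's a sketch of that trick in Python. The `Node` class is a hypothetical doubly-linked cell; while a node lives in the tree, `prev` serves as the left-child pointer and `next` as the right-child pointer. For simplicity this sketch flattens the tree with a recursive in-order walk, which uses stack space proportional to the tree height rather than the O(1)-space traversals mentioned below:

```python
class Node:
    """Hypothetical doubly linked cell. In the list, prev/next are the usual
    links; in the tree, prev means 'left subtree' and next means 'right subtree'."""
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def bst_insert(root, node):
    """Insert a detached cell into the (unbalanced) BST, reusing its pointers."""
    if root is None:
        return node
    if node.value < root.value:
        root.prev = bst_insert(root.prev, node)
    else:
        root.next = bst_insert(root.next, node)
    return root

def tree_sort(head):
    """Pull each cell out of the list into a BST, then flatten the tree
    back into a sorted (singly threaded) list with an in-order walk."""
    root = None
    while head:
        nxt = head.next
        head.prev = head.next = None  # detach the cell from the list
        root = bst_insert(root, head)
        head = nxt

    def flatten(node, tail):
        """In-order walk that threads next pointers; returns the new head."""
        if node is None:
            return tail
        left, right = node.prev, node.next
        node.prev = None
        node.next = flatten(right, tail)
        return flatten(left, node)

    return flatten(root, None)
```

Because the BST is built by plain insertion with no rebalancing, a sorted input degenerates into a linked-list-shaped tree, which is where the worst-case O(n²) bound below comes from.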
This runs in best-case O(n log n) time and worst-case O(n²). In terms of memory usage, the first two steps require only O(1) memory, since we're recycling space from the older pointers. The last step can be done in O(1) space as well using some particularly clever algorithms.
You could also consider implementing heap sort this way as well, though it's a bit tricky.
Hope this helps!