https://www.quora.com/How-can-I-find-the-median-of-m-sorted-arrays
https://apps.topcoder.com/forums/%3bjsessionid=F76636E158936BC00A5BA7F389A69083?module=Thread&threadID=616929&start=0&mc=5#992341
Find the median value of K sorted arrays.
I know there is an O(N*K) solution, where N is the average number of elements in each sorted array.
By keeping 2K pointers, I can loop through all the values, tracking the smaller and bigger elements as I go; when the loop ends, the last element examined is the median.
I was just wondering whether we could use binary search to bring this down to some O(log(N*K)) complexity.
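That linear-scan baseline can be sketched as follows (a minimal sketch in Python; the lazy `heapq.merge` stands in for the 2K explicit pointers, which adds a log K factor per step but keeps the same idea: walk the merged order and stop at the middle element):

```python
import heapq
from itertools import islice

def median_by_merge(arrays):
    """Lower median of several sorted lists via a lazy K-way merge."""
    n_total = sum(len(a) for a in arrays)
    middle = (n_total - 1) // 2        # index of the lower median
    merged = heapq.merge(*arrays)      # yields elements in sorted order, lazily
    return next(islice(merged, middle, None))
```

The merge is lazy, so only the first middle+1 elements are ever produced, but the work is still linear in the total input size.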
There is an old paper by Frederickson and Johnson that shows how to solve such problems (selection in K sorted arrays) in optimal time. The complexity was something like K + sum of log(n_i), where the n_i are the sizes of the arrays. In case you'd like to know more, it is this paper. So getting O(log(N*K)) seems impossible, but you could get O(K log(N)) complexity (though I don't know whether there is any simple algorithm for that).
Maybe this:
Assign A = the maximum and B = the minimum over all the data.
Check how many values are bigger than M = (A+B)/2. Repeat, setting B=M or A=M as appropriate, while remembering how many elements are bigger than A or smaller than B, until A=B. This is O(K log(K) log(N)), at least when the data is of integer type.
edit: If you store 2 pointers for each of the arrays, you can just choose a random element as the 'pivot' (M in the above example). This is even more like the standard (quicksort-like) median-finding algorithm.
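A sketch of that value-bisection idea for integer data (`median_by_value_bisection` is a hypothetical name; each probe costs one binary search per array, and the number of probes is logarithmic in the value range):

```python
import bisect

def median_by_value_bisection(arrays):
    """Lower median of several sorted integer lists, by binary search on value."""
    n_total = sum(len(a) for a in arrays)
    target = (n_total - 1) // 2              # rank of the lower median
    lo = min(a[0] for a in arrays)           # B in the text: global minimum
    hi = max(a[-1] for a in arrays)          # A in the text: global maximum
    while lo < hi:
        mid = (lo + hi) // 2                 # M = (A+B)/2
        # how many elements are <= mid, via one binary search per array
        count = sum(bisect.bisect_right(a, mid) for a in arrays)
        if count <= target:
            lo = mid + 1                     # too few at or below mid: go up
        else:
            hi = mid                         # the median is <= mid: go down
    return lo
```

The loop converges to the smallest value with more than `target` elements at or below it, which is exactly the element at rank `target`.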
http://stackoverflow.com/questions/6182488/median-of-5-sorted-arrays
If you start by looking at the five medians of the five arrays, obviously the overall median must be between the smallest and the largest of the five medians.
Proof goes something like this: if a is the min of the medians and b is the max of the medians, then each array has fewer than half of its elements less than a and fewer than half of its elements greater than b. The result follows.
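That claim is easy to sanity-check numerically; a quick sketch with made-up data (odd-length arrays and an odd total, so every median is an actual element):

```python
from statistics import median

arrays = [[1, 3, 5], [2, 4, 6], [0, 7, 8], [9, 10, 11], [-1, 12, 13]]
meds = [median(a) for a in arrays]              # [3, 4, 7, 10, 12]
overall = median(x for a in arrays for x in a)  # median of all 15 elements
assert min(meds) <= overall <= max(meds)        # 3 <= 6 <= 12
```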
So in the array containing a, throw away numbers less than a; in the array containing b, throw away numbers greater than b... But only throw away the same number of elements from both arrays.
That is, if a is j elements from the start of its array, and b is k elements from the end of its array, you throw away the first min(j,k) elements from a's array and the last min(j,k) elements from b's array.
Iterate until you are down to 1 or 2 elements total.
Each of these operations (i.e., finding the median of a sorted array and throwing away k elements from the start or end of an array) is constant time, so each iteration is constant time.
Each iteration throws away (more than) half the elements from at least one array, and you can only do that O(log n) times for each of the five arrays... So the overall algorithm is O(log n).
[Update]
As Himadri Choudhury points out in the comments, my solution is incomplete; there are a lot of details and corner cases to worry about. So, to flesh things out a bit...
For each of the five arrays R, define its "lower median" as R[n/2-1] and its "upper median" as R[n/2], where n is the number of elements in the array (and arrays are indexed from 0, and division by 2 rounds down).
Let "a" be the smallest of the lower medians, and "b" be the largest of the upper medians. If there are multiple arrays with the smallest lower median and/or multiple arrays with the largest upper median, choose a and b from different arrays (this is one of those corner cases).
Now, borrowing Himadri's suggestion: Erase all elements up to and including a from its array, and all elements down to and including b from its array, taking care to remove the same number of elements from both arrays. Note that a and b could be in the same array; but if so, they could not have the same value, because otherwise we would have been able to choose one of them from a different array. So it is OK if this step winds up throwing away elements from the start and end of the same array.
Iterate as long as you have three or more arrays. But once you are down to just one or two arrays, you have to change your strategy to be exclusive instead of inclusive: you only erase up to but not including a and down to but not including b. Continue like this as long as each of the remaining one or two arrays has at least three elements (which guarantees you make progress).
Finally, you will reduce to a few cases, the trickiest of which is two arrays remaining, one of which has one or two elements. Now, if I asked you: "Given a sorted array plus one or two additional elements, find the median of all elements", I think you can do that in constant time. (Again, there are a bunch of details to hammer out, but the basic idea is that adding one or two elements to an array does not "push the median around" very much.)
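A conservative sketch of the overall procedure (hypothetical name `median_k_sorted`; to sidestep most of the corner cases above it only ever discards elements strictly below a and strictly above b in equal numbers, which is unconditionally median-preserving, and it finishes by sorting the small remainder once no safe progress is possible, rather than handling the one-or-two-array endgame explicitly):

```python
import bisect

def median_k_sorted(arrays):
    """Lower median of several sorted lists, via the trim-and-shrink idea."""
    arrs = [list(a) for a in arrays if a]
    while True:
        lowers = [(a[(len(a) - 1) // 2], i) for i, a in enumerate(arrs)]
        uppers = [(a[len(a) // 2], i) for i, a in enumerate(arrs)]
        a_val, ai = min(lowers)          # a: smallest lower median
        b_val, bi = max(uppers)          # b: largest upper median
        # count elements strictly below a / strictly above b in their arrays
        j = bisect.bisect_left(arrs[ai], a_val)
        k = len(arrs[bi]) - bisect.bisect_right(arrs[bi], b_val)
        t = min(j, k)
        if t == 0:
            break                        # no safe progress; finish by sorting
        del arrs[ai][:t]                 # t elements < a <= overall median
        del arrs[bi][-t:]                # t elements > b >= overall median
    rest = sorted(x for a in arrs for x in a)
    return rest[(len(rest) - 1) // 2]
```

Because the discarded elements sit strictly outside [a, b] and are removed in equal numbers from below and above, the lower median of what remains is the lower median of the original data, so the final sort of the leftovers gives the right answer even when the trimming stalls early.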
There is one host machine and 100 worker machines, each worker holding 1 GB of integer data. The bandwidth from each worker to the host is 1 kbps. Design an algorithm to find the median of all the data.
First assume all the data has been sorted, which means the median of any interval can be queried in O(1).
The 100 data sets form 100 intervals. We have the global maximum max and the global minimum min, and we take the midpoint mid = (min+max)/2.
Then we count how many numbers fall to the left of mid (< mid) and how many fall to the right (> mid); the median must lie on whichever side has more numbers.
Suppose the right side has more: we then set mid = (mid+max)/2 and iterate. The whole process resembles binary search, except that each iteration works to balance the amount of data on the two sides of mid.
The final result of the iteration is the overall median.
The amount of data transferred per round is also quite small; in the worst case, 37 iterations suffice to find the median, since 2^37 ≈ 137,438,953,472.
Counting both directions, that is 2 × 100 × 37 messages, so the total data transferred is estimated at under 10 KB.
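A small single-process simulation of the protocol (hypothetical names; each round the host broadcasts one candidate value and every machine replies with a single count, computed locally by binary search since its data is sorted — this sketch bisects on the rank of the lower median rather than balancing the two sides, but the communication pattern per round is the same):

```python
import bisect

def distributed_median(machines):
    """Simulate host-driven bisection on the value range; returns (median, rounds)."""
    n_total = sum(len(m) for m in machines)
    target = (n_total - 1) // 2                  # rank of the lower median
    lo = min(m[0] for m in machines)
    hi = max(m[-1] for m in machines)
    rounds = 0
    while lo < hi:
        mid = (lo + hi) // 2
        # each machine answers "how many of my values are <= mid" locally
        count = sum(bisect.bisect_right(m, mid) for m in machines)
        if count <= target:
            lo = mid + 1
        else:
            hi = mid
        rounds += 1                              # one broadcast + one reply each
    return lo, rounds
```

The number of rounds is bounded by log2 of the value range, which is where the 37-iteration worst-case estimate above comes from; each round moves only one integer per link in each direction.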