## Sunday, June 26, 2016

### Finding Top K Frequent Elements in Large Dataset - My Blog - ITeye技术网站


Problem: Consider the problem of finding top K frequent numbers from a file with N numbers.

Now consider that N is very large such that it cannot fit in the memory M available to the program.

How do we find the top K frequent elements now with the assumption that K < M < N.

#### Deterministic solution
The idea is similar to solving the problem for small N (N < M) (see here). For large N, divide the problem into chunks of size at most M, then solve each chunk.

In order to divide the problem, consider a uniform hash function H that takes a key and returns an integer from the set {1, 2, ..., ceil(N/M)}. This gives us ceil(N/M) chunks with roughly M unique keys per chunk. Since the number of unique numbers (U) is less than N, we expect each chunk to have fewer than M unique numbers.

Now for each chunk, we compute the frequency of the numbers contained in that chunk. This frequency computation can be done in memory, and we can maintain a MIN-HEAP of size K which can be updated directly (follow the steps presented here). As a result, only two reads of the dataset and one disk write are required to get the top K frequent items. The complexity of the algorithm is O(N log K).
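As a minimal in-memory Python sketch of this scheme (the function name and the use of Python's built-in `hash` for chunk assignment are assumptions; a real implementation would write the chunks to disk on the first pass and read each one back on the second):

```python
import heapq
from collections import Counter

def top_k_frequent(numbers, k, num_chunks):
    """Chunked top-K: hash-partition the keys so each chunk's distinct
    keys fit in memory, count one chunk at a time, and maintain a
    global min-heap of size at most k over (frequency, number) pairs.
    `numbers` stands in for the on-disk file."""
    heap = []  # min-heap of (frequency, number)
    for chunk_id in range(num_chunks):
        # Count only the keys hashed to this chunk (simulates reading
        # one chunk file back from disk).
        counts = Counter(x for x in numbers
                         if hash(x) % num_chunks == chunk_id)
        for x, f in counts.items():
            if len(heap) < k:
                heapq.heappush(heap, (f, x))
            elif f > heap[0][0]:
                heapq.heapreplace(heap, (f, x))
    return sorted(heap, reverse=True)
```

Because a key always hashes to the same chunk, each number's full count is computed within a single chunk, so the heap comparison is always against true frequencies.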

Alternatively, for a small K (K < M^2/N), we can compute the top K per chunk in O(K log M) and combine the N*K/M (< M) top-K candidates from all chunks to get the overall top K. The total complexity of the algorithm is O(N + M log M).
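A sketch of this small-K variant, assuming the chunks have already been materialized as lists (the function name is hypothetical):

```python
import heapq
from collections import Counter

def top_k_by_chunk_merge(chunks, k):
    """Take the top k inside each chunk first, then the top k of the
    merged candidates. Since chunks are hash-partitioned by key, a
    number's total count lives entirely in one chunk, so no candidate
    frequency is split across chunks."""
    candidates = []
    for chunk in chunks:
        counts = Counter(chunk)
        candidates.extend(
            heapq.nlargest(k, counts.items(), key=lambda kv: kv[1]))
    return heapq.nlargest(k, candidates, key=lambda kv: kv[1])
```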

#### Alternate method
We assume that the disk has enough capacity to hold a disk-based hash table. Imagine that we create a very large sparse file and divide it into B blocks, each with capacity to hold more than N/B numbers and their integer counts. A uniform hash function H takes an arbitrary number x and returns H(x), a block number from the set {0, 1, ..., B-1}; this lets us store the numbers and their frequencies in a disk map. H gives us the offset to seek to within the file, and within the correct block we write each number (and its frequency) sequentially (or place it via another hash function H1).

Now we proceed as usual, counting the numbers in memory with a hash table (as in the first approach). When we encounter a new number that cannot be put in memory, we purge some entries from the hash table and write the purged entries to the disk map. To decide what to purge, we maintain an array that counts the frequency of the keys lying within each block. Keys belonging to the most infrequent blocks can be purged first (or to blocks that are least recently used, or that lead to the fewest disk accesses, etc.).

The following pseudocode gives a very basic sketch of this approach:

```
BlockFreq = array(B)
NumberFreq = hashtable(M)
diskwrite = 0
for i = 1:N
    x = A[i]
    BlockFreq[H(x)] += 1

    if NumberFreq.haskey(x)
        NumberFreq[x] += 1
        continue
    end

    if NumberFreq.hasspace()
        NumberFreq[x] = 1
        continue
    end

    if DiskMap.haskey(x)
        DiskMap[x] += 1
    else
        DiskMap[x] = 1
    end

    if diskwrite == 10
        purge(NumberFreq, BlockFreq)
        diskwrite = 0
    else
        diskwrite += 1
    end
end
```
Here purge is a procedure that evicts some set of keys from NumberFreq based on BlockFreq. Note that this pseudocode omits several key details of the process, so the idea presented here is quite crude.

#### Single-pass probabilistic solution
The deterministic solution is quite efficient, requiring only two disk reads of the dataset, but the bottleneck can be the disk writes during the initial chunk formation. We can reduce that bottleneck by using a counting data structure in the spirit of Bloom filters.

So consider that we have B uniform hash functions H1, H2, ..., HB, each mapping a key to the range {1, 2, ..., R}. Now imagine an array C of size B x R (< M) of counters. For each number x that we read from the dataset, compute Hi(x) and increment C[i, Hi(x)] by 1 for each i. We thus maintain B counts of x in different R-sized bucket arrays, and the true count of x is at most min(C[1, H1(x)], ..., C[B, HB(x)]).
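A minimal runnable version of this counting structure (salted Python hashes stand in for B truly independent uniform hash functions; that substitution, and the class name, are assumptions for illustration):

```python
import random

class CountingSketch:
    """B hash rows, R counters per row. Each row only ever over-counts
    (collisions add to the counter), so the minimum over rows is an
    upper bound on the true count of a key."""
    def __init__(self, b, r, seed=0):
        self.r = r
        rng = random.Random(seed)
        self.salts = [rng.getrandbits(32) for _ in range(b)]
        self.table = [[0] * r for _ in range(b)]

    def add(self, x):
        for i, salt in enumerate(self.salts):
            self.table[i][hash((salt, x)) % self.r] += 1

    def estimate(self, x):
        # true count of x <= min over the B rows
        return min(self.table[i][hash((salt, x)) % self.r]
                   for i, salt in enumerate(self.salts))
```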

Now, if the query is to get all elements with frequency greater than some threshold, we can use this Bloom-filter-like structure to find all such numbers (with some false positives, though, which can be filtered out with another pass over the dataset). This can save a complete write of the data to disk (see the paper: Computing Iceberg Queries Efficiently).

But in our case, we are interested in finding the top K frequent numbers. The following modification can be used to estimate the frequency of each number:

```
MH = MIN-HEAP(K)
for i = 1:N
    x = A[i]
    for b = 1:B
        C[b, Hb(x)] += Sb(x)
    end

    if contains(MH, x)
        increment count of x in the heap
    else
        f = median(C[b, Hb(x)] * Sb(x), for b = 1:B)
        if f > min(MH)
            remove-min(MH)
            insert(MH, (x, f))
        end
    end
end
```
Each Sb is a {-1,+1} hash function, and this data structure is called a CountSketch. More details of the method are available in the paper: Finding Frequent Items in Data Streams.
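For comparison, a minimal runnable CountSketch in Python (salted built-in hashes stand in for the pairwise-independent Hb and Sb of the paper; the class and method names are assumptions):

```python
import random
import statistics

class CountSketch:
    """B rows of R counters. Row b uses a bucket hash Hb into
    {0,...,R-1} and a sign hash Sb into {-1,+1}; the frequency
    estimate is the median over rows of C[b, Hb(x)] * Sb(x)."""
    def __init__(self, b, r, seed=0):
        self.r = r
        rng = random.Random(seed)
        self.salts = [(rng.getrandbits(32), rng.getrandbits(32))
                      for _ in range(b)]
        self.table = [[0] * r for _ in range(b)]

    def _bucket(self, i, x):
        return hash((self.salts[i][0], x)) % self.r

    def _sign(self, i, x):
        return 1 if hash((self.salts[i][1], x)) & 1 else -1

    def add(self, x):
        for i in range(len(self.table)):
            self.table[i][self._bucket(i, x)] += self._sign(i, x)

    def estimate(self, x):
        return statistics.median(
            self.table[i][self._bucket(i, x)] * self._sign(i, x)
            for i in range(len(self.table)))
```

Unlike the min-of-counters structure above, the random signs make colliding keys cancel in expectation, so the median is an unbiased estimate rather than an upper bound.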

Note that the above algorithm takes a single pass over the dataset (and no disk writes), but it is not guaranteed to give the top K frequent items; it can make mistakes on some less frequent items. In practice, the choice of the hash functions can be critical for the performance of the algorithm.

Read the full article from Finding Top K Frequent Elements in Large Dataset - My Blog - ITeye技术网站
