Wednesday, November 23, 2016

LeetCode 460 - LFU Cache


Related: LeetCode 146 - LRU Cache
https://leetcode.com/problems/lfu-cache/
Design and implement a data structure for Least Frequently Used (LFU) cache. It should support the following operations: get and set.
get(key) - Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1.
set(key, value) - Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least frequently used item before inserting a new item. For the purpose of this problem, when there is a tie (i.e., two or more keys that have the same frequency), the least recently used key would be evicted.
Follow up:
Could you do both operations in O(1) time complexity?
Example:
LFUCache cache = new LFUCache( 2 /* capacity */ );

cache.set(1, 1);
cache.set(2, 2);
cache.get(1);       // returns 1
cache.set(3, 3);    // evicts key 2
cache.get(2);       // returns -1 (not found)
cache.get(3);       // returns 3.
cache.set(4, 4);    // evicts key 1.
cache.get(1);       // returns -1 (not found)
cache.get(3);       // returns 3
cache.get(4);       // returns 4

X. Doubly Linked List + Hash Table
First, define two kinds of doubly-linked-list nodes: KeyNode (key node) and FreqNode (frequency node).
A KeyNode stores key, value, freq (frequency), prev (predecessor), and next (successor).

A FreqNode stores freq (frequency), prev (predecessor), next (successor), first (pointer to the newest KeyNode of that frequency), and last (pointer to the oldest KeyNode of that frequency).
The LFUCache structure maintains the following fields:
capacity: the capacity of the cache

keyDict: a mapping from key to KeyNode

freqDict: a mapping from freq to FreqNode

head: a pointer to the FreqNode with the smallest frequency
The overall data structure is illustrated below:
head --- FreqNode1 ---- FreqNode2 ---- ... ---- FreqNodeN
              |               |                       |               
            first           first                   first             
              |               |                       |               
           KeyNodeA        KeyNodeE                KeyNodeG           
              |               |                       |               
           KeyNodeB        KeyNodeF                KeyNodeH           
              |               |                       |               
           KeyNodeC         last                   KeyNodeI           
              |                                       |      
           KeyNodeD                                 last
              |
            last
The LFUCache operations are implemented as follows:
set(key, value):
If capacity is 0, ignore the operation and return.

If keyDict contains key, replace its value, update the node's frequency, and return.

Otherwise, if the current size of keyDict == capacity, remove head.last (the oldest KeyNode with the lowest frequency).

Create a new KeyNode(key, value), add it to keyDict, and update freqDict.
get(key):
If keyDict contains key, update the node's frequency and return the corresponding value.

Otherwise, return -1.
Updating a node's frequency:
Look up the KeyNode for key in keyDict, then use the KeyNode's freq to find the corresponding FreqNode in freqDict.

If the FreqNode's next node does not have frequency freq + 1, insert a new FreqNode with frequency freq + 1 to its right.

Increment the KeyNode's freq by 1 and move it from the current FreqNode's key list to the new FreqNode's key list.

If the original FreqNode's key list becomes empty after the move, delete the original FreqNode.

If any of these operations change the minimum frequency, update head accordingly.
class KeyNode(object):
    def __init__(self, key, value, freq = 1):
        self.key = key
        self.value = value
        self.freq = freq
        self.prev = self.next = None

class FreqNode(object):
    def __init__(self, freq, prev, next):
        self.freq = freq
        self.prev = prev
        self.next = next
        self.first = self.last = None

class LFUCache(object):

    def __init__(self, capacity):
        """
        :type capacity: int
        """
        self.capacity = capacity
        self.keyDict = dict()
        self.freqDict = dict()
        self.head = None

    def get(self, key):
        """
        :type key: int
        :rtype: int
        """
        if key in self.keyDict:
            keyNode = self.keyDict[key]
            value = keyNode.value
            self.inc(key, value)
            return value
        return -1

    def set(self, key, value):
        """
        :type key: int
        :type value: int
        :rtype: void
        """
        if self.capacity == 0:
            return
        if key not in self.keyDict and len(self.keyDict) == self.capacity:
            self.removeKeyNode(self.head.last)
        self.inc(key, value)

    def inc(self, key, value):
        """
        Inserts a new KeyNode<key, value> with freq 1,
        or increments the freq of an existing KeyNode<key, value> by 1.
        :type key: str
        :rtype: void
        """
        if key in self.keyDict:
            keyNode = self.keyDict[key]
            keyNode.value = value
            freqNode = self.freqDict[keyNode.freq]
            nextFreqNode = freqNode.next
            keyNode.freq += 1
            if nextFreqNode is None or nextFreqNode.freq > keyNode.freq:
                nextFreqNode = self.insertFreqNodeAfter(keyNode.freq, freqNode)
            self.unlinkKey(keyNode, freqNode)
            self.linkKey(keyNode, nextFreqNode)
        else:
            keyNode = self.keyDict[key] = KeyNode(key, value)
            freqNode = self.freqDict.get(1)
            if freqNode is None:
                freqNode = self.freqDict[1] = FreqNode(1, None, self.head)
                if self.head:
                    self.head.prev = freqNode
                self.head = freqNode
            self.linkKey(keyNode, freqNode)

    def delFreqNode(self, freqNode):
        """
        Delete freqNode.
        :rtype: void
        """
        prev, next = freqNode.prev, freqNode.next
        if prev: prev.next = next
        if next: next.prev = prev
        if self.head == freqNode: self.head = next
        del self.freqDict[freqNode.freq]

    def insertFreqNodeAfter(self, freq, node):
        """
        Insert a new FreqNode(freq) after node.
        :rtype: FreqNode
        """
        newNode = FreqNode(freq, node, node.next)
        self.freqDict[freq] = newNode
        if node.next: node.next.prev = newNode
        node.next = newNode
        return newNode

    def removeKeyNode(self, keyNode):
        """
        Remove keyNode.
        :rtype: void
        """
        self.unlinkKey(keyNode, self.freqDict[keyNode.freq])
        del self.keyDict[keyNode.key]

    def unlinkKey(self, keyNode, freqNode):
        """
        Unlink keyNode from freqNode.
        :rtype: void
        """
        next, prev = keyNode.next, keyNode.prev
        if prev: prev.next = next
        if next: next.prev = prev
        if freqNode.first == keyNode: freqNode.first = next
        if freqNode.last == keyNode: freqNode.last = prev
        if freqNode.first is None:
            self.delFreqNode(freqNode)

    def linkKey(self, keyNode, freqNode):
        """
        Link keyNode to freqNode.
        :rtype: void
        """
        firstKeyNode = freqNode.first
        keyNode.prev = None
        keyNode.next = firstKeyNode
        if firstKeyNode: firstKeyNode.prev = keyNode
        freqNode.first = keyNode
        if freqNode.last is None: freqNode.last = keyNode

# Your LFUCache object will be instantiated and called as such:
# obj = LFUCache(capacity)
# param_1 = obj.get(key)
# obj.set(key,value)

https://discuss.leetcode.com/topic/69137/java-o-1-accept-solution-using-hashmap-doublelinkedlist-and-linkedhashset
Two HashMaps are used: one stores the <key, value> pairs, the other stores the <key, node> mapping.
I use a doubly linked list to keep the frequency buckets in order. In each doubly-linked-list node, keys with the same count are stored in a Java built-in LinkedHashSet, which preserves insertion order (so the least recently used key within a frequency can be found).
Every time a key is referenced, first find the node corresponding to that key. If the following node exists and its frequency is larger by exactly one, add the key to the following node's key set; otherwise create a new node and insert it after the current node.
All operations are guaranteed to be O(1).

import java.util.HashMap;
import java.util.LinkedHashSet;

public class LFUCache {
    private Node head = null;
    private int cap = 0;
    private HashMap<Integer, Integer> valueHash = null;
    private HashMap<Integer, Node> nodeHash = null;
    
    public LFUCache(int capacity) {
        this.cap = capacity;
        valueHash = new HashMap<Integer, Integer>();
        nodeHash = new HashMap<Integer, Node>();
    }
    
    public int get(int key) {
        if (valueHash.containsKey(key)) {
            increaseCount(key);
            return valueHash.get(key);
        }
        return -1;
    }
    
    public void set(int key, int value) {
        if ( cap == 0 ) return;
        if (valueHash.containsKey(key)) {
            valueHash.put(key, value);
        } else {
            if (valueHash.size() < cap) {
                valueHash.put(key, value);
            } else {
                removeOld();
                valueHash.put(key, value);
            }
            addToHead(key);
        }
        increaseCount(key);
    }
    
    private void addToHead(int key) {
        if (head == null) {
            head = new Node(0);
            head.keys.add(key);
        } else if (head.count > 0) {
            Node node = new Node(0);
            node.keys.add(key);
            node.next = head;
            head.prev = node;
            head = node;
        } else {
            head.keys.add(key);
        }
        nodeHash.put(key, head);      
    }
    
    private void increaseCount(int key) {
        Node node = nodeHash.get(key);
        node.keys.remove(key);
        
        if (node.next == null) {
            node.next = new Node(node.count+1);
            node.next.prev = node;
            node.next.keys.add(key);
        } else if (node.next.count == node.count+1) {
            node.next.keys.add(key);
        } else {
            Node tmp = new Node(node.count+1);
            tmp.keys.add(key);
            tmp.prev = node;
            tmp.next = node.next;
            node.next.prev = tmp;
            node.next = tmp;
        }

        nodeHash.put(key, node.next);
        if (node.keys.size() == 0) remove(node);
    }
    
    private void removeOld() {
        if (head == null) return;
        int old = 0;
        for (int n: head.keys) {
            old = n;
            break;
        }
        head.keys.remove(old);
        if (head.keys.size() == 0) remove(head);
        nodeHash.remove(old);
        valueHash.remove(old);
    }
    
    private void remove(Node node) {
        if (node.prev == null) {
            head = node.next;
        } else {
            node.prev.next = node.next;
        } 
        if (node.next != null) {
            node.next.prev = node.prev;
        }
    }
    
    class Node {
        public int count = 0;
        public LinkedHashSet<Integer> keys = null;
        public Node prev = null, next = null;
        
        public Node(int count) {
            this.count = count;
            keys = new LinkedHashSet<Integer>();
            prev = next = null;
        }
    }
}
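A quick way to sanity-check this class (not part of the original post) is to replay the example from the problem statement; the expected return values in the comments come from the problem description, and LFUCacheDemo is just a hypothetical driver class name.

public class LFUCacheDemo {
    public static void main(String[] args) {
        LFUCache cache = new LFUCache(2);
        cache.set(1, 1);
        cache.set(2, 2);
        System.out.println(cache.get(1));   // 1
        cache.set(3, 3);                    // evicts key 2 (lowest frequency)
        System.out.println(cache.get(2));   // -1 (not found)
        System.out.println(cache.get(3));   // 3
        cache.set(4, 4);                    // evicts key 1 (LRU among the lowest frequency)
        System.out.println(cache.get(1));   // -1 (not found)
        System.out.println(cache.get(3));   // 3
        System.out.println(cache.get(4));   // 4
    }
}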
X. Using 3 HashMaps and LinkedHashSet
https://discuss.leetcode.com/topic/69737/java-o-1-very-easy-solution-using-3-hashmaps-and-linkedhashset
import java.util.HashMap;
import java.util.LinkedHashSet;

public class LFUCache {
    HashMap<Integer, Integer> vals;
    HashMap<Integer, Integer> counts;
    HashMap<Integer, LinkedHashSet<Integer>> lists;
    int cap;
    int min = -1;
    public LFUCache(int capacity) {
        cap = capacity;
        vals = new HashMap<>();
        counts = new HashMap<>();
        lists = new HashMap<>();
        lists.put(1, new LinkedHashSet<>());
    }
    
    public int get(int key) {
        if(!vals.containsKey(key))
            return -1;
        int count = counts.get(key);
        counts.put(key, count+1);
        lists.get(count).remove(key);
        if(count==min && lists.get(count).size()==0)
            min++;
        if(!lists.containsKey(count+1))
            lists.put(count+1, new LinkedHashSet<>());
        lists.get(count+1).add(key);
        return vals.get(key);
    }
    
    public void set(int key, int value) {
        if(cap<=0)
            return;
        if(vals.containsKey(key)) {
            vals.put(key, value);
            get(key);
            return;
        } 
        if(vals.size() >= cap) {
            int evit = lists.get(min).iterator().next();
            lists.get(min).remove(evit);
            vals.remove(evit);
        }
        vals.put(key, value);
        counts.put(key, 1);
        min = 1;
        lists.get(1).add(key);
    }
}
https://github.com/laurentluce/lfu-cache/blob/master/lfucache/lfu_cache.py
http://www.laurentluce.com/posts/least-frequently-used-cache-eviction-scheme-with-complexity-o1-in-python/

X. Using TreeSet
https://discuss.leetcode.com/topic/69033/java-solution-using-priorityqueue-with-detailed-explanation/2
We need to implement get() and set() in average O(log n) time, or we will get TLE.
Obviously, we need a hashmap to remember the key-value pairs.
What we need to do is to remember (frequency, recentness) for each key, and sort the keys by these to find the smallest one.
So we need an ordered collection such as TreeSet or PriorityQueue.
Now the only question is: how do we update?
It is difficult to update (frequency, recentness) inside the collection, as we don't know the element's position.
(Maybe binary search or a hashmap could do this; I haven't tried it.)
The trick is to override the equals() and hashCode() functions, so that remove() can find the entry.
    class Cache implements Comparable<Cache> {
        int key, f, r;
        public Cache(int k, int f, int r) {key=k;this.f=f;this.r=r;}
        public boolean equals(Object object) {return key==((Cache) object).key;}
        public int hashCode() {return key;}
        public int compareTo(Cache o) {return key==o.key?0:f==o.f?r-o.r:f-o.f;}
    }

    int capacity,id;
    HashMap<Integer, Integer> hashMap;
    HashMap<Integer, Cache> caches;
    TreeSet<Cache> treeSet;

    public LFUCache(int capacity) {
        this.capacity=capacity;
        id=0;
        hashMap=new HashMap<>();
        caches=new HashMap<>();
        treeSet=new TreeSet<>();
    }

    public int get(int key) {
        id++;
        if (hashMap.containsKey(key)) {
            update(key);
            return hashMap.get(key);
        }
        return -1;
    }

    public void set(int key, int value) {
        if (capacity==0) return;
        id++;
        if (hashMap.containsKey(key)) {
            update(key);
            hashMap.put(key, value);
            return;
        }
        if (hashMap.size()==capacity) {
            Cache first=treeSet.pollFirst();
            hashMap.remove(first.key);
            caches.remove(first.key);
        }
        hashMap.put(key, value);
        Cache cache=new Cache(key, 1, id);
        caches.put(key, cache);
        treeSet.add(cache);
    }

    private void update(int key) {
        int f=caches.get(key).f;
        treeSet.remove(caches.get(key));
        Cache cache=new Cache(key, f+1, id);
        caches.put(key, cache);
        treeSet.add(cache);
    }
http://www.deepakvadgama.com/blog/lfu-cache-in-O(1)/
We need 2 things to start with
  1. Map to store key-value pairs
  2. Map to store counts/frequency of access
How do we implement the evict method? When the size of the map reaches max capacity, we need to find the item with the lowest frequency count. There are 2 problems:
  1. We have to iterate through all values of the frequencies map, find the lowest count and remove the corresponding key from both maps. This takes O(n) time.
  2. Also, what if there are multiple keys with the same frequency count? How do we find the least recently used one? That's not possible, because a HashMap does not store the order of insertion.
To solve both of the above problems we need to add one more data structure: a sorted map with frequency as map-keys and 'list of item-keys with the same frequency' as map-values.
  1. We can add a new item to the end of the list with frequency 1.
  2. We can find the list with the lowest frequency in O(1), since the map is sorted by frequencies.
  3. We can delete the first item of that list (the list of lowest frequency) since that item is the least recently used. Also O(1).
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class LFUCache {

    private Map<Integer, Integer> values = new HashMap<>();
    private Map<Integer, Integer> counts = new HashMap<>();
    private TreeMap<Integer, List<Integer>> frequencies = new TreeMap<>();
    private final int MAX_CAPACITY;

    public LFUCache(int capacity) {
        MAX_CAPACITY = capacity;
    }

    public int get(int key) {
        if (!values.containsKey(key)) {
            return -1;
        }

        // Move item from one frequency list to next. Not O(1) due to list iteration.
        int frequency = counts.get(key);
        frequencies.get(frequency).remove(new Integer(key)); //O(n)
        if (frequencies.get(frequency).size() == 0) {
            frequencies.remove(frequency);  // remove from map if list is empty
        }
        frequencies.computeIfAbsent(frequency + 1, k -> new LinkedList<>()).add(key);

        counts.put(key, frequency + 1);
        return values.get(key);
    }

    public void set(int key, int value) {
        if (!values.containsKey(key)) {

            if (values.size() == MAX_CAPACITY) {
                // first item from 'list of smallest frequency'
                int lowestCount = frequencies.firstKey();
                int keyToDelete = frequencies.get(lowestCount).remove(0);
                if (frequencies.get(lowestCount).size() == 0) {
                    frequencies.remove(lowestCount); // remove from map if list is empty
                }
                values.remove(keyToDelete);
                counts.remove(keyToDelete);
            }

            values.put(key, value);
            counts.put(key, 1);
            frequencies.computeIfAbsent(1, k -> new LinkedList<>()).add(key); // starting frequency = 1
        }
    }
}
While solving our delete problem, we accidentally increased our access operation time to O(n). How? Note that all item-keys sharing the same frequency are in a list. Now if one of these items is accessed, how do we move it to the list of the next frequency? We would have to iterate through the list first to find the item, which in the worst case takes O(n) operations.
To solve the problem, we need to jump directly to that item in the list without iteration. If we can do that, it is easy to delete the item and add it to the end of the next frequency list.
Unfortunately, this is not possible using the built-in data structures. We need to create a new one (as described in the paper).
We need to store each item's position.
  1. We will create a simple class which stores the item's key, value, and its position in the list.
  2. We will convert the linked list to a doubly linked list of these nodes.
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class LFUCache {

    private Map<Integer, Node> values = new HashMap<>();
    private Map<Integer, Integer> counts = new HashMap<>();
    private TreeMap<Integer, DoubleLinkedList> frequencies = new TreeMap<>();
    private final int MAX_CAPACITY;

    public LFUCache(int capacity) {
        MAX_CAPACITY = capacity;
    }

    public int get(int key) {
        if (!values.containsKey(key)) {
            return -1;
        }

        Node node = values.get(key);

        // Move item from one frequency list to next. O(1) this time.
        int frequency = counts.get(key);
        frequencies.get(frequency).remove(node);
        removeIfListEmpty(frequency);
        frequencies.computeIfAbsent(frequency + 1, k -> new DoubleLinkedList()).add(node);

        counts.put(key, frequency + 1);
        return values.get(key).value;
    }

    public void set(int key, int value) {
        if (!values.containsKey(key)) {

            Node node = new Node(key, value);

            if (values.size() == MAX_CAPACITY) {

                int lowestCount = frequencies.firstKey();   // smallest frequency
                Node nodeTodelete = frequencies.get(lowestCount).head(); // first item (LRU)
                frequencies.get(lowestCount).remove(nodeTodelete);

                int keyToDelete = nodeTodelete.key();
                removeIfListEmpty(lowestCount);
                values.remove(keyToDelete);
                counts.remove(keyToDelete);
            }

            values.put(key, node);
            counts.put(key, 1);
            frequencies.computeIfAbsent(1, k -> new DoubleLinkedList()).add(node); // starting frequency = 1
        }
    }

    private void removeIfListEmpty(int frequency) {
        if (frequencies.get(frequency).size() == 0) {
            frequencies.remove(frequency);  // remove from map if list is empty
        }
    }

    private class Node {
        private int key;
        private int value;
        private Node next;
        private Node prev;

        public Node(int key, int value) {
            this.key = key;
            this.value = value;
        }

        public int key() {
            return key;
        }

        public int value() {
            return value;
        }
    }

    private class DoubleLinkedList {
        private int n;
        private Node head;
        private Node tail;

        public void add(Node node) {
            // Clear any stale links left over from the list the node was previously in,
            // so a reused node cannot corrupt its old neighbors.
            node.prev = null;
            node.next = null;
            if (head == null) {
                head = node;
            } else {
                tail.next = node;
                node.prev = tail;
            }
            tail = node;
            n++;
        }

        public void remove(Node node) {

            if (node.next == null) tail = node.prev;
            else node.next.prev = node.prev;

            if (head.key == node.key) head = node.next;
            else node.prev.next = node.next;

            n--;
        }

        public Node head() {
            return head;
        }

        public int size() {
            return n;
        }
    }
}

Step 4: Remove the counts HashMap

Note that we need the intermediate map called counts to jump to the appropriate list. We can go one step further (code not written) to remove this extra data structure.
  1. Convert the frequencies HashMap keys into a doubly linked list of frequency nodes.
  2. Add a reference to each item which points to its corresponding frequency node.
  3. Then, instead of the counts hashmap, we can get the frequency node directly from the item itself.
This is precisely the algorithm implemented in this paper
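The sketch below is my own hedged illustration of that last step, not code from the post or the paper: each cached Item holds a direct reference to the FreqNode that currently owns it, so the counts map disappears while get/set stay O(1). The class and field names (LFUCacheSketch, Item, FreqNode) are illustrative.

import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

public class LFUCacheSketch {
    private static class FreqNode {
        int freq;
        FreqNode prev, next;
        LinkedHashSet<Integer> keys = new LinkedHashSet<>(); // LRU order within one frequency
        FreqNode(int freq) { this.freq = freq; }
    }
    private static class Item {
        int value;
        FreqNode node;   // the frequency node that currently owns this key
        Item(int value, FreqNode node) { this.value = value; this.node = node; }
    }

    private final int capacity;
    private final Map<Integer, Item> items = new HashMap<>();
    private FreqNode head;   // FreqNode with the smallest frequency

    public LFUCacheSketch(int capacity) { this.capacity = capacity; }

    public int get(int key) {
        Item item = items.get(key);
        if (item == null) return -1;
        touch(key, item);
        return item.value;
    }

    public void set(int key, int value) {
        if (capacity == 0) return;
        Item item = items.get(key);
        if (item != null) {                  // existing key: update value, bump frequency
            item.value = value;
            touch(key, item);
            return;
        }
        if (items.size() == capacity) {      // evict the LRU key of the lowest frequency
            int evict = head.keys.iterator().next();
            head.keys.remove(evict);
            items.remove(evict);
            if (head.keys.isEmpty()) unlink(head);
        }
        if (head == null || head.freq != 1) {   // ensure a freq-1 node sits at the front
            FreqNode node = new FreqNode(1);
            node.next = head;
            if (head != null) head.prev = node;
            head = node;
        }
        head.keys.add(key);
        items.put(key, new Item(value, head));
    }

    // Move key from its current FreqNode to the node with frequency + 1.
    private void touch(int key, Item item) {
        FreqNode cur = item.node;
        FreqNode next = cur.next;
        if (next == null || next.freq != cur.freq + 1) {
            FreqNode created = new FreqNode(cur.freq + 1);
            created.prev = cur;
            created.next = cur.next;
            if (cur.next != null) cur.next.prev = created;
            cur.next = created;
            next = created;
        }
        cur.keys.remove(key);
        next.keys.add(key);
        item.node = next;
        if (cur.keys.isEmpty()) unlink(cur);
    }

    private void unlink(FreqNode node) {
        if (node.prev != null) node.prev.next = node.next;
        if (node.next != null) node.next.prev = node.prev;
        if (head == node) head = node.next;
    }
}

Eviction reads head directly: the first key in the head node's LinkedHashSet is the least recently used key among those with the lowest frequency.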



