Huffman coding suffers from the drawback that the decompressor must know the probabilities of the characters in the compressed file. Not only does transmitting this information add bits to the encoded output, but if it is unavailable in advance, compressing the file requires two passes: one pass to count the frequency of each character and construct the Huffman tree, and a second pass to actually compress the file. Building on Huffman's algorithm, Faller and Gallager, and later Knuth and Vitter, developed adaptive variants that perform the coding as a one-pass procedure.
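To see why the static scheme needs two passes, here is a minimal sketch of it in Java: the first pass reads the whole input to count frequencies and build the tree, and only then can the second pass emit codes. All class and method names are illustrative, not taken from the linked implementation, and this encodes to a bit string rather than packed bytes for clarity.

```java
import java.util.*;

public class TwoPassHuffman {
    // A tree node: leaves carry a symbol, internal nodes only a weight.
    static class Node implements Comparable<Node> {
        final int weight; final Character sym; final Node left, right;
        Node(int w, Character s, Node l, Node r) {
            weight = w; sym = s; left = l; right = r;
        }
        public int compareTo(Node o) { return Integer.compare(weight, o.weight); }
    }

    // Pass 1: count character frequencies, then repeatedly merge the two
    // lightest nodes until one root remains (the classic Huffman construction).
    static Node buildTree(String text) {
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : text.toCharArray()) freq.merge(c, 1, Integer::sum);
        PriorityQueue<Node> pq = new PriorityQueue<>();
        for (Map.Entry<Character, Integer> e : freq.entrySet())
            pq.add(new Node(e.getValue(), e.getKey(), null, null));
        while (pq.size() > 1) {
            Node a = pq.poll(), b = pq.poll();
            pq.add(new Node(a.weight + b.weight, null, a, b));
        }
        return pq.poll();
    }

    // Walk the tree to assign a bit string to each leaf symbol.
    static void codes(Node n, String prefix, Map<Character, String> out) {
        if (n == null) return;
        if (n.sym != null) {           // leaf: record its code
            out.put(n.sym, prefix.isEmpty() ? "0" : prefix);
            return;
        }
        codes(n.left, prefix + "0", out);
        codes(n.right, prefix + "1", out);
    }

    // Pass 2: re-read the input and emit each character's code.
    static String encode(String text) {
        Map<Character, String> table = new HashMap<>();
        codes(buildTree(text), "", table);
        StringBuilder sb = new StringBuilder();
        for (char c : text.toCharArray()) sb.append(table.get(c));
        return sb.toString();
    }

    public static void main(String[] args) {
        String msg = "abracadabra";
        String bits = encode(msg);
        System.out.println(bits.length() + " bits vs " + msg.length() * 8 + " bits of ASCII");
    }
}
```

The adaptive (FGK/Vitter) algorithms remove the first pass by starting with an empty or uniform model and updating the tree after every symbol, with encoder and decoder making the same updates so no frequency table needs to be transmitted.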
Java implementation:
- https://code.google.com/p/adaptive-huffman-coding/source/browse/trunk/AdaptiveHuffmanCoding/src/com/adaptivehuffman/AdaptiveHuffman.java

Further reading:
- http://www.binaryessence.com/dct/en000097.htm
- http://nayuki.eigenstate.org/page/huffman-coding-java

Read full article from Adaptive Huffman Coding