Using a rolling hash to break up binary files - CodeProject
If you've ever had to figure out what changed in a large binary chunk of data, you may have run into a number of the same obstacles I ran into.
I’m presently working on a system that has to back up data on Windows drives. We store that data in a central repository, and we run the backups frequently. Backups are often huge. The first one can be painful, but after that, if we keep some evidence around, subsequent backups are pretty quick.
Some of the evidence comes from the file system. For example, NTFS keeps a rolling journal of files that have been changed. If the journal hasn’t rolled off the changes we captured in the last backup, then it has details of all the files that have been changed.
A Possible Solution
A rolling hash is one where you pick a window size…let’s say 64 bytes…and then hash every 64-byte-long segment in the file. I don't mean a hash for bytes 0-63, 64-127, 128-191, etc…but for 0-63, 1-64, 2-65, etc. The hashing algorithm itself was originally described by Richard M. Karp and Michael O. Rabin.
Assuming we do get random-looking hash values, we can take a regular subset of those hashes and arbitrarily declare that these hashes “qualify” as sentinel hashes.
The smaller the subset of qualifying hashes, the farther apart the qualifying positions will be, and hence the larger the average chunk you’ll get. The way I figure out if a hash qualifies is:
var matched = ( hash | mask ) == hash;
…where hash is the rolling hash and mask is a bit mask. The more 1-bits in the mask, the more hashes we’ll exclude and the bigger our chunks get. You can use any way you want to get 1-bits into your mask…but an easy, controllable way is to declare it like this:

const long int mask = ( 1 << 16 ) - 1;
…where 16 is the number of 1-bits I want. Each 1-bit doubles the size of the average chunk.
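As a quick sanity check, here’s a minimal, self-contained Java sketch…the class name and trial count are illustration choices, not from the article…that counts how often random values pass the qualifying test. With 16 one-bits you’d expect roughly one hash in 2^16 to qualify, i.e. chunks averaging about 64 KB:

import java.util.Random;

public class MaskDemo {
    public static void main(String[] args) {
        final long mask = (1L << 16) - 1; // 16 one-bits, as above
        Random rnd = new Random(42);

        long qualified = 0;
        final long trials = 10_000_000L;
        for (long i = 0; i < trials; i++) {
            long hash = rnd.nextLong();   // stand-in for a rolling hash value
            if ((hash | mask) == hash) {  // the qualifying test from above
                qualified++;
            }
        }
        // Expect roughly trials / 2^16, i.e. about 152 qualifying hashes.
        System.out.println(qualified);
    }
}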
The beauty of this hash is that it’s very efficient to apply to a running window of bytes. Unlike most hashes, you can cheaply add or remove a contributing byte…hence the “rolling” nature of the hash. It uses only integer addition and multiplication, so it’s fairly processor-friendly.
Cooked down, for a window of bytes w of length n and a constant seed p, the hash is computed as:

hash = p^(n-1)*w[n-1] + p^(n-2)*w[n-2] + … + p^0*w[0]
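The rolling step falls out of that formula: multiply the whole hash by p to shift every byte up one power, subtract the outgoing byte’s term, and add the incoming byte at p^0. Here’s a minimal, self-contained Java sketch of that identity…the seed p = 101, the 3-byte window, and the sample values are arbitrary illustration choices, with w[0] taken as the newest byte:

public class RollingHashDemo {
    public static void main(String[] args) {
        final long p = 101;               // arbitrary seed, illustration only
        // Pretend byte stream: 7, 42, 99, 13, with a 3-byte window.

        // Direct hash of the first window [7, 42, 99];
        // the oldest byte (7) carries the highest power of p.
        long hash = 7 * p * p + 42 * p + 99;

        // Roll one byte forward: shift everything up a power of p,
        // cancel the outgoing byte (7), add the incoming byte (13).
        long rolled = hash * p - 7 * p * p * p + 13;

        // Direct hash of the second window [42, 99, 13] for comparison.
        long direct = 42 * p * p + 99 * p + 13;

        System.out.println(rolled == direct); // prints true
    }
}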
Further reading:

http://blog.teamleadnet.com/2012/10/rabin-karp-rolling-hash-dynamic-sized.html
http://docslide.us/technology/finding-similar-files-in-large-document-repositories.html
The Java example below walks a file and flags every position where the window’s hash qualifies as a chunk boundary (java.io imports omitted):

public void displayChunks() {
    String fileLocation = "file.bin";
    int mask = (1 << 13) - 1; // 13 one-bits: average chunk of about 2^13 bytes

    File f = new File(fileLocation);
    try {
        FileInputStream fs = new FileInputStream(f);
        // BufferedInputStream is faster to read byte-by-byte from
        BufferedInputStream bis = new BufferedInputStream(fs);
        this.is = bis;

        long length = bis.available();
        long curr = length;

        // get the initial 1k hash window
        int hash = inithash(1024);
        curr -= bis.available();

        while (curr < length) {
            // Note: this tests for 13 zero-bits where the earlier
            // (hash | mask) == hash test required one-bits; either way,
            // roughly one window in 2^13 qualifies.
            if ((hash & mask) == 0) {
                // window found - hash it,
                // compare it to the other file's chunk list
            }
            // next window's hash
            hash = nexthash(hash);
            curr++;
        }
        fs.close();
    } catch (Exception e) {
        // swallowing exceptions hides I/O failures; handle them in real code
    }
}
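The snippet above calls inithash and nexthash without defining them. Here’s a minimal sketch of what they could look like…the class name, field layout, and seed P are my assumptions, not the article’s code…assuming the stream lives in the is field that displayChunks assigns, with a circular buffer of the window’s bytes so the outgoing byte can be subtracted back out (end-of-stream and error handling omitted):

import java.io.IOException;
import java.io.InputStream;

class Chunker {
    // Arbitrary odd seed; odd so its powers stay non-zero under
    // Java's wrapping (mod 2^32) int arithmetic.
    private static final int P = 101;

    private InputStream is;  // assigned in displayChunks() above
    private byte[] window;   // circular buffer of the current window's bytes
    private int oldest;      // index of the oldest byte in the buffer
    private int pPowN;       // P^n (wrapped), used to cancel the outgoing byte

    // Read the first n bytes and compute the initial window hash.
    int inithash(int n) throws IOException {
        window = new byte[n];
        oldest = 0;
        pPowN = 1;
        int hash = 0;
        for (int i = 0; i < n; i++) {
            int b = is.read();        // assumes at least n bytes are available
            window[i] = (byte) b;
            hash = hash * P + b;      // newest byte contributes at P^0
            pPowN *= P;               // after the loop this equals P^n
        }
        return hash;
    }

    // Slide the window one byte forward and return the updated hash.
    int nexthash(int hash) throws IOException {
        int incoming = is.read();     // caller stops before end of stream
        int outgoing = window[oldest] & 0xFF;
        // Shift every byte up one power, cancel the oldest, add the newest.
        hash = hash * P - pPowN * outgoing + incoming;
        window[oldest] = (byte) incoming;
        oldest = (oldest + 1) % window.length;
        return hash;
    }
}

Because int arithmetic in Java wraps modulo 2^32, subtracting pPowN * outgoing exactly cancels the oldest byte’s original contribution to the hash.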
Read full article from Using a rolling hash to break up binary files - CodeProject