AOM Hashing: CPU usage vs. disk access?

misc client related stuff

Moderators: AniDB, AniDB API

Locked
Elberet
Posts: 778
Joined: Sat Jul 19, 2003 8:14 pm

AOM Hashing: CPU usage vs. disk access?

Post by Elberet »

While letting AOM hash one of my anime storage HDDs, I noticed that the program consumes all my CPU yet only achieves a rather low HDD transfer rate... At the same time, ed2k_hash reads the same data (and yes, I also tried hashing with ed2k_hash first and AOM second, so caching did not have an influence) at almost twice the speed - which seems to be the hardware limit - and "only" pushes the CPU usage to 70% on my Athlon XP 2.2 GHz with 1 GB RAM.

Now I wonder if that's something I should be worried about, or if it's just AOM being alpha-stage software... ;)
exp
Site Admin
Posts: 2438
Joined: Tue Oct 01, 2002 9:42 pm
Location: Nowhere

Post by exp »

well,

comparing AoM and ed2k_hash isn't really fair, as AoM also computes the CRC, MD5 and SHA1 hashes, not only the MD4 hash.
so you'd have to look for another tool which creates all four of those hashes in one go to compare AoM to.

BYe!
EXP
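exp's point - computing all the hashes in a single pass over the data, so the file is only read from disk once - can be sketched roughly like this (a minimal illustration, not AoM's actual code; MD4/ed2k is left out because hashlib's MD4 support varies by OpenSSL build, and CRC32 comes from zlib):

```python
import hashlib
import zlib

def multi_hash(path, bufsize=32 * 1024):
    """Read the file once, feeding every chunk to all digests."""
    md5 = hashlib.md5()
    sha1 = hashlib.sha1()
    crc = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                break
            # Each chunk updates every digest before the next disk read.
            md5.update(chunk)
            sha1.update(chunk)
            crc = zlib.crc32(chunk, crc)
    return md5.hexdigest(), sha1.hexdigest(), format(crc & 0xFFFFFFFF, "08x")
```

The point of the single loop is that the disk transfer happens once no matter how many digests are updated; running separate tools for each hash would read the file from disk (or cache) once per hash.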
Elberet
Posts: 778
Joined: Sat Jul 19, 2003 8:14 pm

Post by Elberet »

Such as fsum? Same result...
Calling it with "fsum -crc32 -sha1 -md4 -ed2k *" in a directory produces these four hashes for all files in the directory. It yields the same maximum transfer rates as ed2k_hash and only drops between files, which is IMO a problem of inefficient cache usage.
Skywalka
Posts: 889
Joined: Tue Sep 16, 2003 7:57 pm

Post by Skywalka »

fsum calculates the hashes one after the other and not at the same time. The options just tell it to DO the hashes, not to do them all at once.

You will notice this when you order it to write the results into a file, or if you run a hash that takes a long time to calculate. If you have it display the results in the cmd window, it will take significantly longer, because it first calculates all the hashes and only then puts them out on screen.

So in the end, in my humble opinion, AoM is the first utility that actually calculates four hashes at the same time without using up four times the CPU time. If I run fsum, that program uses 100% CPU as well, and I can still work normally because I have a P4 with Hyper-Threading, which presents itself to the OS like two separate CPUs.

So at the moment I don't think the current behaviour is in any way strange. AoM does more and is neither dramatically slower nor stalling the computer. I have it running on my HDD machine with a VNC server; I let it hash files via AoM in the background, and I can still work through the VNC software and have it scan for viruses at the same time. And that machine is an Athlon 1200.

So I really don't see the problem. AoM is fast and does not stall the computer. It works fine.
Elberet
Posts: 778
Joined: Sat Jul 19, 2003 8:14 pm

Post by Elberet »

Skywalka wrote:fsum calculates the hashes one after the other and not at the same time. the options just tell it to DO the hashes, not to do them at once.
Then you have a highly outdated version of fsum... Mine calculates all requested hashes in one file read.

Look, I'm not saying that AOM is broken. I've compared its hashing efficiency to fsum's and made sure that both were working under the same terms and conditions, and at that, fsum slightly outperformed AOM on my Windows and my hardware. That's all. I posted it here because I'm not sure if it's worth a bug report or not. :roll:
exp
Site Admin
Posts: 2438
Joined: Tue Oct 01, 2002 9:42 pm
Location: Nowhere

Post by exp »

Elberet wrote:Look, I'm not saying that AOM is broken. I've compared its hashing efficiency to fsum's and made sure that both were working under the same terms and conditions, and at that, fsum slightly outperformed AOM on my Windows and my hardware. That's all. I posted it here because I'm not sure if it's worth a bug report or not. :roll:
now that is a strange way to put it.
I seriously doubt that a slight difference in performance would ever warrant a bug report; a feature request maybe, but a bug report? x_X

BYe!
EXP
Elberet
Posts: 778
Joined: Sat Jul 19, 2003 8:14 pm

Post by Elberet »

Well, I can't tell what's going wrong, of course, but if, e.g., AoM uses too small a memory buffer on my system, that'd be a bug, no?
Skywalka
Posts: 889
Joined: Tue Sep 16, 2003 7:57 pm

Post by Skywalka »

I guess I understand what Elberet is trying to say, but I distinctly remember PetriW saying that AoM uses about 80 MB RAM; on my system it's currently at 90 MB.

CPU usage varies but I never actually notice any hashing in the background.
PetriW
AniDB Staff
Posts: 1522
Joined: Sat May 24, 2003 2:34 pm

Post by PetriW »

Hmm no Skywalka, that's not what he means actually.

Current buffer is 32k, I'll ask BennieB to fiddle with it but it's not THAT slow. ;)
PetriW
AniDB Staff
Posts: 1522
Joined: Sat May 24, 2003 2:34 pm

Post by PetriW »

Well, upped the buffer to 512 KB and got a ~8% speed increase. Tried upping it further and found a Delphi bug. ;)
Will look into that.

Oh yeah, and if you hint to Windows that you'll read the file sequentially from start to end, it'll lower IO throughput by about... 60%... wtf... :(
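The buffer-size effect PetriW measured can be illustrated with a small timing sketch (a toy illustration, not a rigorous benchmark - on a warm OS cache the gap will be far smaller than on raw disk, and the file name in the comment is a placeholder):

```python
import time

def timed_read(path, bufsize):
    """Read the whole file in bufsize chunks; return (seconds, bytes read)."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                break
            total += len(chunk)
    return time.perf_counter() - start, total

# e.g. compare the old 32 KB buffer against the new 512 KB one:
# t_small, n = timed_read("some_episode.avi", 32 * 1024)
# t_large, n = timed_read("some_episode.avi", 512 * 1024)
```

The larger buffer means fewer read calls (and fewer round trips through the OS) for the same amount of data, which is where the ~8% comes from.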
Skywalka
Posts: 889
Joined: Tue Sep 16, 2003 7:57 pm

Post by Skywalka »

Hehe, ok then - because I got it all wrong, I have to say I am sorry for not being able to understand what you meant, Elberet :-)

Frankly I still don't really get it :)

What I understand from what PetriW wrote is that if you up the buffer it will hash faster, and if you tell Windows you're reading the file sequentially it will lower the speed of the disk access (IO), which makes it slower - so I currently guess that upping the buffer would have a more significant effect if the disk access is changed too. Or something like that. :?:
Elberet
Posts: 778
Joined: Sat Jul 19, 2003 8:14 pm

Post by Elberet »

Well, in order to understand what's going on, you need to realize what's happening while AOM is hashing a file. Basically:

- AOM tells Windows to read some data.
- Windows asks NTFS what block the data requested by AOM is in.
- Windows asks the harddisk to read that block. (t1)
- Windows copies the relevant data from that block into a buffer provided by AOM.
- AOM waits for Windows to finish reading and then pipes the data read through the four separate hashing algorithms. (t2)
- AOM then repeats that process until the file has been read.

For any given implementation of a hashing algorithm, these tasks will be the same - but how efficient the algorithm is depends on how you order them. For example, in the sketch above, the time t2 depends solely on the CPU and scales linearly (meaning: if there's twice as much data to be processed, it takes twice as long). The time t1, however, depends primarily on the harddisk. Due to notable overhead (especially seek times, but also the time required to look up block positions in NTFS's master file table), it does not scale that easily: assume that it takes 1 unit of time to read as much data as fits into one buffer. The time required to read 0.1 buffers would be greater than 0.1 time units, and the time to read 2 buffers would be smaller than 2 time units. In other words, if you can manage to read the file in one go, without interrupting to process the data in between, you'll save time - at the same time, however, you're forced to read the file in small chunks, since you can't assume that the entire file fits into memory...

Thus, the reason why a larger buffer increases performance is that there's less "dead time" spent waiting for harddisk seeks and Windows' management.

One implementation suggestion I've seen somewhere on Google recently is multithreaded:

Thread 1 reads the data into N separate buffers that are allocated at runtime to be twice the size of the volume's block size, but at least 32 KB. Thread 2 empties these buffers, piping their content through the hashing algorithms. With clever interlocking (I figure you need at most two semaphores, one for each thread to represent the status of the buffers, and one ordinary lock to stop thread 2 until thread 1 has pre-filled all buffers the first time), the slower of the two threads will set the speed for the other automagically.
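The two-thread scheme above can be sketched like this (a simplified illustration of the idea, not AoM's implementation: Python's bounded queue.Queue stands in for the hand-rolled semaphores and handles the interlocking, and only MD5/SHA1 are shown for brevity):

```python
import hashlib
import queue
import threading

def threaded_hash(path, bufsize=64 * 1024, nbuffers=4):
    """Reader thread fills a bounded queue of chunks; this thread hashes them."""
    chunks = queue.Queue(maxsize=nbuffers)
    md5, sha1 = hashlib.md5(), hashlib.sha1()

    def reader():
        with open(path, "rb") as f:
            while True:
                chunk = f.read(bufsize)
                chunks.put(chunk)   # blocks when all buffers are full
                if not chunk:       # empty chunk signals end of file
                    break

    t = threading.Thread(target=reader)
    t.start()
    while True:
        chunk = chunks.get()        # blocks until the reader delivers a chunk
        if not chunk:
            break
        md5.update(chunk)
        sha1.update(chunk)
    t.join()
    return md5.hexdigest(), sha1.hexdigest()
```

Because the queue is bounded, the reader stalls when hashing falls behind and the hasher stalls when the disk falls behind - so, as described above, the slower side sets the pace for the other automatically, and disk reads can overlap with CPU work.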
PetriW wrote:Oh yeah, and if you hint to windows that you'll read the file sequentially from the start to the end it'll lower io throughput by about... 60%.... wtf... :(
That is indeed weird, but what do you mean by "hinting Windows"? I'm not aware of any special function for sequential reads in the C/C++ Win32 API - maybe it's a problem with Delphi's libraries instead?
PetriW
AniDB Staff
Posts: 1522
Joined: Sat May 24, 2003 2:34 pm

Post by PetriW »

Check the documentation for the CreateFile Windows API call and yeah, I know the above. 8)