-
Jan 10th, 2015, 06:40 PM
#1
Thread Starter
Frenzied Member
[RESOLVED] Why is reading a File in chunks quicker than in one go?
Hello,
I stumbled upon something that's got me thinking now..
Previously, the program was loading chunks from the files by
opening them as Binary and reading chunks off:
Code:
Get #fFile, xPos + 1, RAMBuffer
This was pretty fast: 90 files took 5 seconds to load.
But the problem is, since it uses the native Open statement, it doesn't seem to handle Unicode paths.
So if a user is running a German, Chinese, etc. version of Windows, the path to the file can contain Unicode characters, hence an error!
So I thought: why don't I instead load the file all at once [via CreateFile] into a string buffer and then use Mid$() to split chunks out into strings? But now it takes almost 3x the time, 16 seconds, to complete :/
I thought doing things in memory should be quicker than reading off the HD in a loop?
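For reference, this is roughly what my CreateFile version looks like (just a sketch; error checks omitted, and sPath/strBuffer are placeholder names - the declares are the usual Win32 ones):
Code:
' Win32 declares (standard, abridged):
Private Declare Function CreateFileW Lib "kernel32" ( _
    ByVal lpFileName As Long, ByVal dwDesiredAccess As Long, _
    ByVal dwShareMode As Long, ByVal lpSecurityAttributes As Long, _
    ByVal dwCreationDisposition As Long, ByVal dwFlagsAndAttributes As Long, _
    ByVal hTemplateFile As Long) As Long
Private Declare Function ReadFile Lib "kernel32" (ByVal hFile As Long, _
    lpBuffer As Any, ByVal nNumberOfBytesToRead As Long, _
    lpNumberOfBytesRead As Long, ByVal lpOverlapped As Long) As Long
Private Declare Function GetFileSize Lib "kernel32" (ByVal hFile As Long, _
    ByVal lpFileSizeHigh As Long) As Long
Private Declare Function CloseHandle Lib "kernel32" (ByVal hObject As Long) As Long
Private Const GENERIC_READ = &H80000000
Private Const FILE_SHARE_READ = 1
Private Const OPEN_EXISTING = 3

Sub LoadWholeFile(ByVal sPath As String)
    Dim hFile As Long, buf() As Byte, bytesRead As Long, strBuffer As String
    ' CreateFileW takes a Unicode path, unlike the native Open statement
    hFile = CreateFileW(StrPtr(sPath), GENERIC_READ, FILE_SHARE_READ, _
                        0, OPEN_EXISTING, 0, 0)
    ReDim buf(0 To GetFileSize(hFile, 0) - 1)
    ReadFile hFile, buf(0), UBound(buf) + 1, bytesRead, 0
    CloseHandle hFile
    strBuffer = StrConv(buf, vbUnicode)   ' bytes -> VB string
    ' ...then Mid$(strBuffer, pos, length) carves the chunks out
End Sub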
_____________________________________________________________________
----If this post has helped you. Please take time to Rate it.
----If you've solved your problem, then please mark it as RESOLVED from Thread Tools.
-
Jan 10th, 2015, 07:29 PM
#2
Re: Why is reading a File in chunks quicker than in one go?
Hmmm... I'd be willing to bet it's the use of Mid$() that's actually the problem, not reading in chunks vs. reading the whole file. String manipulation in general is always slower (your first snippet implies binary access, the second implies string manipulation). And it has more to do with how string manipulation happens: strings are immutable, which means that in order to change a string, a new memory area has to be allocated and the string copied. A Byte-based buffer, on the other hand, doesn't have that limitation.
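A toy sketch of the difference (not your actual code, obviously):
Code:
' String: immutable, so each change allocates a new buffer and copies
Dim s As String
s = String$(1000000, "A")
Mid$(s, 1, 1) = "B"     ' the Mid$ *statement* is one of the few in-place string ops

' Byte array: mutable in place, nothing reallocated
Dim b() As Byte
b = s                   ' one copy into a byte array
b(0) = Asc("C")         ' direct write, no new allocation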
-tg
-
Jan 10th, 2015, 08:09 PM
#3
Thread Starter
Frenzied Member
-
Jan 10th, 2015, 08:27 PM
#4
Re: [RESOLVED] Why is reading a File in chunks quicker than in one go?
That doesn't sound quite right either. Binary comparison should be far faster than text comparison. With binary comparison, either byte equality exists or it doesn't. With text comparison, a byte can be this, or that, or maybe this other thing. I'd guess that looping and/or loading bytes into strings (or vice versa) may be the bottleneck.
-
Jan 10th, 2015, 08:41 PM
#5
Thread Starter
Frenzied Member
Re: [RESOLVED] Why is reading a File in chunks quicker than in one go?
Yes, binary comparison should be faster than text, but what if the compared input is binary?
Well, I literally analysed it line by line. Even skipping a WHOLE chunk so it runs all the way to the end of the loop (checking speeds with GetTickCount), and skipping all the Mid$() calls, literally had no effect on speed - until the InStr() call.
As soon as I changed
Code:
FPos = InStr(1, strGetNames, "RFG", vbBinaryCompare)
to
FPos = InStr(1, strGetNames, "RFG")
everything is super fast.
-
Jan 10th, 2015, 08:46 PM
#6
Re: [RESOLVED] Why is reading a File in chunks quicker than in one go?
vbBinaryCompare is the default, so you did not really change anything.
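To spell it out (assuming the module is under the default Option Compare Binary):
Code:
' These two calls do the same thing:
FPos = InStr(1, strGetNames, "RFG", vbBinaryCompare)  ' explicit binary compare (0)
FPos = InStr(1, strGetNames, "RFG")                   ' omitted -> Option Compare setting, Binary by default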
-
Jan 10th, 2015, 11:52 PM
#7
Re: [RESOLVED] Why is reading a File in chunks quicker than in one go?
For all we know, he managed to get something called "vbBinaryCompare" defined with a non-zero value - probably as a side-effect of not using Option Explicit.
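Something like this would do it (purely hypothetical, for illustration):
Code:
' No Option Explicit in this module...
Public vbBinaryCompare As Long   ' shadows the built-in VBA constant (0)

Sub Whoops()
    vbBinaryCompare = 1          ' now it equals vbTextCompare
    ' so this forces a slow, case-insensitive text comparison:
    FPos = InStr(1, strGetNames, "RFG", vbBinaryCompare)
End Sub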