I am trying to speed up reads of database files created by a non-MATLAB package.
Each record in these files is formatted as a date stored as yymmdd text, followed by 10 integers apparently stored in binary. Since these files are numerous, big, and frequently updated by the other package's system, storing them reformatted as MATLAB files is not practical.
The problem is that my current approach is painfully slow. Right now I have a while loop that reads one record per iteration.
Mymatrix has been pre-dimensioned before entering this loop, but it still takes almost five seconds to read 2000 records, and I have over 400 such files. The package that originally created the files (a custom C++ database app; source code not available) loads the same file in an instant. I'm using MATLAB 4.2 on a Pentium-166 running Win95, with plenty of RAM and disk.
I have a hunch that I can speed this up by reading a whole record at a time using fscanf plus a suitable format argument, e.g. '%6d%10s' or something. Unfortunately, the MATLAB docs are pretty thin on these format codes, my library of C texts is all C++, which has gone over to fstream, and experimentation has yielded nothing but garbage.
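For what it's worth, here is the kind of per-record read I have been attempting (a sketch only; I'm assuming the 10 integers are 4-byte values, and fid is a handle from an earlier fopen). Since the integers are stored in binary rather than as text, fread rather than fscanf may be the right tool for that part:

    % Per-record read: 6 ASCII date characters, then 10 binary integers.
    % Assumes 4-byte integers ('long'); adjust the precision string if
    % they turn out to be 2-byte ('short') values.
    datetxt = setstr(fread(fid, 6, 'char')');   % yymmdd as a text string
    vals    = fread(fid, 10, 'long');           % 10 binary integers
    Mymatrix(i, :) = [str2num(datetxt) vals'];  % one row per record

This works, but one fread call per field is presumably where all the time goes.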
Another idea is a byte-wise read of the whole file into one long temporary vector using some fread or fscanf command with an "eof" control structure, which would be really fast, then sorting the format out into Mymatrix entirely in RAM using MATLAB matrix operations. But I haven't discovered any faster byte-wise read loop, so I haven't experimented with the tradeoffs here either.
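To make the idea concrete, this is roughly what I have in mind (a sketch under assumptions: the file name is a placeholder, each record is 6 date characters followed by 10 little-endian 4-byte integers, and I am ignoring sign handling for negative values):

    % Slurp the whole file as raw bytes in a single fread.
    fid = fopen('dbfile.dat', 'r');
    raw = fread(fid, inf, 'uchar');          % one long column vector of bytes
    fclose(fid);

    reclen = 6 + 10*4;                       % assumed record length in bytes
    nrec   = length(raw) / reclen;
    raw    = reshape(raw, reclen, nrec);     % one column per record

    dates = setstr(raw(1:6, :)');            % yymmdd text, one row per record
    b     = reshape(raw(7:reclen, :), 4, 10*nrec);
    ints  = [1 256 65536 16777216] * b;      % little-endian byte weights
    ints  = reshape(ints, 10, nrec)';        % 10 integers per record

If the byte order or integer width is different from what I've guessed, the weights and reclen would need adjusting, but the reshape approach should carry over.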
I'd be grateful if someone can offer either fscanf guidance or some other ideas. Or is my current approach about as fast as I can expect?