Author | Message | Time |
---|---|---|
FrostWraith | I was just curious, across programming languages in general: if a data source is to be read in chunks rather than all at once, what chunk size should be used for a given range of data sizes? For example: 1 byte - 1024 bytes: fgets(res, 255); 1025 bytes - 1048576 bytes: fgets(res, 2550); etc. I know that the larger the chunks you read at a time, the faster it is, but what seems appropriate if you don't want to bog down your processor with lots of small requests? | October 2, 2007, 3:01 AM |
Yegg | I'd imagine there wouldn't be much of a difference in speed between reading a file 1024 bytes per read and 2048 bytes per read, and so on, unless the file were incredibly large, in which case you may notice lag regardless of how much you read per request. I don't think it's something to worry about; I usually go with 1024 bytes. | October 2, 2007, 3:15 AM |
St0rm.iD | I believe there is an optimal buffer size which varies per OS (and per OS settings?), but this is not my area of expertise. If I were in this situation I'd do a quick Google, and if it didn't find anything, write a quick test application that benchmarks various chunk sizes against each other (a sketch of such a benchmark follows after the thread). | October 2, 2007, 3:20 AM |
Yegg | [quote author=Banana fanna fo fanna link=topic=17077.msg173446#msg173446 date=1191295201] I believe there is an optimal buffer size which varies per OS (and per OS settings?), but this is not my area of expertise. If I were in this situation I'd do a quick Google, and if it didn't find anything, write a quick test application that benchmarks various chunk sizes against each other. [/quote] I was going to suggest a benchmark as well. However, he'd need to create a very large test file, and the difference in speed probably wouldn't be worth it even for a file 1GB in size. I could be wrong, but I think that's pretty accurate. | October 2, 2007, 3:22 AM |
squiggly | 4K is the standard cluster size on NTFS; best to go with that. | October 23, 2007, 1:12 AM |
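
Following St0rm.iD's suggestion, here is a minimal sketch of such a benchmark in C. The file path, the list of chunk sizes, and the use of clock() for timing are placeholder choices for illustration, not anything prescribed in the thread:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Read the whole file with a buffer of the given size and return elapsed seconds. */
static double time_read(const char *path, size_t chunk)
{
    FILE *fp = fopen(path, "rb");
    if (!fp) { perror("fopen"); exit(EXIT_FAILURE); }

    char *buf = malloc(chunk);
    if (!buf) { perror("malloc"); fclose(fp); exit(EXIT_FAILURE); }

    clock_t start = clock();
    while (fread(buf, 1, chunk, fp) == chunk)
        ;                               /* discard the data; only the read time matters */
    clock_t end = clock();

    free(buf);
    fclose(fp);
    return (double)(end - start) / CLOCKS_PER_SEC;
}

int main(int argc, char **argv)
{
    /* Test file comes from the command line; chunk sizes are arbitrary test points. */
    const char *path = (argc > 1) ? argv[1] : "testfile.bin";
    size_t sizes[] = { 256, 1024, 4096, 65536, 1048576 };
    size_t i;

    for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
        printf("%8lu-byte chunks: %.3f s\n",
               (unsigned long)sizes[i], time_read(path, sizes[i]));
    return 0;
}
```

Note that clock() measures CPU time rather than wall-clock time, and repeated runs will mostly hit the OS file cache, so a test like this tends to show per-call overhead more than raw disk throughput; it is still usually enough to see where larger chunks stop paying off.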