Fragmented Files

I've noticed that files downloaded by SmartFTP, especially those downloaded simultaneously with other files, become extremely fragmented (100-300 fragments) quite quickly. I understand that this is a side effect of the way files are written to the drive, but would it not be possible to allocate the required space before downloading each file? The only reason I can conceive of for not doing so would be a performance hit, which shouldn't be of any great significance. It could always be activated or deactivated at the user's discretion.
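
Just to illustrate what I mean, here's a rough sketch of the Windows calls involved, assuming the final file size is already known from the server listing (the function name and error handling are only examples, not SmartFTP's actual code):

    #include <windows.h>

    // Sketch: reserve the final size up front so the filesystem can try to
    // pick one contiguous run instead of growing the file piecemeal.
    bool PreallocateFile(HANDLE hFile, LONGLONG finalSize)
    {
        LARGE_INTEGER size;
        size.QuadPart = finalSize;

        // Move the file pointer to the intended end and fix the end-of-file there.
        if (!SetFilePointerEx(hFile, size, nullptr, FILE_BEGIN))
            return false;
        if (!SetEndOfFile(hFile))
            return false;

        // Rewind so the download can start writing at offset 0.
        LARGE_INTEGER zero = {};
        return SetFilePointerEx(hFile, zero, nullptr, FILE_BEGIN) != FALSE;
    }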

This has been my only complaint during the last 6 months of using the application -- overall, I have been exceptionally pleased with it.

Regards,
Jim W.

I second that. I would also like to see an option to specify a maximum size to preallocate, or set a chunk size for stepwise preallocation.

I frequently transfer files between my university and home PCs, and because my ISP seems to limit the speed of one thread (worker) to ~35 KB/s, I usually break everything up into 15 MB RARs and then use 10-30 threads for the transfer. I've also noticed a high level of fragmentation, which slows down integrity checks and further processing in general. The first extra option (a maximum file size to preallocate) makes sense if preallocating large files takes a lot of time; for example, on a slow HDD I wouldn't want to preallocate a download larger than 700 MB. The second option seems more practical to me: files could be preallocated in chunks of 10-100 MB or whatever is set. That way small files would be preallocated in one piece, and for large files no single preallocation step would take long, yet they'd still be much less fragmented than before.
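
To make the chunked idea concrete, here is a rough sketch (the 32 MB chunk size and the names are just examples for illustration, not a proposal for the actual implementation): whenever the write offset gets close to the currently reserved size, the end of file is pushed out by one more chunk.

    #include <windows.h>

    // Example chunk size; in practice this would be the user-configurable value.
    const LONGLONG kChunk = 32LL * 1024 * 1024;

    // Sketch: extend the reserved file size one chunk at a time as the
    // download progresses, instead of reserving the whole file up front.
    bool EnsureReserved(HANDLE hFile, LONGLONG writeOffset, LONGLONG& reserved)
    {
        if (writeOffset + kChunk <= reserved)
            return true;                              // enough headroom already reserved

        LARGE_INTEGER newEnd;
        newEnd.QuadPart = reserved + kChunk;
        if (!SetFilePointerEx(hFile, newEnd, nullptr, FILE_BEGIN) || !SetEndOfFile(hFile))
            return false;

        // Restore the pointer to where the next write belongs.
        LARGE_INTEGER pos;
        pos.QuadPart = writeOffset;
        if (!SetFilePointerEx(hFile, pos, nullptr, FILE_BEGIN))
            return false;

        reserved = newEnd.QuadPart;
        return true;
    }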

As for HDD access, I'll post another request for queueing integrity checks. With the same small 15 MB RARs I download, if 5-10 of them finish at once, SmartFTP tries to verify their integrity all at the same time. This simply kills HDD access through thrashing, and I had to switch off integrity checks because of it. But I'm off topic; I'll post this in another request...
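
For completeness, the idea there is simply to serialize the checks, roughly like this (only a sketch; the helper names are invented for the example):

    #include <mutex>
    #include <string>

    // Hypothetical stand-in for the actual CRC routine.
    static void RunCrcCheck(const std::string& path) { /* read the file and compare CRCs */ }

    // Sketch: serialize integrity checks with a single lock so that when several
    // downloads finish together, only one file is being read from disk at a time.
    static std::mutex g_verifyLock;

    void VerifyWhenDone(const std::string& path)
    {
        std::lock_guard<std::mutex> guard(g_verifyLock);  // wait behind any running check
        RunCrcCheck(path);
    }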

A disk defragmenter can solve such problems in less than 5 minutes. Also, I believe the operating system already pre-allocates larger blocks in such situations (constant appending of new data to a file) to avoid disk fragmentation. I haven't done my own tests, but I believe the problem you describe is minor.

Regards,
-Mat

Well, some filesharing programs have an option to preallocate files; obviously they didn't put it in there just for fun. As for the OS, I don't think it handles this case efficiently, at least that's not what I see. Defragmenting? That somewhat defeats the purpose: if I have to defragment afterwards, I spend roughly twice the HDD access time on something that could have been avoided entirely. Where's the efficiency in that?

Anyway, I can't argue this; what you consider minor is up to you, and you probably see user complaints from a higher perspective than I do. If you have no resources to deal with such problems, that's your choice, and I'm still happy with a free SmartFTP. But if you find the suggestion at least somewhat feasible, please make a note of it for the future rather than discarding it completely.

I recently downloaded approx. 14 GB split into 50 MB RAR files overnight with 10 concurrent threads. The files were so fragmented that I got a maximum sequential throughput of 40 MB/s scanning them with SFV to verify the CRCs. Just as an experiment I defragmented the files by making a complete copy of the directory: this took 15 minutes (!!), and I get over 180 MB/s scanning the copied folder.

The sane thing to do would be to grow the file once after creating it.

The file cannot easily be grown to its maximum size without using a file map (which specifies which blocks of the file are valid).
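
To illustrate what such a file map amounts to (only a sketch for discussion, not the format SmartFTP actually uses): one bit per fixed-size block, recording which regions of the pre-grown file already contain real data.

    #include <cstdint>
    #include <vector>

    // Sketch of a block-validity map for a pre-grown file: one bit per block,
    // so gaps left by parallel or resumed transfers can be told apart from data.
    class FileMap {
    public:
        FileMap(uint64_t fileSize, uint64_t blockSize)
            : blockSize_(blockSize),
              bits_((fileSize + blockSize - 1) / blockSize, false) {}

        void MarkValid(uint64_t offset, uint64_t length) {
            if (length == 0) return;
            for (uint64_t b = offset / blockSize_;
                 b <= (offset + length - 1) / blockSize_ && b < bits_.size(); ++b)
                bits_[b] = true;
        }

        bool IsComplete() const {
            for (bool b : bits_) if (!b) return false;
            return true;
        }

    private:
        uint64_t blockSize_;
        std::vector<bool> bits_;
    };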

Why don't you use software like Diskeeper, which runs in the background defragmenting your files?

Regards,
Mat

I understand. Solving the issue is not as simple as calling SetEndOfFile(), because if the file were pre-allocated then SmartFTP would need some other way to track its write position instead of always writing at the end of the file.
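
In other words, with a pre-allocated file each worker would have to write at an explicit offset it tracks itself, rather than at the current end of the file, along these lines (the function is just an illustration):

    #include <windows.h>

    // Sketch: write a received buffer at an explicit offset in a pre-allocated
    // file, instead of appending at the (already grown) end of file.
    bool WriteAt(HANDLE hFile, LONGLONG offset, const void* data, DWORD size)
    {
        OVERLAPPED ov = {};
        ov.Offset     = static_cast<DWORD>(offset & 0xFFFFFFFF);
        ov.OffsetHigh = static_cast<DWORD>(offset >> 32);

        DWORD written = 0;
        return WriteFile(hFile, data, size, &written, &ov) && written == size;
    }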

Diskeeper's "Frag Shield" works somewhat well to defragment the files as soon as SmartFTP closes them but this feature is only available in the Professional version starting at $49.95. Defragmenting all of the files at once after the download is complete takes much much longer than just using them (if you're just unpacking them) with the dimished io performance.

Please consider this enhancement request for SmartFTP.