multipart transfer default

Thank you for adding this! It's a dream come true.

I only wish it were easier to default to multipart transfers. It's too cumbersome to manually change every file's properties.

Perhaps a per-favorite queue setting that would set the default transfer type (normal/multipart) and the default number of connections for multipart transfers.

The reason it is not easily accessible is that we think it should only be used in exceptional cases, e.g. if you have a single 10 GB file and want to transfer it with 8 workers. For normal transfers with smaller files it does not offer any benefit.

Regards,
Mat

I see your reasoning, but it would be extremely helpful for people who routinely transfer many large files. At the very least we could have the option.

Other than frequent large queues of big files, there's another situation that could warrant this: even on ~100 MB files, multipart transfer makes the most sense in my situation. I have to hash-check every transferred file (XMD5 etc.). Running many workers causes many hash checks to run simultaneously, creating chaos on the server's disks. It's much better to run one file at a time (a queue with 1 worker), with multipart transfer for each.

Yes, ideally a single connection would be fast enough to get transfers done. Unfortunately the cheapo bulk bandwidth my provider gives me is restricted per connection somehow. Multipart is the only way I can transfer large backups etc. quickly.

Hello,

Would it help if you could change the number of parts for multiple files in one step? This is what we planned to implement. The number of parts is inherited: for example, if you set a folder's number of parts to 8, all files in it will have their number of parts set to 8 as well. So, since you are very likely backing up the same folder over and over, just save the transfer queue with this folder and you only have to set the number of parts once.

Regards,
Mat

Just a small update. In the latest version you can change the number of Multi Parts of multiple files/folders in one step.
https://www.smartftp.com/download

Regards
Mat

Nice! This is very helpful, thanks. Any chance of having the ability to set an arbitrary number of workers, or at least some higher numbers to choose from?

There is no plan to increase the maximum number of workers. 16 should be enough in my opinion. What makes you think that you can transfer your files faster with more workers? It is very likely that the server will block you for opening too many connections. That's exactly why the 16-worker limit is in place: to prevent the user from being blocked/banned from the server.

Regards,
Mat

Yes, I am being greedy. I agree that 16 is normally more than enough, even just from a safety standpoint.

I've been using a different product with multipart functionality for a while now to pull files off of my dedicated server (even though I'd prefer SmartFTP in all other respects). Depending on where I'm downloading from and the time of day, each connection can crawl to 60 KB/s. I'd sometimes use 20+ connections just to get 10 Mb/s going. I don't do this often, and not for long when I do, but sometimes I need a large file from it quickly and this is the only way.
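For what it's worth, the "20+" figure follows from the numbers I quoted. A quick back-of-the-envelope check in Python (the 60 KB/s per-connection rate and 10 Mb/s target are just the rough figures from my situation above, not exact measurements):

```python
import math

# Back-of-the-envelope check of the "20+ connections" figure.
# Assumed numbers: ~60 KB/s (kilobytes) per throttled connection,
# ~10 Mb/s (megabits) desired aggregate throughput.
per_conn_kb_s = 60
target_mbit_s = 10

target_kb_s = target_mbit_s * 1000 / 8        # 10 Mb/s = 1250 KB/s
connections = math.ceil(target_kb_s / per_conn_kb_s)

print(connections)  # -> 21, i.e. "20+" connections
```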

Okay. The following number of workers will be possible: 2,4,8,16,32. Please don't tell me you want 20 now as well ;-)

Regards,
Mat

Haha, well... that's quite a gap between 16 and 32. I'm guessing this is easier to code?

Please try the new version and tell me if it's better. You can now set the number of workers from 2 to 32 in increments of 2.

Regards,
Mat

Thanks, this is perfect