Non-Linear Queue Transfers; Global / Per Site Default / Per Site Thread and Bandwidth Limits; and Remote Refresh Settings

See the full HTML version of this post here (it shows the tables as they were originally written):

http://www.jaybaldwin.com/x/temp/Smart_ ... ations.htm




Feature Suggestions / Requests:

Non-Linear transfers in Transfer Queue when multiple sites are in the queue.

Right now, as your software stands, the transfer queue operates on a linear basis. That is, when my upload method is in "queue" mode rather than direct, and I drag, say, 5 files of 1.0 GB each to a remote location (let's call it Site 1) in order to transfer them there, they are added to the queue. If I have another site open (Site 2) and I drag, this time, 3 files of 250 MB each to Site 2 in order to transfer them, they are also added to the queue.

Let's say, for the sake of argument, that I have 3 "worker" processes or threads running in my queue. The first 3 files going to Site 1 will start transferring. When one of those completes, file 4 will start, and when another completes, file 5. While this is happening, the 3 files bound for Site 2 really aren't en route to anything; they're simply sitting there untouched, waiting for all the files going to Site 1 to complete. Why is this? Because the queue operates in a linear fashion. What if I need to transfer files to both locations at the same time, as fast as possible? I'd have to open two instances of SmartFTP, and to tell you the truth, that answer isn't good enough. Your software is good; it should support transferring files to multiple sites simultaneously, regardless of queue order.
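To illustrate, here is a minimal sketch (hypothetical names and limits, not SmartFTP code) of a scheduler that walks the queue in order but skips items whose destination site is already saturated, instead of letting them block everything behind them:

```python
# Hypothetical sketch of non-linear queue scheduling.
from collections import defaultdict

GLOBAL_MAX_WORKERS = 3    # total workers, as in the scenario above
PER_SITE_MAX_WORKERS = 2  # hypothetical per-site cap (see the next suggestion)

def pick_next_transfers(queue, active):
    """queue: ordered (site, file) pairs; active: site -> running transfer count."""
    started = []
    total = sum(active.values())
    for site, file in queue:
        if total >= GLOBAL_MAX_WORKERS:
            break                               # global cap reached, stop scanning
        if active[site] >= PER_SITE_MAX_WORKERS:
            continue                            # site saturated: skip it, don't block
        active[site] += 1
        total += 1
        started.append((site, file))
    return started

queue = [("Site 1", f"big{i}.bin") for i in range(5)] + \
        [("Site 2", f"small{i}.bin") for i in range(3)]
print(pick_next_transfers(queue, defaultdict(int)))
# [('Site 1', 'big0.bin'), ('Site 1', 'big1.bin'), ('Site 2', 'small0.bin')]
```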

That leads me to my next suggestion…


The ability to configure max connections & bandwidth limits at the following levels: globally, per site default, and per site individually. Each level should be configurable separately.

Right now, I have the ability in my software queue to say that I want X worker processes, or threads. If the feature above is implemented, it will need a governor, and this feature is it. I'd like the software to give me the ability to configure a maximum number of "worker" processes, or threads, for the program as a whole on a global basis. I would also like to be able to set this number individually per site, with a "default" setting for sites that don't have a specially configured value. Each of these levels should also allow configuring a maximum bandwidth allocation: globally, per site defaults, and per site individually.

For instance, I'd like to be able to specify that, globally, SmartFTP's queue cannot have more than 7 worker processes, or threads, and cannot use more than 1.5 MB/s in upload bandwidth and 10 MB/s in download bandwidth. Then I'd like to be able to configure a per-site default (affecting all sites that do not have customized values) saying that, by default, each site is allowed only 3 worker processes, or threads, and may use only 200 KB/s in upload traffic and 2 MB/s in download traffic. Finally, I'd want to be able to go into the settings for a site of my choice, say "but you're an exception," and configure that site for 5 worker processes, with a bandwidth allocation of 1 MB/s upload and 5 MB/s download.
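A tiny sketch of the lookup order I have in mind, using the numbers from the paragraph above (the dictionary names and "Site X" are my own illustration):

```python
# Sketch of the three-level settings lookup: per-site override -> per-site
# default -> capped by the global maximum. Speeds in KB/s (1 MB = 1024 KB).
GLOBAL       = {"max_workers": 7, "max_up_kbs": 1536, "max_down_kbs": 10240}
SITE_DEFAULT = {"max_workers": 3, "max_up_kbs": 200,  "max_down_kbs": 2048}
SITE_OVERRIDES = {
    "Site X": {"max_workers": 5, "max_up_kbs": 1024, "max_down_kbs": 5120},
}

def effective(site, key):
    """Per-site override if present, else the per-site default, never above global."""
    value = SITE_OVERRIDES.get(site, {}).get(key, SITE_DEFAULT[key])
    return min(value, GLOBAL[key])

print(effective("Site X", "max_workers"))  # 5 (explicit override)
print(effective("Site Y", "max_workers"))  # 3 (falls back to the default)
```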

How would this work in practice? Continuing the scenario from above (5 files of 1.0 GB each to Site 1 and 3 files of 250 MB each to Site 2), I now also want to add 6 files to Site 3, each 100 MB.

If I have the following settings:


Setting Profile     Setting                          Value
Global              Max Worker Processes (Threads)   8
Global              Max Upload                       3.0 MB/s
Global              Max Download                     10.0 MB/s
Per Site Default    Max Worker Processes             3
Per Site Default    Max Upload                       600 KB/s
Per Site Default    Max Download                     2.0 MB/s
Site 2              Max Worker Processes             5
Site 2              Max Upload                       2.0 MB/s
Site 2              Max Download                     5.0 MB/s


I'd like to see the queue transfer the files as follows (in a world where bandwidth is effectively unlimited; we have a gigabit connection):


Position   Destination   File Size          Is Transferring   Current Speed
1          Site 1        1,048,576.00 KB    TRUE              200 KB/s
2          Site 1        1,048,576.00 KB    TRUE              200 KB/s
3          Site 1        1,048,576.00 KB    TRUE              200 KB/s
4          Site 1        1,048,576.00 KB    --                --
5          Site 1        1,048,576.00 KB    --                --
6          Site 2        256,000.00 KB      TRUE              682.67 KB/s
7          Site 2        256,000.00 KB      TRUE              682.67 KB/s
8          Site 2        256,000.00 KB      TRUE              682.67 KB/s
9          Site 3        102,400.00 KB      TRUE              212 KB/s
10         Site 3        102,400.00 KB      TRUE              212 KB/s
11         Site 3        102,400.00 KB      --                --
12         Site 3        102,400.00 KB      --                --
13         Site 3        102,400.00 KB      --                --
14         Site 3        102,400.00 KB      --                --

Why should it work like this? According to the rules, you can have 8 global connections; that is the master umbrella for everything. No individual site, and no per-site default, may instantiate a number of worker processes, or use bandwidth, in a way that would exceed the global maximums. With Site 1 and Site 2 each running 3 transfers, only 2 of the 8 global workers remain for Site 3, which is why just 2 of its 6 files are transferring. Likewise, because all 3 sites draw on the shared bandwidth allocation, a maximum of 3.0 MB/s of upload traffic is available in total. That is why, although the max upload setting for Site 3 is 600 KB/s, it cannot use all of that bandwidth: the global maximum has already been reached by the transfers to Site 1 and Site 2.
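The worker split in the table (3 for Site 1, 3 for Site 2, 2 for Site 3) falls out of walking the sites in queue order under both caps. A sketch, with an allocation loop of my own invention:

```python
# Sketch: how 8 global workers end up split 3/3/2 across the three sites.
GLOBAL_MAX = 8
site_caps = {"Site 1": 3, "Site 2": 5, "Site 3": 3}  # effective per-site limits
pending   = {"Site 1": 5, "Site 2": 3, "Site 3": 6}  # files queued per site

assigned, used = {}, 0
for site in ("Site 1", "Site 2", "Site 3"):          # queue order
    n = min(site_caps[site], pending[site], GLOBAL_MAX - used)
    assigned[site] = n
    used += n
print(assigned)  # {'Site 1': 3, 'Site 2': 3, 'Site 3': 2}
```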

Notice that the max upload for Site 1 was also 600 KB/s (because it wasn't explicitly specified, it followed the per-site default in the settings above). Because of that setting, and the fact that 3 of its files were transferring, the maximum available bandwidth (600 KB/s) was divided by the number of transfers currently going to that site (3), and the result (200 KB/s) became the maximum allowed for each transfer to that site. The same was done for Site 2, except that its explicitly specified max values differed from the other sites, so the outcome was different: 2.0 MB/s divided by the 3 files currently being transferred is about 682.67 KB/s, and that was the maximum per transfer for Site 2. This was possible because the global maximum still exceeded the combined usage of Site 1 and Site 2.

That isn't the case for Site 3. Site 3 was limited to 212 KB/s even though its own settings allow it 600 KB/s on a per-site basis. Why 212 KB/s rather than 300 KB/s each? Because the global max upload allocation had been reached: the 2.0 MB/s for Site 2 and the 600 KB/s for Site 1 leave only 424 KB/s available for Site 3, and dividing that by its 2 current transfers gives 212 KB/s each.
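The bandwidth numbers can be checked the same way, using the post's convention that 1 MB = 1024 KB (a few lines of arithmetic, nothing more):

```python
# Reproducing the per-transfer speeds from the table above (1 MB = 1024 KB).
GLOBAL_UP = 3 * 1024                          # 3.0 MB/s global upload cap, in KB/s

site1_cap = 600                               # per-site default
site2_cap = 2 * 1024                          # explicit Site 2 setting
site1_each = site1_cap / 3                    # 3 active transfers -> 200.0 KB/s
site2_each = site2_cap / 3                    # 3 active transfers -> ~682.67 KB/s

leftover = GLOBAL_UP - site1_cap - site2_cap  # 3072 - 600 - 2048 = 424 KB/s
site3_each = leftover / 2                     # 2 active transfers -> 212.0 KB/s

print(site1_each, round(site2_each, 2), site3_each)  # 200.0 682.67 212.0
```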

Priority, as now, should be determined by queue order, but only after all sites in the queue have been evaluated against the global, per-site-default, and per-site settings.

The good news is that people who don't like this behavior can keep the current behavior very simply. If I set the global max worker processes to 3, leave the global upload/download limits unlimited, set the per-site default max worker processes to "global max" along with the upload/download settings, and do not configure any site settings individually, the current behavior remains completely intact.
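In other words, something like the following configuration (the names are my own illustration, not actual settings) would collapse the proposal back to today's behavior:

```python
# Illustrative settings that reproduce the current linear behavior exactly.
GLOBAL = {"max_workers": 3, "max_up_kbs": None, "max_down_kbs": None}  # None = unlimited
SITE_DEFAULT = {"max_workers": "global",        # defer every limit to the global cap
                "max_up_kbs": "global",
                "max_down_kbs": "global"}
SITE_OVERRIDES = {}                             # no per-site exceptions configured
```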

The example above only accounts for upload traffic. The same rules should hold for download traffic, according to the settings configured by the user.

And finally…


“Remote Browser” sites should automatically refresh after a file is deleted, or a transfer to a remote site is started / completed.

When I upload any of the files in the queue, and I have all 3 remote sites open in remote browser tabs, the remote browser tab for a site should refresh after each transfer to it starts, showing me that part of the file is there. It should also refresh when the transfer completes, so that when I look, I can see that the file is there and was successfully transferred (by checking its file size in the size column). It should not jump to a specific directory; it should simply refresh whatever view I'm in, even if that view of the remote pane is unchanged by the transfer.
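Conceptually, I'm asking for something like the hook below; the event and method names are hypothetical, not SmartFTP's actual API:

```python
# Hypothetical refresh hook; all names are illustrative.
REFRESH_EVENTS = {"file_deleted", "transfer_started", "transfer_completed"}

def on_remote_event(event, site, browser_tabs):
    """Re-list whatever path the user is currently viewing in that site's tab."""
    if event in REFRESH_EVENTS:
        tab = browser_tabs.get(site)
        if tab is not None:
            tab.refresh()  # refresh the current view in place; do not navigate
```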

Currently, nothing happens at all, and you have to refresh manually to get this behavior. This, too, should be configurable globally, per site default, and per site individually.


Again, I congratulate you on the quality of your software. If you can implement these changes quickly, I will, for one, be even more impressed with your software, and, for two, be very excited and happy as I uninstall any software made by your competitors in favor of the newly improved best FTP software available. :)

Hello Jay,

First, I would like to say that it is a bit unfortunate that you tested version 2.5 of SmartFTP instead of the new version 3.0.

> Non-Linear transfers in Transfer Queue when multiple sites are in the queue.
You can configure this by setting the number of workers on a per favorite/site basis. The following article should help you set it up:
https://www.smartftp.com/support/kb/how- ... f2600.html

> The ability to configure max connections & bandwidth limits at the following levels: globally, per site default, and per site individually. Each level should be configurable separately.
Related to the first point: you can set the connection/transfer limits globally and on a per favorite/site basis, with defaults. I am not sure why you didn't find the settings or the related articles in the KB.

Global Settings
Menu: Tools->Settings, then the Transfer dialog for the global speed limits, and the Queue dialog for the total number of workers.

Per favorite/site settings
To access the favorite settings, please see:
https://www.smartftp.com/support/kb/how- ... -f117.html

Default favorite settings
To edit the default favorite, click the "Edit Default Favorite" link at the bottom left of the favorite properties dialog, or use the menu: Favorites->Edit Favorites, then Tools->Edit Default Favorite.

> “Remote Browser” sites should automatically refresh after a file is deleted, or a transfer to a remote site is started / completed.
This has been implemented in version 3.0 of SmartFTP.

Regards,
Mat