0 votes
by (1.2k points)

I'm using the Ftp.Upload(FileSet, String, TransferMethod, ActionOnExistingFiles) method overload to back up around 300 files nightly, each around 200 MB:

ftp.Upload(
    localPath, 
    remotePath, 
    TransferMethod.Copy, 
    ActionOnExistingFiles.OverwriteAll);

Recently, I've been getting random "Timeout exceeded" errors from the FTP server.

I then retry multiple times.

Unfortunately, my try…catch…retry construct is placed around the whole monolithic upload call (see the code above).

That is, when the upload fails, I don't retry just the file that was being transferred, but the whole set of files.
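
In simplified form, the construct looks something like this (the retry count and exception handling are illustrative):

for (int attempt = 0; attempt < maxRetries; attempt++)
{
    try
    {
        // the monolithic call shown above
        ftp.Upload(
            localPath,
            remotePath,
            TransferMethod.Copy,
            ActionOnExistingFiles.OverwriteAll);
        break; // success: all files transferred
    }
    catch (Rebex.Net.FtpException)
    {
        // the next attempt re-uploads the whole 300-file set
    }
}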

At first, I thought of using the ProblemDetected event, but unfortunately:

  1. The documentation says it is only supported by the Upload method overload with two parameters (I'm using the four-parameter overload, see the code above).
  2. In this answer you say that "…For general errors (e.g. networking errors - server closed the connection, invalid arguments, invalid operation exceptions, etc.) we always throw exceptions and we never raise the ProblemDetected event.…".

So it seems I'm unable to resume just a single file.

My question:

Is there any way to resume single file upload errors (even timeout errors) during a multiple file upload call?

Applies to: Rebex FTP/SSL

1 Answer

+1 vote
by (73.6k points)
Best answer

First, let me comment on points 1 and 2:

  1. The ProblemDetected event is raised by all overloads of the Upload and Download methods, not only by the two-parameter overload.
    This is a limitation of our documentation system: we had to either specify a cref to one of the overloads or not use a cref at all. We decided to use the cref of the shortest overload.

  2. If an error leaves the Ftp object unusable, there is no point in raising the ProblemDetected event, so the exception is thrown immediately.
    The ProblemDetected event is designed to resolve a problem with the current file so that the whole process can continue. If the Ftp object is unusable (e.g. the connection is lost), the event has no meaning.

Now, to your "Timeout exceeded" errors:

You can increase the timeout value via the Ftp.Timeout property. However, I don't think it will solve your problem (though you can try).
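
For example (a one-line sketch on a client variable of type Ftp; the value is in milliseconds):

// allow up to two minutes of inactivity before timing out
client.Timeout = 120000;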

I think the problem is on the server side. The FTP protocol uses two connections: a control connection and a data connection. The control connection carries commands; the data connection carries file data.
While a large file is being transferred, there is no traffic on the control connection, which can cause a timeout on the server side.
To keep the control connection alive, set Ftp.Settings.KeepAliveDuringTransfer to true. You can also tune how often the keep-alive command is sent via Ftp.Settings.KeepAliveDuringTransferInterval.
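
For example (the interval value is in seconds):

// send keep-alive commands on the control connection during transfers
client.Settings.KeepAliveDuringTransfer = true;

// optionally adjust how often the keep-alive command is sent
client.Settings.KeepAliveDuringTransferInterval = 45;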

If this doesn't help, please send us a communication log for analysis to support@rebex.net; hopefully we will spot something. You can create it as described here.

And finally, to your question:

To resume a previously aborted transfer, you can use the ActionOnExistingFiles.ResumeIfPossible value as the last parameter of the Upload method. However, this will not work as expected in your case.

You are calling the Upload method for the first time with ActionOnExistingFiles.OverwriteAll. This means: replace the content of all existing files. ResumeIfPossible works this way (see the sketch after this list):

  • if a file doesn't exist on the target, upload the whole file
  • if a file exists on the target and is smaller than the source file, upload only the remaining part
  • if a file exists on the target and its length is equal to or larger than the source file, do nothing (skip the file)
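
In code, a resume-based retry would simply change the last argument (a sketch only; as explained below, it doesn't quite match your scenario):

client.Upload(
    localPath,
    remotePath,
    TransferMethod.Copy,
    ActionOnExistingFiles.ResumeIfPossible);
// drawback: on the first run, existing files would be resumed or
// skipped instead of overwritten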

However, you would like to do something a little bit different:

  • skip successfully transferred files
  • resume the aborted file
  • overwrite all remaining files

This can be done, but you have to remember which files you have already transferred.
With a little effort, you can do it like this:

// required namespaces (note: depending on your Rebex version, the
// transfer-related types may live in the Rebex.IO namespace instead)
using System.Collections.Generic;
using Rebex.Net;

// holds per-file progress of the transfer (key: target path, value: completed)
var progress = new Dictionary<string, bool>();

// initialize client object
var client = new Rebex.Net.Ftp();

// updates progress dictionary
client.TransferProgressChanged += (s, e) =>
{
    switch (e.TransferState)
    {
        case TransferProgressState.FileTransferring:
            if (!progress.ContainsKey(e.TargetPath))
                progress[e.TargetPath] = false; // uncompleted file
            break;
        case TransferProgressState.FileTransferred:
            progress[e.TargetPath] = true; // completed file
            break;
    }
};

// resolves conflicts with files that already exist on the server
client.ProblemDetected += (s, e) =>
{
    // handle only FileExists problems
    if (e.ProblemType == TransferProblemType.FileExists)
    {
        bool state;
        // determine target path
        string targetPath = (e.Action == TransferAction.Uploading) ? e.RemotePath : e.LocalPath;
        if (progress.TryGetValue(targetPath, out state))
        {
            if (state)
                e.Skip(); // completed file
            else if (e.IsReactionPossible(TransferProblemReaction.Resume))
                e.Resume(); // uncompleted file
            else
                e.Overwrite(); // resume not possible; overwrite rather than risk a wrong skip
        }
        else
        {
            e.Overwrite(); // unknown file
        }
    }
};

With this, your try…catch…retry construct will work as expected.
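
For completeness, a retry loop around these handlers might look like this (a minimal sketch: the retry count and paths are placeholders, and it assumes existing-file conflicts are left to the ProblemDetected handler rather than being resolved by an ActionOnExistingFiles argument):

const int maxRetries = 5;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
    try
    {
        client.Upload(localPath, remotePath);
        break; // the whole file set was transferred
    }
    catch (FtpException)
    {
        if (attempt == maxRetries)
            throw;
        // reconnect here if the session was lost, then retry;
        // the progress dictionary ensures finished files are skipped
        // and the interrupted file is resumed
    }
}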

by (1.2k points)
Wow, what an incredibly awesome and detailed answer. Thanks a million for putting so much effort into helping me!
...