Yes, you have to implement a simple Stream-like adapter for your backend.
Uploads can be handled similarly.
In your FileSystem provider:

protected override NodeContent GetContent(NodeBase node, NodeContentParameters contentParameters)
{
    //Handle read-only streams if needed.
    var tempFilePath = ... //create a temp file path (used for the "local" upload)
    var mySpecialStream = new MySpecialStream(tempFilePath); //your special stream for partial download/upload
    NodeContent content = NodeContent.CreateDelayedWriteContent(mySpecialStream);
    return content;
}
protected override NodeBase SaveContent(NodeBase node, NodeContent content)
{
    //An upload error may have occurred; handle it if needed.
    var myStream = (MySpecialStream)content.GetStream();
    var tempFileStream = myStream.LocalStream;
    //Upload the data from tempFileStream now, or offload the upload of the file
    //to another task, use an upload queue in another process, etc.
    ...
}
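For the "offload to another task / upload queue" option, a minimal in-process sketch could look like the following. Note that `UploadQueue` and the `uploadToBackend` delegate are hypothetical names, not part of the provider API; the idea is only that SaveContent enqueues the temp file path and returns immediately, while a single background task drains the queue and performs the real backend upload.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

//Hypothetical in-process upload queue (a sketch, not part of the provider API).
public class UploadQueue : IDisposable
{
    private readonly BlockingCollection<string> _pending = new BlockingCollection<string>();
    private readonly Task _worker;

    public UploadQueue(Action<string> uploadToBackend)
    {
        //Single background worker: blocks until items arrive, exits when CompleteAdding is called.
        _worker = Task.Run(() =>
        {
            foreach (string tempFilePath in _pending.GetConsumingEnumerable())
            {
                uploadToBackend(tempFilePath); //upload, then delete the temp file, etc.
            }
        });
    }

    //Called from SaveContent: enqueue and return immediately.
    public void Enqueue(string tempFilePath) => _pending.Add(tempFilePath);

    public void Dispose()
    {
        _pending.CompleteAdding(); //let remaining uploads finish, then stop the worker
        _worker.Wait();
        _pending.Dispose();
    }
}
```

SaveContent would then only call `Enqueue(tempFilePath)`; error handling and retries stay in the worker, where they belong, instead of leaking into the provider callback.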
where MySpecialStream handles the write operations:

using System.IO;

public class MySpecialStream : Stream
{
    private readonly Stream _localStream;

    public MySpecialStream(string tempUploadFilePath)
    {
        _localStream = File.Open(tempUploadFilePath, FileMode.CreateNew);
    }

    public override void Write(byte[] buffer, int offset, int count) => _localStream.Write(buffer, offset, count);

    protected override void Dispose(bool disposing)
    {
        if (disposing) _localStream.Dispose();
        base.Dispose(disposing);
    }

    public Stream LocalStream => _localStream;

    //Implement other Stream members (CanRead, CanWrite, Length, Flush, ...).
}
It is possible to use NodeContent.CreateImmediateWriteContent (the SaveContent method is then not called) and initiate the upload of the "temp file" to the "backend" in the MySpecialStream.Dispose(bool) method, but this solution has the following drawbacks:
1) You don't know whether the upload finished successfully (Stream does not expose anything like a WasStreamClosedForcefully property).
2) A delayed upload hidden behind the Create*Immediate*WriteContent method leads to dirty and opaque code.
3) Hijacking the Dispose method (normally used for clean-up) to perform the upload likewise leads to opaque code.