- Creative Projects for Rust Programmers
- Carlo Milanesi
Building a complete web service
The file_transfer project builds on the file_transfer_stub project by filling in the missing features.
The features were omitted in the previous project for the following reasons:
- To have a very simple service that does not actually access the filesystem
- To have only synchronous processing
- To ignore any kind of failure, and keep the code simple
Here, these restrictions have been removed. First of all, let's see what happens if you compile and run the file_transfer project, and then test it using the same commands as in the previous section.
Downloading a file
Let's go through the following steps to download a file:
- Type the following command into the console:
curl -X GET http://localhost:8080/datafile.txt
- If the download is successful, the server prints the following line to the console:
Downloading file "datafile.txt" ... Downloaded file "datafile.txt"
In the case of an error, the service prints the following:
Downloading file "datafile.txt" ... Failed to read file "datafile.txt": No such file or directory (os error 2)
We have now seen how our web service can be used by curl to download a file. In the next sections, we'll learn how our web service can perform other operations on remote files.
Uploading a string to a specified file
Here is the command to upload a string into a remote file with a specified name:
curl -X PUT http://localhost:8080/datafile.txt -d "File contents."
If the upload is successful, the server prints the following to the console:
Uploading file "datafile.txt" ... Uploaded file "datafile.txt"
If the file already exists, it is overwritten; if it doesn't exist, it is created.
In the case of an error, the web service prints the following line:
Uploading file "datafile.txt" ... Failed to create file "datafile.txt"
Alternatively, it prints the following line:
Uploading file "datafile.txt" ... Failed to write file "datafile.txt"
This is how our web service can be used by curl to upload a string into a remote file while specifying the name of the file.
Uploading a string to a new file
Here is the command to upload a string into a remote file with a name chosen by the server:
curl -X POST http://localhost:8080/data -d "File contents."
If the upload is successful, the server prints to the console something similar to the following:
Uploading file "data*.txt" ... Uploaded file "data917.txt"
This output shows that the name of the file contains a pseudo-random number; in this example it is 917, but you'll probably see a different number.
On the client's console, curl prints the name of that new file, which the server has sent back to the client.
In the case of an error, the server prints the following line:
Uploading file "data*.txt" ... Failed to create new file with prefix "data", after 100 attempts.
Alternatively, it prints the following line:
Uploading file "data*.txt" ... Failed to write file "data917.txt"
This is how our web service can be used by curl to upload a string into a new remote file, leaving the task of inventing a new name for that file to the server. The curl tool receives this new name as a response.
Deleting a file
Here is the command to delete a remote file:
curl -X DELETE http://localhost:8080/datafile.txt
If the deletion is successful, the server prints the following line to the console:
Deleting file "datafile.txt" ... Deleted file "datafile.txt"
Otherwise, it prints this:
Deleting file "datafile.txt" ... Failed to delete file "datafile.txt": No such file or directory (os error 2)
This is how our web service can be used by curl to delete a remote file.
Examining the code
Let's now examine the differences between this program and the one described in the previous section. The Cargo.toml file contains two new dependencies, as illustrated in the following code snippet:
futures = "0.1"
rand = "0.6"
The futures crate is needed for asynchronous operations, and the rand crate is needed for randomly generating the unique names of the uploaded files.
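Putting these together with the web framework dependency carried over from the stub project, the [dependencies] section presumably looks something like this (the actix-web version shown here is an assumption based on the stub project, not stated in this section):

```toml
[dependencies]
actix-web = "1.0"
futures = "0.1"
rand = "0.6"
```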
Many new data types have been imported from the external crates, as can be seen in the following code block:
use actix_web::Error;
use futures::{
    future::{ok, Future},
    Stream,
};
use rand::prelude::*;
use std::fs::{File, OpenOptions};
The main function has just two changes, as follows:
.route(web::put().to_async(upload_specified_file))
.route(web::post().to_async(upload_new_file)),
Here, two calls to the to function have been replaced by calls to the to_async function. While the to function is synchronous (that is, it keeps the current thread busy until the handler has completed), the to_async function is asynchronous (that is, its execution can be suspended until the expected events have happened).
This change was required by the nature of upload requests. Such requests can send large files (several megabytes), and the TCP/IP protocol sends such files split into small packets. If the server, when it receives the first packet, just waits for the arrival of all the packets, it can waste a lot of time. Even with multithreading, if many users upload files concurrently, the system will dedicate as many threads as possible to handle such uploads, and this is rather inefficient. A more performant solution is asynchronous processing.
The to_async function, though, cannot take a synchronous handler as an argument. It must receive a function returning a value of the impl Future<Item = HttpResponse, Error = Error> type, instead of the impl Responder type returned by synchronous handlers. This is indeed the type returned by the two upload handlers: upload_specified_file and upload_new_file.
The object returned is of an abstract type, but it must implement the Future trait. The concept of a future, also available in C++ since 2011, is similar to a JavaScript promise. It represents a value that will be available in the future; in the meantime, the current thread can handle other events.
Futures are implemented as asynchronous closures: instead of running immediately, these closures are put into an internal queue. When no other task is running in the current thread, the future at the front of the queue is removed and executed.
If two futures are chained, a failure of the first future causes the second one to be destroyed. Otherwise, if the first future of the chain succeeds, the second one gets the opportunity to run.
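This short-circuiting behavior has a rough analogy in the standard library: Result::and_then also runs its chained closure only when the previous step succeeded. The function name below is made up purely for illustration:

```rust
// Rough stdlib analogy for future chaining: if the first step
// (parsing) fails, the closure passed to and_then never runs,
// and the error propagates instead.
fn parse_and_double(input: &str) -> Result<i32, std::num::ParseIntError> {
    input.parse::<i32>().and_then(|n| Ok(n * 2))
}
```

The analogy is only about control flow: futures additionally defer execution, while Result::and_then runs immediately.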
Going back to the two upload functions, another change to their signature is that they now take two arguments. In addition to the argument of the Path<(String,)> type, containing the filename, there is an argument of the Payload type. Remember that the contents can arrive piecewise, so the Payload argument does not contain the text of the file; rather, it is an object used to obtain the contents of the uploaded file asynchronously.
Its use is somewhat complex.
First, for both upload handlers, there is the following code:
payload
    .map_err(Error::from)
    .fold(web::BytesMut::new(), move |mut body, chunk| {
        body.extend_from_slice(&chunk);
        Ok::<_, Error>(body)
    })
    .and_then(move |contents| {
The call to map_err is required to convert the error type.
The call to fold receives from the network one chunk of data at a time and uses it to extend an object of the BytesMut type. Such a type implements a kind of extensible buffer.
The call to and_then chains another future to the current one. It receives a closure that will be called when the processing of fold is finished. Such a closure receives all the uploaded contents as an argument. This is a way to chain two futures: any closure invoked in this way is executed asynchronously, after the previous one has finished.
The body of the closure simply writes the received contents into a file with the specified name. This operation is synchronous.
The last line of the closure is ok(HttpResponse::Ok().finish()). This is the way to return from a future. Notice the lowercase ok.
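The synchronous write performed inside that closure can be sketched with the standard library alone; write_file here is a hypothetical helper name, not taken from the project's code:

```rust
use std::fs::File;
use std::io::Write;

// Hypothetical helper: synchronously write the accumulated bytes
// of an upload into the file with the specified name. The file is
// created if missing and truncated (overwritten) if it exists.
fn write_file(filename: &str, contents: &[u8]) -> std::io::Result<()> {
    let mut file = File::create(filename)?;
    file.write_all(contents)?;
    Ok(())
}
```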
The upload_new_file function is similar to the previous one in terms of web programming concepts. It is more complex only because of the following:
- Instead of having a complete filename, only a prefix is provided, and the rest must be generated as a pseudo-random number.
- The resulting filename must be sent to the client.
The algorithm to generate a unique filename is the following:
- A three-digit pseudo-random number is generated, and it is concatenated to the prefix.
- The name obtained is used to create a file; this avoids overwriting an existing file with that name.
- If a collision happens, another number is generated, until either a new file is created or 100 attempts have failed.
Of course, this assumes that the number of uploaded files will always be significantly less than 1,000.
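The steps above can be sketched in standard-library Rust. To keep the sketch dependency-free, a time-based number stands in for the rand crate used by the project, and create_new_file is a hypothetical helper name:

```rust
use std::fs::OpenOptions;
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical sketch of the unique-filename algorithm: append a
// three-digit pseudo-random number to the prefix and try to create
// the file, retrying on collision up to 100 times. A time-based
// number replaces the rand crate here to keep the sketch self-contained.
fn create_new_file(prefix: &str) -> std::io::Result<(String, std::fs::File)> {
    const MAX_ATTEMPTS: u32 = 100;
    for attempt in 0..MAX_ATTEMPTS {
        // Three-digit pseudo-random number in 000..=999.
        let nanos = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .subsec_nanos();
        let number = nanos.wrapping_add(attempt) % 1000;
        let filename = format!("{}{:03}.txt", prefix, number);
        // create_new makes the open fail if the file already exists,
        // so an existing file is never overwritten.
        match OpenOptions::new().write(true).create_new(true).open(&filename) {
            Ok(file) => return Ok((filename, file)),
            Err(ref e) if e.kind() == std::io::ErrorKind::AlreadyExists => continue,
            Err(e) => return Err(e),
        }
    }
    Err(std::io::Error::new(
        std::io::ErrorKind::Other,
        format!(
            "Failed to create new file with prefix \"{}\", after {} attempts.",
            prefix, MAX_ATTEMPTS
        ),
    ))
}
```

The key design point is that generating the name and creating the file happen in one atomic create_new step, so two concurrent requests can never claim the same filename.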
Other changes have been made to account for the chance of failure.
The final part of the delete_file function now looks like this:
match std::fs::remove_file(&filename) {
    Ok(_) => {
        println!("Deleted file \"{}\"", filename);
        HttpResponse::Ok()
    }
    Err(error) => {
        println!("Failed to delete file \"{}\": {}", filename, error);
        HttpResponse::NotFound()
    }
}
This code handles the case of a failure in the deletion of the file. Notice that in the case of an error, instead of returning the success status code HttpResponse::Ok(), which represents the number 200, an HttpResponse::NotFound() failure code is returned, representing the number 404.
The download_file function now contains a local function to read the whole contents of a file into a string, as follows:
fn read_file_contents(filename: &str) -> std::io::Result<String> {
    use std::io::Read;
    let mut contents = String::new();
    File::open(filename)?.read_to_string(&mut contents)?;
    Ok(contents)
}
The function ends with some code to handle a possible failure, as follows:
match read_file_contents(&filename) {
    Ok(contents) => {
        println!("Downloaded file \"{}\"", filename);
        HttpResponse::Ok().content_type("text/plain").body(contents)
    }
    Err(error) => {
        println!("Failed to read file \"{}\": {}", filename, error);
        HttpResponse::NotFound().finish()
    }
}