
Built-in micro-benchmark harness

Rust's built-in benchmarking framework measures the performance of code by running it through several iterations and reports the average time taken for the operation in question. This is facilitated by two things:

  • The #[bench] annotation on a function. This marks the function as a benchmark test.
  • The Bencher type from the compiler-internal test crate (libtest), which the benchmark function uses to run the same code over several iterations.

Now, we'll write and run a simple benchmark test. Let's create a new Cargo project by running cargo new --lib bench_example. No changes to Cargo.toml are needed for this. The contents of src/lib.rs are as follows:

// bench_example/src/lib.rs

#![feature(test)]
extern crate test;

use test::Bencher;

pub fn do_nothing_slowly() {
    // The print! call gives the loop an observable side effect, so the
    // optimizer cannot remove it entirely.
    print!(".");
    for _ in 1..10_000_000 {}
}

pub fn do_nothing_fast() {}

#[bench]
fn bench_nothing_slowly(b: &mut Bencher) {
    b.iter(|| do_nothing_slowly());
}

#[bench]
fn bench_nothing_fast(b: &mut Bencher) {
    b.iter(|| do_nothing_fast());
}

Note that we had to declare the internal test crate with an extern crate declaration, along with the #![feature(test)] attribute. The extern declaration is needed for crates internal to the compiler. In future versions of the compiler, this might not be needed and you will be able to use them like normal crates.

If we run our benchmarks with cargo bench on the stable compiler, we will get an error.

Unfortunately, benchmark tests are an unstable feature, so we'll have to use the nightly compiler for them. Fortunately, with rustup, moving between the different release channels of the Rust compiler is easy. First, we'll make sure the nightly compiler is installed by running rustup update nightly. Then, within our bench_example directory, we'll override the default toolchain for this directory by running rustup override set nightly. Running cargo bench now prints timings for both benchmark functions.

The times are reported in nanoseconds per iteration, with the figure inside the parentheses showing the variation between runs. Our slower implementation was, as expected, quite slow and variable in its running time (as shown by the large +/- variation).
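To recap, the toolchain steps described above can be run as follows (assuming rustup is already installed):

```shell
# Make sure the nightly toolchain is installed and up to date.
rustup update nightly
# Inside the project directory, use nightly for this directory only.
cd bench_example
rustup override set nightly
# Run the benchmarks on the nightly compiler.
cargo bench
```

The override is per-directory, so the rest of your system keeps using your default toolchain.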

Inside our functions marked with #[bench], the parameter to iter is a closure with no parameters; if the closure took parameters, they would go between the two pipes (||). Essentially, iter is passed a function that the benchmark harness can run repeatedly. We print a single dot in do_nothing_slowly so that Rust won't optimize the empty loop away: if the print!() call were not there, the compiler would optimize the loop to a no-op and we would get false results. Another way to get around this is the black_box function from the test module, although even using that does not guarantee that the optimizer won't optimize your code. There are also third-party solutions for running benchmarks on stable Rust.
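As a small illustration of this pitfall, here is a sketch (not from the book's example; the function name timed_spin is our own) that runs on stable Rust. It uses std::hint::black_box, the stable counterpart of test::black_box, to keep an otherwise side-effect-free loop from being optimized away, and times it manually with std::time::Instant instead of Bencher:

```rust
use std::hint::black_box;
use std::time::Instant;

/// Runs the busy loop and returns the elapsed time in nanoseconds.
fn timed_spin() -> u128 {
    let start = Instant::now();
    for i in 1..10_000_000u64 {
        // black_box is opaque to the optimizer, so the loop cannot be
        // eliminated as dead code the way an empty loop body could be.
        black_box(i);
    }
    start.elapsed().as_nanos()
}

fn main() {
    println!("loop took {} ns", timed_spin());
}
```

This is only a rough substitute for a real harness: it measures a single run, whereas Bencher's iter averages over many iterations.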
