
Built-in micro-benchmark harness

Rust's built-in benchmarking framework measures the performance of code by running it for a number of iterations and reporting the average time taken per iteration. This is facilitated by two things:

  • The #[bench] annotation on a function. This marks the function as a benchmark test.
  • The Bencher type, which the benchmark function uses to run the same benchmark code over several iterations. This type resides in the test crate (built on the libtest library), which is internal to the compiler.

Now, we'll write and run a simple benchmark test. Let's create a new Cargo project by running cargo new --lib bench_example. No changes to Cargo.toml are needed for this. The contents of src/lib.rs are as follows:


// bench_example/src/lib.rs

#![feature(test)]
extern crate test;

use test::Bencher;

pub fn do_nothing_slowly() {
    print!(".");
    for _ in 1..10_000_000 {};
}

pub fn do_nothing_fast() {
}

#[bench]
fn bench_nothing_slowly(b: &mut Bencher) {
    b.iter(|| do_nothing_slowly());
}

#[bench]
fn bench_nothing_fast(b: &mut Bencher) {
    b.iter(|| do_nothing_fast());
}

Note that we had to declare the internal test crate with an extern crate declaration, along with the #![feature(test)] attribute. The extern declaration is needed for crates that are internal to the compiler. In future versions of the compiler, this might no longer be needed, and you will be able to use them like normal crates.

Let's try running our benchmarks with cargo bench.

Unfortunately, benchmark tests are an unstable feature, so we'll have to use the nightly compiler for these. Fortunately, with rustup, moving between different release channels of the Rust compiler is easy. First, we'll make sure that the nightly compiler is installed by running rustup update nightly. Then, within our bench_example directory, we will override the default toolchain for this directory by running rustup override set nightly. Now, running cargo bench compiles and runs the benchmarks, printing a timing line for each #[bench] function.
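The toolchain steps just described, as a shell session (assuming rustup is installed):

```console
$ rustup update nightly          # install or update the nightly toolchain
$ rustup override set nightly    # use nightly for this directory only
$ cargo bench                    # compile and run the benchmarks
```

Each benchmark is reported on a line of the form test bench_name ... bench: N ns/iter (+/- M).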

The reported figures are nanoseconds per iteration, with the number inside the parentheses showing the variation between runs. Our slower implementation was indeed quite slow and variable in running time (as shown by the large +/- variation).

Inside our functions marked with #[bench], the parameter to iter is a closure with no parameters. If the closure took parameters, they would go between the two vertical bars (||). This essentially means that iter is passed a function that the benchmark test can run repeatedly. We print a single dot in do_nothing_slowly so that Rust won't optimize the empty loop away. If the print! call were not there, the compiler would have optimized the loop down to a no-op, and we would get false results. Another way to get around this is to use the black_box function from the test crate. However, even using that does not guarantee that the optimizer won't optimize your code. Nowadays, we also have third-party solutions for running benchmarks on stable Rust.
