From 34aa1eb66b7ec7edbf52f6663390ec0445571994 Mon Sep 17 00:00:00 2001
From: Vectornaut
Date: Tue, 13 Aug 2024 20:30:58 +0000
Subject: [PATCH] Write up Rust benchmark variants

---
 Language-benchmarks.md | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/Language-benchmarks.md b/Language-benchmarks.md
index 9cee1b3..6d549c3 100644
--- a/Language-benchmarks.md
+++ b/Language-benchmarks.md
@@ -14,6 +14,8 @@ To evaluate the performance cost, Aaron wrote a benchmark program in Rust and Ja
 - Find the eigenvalues of $A,\;\ldots\;T^{R-1}A$.
 To validate the computation, the benchmark program displays the eigenvalues of $T^r A$, with $r \in \{0, \ldots, R\}$ controlled by a slider. Displaying the eigenvalues isn't part of the benchmark computation, so it isn't timed.
+
+The language comparison benchmark uses 64-bit floating point matrices of size $N = 60$. Other variants of the benchmark, used to compare different design decisions within Rust, are described at the end.
 
 ## Running the benchmark
 ### Rust
 - To build and run, call `trunk serve --release` from the `rust-benchmark` folder and go to the URL that Trunk is serving.
@@ -48,4 +50,11 @@ The Rust version typically ran 6–11 times as fast as the Scala version, and it
 ### Chromium
 The Rust version typically ran 5–7 times as fast as the Scala version, with comparable consistency.
 - Rust 80–90 ms
-- Scala: 520–590 ms
\ No newline at end of file
+- Scala: 520–590 ms
+## Rust benchmark variants
+### Low-precision variant
+- For matrices of size $N = 50$, using 32-bit floating point instead of 64-bit made the computation about 15% faster (60 ms instead of 70 ms). However, for $N \ge 54$, the 32-bit floating point variant would hang indefinitely! Maybe the target precision doesn't change to accommodate the lower-precision data type?
+### Statically sized variant
+- For matrices of size $N = 60$, using statically sized matrices instead of dynamically sized ones made the computation about 10% *slower* (125–130 ms instead of 110–120 ms).
+- For matrices of size $N = 50$, using statically sized matrices made the computation about 15% *slower* (80 ms instead of 70 ms).
+- For matrices of size $N = 20$, statically and dynamically sized matrices gave comparable run times (12–15 ms).
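A note on the conjecture in the low-precision variant: the patch doesn't show the solver internals, so the following is only a speculative, stdlib-only sketch of the suspected failure mode — a convergence tolerance calibrated for 64-bit floats that 32-bit arithmetic cannot resolve. Relative changes around 1.0 in `f32` bottom out near machine epsilon (~1.2e-7), so an iterative residual that stagnates at rounding level but stays nonzero can never drop below a tolerance like 1e-12, and the iteration never terminates.

```rust
fn main() {
    // Hypothetical tolerance calibrated for f64; the actual benchmark's
    // stopping criterion is not shown in the patch.
    const TOL: f64 = 1e-12;

    // Machine epsilon: the gap between 1.0 and the next representable value.
    // A residual measured relative to O(1) quantities typically stagnates
    // around this level, so it cannot fall below a tolerance smaller than
    // epsilon unless it happens to round to exactly zero.
    println!("f32 epsilon: {:e}", f32::EPSILON); // ~1.19e-7
    println!("f64 epsilon: {:e}", f64::EPSILON); // ~2.22e-16

    assert!(f64::EPSILON < TOL); // f64 can meet the tolerance
    assert!((f32::EPSILON as f64) > TOL); // f32 cannot
}
```

If this is the cause, scaling the tolerance by the data type's epsilon (or capping the iteration count) would make the 32-bit variant terminate.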
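For readers unfamiliar with the static-vs-dynamic distinction being benchmarked: in a statically sized matrix the dimensions are compile-time constants, while in a dynamically sized one they are runtime values with heap-allocated storage. The patch doesn't show the benchmark's matrix implementation, so this is a minimal stdlib-only sketch of the two representations (the matrix product here is a stand-in kernel, not the benchmark's eigenvalue computation):

```rust
use std::time::Instant;

const N: usize = 60; // size used in the language comparison benchmark

// Statically sized: dimensions are part of the type, so bounds are known
// to the optimizer. Boxed to keep the 60x60 arrays off the stack.
fn mul_static(a: &[[f64; N]; N], b: &[[f64; N]; N]) -> Box<[[f64; N]; N]> {
    let mut c = Box::new([[0.0; N]; N]);
    for i in 0..N {
        for k in 0..N {
            let aik = a[i][k];
            for j in 0..N {
                c[i][j] += aik * b[k][j];
            }
        }
    }
    c
}

// Dynamically sized: dimensions are runtime values; every row is a
// separate heap allocation.
fn mul_dynamic(a: &[Vec<f64>], b: &[Vec<f64>], n: usize) -> Vec<Vec<f64>> {
    let mut c = vec![vec![0.0; n]; n];
    for i in 0..n {
        let (row_a, row_c) = (&a[i], &mut c[i]);
        for k in 0..n {
            let aik = row_a[k];
            for j in 0..n {
                row_c[j] += aik * b[k][j];
            }
        }
    }
    c
}

fn main() {
    let a_s = Box::new([[1.0; N]; N]);
    let a_d = vec![vec![1.0; N]; N];

    let t = Instant::now();
    let c_s = mul_static(&a_s, &a_s);
    let dt_s = t.elapsed();

    let t = Instant::now();
    let c_d = mul_dynamic(&a_d, &a_d, N);
    let dt_d = t.elapsed();

    // Each entry of the product of two all-ones NxN matrices is N.
    assert_eq!(c_s[0][0], N as f64);
    assert_eq!(c_d[0][0], N as f64);
    println!("static: {:?}, dynamic: {:?}", dt_s, dt_d);
}
```

Static sizing isn't automatically faster: it can enlarge code (aggressive unrolling) and force copies of big stack values, which is one possible explanation — though only a guess — for the static variant's slowdown at $N = 50$ and $N = 60$.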