time-to-botec

Benchmark sampling in different programming languages

commit 4419798c1847314b9c58a14822f9d9be5e4c034c
parent 8174e8a49e7ecc0650e9fad79959ced6b4d3bb31
Author: NunoSempere <nuno.sempere@protonmail.com>
Date:   Sun, 21 May 2023 12:29:44 -0400

README: performance => comparison

Diffstat:
M README.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
@@ -27,9 +27,7 @@ As of now, it may be useful for checking the validity of simple estimations. The
 - [x] Python
 - [x] Nim
 
-## Performance table
-
-With the [time](https://man7.org/linux/man-pages/man1/time.1.html) tool, using 1M samples:
+## Comparison table
 
 | Language             | Time      | Lines of code |
 |----------------------|-----------|---------------|
@@ -40,6 +38,8 @@ With the [time](https://man7.org/linux/man-pages/man1/time.1.html) tool, using 1
 | R                    | 0m7,000s  | 49            |
 | Python (CPython)     | 0m16,641s | 56            |
 
+Time measurements taken with the [time](https://man7.org/linux/man-pages/man1/time.1.html) tool, using 1M samples:
+
 ## Notes
 
 I was really happy trying [Nim](https://nim-lang.org/), and as a result the Nim code is a bit more optimized and engineered:
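
For context, a minimal sketch of the kind of workload the comparison table times, written in Python (one of the benchmarked languages). This is not code from the repository: the file name samples.py, the mixture weights, and the lognormal component are illustrative assumptions; only the 1M sample count and the use of the time tool come from the README text in the diff above.

    # samples.py -- illustrative sketch, not the repository's actual model.
    # Draws 1M samples from a simple mixture of distributions and prints their mean.
    import random

    N = 1_000_000  # sample count used in the comparison table

    def mixture_sample():
        # Pick one of a few simple component distributions (weights are made up).
        r = random.random()
        if r < 0.3:
            return 0.0
        if r < 0.6:
            return 1.0
        return random.lognormvariate(0.0, 1.0)

    mean = sum(mixture_sample() for _ in range(N)) / N
    print(mean)

A run would then be timed from the shell with something like: time python3 samples.py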