- cross-posted to:
- science_memes@mander.xyz
A few calculations I did last time I saw this meme (over at !programmer_humor@programming.dev):
- There are 9592 prime numbers less than 100,000. Assuming the test suite only tests numbers 1-99999, the accuracy should actually be only 90.408%, not 95.121%
- The 1 trillionth prime number is 29,996,224,275,833. Testing every number up to it (roughly 30 trillion tests, covering the first 1 trillion primes) would still only get you to 96.666% accuracy (see the quick check below).
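For anyone who wants to reproduce these figures, here's a quick Python check (a sketch, assuming the suite tests each integer from 1 to 99,999 exactly once and that "always returns false" is only wrong on the primes):

```python
# Sieve of Eratosthenes: count primes below 100,000.
N = 100_000
is_prime = [True] * N
is_prime[0] = is_prime[1] = False
for i in range(2, int(N ** 0.5) + 1):
    if is_prime[i]:
        for j in range(i * i, N, i):
            is_prime[j] = False

primes = sum(is_prime)                    # 9592
accuracy = 1 - primes / (N - 1)           # "always composite" fails only on primes
print(f"{primes} primes, accuracy = {accuracy:.3%}")   # 90.408%

# Same calculation at the 1 trillionth prime:
print(f"{1 - 10**12 / 29_996_224_275_833:.3%}")        # 96.666%
```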
In response to the question of how long it would take to round up to 100%:
- The density of primes can be approximated using the Prime Number Theorem: `1/ln(x)`. Solving `99.9995 = 100 - 100/ln(x)` for x gives `e^200000`, or about `7.88 × 10^86858`. In other words, the universe will end before any current computer could check that many numbers. (The algebra is worked through in the sketch below.)
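To make the algebra explicit: rearranging the target accuracy gives `ln(x) = 100 / (100 - accuracy)`. A minimal sketch of that calculation (the `required_ln` helper is just for illustration, and since `e^200000` overflows a float the result is converted to a power of 10 via logarithms):

```python
import math

# PNT approximation: accuracy (%) ≈ 100 - 100/ln(x),
# so the x needed for a target accuracy satisfies ln(x) = 100 / (100 - accuracy).
def required_ln(accuracy_percent: float) -> float:
    return 100 / (100 - accuracy_percent)

ln_x = required_ln(99.9995)                   # 200000: rounds up to 100.000% at 3 decimals
exp10, frac = divmod(ln_x / math.log(10), 1)  # rewrite e^200000 as a power of 10
print(f"x = e^{ln_x:.0f} ≈ {10 ** frac:.2f} × 10^{int(exp10)}")
# -> x = e^200000 ≈ 7.88 × 10^86858
```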
Edit: Fixed community link
I think a more concise answer to the second one would be: it depends on where you decide to round, but as you keep running it, the accuracy approaches 100%, i.e. 99.999… repeating (which equals 100%).
The screenshot displays 3 decimal places, which is the precision I used. As it turns out, even just rounding to the nearest integer still requires checking more numbers than we even have the primes enumerated for (`e^200`, or about `7 × 10^86`).
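The nearest-integer threshold is small enough to fit in a float, so that one can be checked directly (a one-line sketch using the same rearrangement as above):

```python
import math

# Rounding to the nearest integer: need 99.5%, i.e. ln(x) = 100 / 0.5 = 200.
print(f"x ≈ {math.exp(200):.2e}")   # 7.23e+86, still astronomically many tests
```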
Wait till you include floating-point numbers. “There are an infinite number of numbers between any two natural numbers.” So technically you could increase that percentage to 99.9999…%
You don’t even need floats for that. Just increase the number of tests.
It would be very easy to increase that to 100%, if you’re prepared to ignore enough data…
Actually it would approach 100% without ignoring data, wouldn’t it?
By the prime number theorem, if the tests go from 1 to N, the accuracy will be approximately 1 - 1/ln(N). They should have kept going for better accuracy.
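To illustrate how slowly that expression approaches 100%, here's a quick evaluation at a few magnitudes (a sketch using the PNT approximation, so the smaller entries differ slightly from exact prime counts):

```python
import math

# Accuracy of "everything is composite" over 1..10^k, using 1 - 1/ln(N).
for k in (5, 12, 100, 1000):
    ln_n = k * math.log(10)           # ln(10^k) without building a huge number
    print(f"N = 10^{k:<4} accuracy ≈ {100 - 100 / ln_n:.3f}%")
```

Even at N = 10^1000 this only reaches about 99.957%, so the test suite never rounds up to 100% at any reasonable precision.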