Comments on Math and Hats: "How to efficiently enumerate prime numbers"

Simon Spicer (2014-07-30 13:05):
Yes, there already is an implementation in Sage to enumerate primes in the interval [M, M+N]. However, it's not optimized: when M=0 it just calls PARI's isprime() function, and is thus accordingly slow. There are two sieving methods implemented in PARI (at least, given the digging I did for my post): one to enumerate primes up to a given bound, and one to enumerate primes in an interval. The former is wrapped in Sage, but the latter isn't. What really should be done is to change Sage's prime_range() function to use the PARI sieving method.

- Simon

CatacycleUmberto (2014-07-30 12:49):
I do not exactly know the application, but being able to enumerate primes in an interval [M, M+N] is probably more useful. For instance, with M=10^15 and N=10^8, this should take not much longer (maybe 2x) than enumerating all the primes up to 10^8. I had thought there was some code in Sage to do this already?
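The interval enumeration being discussed can be illustrated with a standard segmented sieve. This is a plain-Python sketch of the general technique, not the PARI implementation the commenters refer to: only the base primes up to sqrt(M+N) and an O(N) window are held in memory, so it works even when M is far beyond sieving-everything range.

```python
from math import isqrt

def primes_in_interval(M, N):
    """Primes in [M, M+N] via a segmented sieve: only O(sqrt(M+N))
    base primes plus an O(N) window are held in memory."""
    lo, hi = max(M, 2), M + N
    if hi < lo:
        return []
    limit = isqrt(hi)
    # Ordinary sieve for the base primes up to sqrt(M+N).
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, isqrt(limit) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytes(len(is_prime[p * p :: p]))
    base_primes = [p for p in range(2, limit + 1) if is_prime[p]]
    # Sieve just the window [lo, hi] using the base primes.
    window = bytearray([1]) * (hi - lo + 1)
    for p in base_primes:
        start = max(p * p, -(-lo // p) * p)  # first multiple of p in the window
        window[start - lo :: p] = bytes(len(window[start - lo :: p]))
    return [lo + i for i, flag in enumerate(window) if flag]
```

A compiled implementation of this same idea is what makes PARI's interval sieve (and primesieve, mentioned below in the thread) fast; the Python version only shows the memory trade-off.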
Vít Tuček (2014-07-24 14:58):
Fastest way to list all primes below N in Python: https://stackoverflow.com/questions/2068372/fastest-way-to-list-all-primes-below-n-in-python. I wonder how hard it would be to turn one of these list-returning functions into a generator.

Fredrik Johansson (2014-07-23 18:39):
An efficient sieve implemented in a compiled language can enumerate primes as fast as you can do anything useful with them (a few CPU cycles per prime), so there is little reason to use huge tables. See http://primesieve.org for instance.

JAAM (2014-07-23 08:18):
That is a very interesting point. It is a waste of CPU for every user to be doing the same calculation, but it might be better than the waste of bandwidth that would come from every user downloading the same database. And it would take around 28GB of RAM for the primes less than 10^11.

For SMC, would it be feasible to store the database of files on each of the Sage servers, and load it from within the same server? And for a local Sage installation it could produce the list once, I guess warning the user that she/he is about to lose 28GB of storage =)

Simon Spicer (2014-07-22 08:34):
Yep, I've considered this, and it's something I might end up doing. There are two obstacles, however:
1. In order for reading the list of primes from a file to be faster than enumerating them from scratch, the file of primes would have to be stored locally, i.e. on a local disk, or better yet in RAM. However, I'm writing my code for inclusion in Sage, and bundling it with 7GB worth of text files isn't feasible; the current Sage source tarball is only 400MB in size. (Note that I could include the text files as an optional spkg download, so this option is still possible, albeit a clunky one.)

2. Scalability. If I have a list of primes up to ~10^11, then I can only run my code with Delta parameters up to about 4. One day I might want to use a larger Delta value to get better rank estimates, but because this would require even more primes, I'd be out of luck. Ultimately I want to write code that can be pushed as far as computational power allows, and from a philosophical point of view, hard-coding in an artificial limit is undesirable.

But you do have a point. I should do some crunching and see if precomputing a list of primes will end up being faster/more practical.

JAAM (2014-07-22 08:07):
This might be a naive question, but why can't you produce once (or download from the internet) the list of primes less than, say, 10^11, store them in files, and when the computation is needed load the files one at a time, iterate over that part of the list, and continue with the next file? The list (or better, the numpy array) would have about 10^11/log(10^11) < 4*10^9 elements. On my computer, running

test = numpy.random.random(10^9)

takes less than 7GB of RAM, so even with a modest 2GB computer, having the whole prime list split into 16 files would be enough.
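JAAM's back-of-envelope figures can be checked in a few lines of Python. Note that x/log(x) slightly undercounts pi(x), and the true number of primes below 10^11 is in fact a bit over 4*10^9, but the storage conclusion is unchanged. The byte count below assumes one 64-bit integer per prime (as a numpy int64 array would use); this is my assumption, not a figure from the thread.

```python
from math import log

x = 10**11
n_primes = x / log(x)        # ~3.95e9 elements; a slight underestimate of pi(x)
bytes_needed = n_primes * 8  # one 64-bit integer per prime (assumed storage format)
print(f"~{n_primes:.2e} primes, ~{bytes_needed / 2**30:.1f} GiB as int64")
```

This lands in the high-20s of GiB, roughly matching the ~28GB figure quoted above; a bit-packed or difference-coded format would of course be substantially smaller.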
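Vít Tuček's earlier question about turning a list-returning sieve into a generator can be sketched with the standard dict-based incremental sieve. This is a generic illustration of the technique, not one of the Stack Overflow answers he links to:

```python
from itertools import islice

def prime_generator():
    """Yield 2, 3, 5, ... indefinitely via an incremental sieve.

    Memory holds one entry per prime below sqrt of the current n,
    rather than a table up to a fixed bound."""
    composites = {}  # maps next composite -> the prime that strikes it
    n = 2
    while True:
        if n not in composites:
            yield n                 # no prime reaches n, so n is prime
            composites[n * n] = n   # its first interesting multiple is n^2
        else:
            p = composites.pop(n)
            m = n + p               # slide p's marker to its next free multiple
            while m in composites:
                m += p
            composites[m] = p
        n += 1
```

Usage: `list(islice(prime_generator(), 10))` gives the first ten primes. A generator like this trades the raw speed of an array-based sieve for constant-size batches and no fixed upper bound.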