Mostly handwavy nonsense himself (Score: 2, Informative) by brendank310@pipedot.org on 2014-03-10 20:27 (#D9)

The author has a bit of a grasp on the structure of the Linux RNG, but he is missing the larger scheme of things. Entropy isn't as handwavy as he makes it out to be; he just claims that it is, so that you'll handwave yourself past the blocking read to the non-blocking one. A guy on StackOverflow agrees too! Blocking does have its problems, and the blocking source really ought to be used as a seed for a quality generator function if you need enormous amounts of secure random data (in all likelihood you don't). The facts he lines up don't have anything to do with each other for the most part. The fact that random blocks while urandom doesn't is a performance issue, not a security issue. Still, it is always good to have the topic brought up every now and then (I personally enjoy it).

Estimating the amount of entropy in a given pool is difficult; Bruce Schneier says that is the real difficulty in creating a CSPRNG. Some designs don't rely on pools at all (see Schneier's Yarrow). In reality the issue is going to become more and more moot, as hardware instructions are available on later processors to produce fast streams of random numbers, adequate either for reseeding a CSPRNG quite often or for using as output by themselves. Intel's project was called Bull Mountain, and the instruction is available on post-Sandy Bridge architectures (it's RDRAND, a non-privileged x86 assembly instruction). I think it would be an interesting patch to allow that to be used within the kernel RNG. It relies on the instability of (semi-)digital circuits to create random bits at the speed of the clock. Some people may yell that it's NSA-backdoored, but using it mixed with dabs of your input, disk, and network entropy is better than anything we currently have.
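If you want to play with RDRAND from userspace, here is a minimal sketch using the _rdrand64_step intrinsic from immintrin.h (compile with gcc -mrdrnd; the retry loop is my own illustration, and real code should check CPUID for RDRAND support first, since the instruction faults on CPUs that lack it):

    #include <stdio.h>
    #include <immintrin.h>   /* _rdrand64_step; compile with gcc -mrdrnd */

    /* RDRAND can transiently fail (it clears the carry flag and the
     * intrinsic returns 0), so retry a few times before giving up. */
    static int rdrand64_retry(unsigned long long *out)
    {
        for (int i = 0; i < 10; i++) {
            if (_rdrand64_step(out))
                return 1;   /* success */
        }
        return 0;
    }

    int main(void)
    {
        unsigned long long r;
        if (!rdrand64_retry(&r)) {
            fprintf(stderr, "RDRAND returned no data\n");
            return 1;
        }
        printf("RDRAND output: %016llx\n", r);
        return 0;
    }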
Is using /dev/urandom probably good enough? Yeah, probably, but I don't see using /dev/random as such a huge issue in the first place either. Want the best of both worlds? Read from /dev/random and write the result into /dev/urandom, then do your reads from urandom (a sketch of this follows below).
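Something like this, assuming you can afford to block for 256 bits. Note that, per random(4), writing to /dev/urandom mixes the bytes into the pool but does not increase the kernel's entropy estimate; only the root-only RNDADDENTROPY ioctl credits entropy:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char seed[32];   /* 256 bits */
        ssize_t n = 0;
        int rfd = open("/dev/random", O_RDONLY);
        int ufd = open("/dev/urandom", O_WRONLY);
        if (rfd < 0 || ufd < 0) {
            perror("open");
            return 1;
        }

        /* Blocks until the kernel thinks it has 256 bits of entropy. */
        while (n < (ssize_t)sizeof(seed)) {
            ssize_t got = read(rfd, seed + n, sizeof(seed) - n);
            if (got <= 0) { perror("read"); return 1; }
            n += got;
        }

        /* Mix the hard-won bits into the urandom pool. */
        if (write(ufd, seed, sizeof(seed)) != (ssize_t)sizeof(seed)) {
            perror("write");
            return 1;
        }
        close(rfd);
        close(ufd);
        return 0;   /* now read your bulk data from /dev/urandom */
    }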
Don't rely on this guy's blog post, because even his improved diagram of the Linux RNG is incorrect. He knows enough to be dangerous, not enough to know that he's dangerous.

Re: Mostly handwavy nonsense himself (Score: 0) by Anonymous Coward on 2014-03-10 21:34 (#DH)

For the record, he does point out himself that his improved diagram of Linux's RNG is "a pretty rough simplification". Besides the "guy on StackOverflow" (who, by chance, happens to be an expert in cryptography), there is also the interesting point made by Daniel Bernstein, and I'd love to hear your knowledgeable insights on that one. You were saying?

Re: Mostly handwavy nonsense himself (Score: 1) by brendank310@pipedot.org on 2014-03-11 00:20 (#E0)

He says both are CSPRNGs. Part of the requirement of a CSPRNG is keeping the seed secret, and not using actual entropy sources makes state compromise extension attacks possible. Both tools have their place. You may want to use /dev/urandom for generating noise in image filters, where what you need is bulk data. You can use the entropy collected by the pools of the blocking source as a seed to create a CSPRNG, but using just urandom leaves open some somewhat unrealistic, yet possible, opportunities to attack. See "Cryptanalytic Attacks on Pseudorandom Number Generators" by Schneier and company. The definition of random I like for an RNG (and other things in general) is that the outputs form a uniform distribution.

Often a linear congruential generator function is used, and if the parameters are chosen carefully you can pick any input (besides zero, which is a problematic value in crypto in many cases) and it will produce a provably uniform distribution across a given range. It's possible to get the period up to 2^32 - 1 for a 32-bit seed, and longer if you pick nice primes (I believe the Linux RNG uses a Mersenne Twister generator, which has a far longer period than that). That's actually too perfect for randomness, however (sometimes you roll back-to-back 7s; it happens even with a 2^32-faced die), which is why reseeding is important, and why you want to mix in actual random sources.
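As a concrete illustration of the full-period idea (my example, not anything taken from the kernel), here is the classic Park-Miller "minimal standard" generator: a multiplicative LCG with the prime modulus 2^31 - 1, whose state visits every nonzero value below the modulus exactly once before repeating:

    #include <stdio.h>
    #include <stdint.h>

    /* Park-Miller minimal standard LCG:
     *   x_{n+1} = 16807 * x_n mod (2^31 - 1)
     * 16807 is a primitive root modulo the prime 2^31 - 1, so from any
     * nonzero seed the state cycles through all of [1, 2^31 - 2] before
     * repeating: provably uniform, but far too regular to be
     * cryptographically random on its own. */
    static uint32_t lcg_state = 42;   /* any nonzero seed; zero is absorbing */

    static uint32_t lcg_next(void)
    {
        lcg_state = (uint32_t)(((uint64_t)lcg_state * 16807u) % 2147483647u);
        return lcg_state;
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++)
            printf("%u\n", lcg_next());
        return 0;
    }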
His whole article is riddled with "I believe" and "I don't think that's important". His pretty rough simplification is wrong; that's the argument I'm making. The pools are completely separate buffers. Bernstein goes on to say in his post:

"I'm not saying that /dev/urandom has a perfect API. It's disappointingly common for vendors to deploy devices where the randomness pool has never been initialized; BSD /dev/urandom catches this configuration bug by blocking, but Linux /dev/urandom (unlike Linux /dev/random) spews predictable data, causing (e.g.) the widespread RSA security failures documented on http://factorable.net. But fixing this configuration bug has nothing to do with the /dev/random superstitions."

So using /dev/urandom may be all fine and dandy (I disagree, but I've got time to wait for 256 bits from the blocking /dev/random), unless it's not initialized or there is some other implementation problem. With the blocking source, you at least get some bits from sources that are local and largely not reproducible (the time between keyboard interrupts, sampled by a clock possibly running in the gigahertz range); you're already ahead by the time you get through the hash function. So why not raise the bar when generating keys? If you're this concerned with security and have a high-performance need for key generation, there are solutions out there for that as well.

tl;dr Blog author doesn't care about some of the theoretical attacks; they're too hard, and other crypto will probably break first, so why bother.