Valhalla Legends Forums Archive | Advanced Programming | RSA Encryption

CupHead
As many of you know, the RSA encryption algorithm is based upon the apparent inability to factor a large number in a reasonable amount of time. For those of you who don't know RSA, I'll give a quick synopsis...

Person A has a public and private key. Data encrypted with Person A's public key can only be decrypted with Person A's private key. Person A's private key is built from two large prime numbers (p and q); the more digits, the more secure. Person A's public key is the product of those two primes (N = p*q). Person B encrypts the data using Person A's public key. No one else can read the data unless they can recover Person A's p and q, i.e. factor N.
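For concreteness, here's a toy sketch of the scheme in Python (real RSA also uses padding and much larger primes; the numbers and names below are purely illustrative, and the modular inverse needs Python 3.8+):

[code]
# Toy RSA key generation and round trip -- illustrative only, not secure.
p, q = 61, 53                     # Person A's secret primes (tiny here)
N = p * q                         # public modulus: 3233
phi = (p - 1) * (q - 1)           # only computable if you know p and q
e = 17                            # public exponent, coprime to phi
d = pow(e, -1, phi)               # private exponent, derived via p and q

message = 65                      # Person B's plaintext, as a number < N
ciphertext = pow(message, e, N)   # Person B encrypts with public key (N, e)
recovered = pow(ciphertext, d, N) # Person A decrypts with the private key
assert recovered == message
[/code]

The whole security argument is that computing d requires phi, and phi requires p and q - which an eavesdropper can only get by factoring N.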

Anyway, finally to the programming aspect... With computers that are able to perform billions of calculations per second, why is it not possible to write a factoring algorithm that could crack an RSA key with reasonable speed?
June 25, 2003, 9:40 AM
Grok
About two months ago, Slashdot posted a link to a report published by the US Department of Justice. The report summarized the previous year's computer criminal activity and the DOJ's investigations. Part of the data was:

Number of instances encryption was encountered: (some number)

Instances where encryption prevented access to the plain-text: 0

While I don't remember how many times they had run across encryption, the crux of the Slashdot discussion was that the public encryption currently used by alleged criminals is apparently inadequate to stop the United States government from viewing the plaintext messages.

Since for most people that means RSA, wouldn't good security judgment force you to assume either that RSA is broken or that the NSA has found an efficient means of solving it?
June 25, 2003, 11:24 AM
CupHead
Maybe, but first of all, it's innuendo. Secondly, I'm interested in the means by which it's done more so than simply whether or not it has been. Also, for all you know, the plaintext was acquired using other methods (e.g. TEMPEST attacks).
June 25, 2003, 5:59 PM
Adron
OK, an attempt at an answer... This may not be 100% accurate, but the idea should be valid.

The cost of factoring a number grows with the size of the number. To prove that a number is prime, you would normally attempt to factor it.

Now, let's say that to produce the number p for your key, you spend X cycles. To produce the number q for your key you also spend X cycles, because p and q are approximately the same size. But to recover those two numbers from the public key N, an attacker has to spend on the order of X*X cycles, since N = p*q.
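A crude illustration of that cost gap (a Python sketch with toy numbers; real key generation and factoring are done very differently, this only shows the asymmetry):

[code]
import math

def is_prime_trial(n):
    """Naive primality test: try every odd divisor up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

def factor_trial(n):
    """Naive factoring: try every divisor up to sqrt(n)."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return n, 1  # n is prime

# Toy sizes only: with 512-bit p and q, the factoring loop would need on
# the order of 2^512 iterations, which no machine can finish.
p, q = 10007, 10009               # small primes for demonstration
N = p * q
print(is_prime_trial(p))          # cheap: ~sqrt(p) divisions
print(factor_trial(N))            # expensive: ~p ~ sqrt(N) divisions
[/code]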

This is what gives you the security - you spend a certain amount of time making your key, and this forces an attacker to spend vastly more time. As if that weren't enough, there are now cheap tricks in use so you don't actually try to factor your candidate numbers p and q. You just run tests on them which, with high probability, "prove" them to be prime. This way you can pick very much larger p's and q's, and no comparably good shortcuts have been found yet for actually factoring their product.
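One such probabilistic test is Miller-Rabin; here's a sketch in Python (an illustration of the idea, not the exact routine any particular library uses):

[code]
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin: a composite n fails each round with probability >= 3/4,
    so surviving many rounds "proves" primality with overwhelming probability."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # definitely composite
    return True                   # almost certainly prime

# Key generation then just picks random odd numbers of the target bit
# length and keeps testing candidates until one passes.
[/code]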

I'm sure Yoni will be able to give you a full mathematical explanation to it once he stumbles across this post :)
June 25, 2003, 7:20 PM
CupHead
However, numbers that are merely probably prime carry the (albeit small) risk of actually being composite, in which case N would break down into much smaller factors that are far easier to find and just as easily exploitable. Yoni and I were discussing attacks on RSA this morning; he mentioned a new algorithm (AKS) which I haven't looked into yet. Anyway, the method I was asking Yoni about had to do with identifying a range within which the prime numbers would be situated. You can trivially eliminate 60% of all candidates (anything divisible by 2 or by 5 [Credit: Yoni]), i.e. numbers ending in 0, 2, 4, 5, 6, or 8 will never be p or q. The bigger win, though, would be identifying the range the primes lie within: being restricted to the order of, say, 10[sup]100[/sup] is much better than 10[sup]308[/sup] (which used to be the standard modulus size for business transactions). We also discussed the possibility of using the Riemann zeta function (whose famous hypothesis is as yet unproven), which is tied to the distribution of the primes. If that distribution were usable for primes of n digits, one could further restrict the range of numbers to be tested.
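A quick check of the 60% figure (Python; the search range is arbitrary, it's only there to count survivors):

[code]
# Any number ending in 0, 2, 4, 5, 6 or 8 is divisible by 2 or 5, so
# 6 of the 10 possible last digits -- 60% of candidates -- can be skipped.
def survives_last_digit_filter(n):
    return n % 2 != 0 and n % 5 != 0

candidates = range(10**6, 2 * 10**6)     # made-up search range
kept = sum(1 for n in candidates if survives_last_digit_filter(n))
print(kept / len(candidates))            # 0.4 -> 60% eliminated
[/code]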
June 25, 2003, 8:29 PM
Adron
Indeed. It's cheating, but probabilistic tests are apparently considered good enough these days. Eliminating ranges of numbers is one of the good old basic methods of finding primes; the problem with that particular method is that it requires huge amounts of storage. Eliminating 60% of all numbers may sound good, but it's really worth almost nothing. Add 2 bits to the prime numbers and you'll have 4 times as many candidates - and 2 bits out of 1024, 2048 or more is very little. I'm also not sure exactly how you'd identify the range for one of the prime factors. In most cases you can count on each factor of a 1024-bit key having 512 bits, but knowing that the prime lies between 2^511 and 2^512-1 rather than anywhere between 0 and 2^512-1 won't help much either.
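To put rough numbers on that (a Python back-of-the-envelope, nothing more):

[code]
import math

# Eliminating 60% of candidates leaves 40%, i.e. shrinks the search
# space by a factor of 2.5 -- worth about 1.3 bits.
print(math.log2(1 / 0.4))        # ~1.32

# Growing each prime by just 2 bits multiplies the number of candidates
# by 4, which more than cancels that saving.
print(2 ** 2)                    # 4
[/code]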

June 26, 2003, 2:49 AM
Raven
Any sequence of prime numbers will theoretically always have some sort of pattern. If one is clever enough to figure that pattern out (we're talking about a heck of a mathematical mind and recognition of patterns, possibly an autistic person), they could restrict the range to the point where a computer would be able to recover the p and q values within 24-72 hours.

http://www.fermi.franken.de/wschildbach/primes.html
July 4, 2003, 1:02 AM
Camel
Graph Electrical Conductivity vs Atomic Number.
July 5, 2003, 11:33 PM
