
Crypto: Archive with three different crypto programs - how safe?


Comments

  • This is/was/maybe my plan:

    All three container files are created independently, each with its respective software, as one file each = 3 files in total. They have nothing to do with each other, besides that I will copy C into B and B into A. To open C (where the files are) you have to mount A, then mount B, then mount C. If A is cracked, they still have to crack B, then C, meaning they have to crack three different crypto programs, three different ciphers and three different passwords.

  • emg Veteran
    edited September 2016

    Follow-up:

    YOU CAN'T COMPRESS ENCRYPTED DATA:

    As others have pointed out, if your data is encrypted, then you can't compress it. If you find that you can compress your encrypted data, then your encryption algorithm is not secure. One of the first lessons that budding cryptographers learn is: "Compress, then encrypt".

    I suggested "zip then encrypt" as much to bundle the files together into one package for convenience as to save on storage, but I wrote it mostly to demonstrate the correct order as I explained in the paragraph above. @myhken (Kenneth) can bundle up his archive however he wants. The tar command might be more appropriate. I should have been more clear about that. Sorry.

    HOW MULTIPLE LAYERS OF ENCRYPTION CAN DECREASE SECURITY

    Kenneth asks how multiple layers of encryption could weaken his security. In most reasonable cases it won't, other than wasting CPU cycles. I will try to explain how it can make a difference and reduce security, and it isn't just the encryption part.

    Consider ROT13.

    ROT13 might be good enough to keep your kid sister's diary safe from your kid brother. If your kid sister double-encrypts the diary with ROT13, then even your kid brother will be able to read it. (That's because the second "encryption" actually decrypts the text back to its original form.)
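    A two-line sketch makes this concrete, using Python's built-in rot13 codec (just an illustration, not a real cipher):

        import codecs

        plaintext = "dear diary"
        once = codecs.encode(plaintext, "rot13")   # -> "qrne qvnel"
        twice = codecs.encode(once, "rot13")       # ROT13 is its own inverse

        assert once != plaintext
        assert twice == plaintext                  # "double encryption" = no encryption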

    To cite an example that is more relevant to Kenneth's proposal, look at Triple DES.

    (Single) DES is a 64-bit block cipher (encryption algorithm) with a 56-bit key size. It was a US government standard, now deprecated because it is not secure. There are several reasons why DES is not secure, but that doesn't matter here. To strengthen DES for high-security use, the government approved Triple DES, where you perform the DES operation three times (technically encrypt-decrypt-encrypt), each with a different, independent key. Thus, the Triple DES key size is 168 bits, which is considered very strong, even by today's standards.

    A brute force attack is one where you try every possible key until you find the right one. You might get lucky on the first try, or you might be unlucky and try all possible keys, but on average, you would try about half the keys before you find the right one. To brute force an ideal encryption algorithm with a 128-bit key, you would try 2^127 keys on average. (2^127 is half of 2^128.)

    Based on what I wrote above, you would think that the average "work factor" to break Triple DES with the 168-bit key would be 2^167. In fact, it is only 2^112 plus a hair more. (That is still very strong, but not as strong as you would expect.)

    I have left out many important details, including memory requirements, but the essential point is that even though you are performing three separate encryptions using three independent keys, the resulting security is roughly as strong as two encryptions, not three. (Extra credit: Search for "Triple DES meet in the middle attack.")
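    For the curious, here is a toy sketch of the meet-in-the-middle idea. The cipher below is a deliberately silly one-byte XOR-and-rotate (not DES; the attack structure is the point): instead of brute-forcing all 2^16 key pairs, we meet in the middle with roughly 2 x 2^8 trial operations plus a lookup table.

        def enc(b, k):
            x = (b ^ k) & 0xFF
            return ((x << 3) | (x >> 5)) & 0xFF    # rotate left by 3

        def dec(b, k):
            x = ((b >> 3) | (b << 5)) & 0xFF       # rotate right by 3
            return (x ^ k) & 0xFF

        k1, k2 = 0xA7, 0x3C                        # the secret key pair
        p1, p2 = 0x42, 0x17                        # two known plaintexts
        c1 = enc(enc(p1, k1), k2)                  # double encryption
        c2 = enc(enc(p2, k1), k2)

        # Forward half: encrypt p1 under every possible first key.
        forward = {enc(p1, ka): ka for ka in range(256)}

        # Backward half: decrypt c1 under every possible second key and look
        # for a match in the middle. A single pair leaves false positives,
        # so the second known pair filters them out.
        candidates = [(forward[dec(c1, kb)], kb) for kb in range(256)
                      if dec(c1, kb) in forward]
        survivors = [(ka, kb) for ka, kb in candidates
                     if enc(enc(p2, ka), kb) == c2]
        print(survivors)                           # contains (167, 60) = (0xA7, 0x3C)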

    APPLYING THE LESSON TO KENNETH'S PROPOSAL

    Let us assume that the only way to attack Kenneth's proposed solution is a brute force attack on each product, and that the products are roughly equivalent. Basically: Try every possible password until you get it right. I will assume that the passwords are completely random combinations of uppercase, lowercase, numbers, and special characters.

    Per the GRC website (https://www.grc.com/haystack.htm):

    • First Password is 16 characters = 10^31 = 105 bits
    • Second Password is 22 characters = 10^43 = 145 bits
    • Third Password is 32 characters = 10^63 = 210 bits

    You would think that the strength of Kenneth's three-program scheme is 16 + 22 + 32 = 70 characters = 10^138 = 459 bits. That is not true. The actual strength of Kenneth's scheme is "only" 16 + 22 = 38 characters = 10^75 = 250 bits.

    Kenneth would have stronger security if he used one encryption, but added seven more characters to his longest password (32 + 7 = 39 characters). In that case, 39 characters = 10^77 = 256 bits.

    In other words, a single encryption with a 39 character password is about 95 times (one extra character's worth, roughly 6.6 bits) stronger than Kenneth's three-pass encryption with the 70 character combined password (16 + 22 + 32 characters).
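    If you want to check these figures yourself, the arithmetic is short. A sketch, assuming a 95-symbol printable-ASCII alphabet (as the GRC page does):

        import math

        bits = lambda n_chars: n_chars * math.log2(95)   # ~6.57 bits per char

        for n in (16, 22, 32):
            print(n, "chars ~", round(bits(n)), "bits")  # 105, 145, 210

        # Meet-in-the-middle: the layered scheme is only about as strong
        # as its two shortest passwords combined.
        print("3-layer scheme:", round(bits(16 + 22)), "bits")  # ~250
        print("single 39-char:", round(bits(39)), "bits")       # ~256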

    NOT THE ONLY EXAMPLE

    I can cite many other examples where multiple encryptions are weaker than a single pass. To be honest, many of them rely on poor software implementations by programmers with less than ideal security programming experience, or weak algorithms. Typical examples involve (re)using related keys and/or IVs, certain block cipher mode issues, stream ciphers, etc.

    WHY KENNETH'S PROPOSAL WEAKENS SECURITY

    I will concede that as long as the passwords (keys) are strong, independent, and not reused, using multiple products to perform encryptions is very unlikely to weaken the security related to encryption below the strength of the longest password that he uses (32 characters). As we have seen above, it doesn't really help, either.

    That's not the only aspect of security that should be important to Kenneth. People often think about the "CIA" triad - confidentiality (encryption), integrity, and availability.

    Complexity is the enemy of security, and Kenneth's proposed solution is a poor one because of its operational complexity. If Kenneth makes any mistakes, then his data is lost - not available when he needs it. The more processing operations that are performed, the more likely that a bit will flip and put the integrity of his data at risk, too. Some products may detect integrity issues and fail ungracefully, losing all of the data, not just a small amount. The probability of a bit error somewhere in the processing is far higher than the probability of anyone brute-forcing even one of his three encryption products, let alone all three in sequence.

    RECOMMENDATION

    I still believe that Kenneth should find one good solid product that meets his needs, and rely on it. Choose a long, strong password, and it does not have to be 39 characters, that's for sure. That's what I would do. KISS.

    Okay, I wrote waaay more than I wanted (or expected), but I hope I got my points across.

  • emg Veteran
    edited September 2016

    P.S. I hope everybody realizes that once key sizes reach a certain point, adding bits to a key length does not improve security in a practical sense.

    If it takes longer than the lifetime of the universe to brute force a key, then what does lengthening the key by another bit mean? (Doubling the time to brute force the key.)

    Thanked by myhken
  • raindog308 said: If you can compress after encryption, your encryption isn't very good. By definition.

    You would be right for perfect encryption, and compressing an encrypted file is fucked up - but encrypting a highly compressed file is worse.

    But no encryption method is perfect, it's all about the trade-offs you make. I don't recommend mixing encryption and compression, or layering/stacking encryption one on top of another, but if you want to do it there are ways to minimize the damage. For instance, if you want to compress a file-system image you could do run-length encoding to skip all the zeroes and reduce file-size, and then do encryption on that.
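    As a sketch of that zero-skipping idea (the two-byte token format here is made up for illustration; real tools use sparse files, qcow2, or a general-purpose compressor):

        def rle_zeros(data: bytes) -> bytes:
            """Pack runs of zero bytes as (0x00, run_length) tokens,
            and literal bytes as (0x01, byte) tokens."""
            out, i = bytearray(), 0
            while i < len(data):
                if data[i] == 0:
                    j = i
                    while j < len(data) and data[j] == 0 and j - i < 255:
                        j += 1
                    out += bytes((0x00, j - i))
                    i = j
                else:
                    out += bytes((0x01, data[i]))
                    i += 1
            return bytes(out)

        image = b"\x00" * 4096 + b"boot" + b"\x00" * 4096
        packed = rle_zeros(image)
        print(len(image), "->", len(packed))   # 8196 -> 76; encrypt *this*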

    myhken said: All three container files are created independently, each with its respective software, as one file each = 3 files in total.

    Individual encryption methods work by squeezing in entropy in different ways, and understanding how they interact is very complex - just because one is AES and another is Serpent doesn't mean they are independent/orthogonal. Encryption strength also depends on your input file, where a method can encrypt some byte sequences better than others. Any security guarantees are probabilistic, over many files.

    Adding many passwords just makes you feel safer, it doesn't change the fundamental strength of any of these methods.

    I would recommend using a single strong encryption method - along the lines of PGP/GPG. PCP1 is a variation of PGP that uses elliptic curves instead of RSA.

    Thanked by myhken
  • When nobody supports my theory, it's time to do what the experts say: use one of my crypto programs, choose one cipher, and then use a long(er) password.

  • rincewind Member
    edited September 2016

    emg said: budding cryptographers learn is: "Compress, then encrypt".

    Here is a counterexample: Let's say a file has only combinations of "true" or "false". Eg - "true false true"

    Case1: Compress then Encrypt

    A perfect compression would whittle down each word to a single bit : either 0 or 1. So "true false true" = "101" - 3 bits. Then you encrypt that 3-bit sequence. Doesn't look good to me.

    Case2: Encrypt then Compress

    Encrypt your text file of 16 characters. Since the input has higher entropy than 3 bits, you would get better encrypted strings, but maybe negligible compression.

    Thanked by myhken
  • raindog308 Administrator, Veteran

    rincewind said: You would be right for perfect encryption, and compressing an encrypted file is fucked up - but encrypting a highly compressed file is worse.

    But no encryption method is perfect, it's all about the trade-offs you make. I don't recommend mixing encryption and compression, or layering/stacking encryption one on top of another,

    You may not, but pros in the field do:

    "10.6 Compression, Encoding, and Encryption

    Using a data compression algorithm together with an encryption algorithm makes sense for two reasons:

    • Cryptanalysis relies on exploiting redundancies in the plaintext; compressing a file before encryption reduces these redundancies.

    • Encryption is time-consuming; compressing a file before encryption speeds up the entire process.

    The important thing to remember is to compress before encryption. If the encryption algorithm is any good, the ciphertext will not be compressible; it will look like random data. (This makes a reasonable test of an encryption algorithm; if the ciphertext can be compressed, then the algorithm probably isn’t very good.)"

    -- Bruce Schneier, Applied Cryptography, 2nd Edition

    "Cryptanalysts use the natural redundancy of language to reduce the number of possible plaintexts. The more redundant the language, the easier it is to cryptanalyze. This is the reason that many real-world cryptographic implementations use a compression program to reduce the size of the text before encrypting it. Compression reduces the redundancy of a message as well as the work required to encrypt and decrypt."

    Same book, page 333
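    Schneier's "reasonable test" is easy to run yourself. A sketch using zlib, with os.urandom standing in for good ciphertext:

        import os, zlib

        text = b"true false " * 1000          # highly redundant plaintext
        random_ish = os.urandom(len(text))    # stand-in for good ciphertext

        print(len(text), "->", len(zlib.compress(text)))              # shrinks a lot
        print(len(random_ish), "->", len(zlib.compress(random_ish)))  # grows slightly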

    Thanked by myhken, yomero, emg
  • yomero Member
    edited September 2016

    After reading all this discussion, the only point I want to add is: what a waste of resources and time this is. Consider that an extra negative point against doing it.

  • rincewind Member
    edited September 2016

    raindog308 said: Bruce Schneier, Applied Cryptography, 2nd Edition

    Hard to argue against Bruce Schneier without the context, but things have changed since 1996.

    Side-channel attacks based on the compressor can be used to break encryption as discussed here.

    The “folk wisdom” in the cryptographic community is that adding compression to a system that does encryption adds to the security of the system, e.g., makes it less likely that an attacker might learn anything about the data being encrypted. This belief is generally based on concerns about unicity distance, keysearch difficulty, or ability of known- or chosen-plaintext attacks. We believe that this folk wisdom, though often repeated in a variety of sources, is not generally true; adding compression to a competently designed encryption system has little real impact on its security.
    

    Cases in point, the CRIME and BREACH attacks that use information leakage in HTTP compression to break SSL and TLS.

    The point is that both encryption and compression have different goals when it comes to transforming entropy. You need to balance your priorities.

    EDIT: Added quote from paper

  • My thanks to @raindog308 and @rincewind for their time and helpful contributions.

    In my opinion, @rincewind's comments about potential security issues with compression are valid in some contexts (e.g., HTTPS), but do not apply in the context of @myhken's specific problem.

  • joepie91 said: I'm disputing that it's legitimate to advertise it as "military-grade".

    I think it's perfectly legitimate and in no way misleading to do so, but each to their own.

    joepie91 said: it does assume that each input under 128 bits of entropy results in precisely one unique output, without collisions in that space. I don't think that's necessarily a valid assumption.

    Salt adds entropy.

    joepie91 said: the origins of the issue are not really relevant when looking at a more recent state of the code

    But I believe it is, which is why I mentioned it.

    joepie91 said: It can be. Here is one example.

    I really can't be bothered reading through a massive page to find something that's relevant to the topic at hand, so I'll apologise that I won't be getting back to you on that one.

    rincewind said: Noooo. Encrypt and then compress! Both have diametrically opposite goals. Compression decreases the entropy of a stream to approach the entropy of the source (perfect/Hamming codes), while encryption increases it to make it appear more random. Compressed files have predictable regions

    Excuse me for saying this, but are you honestly just throwing out buzzwords hoping that it somehow makes sense? You're so far off the mark that it's not even worth responding to.

    But since I was stupid enough to enter the conversation anyway, here I go:

    • compression increases entropy
    • encryption also increases entropy. In a number of ways, compression and encryption have similar goals
    • Hamming codes are a form of parity/FEC scheme, and have little relevance to compression or encryption
    • as pointed out, you can't compress encrypted data if your encryption scheme is any good
    • compression and predictability are 'diametrically opposite goals'. If something can be predicted, then compression should remove it.
    • as pointed out, compression can improve the effectiveness of encryption by increasing the source's entropy as well as reducing the amount of 'vulnerable data'. In practice, the difference probably isn't really noteworthy though. Note that if the attacker can influence the source data (i.e. a chosen plaintext attack), attacks such as 'CRIME' are problematic as it can cause leaks due to length changes, but this isn't an issue if the attacker cannot do this

    raindog308 said: You don't know that

    I do know that. What of it?

  • If you believe that encryption and compression have similar goals then it makes a conversation difficult.

    Compression removes the redundancy in the input to represent the input in the fewest number of bytes. Eg - "true false" to "10"

    Encryption uses the redundancy to make the input less predictable. Eg - "true false" to "T%#DJ#F4s2" - i.e. same number of bytes but the memory capacity of the 10-byte string is filled up with more entropy extracted from /dev/urandom, salts and passwords.
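    The "true false" -> "10" idea, spelled out as code (a toy fixed dictionary; the dictionary itself is what makes the packing reversible):

        DICT = {"true": "1", "false": "0"}
        INV = {v: k for k, v in DICT.items()}

        msg = "true false"
        packed = "".join(DICT[w] for w in msg.split())   # -> "10"
        unpacked = " ".join(INV[b] for b in packed)      # -> "true false"
        assert unpacked == msg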

    I understand your skepticism on the relevance of Hamming codes, but compression can be viewed as a packing problem. I'll just refer to the Wikipedia entry on Coding Theory under the section on Perfect codes and leave it at that.

    On the question of compression improving effectiveness of encryption, I have already addressed it in my previous post.

  • If I recall, the requirements were:

    (Paraphrased by @emg): I have an archive and I want to encrypt it so that only I can read it. I want to store the encrypted archive overseas, in a place where attackers will have access to the encrypted archive. I want to feel confident that my encrypted archive will remain secure no matter what, even if the attackers have virtually unlimited time and resources.

    Kenneth (@myhken) proposed a solution that uses three different encryption tools in succession. He names the three tools, along with the respective encryption algorithms and password sizes that he intends to use.

    The discussions about entropy, compression, etc. are academically interesting, but are not very useful to Kenneth unless you show him how your conclusions apply to his problem.

    Here is my summary:

    • We have demonstrated why using multiple tools weakens security. (I failed to mention that if you use three encryption products, you triple the risk that one of your chosen tools might lose support, backwards compatibility, etc., so I will mention it now.)

    • We gave Kenneth a way to measure the strength of a random password.

    • I hope we made it clear to Kenneth that once the strength of his encryption reaches a certain point, increasing the key size or adding another encryption layer will not improve security. At best, it will be just as secure, and at worst, it will weaken it. In my opinion, a well-written encryption tool that uses a full 128-bit key is sufficient.

    • We have explained that if you are going to compress the archive, you must do it before encryption. Some encryption tools include a compression capability.

    • I believe that we have shown Kenneth that compressing his archive does not weaken security for his use case. There are other situations where compression should not be used, but they do not apply to Kenneth's archive question.

    I recommend that Kenneth find a single encryption product that he trusts. Kenneth should use it to encrypt his archive. He should choose a strong password that is the equivalent of a 128-bit key (that's a very long password!). He can safely compress his archive (for bundling or size-reduction purposes) before encryption, without degrading security for this use case.
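    For reference, "equivalent of a 128-bit key" works out to about 20 random printable-ASCII characters:

        import math
        print(math.ceil(128 / math.log2(95)))   # -> 20 characters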

    Does anyone disagree with that?

    Thanked by myhken
  • @emg said:

    Why do you keep calling him Kenneth?

  • raindog308 Administrator, Veteran

    xyz said: I do know that. What of it?

    You may, but my point was that 99.999% of people on the planet (hell, it might be 99.999999%) don't have the math background to understand encryption. Yet many people try to roll their own encryption.

    I don't argue with my dentist about the right way to fill a cavity. I don't argue with professional cryptographers about encryption theory. Some things just require a specialist and it's folly to ignore their advice.

  • rincewind Member
    edited September 2016

    As an alternative to dd + gzip, you could do e2image -Qap to generate a QCOW2 image before encrypting it.

  • emg Veteran
    edited September 2016

    @ManofServer said:

    Why do you keep calling him Kenneth?

    Umm ... because that's his name?

    Did you read his signature on the top post of this thread ... and every one of his posts after that? Don't respond; it is a rhetorical question. We already know the answer.

  • @ManofServer said:

    @emg said:

    Why do you keep calling him Kenneth?

    Hello, my name is Kenneth. What's yours?

  • raindog308 Administrator, Veteran

    @myhken said:
    Hello, my name is Kenneth.

    What's the frequency, btw?

    Thanked by emg, Microlinux, myhken
  • @emg said:

    @ManofServer said:

    Why do you keep calling him Kenneth?

    Umm ... because that's his name?

    Did you read his signature on the top post of this thread ... and every one of his posts after that? Don't respond; it is a rhetorical question. We already know the answer.

    It still is weird to cite his username and first name several times in the same post; no one else does that. I don't understand the fundamental purpose behind it.

  • emg Veteran
    edited September 2016

    @ManofServer said:

    It still is weird to cite his username and first name several times in the same post; no one else does that. I don't understand the fundamental purpose behind it.

    I tried to cite the username once to tag the post, and use his real name thereafter. It was my (misguided) attempt to make the comments more personable - with the goal of answering his question and solving the problem.

    ON TOPIC:
    Are there any disagreements with my summary and recommendation eight posts above this one?

    Thanked by ManofServer
  • rincewind said: Eg - "true false" to "T%#DJ#F4s2" - i.e. same number of bytes

    And if I got a 10 byte string, I know that the source is either "true false" or "false true". "false false" would be 11 bytes and "true true" would be 9. In other words, the encrypted data is leaking information to me.
    On the other hand, if all I have is an encrypted, perfectly compressed, two bits, I'd have no idea whether the original was 00, 01, 10, or 11; in other words, the encrypted data does not leak any additional information that I didn't already know.

    The situation would be worse if it was only a single true/false state - if I got a 4 byte encrypted string, regardless of how strong the encryption is, I know that it decrypts to 'true'.
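    A sketch of the leak, with XOR against a random keystream standing in for a stream cipher (not a real cipher, but real stream ciphers preserve length the same way):

        import os

        def xor_stream(data: bytes, keystream: bytes) -> bytes:
            return bytes(a ^ b for a, b in zip(data, keystream))

        keystream = os.urandom(16)
        for word in (b"true", b"false"):
            ct = xor_stream(word, keystream)
            print(word, "->", len(ct), "bytes")   # the length alone gives it away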

    I strongly suggest revising your understanding of information theory and the notion of entropy - what exactly it is - before making any further comments.

    raindog308 said: Yet many people try to roll their own encryption.

    I don't believe that is what is being attempted here though. Existing well known encryption algorithms are being nested - no-one is inventing their own crypto system.

    You don't need a strong background in coding theory to understand the basic 'guarantees' (or rather, what it attempts to guarantee) that a strong encryption system provides.

  • rincewind Member
    edited September 2016

    xyz said: On the other hand, if all I have is an encrypted, perfectly compressed, two bits, I'd have no idea whether the original was 00, 01, 10, or 11; in other words, the encrypted data does not leak any additional information that I didn't already know.

    You just demonstrated again that you don't know the difference between compression and encryption.

    A compressed file comes along with a dictionary (either explicit or implicit) that tells you how to decompress the file. In this case the dictionary would be (0 => "false", 1 => "true"). Constructing the dictionary is how the redundancy is extracted from the system. In the simplest example of decompression you just replace every 0 with the string "false" and every 1 with "true". Dynamically constructed dictionaries are embedded into your compressed file, and static dictionaries are inferred from the compression type (in the file header). So it's trivial to uncompress "00" to "false false".

    Look at this link for examples on how dictionaries are built in a few popular compression algorithms.

    Huffman coding is among the easier compression algorithms to implement by hand. Read through its wiki entry, and work through a few examples of compressing and decompressing strings by hand.
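    For anyone who prefers code to hand exercises, here is a minimal Huffman coder (a sketch; real implementations add canonical codes and bit packing):

        import heapq
        from collections import Counter

        def huffman_codes(text: str) -> dict:
            # Heap entries: (count, tiebreak, tree); a tree is either a
            # symbol or a (left, right) pair.
            heap = [(n, i, ch) for i, (ch, n) in enumerate(Counter(text).items())]
            heapq.heapify(heap)
            i = len(heap)
            while len(heap) > 1:
                n1, _, t1 = heapq.heappop(heap)
                n2, _, t2 = heapq.heappop(heap)
                heapq.heappush(heap, (n1 + n2, i, (t1, t2)))
                i += 1
            codes = {}
            def walk(tree, prefix=""):
                if isinstance(tree, tuple):
                    walk(tree[0], prefix + "0")
                    walk(tree[1], prefix + "1")
                else:
                    codes[tree] = prefix or "0"
            walk(heap[0][2])
            return codes

        codes = huffman_codes("true false true")
        packed = "".join(codes[c] for c in "true false true")
        print(codes)                    # frequent symbols get shorter codes
        print(len(packed), "bits")      # vs. 15 * 8 = 120 bits uncompressed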

  • I must say, you certainly have quite some imagination there. Not that your lack of knowledge of the subject matters much, I suppose, but pointing out the inconsistencies is getting rather tiresome and is clearly a waste of my time (and probably that of anyone else who peers into this thread).

    Believe what you will, I'm done here.

    Thanked by yomero
  • Insults won't get you anywhere. You seem to believe that compressed files are encrypted - as in the perfectly compressed 2-bit case. Anyone who intercepts a compressed file can easily decompress it - there is no expectation of secrecy.

  • @rincewind said:

    You just believe compressed files are encrypted - like in the perfectly compressed 2-bit case.

    I don't think he said anything like that.

  • rincewind Member
    edited September 2016

    yomero said: I don't think he said anything like that.

    At 2 bits to represent one of four possibilities, how would you encrypt it? The best you can do is a substitution cipher, which is the same as modifying the dictionary of your compression.

    To be fair, I brought in some pre-existing bias from his previous post about compression and encryption having similar goals - and compression increasing entropy.
