The Electronic Frontier Foundation has released a Secure Messaging Scorecard that examines dozens of messaging technologies and rates each of them on a range of security best practices.
Our campaign is focused on communication technologies -- including chat clients, text messaging apps, email applications, and video calling technologies. These are the tools everyday users need to communicate with friends, family members, and colleagues, and we need secure solutions for them.
Notably, iMessage and FaceTime received checkmarks in five out of the seven areas examined. That's better than competing messaging platforms like Skype, WhatsApp, Viber, Snapchat, Kik, Google Hangouts, and BlackBerry Messenger.
As expected, less popular messengers that focus on security topped the list, including ChatSecure, CryptoCat, Signal, Silent Phone, Silent Text, and TextSecure.
Here's a look at the methodology used to rank messengers...
Methodology
1. Is your communication encrypted in transit?
This criterion requires that all user communications are encrypted along all the links in the communication path. Note that we are not requiring encryption of data that is transmitted on a company network, though that is ideal. We do not require that metadata (such as user names or addresses) is encrypted.
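In practice, this first criterion is usually met by wrapping the client-server connection in TLS. As a rough, self-contained sketch of what that looks like (the hostname and port are placeholders, not any particular service), here is a TLS-wrapped client connection in Python:

    import socket
    import ssl

    HOST = "chat.example.com"  # placeholder server, not a real service
    PORT = 443

    # create_default_context() enables certificate verification and hostname checks.
    context = ssl.create_default_context()

    with socket.create_connection((HOST, PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            print("protocol:", tls_sock.version())   # e.g. 'TLSv1.3'
            print("cipher:", tls_sock.cipher())      # negotiated cipher suite
            tls_sock.sendall(b"hello")               # encrypted on the wire

Note that this only protects the client-to-server link; the provider still sees the plaintext, which is exactly what the next criterion addresses.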
2. Is your communication encrypted with a key the provider doesn't have access to?
This criterion requires that all user communications are end-to-end encrypted. This means the keys necessary to decrypt messages must be generated and stored at the endpoints (i.e. by users, not by servers). The keys should never leave endpoints except with explicit user action, such as to back up a key or synchronize keys between two devices. It is fine if users' public keys are exchanged using a centralized server.
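To make the end-to-end requirement concrete, here is a deliberately simplified sketch using X25519 key agreement and ChaCha20-Poly1305 from Python's cryptography package. It illustrates the general idea (keys generated on the endpoints, only public keys relayed by the provider), not any particular app's actual protocol:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each user generates a key pair locally; private keys never leave the device.
    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # Only the public halves would be uploaded to or relayed by the provider.
    alice_pub, bob_pub = alice_priv.public_key(), bob_priv.public_key()

    def derive_key(my_priv, their_pub):
        """Derive a symmetric message key from an X25519 shared secret."""
        shared = my_priv.exchange(their_pub)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"demo e2e key").derive(shared)

    # Alice encrypts; the server only ever sees ciphertext.
    key = derive_key(alice_priv, bob_pub)
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"hi Bob", None)

    # Bob derives the same key from his private key and Alice's public key.
    assert ChaCha20Poly1305(derive_key(bob_priv, alice_pub)).decrypt(
        nonce, ciphertext, None) == b"hi Bob"

Because the private keys exist only on the two devices, the provider can relay the ciphertext but cannot decrypt it.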
3. Can you independently verify your correspondent's identity?
This criterion requires that a built-in method exists for users to verify the identity of correspondents they are speaking with and the integrity of the channel, even if the service provider or other third parties are compromised. Two acceptable solutions are:
● An interface for users to view the fingerprint (hash) of their correspondent's public keys as well as their own, which users can verify manually or out-of-band.
● A key exchange protocol with a short-authentication-string comparison, such as the Socialist Millionaire's protocol.
Other solutions are possible, but any solution must verify a binding between users and the cryptographic channel which has been set up. For the scorecard, we are simply requiring that a mechanism is implemented and not evaluating the usability and security of that mechanism.
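As an illustration of the first option above (fingerprint comparison), the following minimal sketch hashes a public key into a short string that two users could compare out-of-band, say over a phone call. The display format here is made up for the example, not taken from any real app:

    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    def fingerprint(public_key) -> str:
        """Hash a raw X25519 public key into a short, human-comparable string."""
        raw = public_key.public_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PublicFormat.Raw,
        )
        digest = hashlib.sha256(raw).hexdigest()
        # Show the first 128 bits, grouped so they are easier to read aloud.
        return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

    alice_pub = X25519PrivateKey.generate().public_key()
    print("Compare this fingerprint with your contact:", fingerprint(alice_pub))

If the fingerprints match over a channel the provider does not control, the users know they are encrypting to each other's real keys rather than to a key substituted by a man in the middle.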
4. Are past communications secure if your keys are stolen?
This criterion requires that the app provide forward-secrecy, that is, all communications must be encrypted with ephemeral keys which are routinely deleted (along with the random values used to derive them). It is imperative that these keys cannot be reconstructed after the fact by anybody even given access to both parties' long-term private keys, ensuring that if users choose to delete their local copies of correspondence, they are permanently deleted. Note that this criterion requires criterion 2, end-to-end encryption.
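The mechanics behind this criterion can be sketched as follows: session keys are derived from ephemeral key pairs that are thrown away after use, so stealing the long-term identity keys later does not let an attacker re-derive them. This is a simplified illustration (real protocols such as Signal's ratchet keys far more aggressively, often per message), not a complete design:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Long-term identity keys: the ones an attacker might steal later.
    alice_identity = X25519PrivateKey.generate()
    bob_identity = X25519PrivateKey.generate()

    def session_key(my_ephemeral_priv, their_ephemeral_pub):
        """Derive a per-session key from ephemeral keys only."""
        shared = my_ephemeral_priv.exchange(their_ephemeral_pub)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"demo session key").derive(shared)

    # Fresh ephemeral key pairs are generated for this session only.
    alice_eph = X25519PrivateKey.generate()
    bob_eph = X25519PrivateKey.generate()

    key = session_key(alice_eph, bob_eph.public_key())

    # After the session, both sides discard their ephemeral private keys.
    # The identity keys alone can no longer reproduce `key`, so recorded
    # ciphertext from this session stays unreadable even if they are stolen.
    del alice_eph, bob_eph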
5. Is the code open to independent review?
This criterion requires that sufficient source code has been published that a compatible implementation can be independently compiled. Although a free/open source license is preferable, we do not require the code to be released under any specific license. We only require that all code which could affect the communication and encryption performed by the client is available for review in order to detect bugs, back doors, and structural problems.
Note: when tools are provided by an operating system vendor, we only require code for the tool and not the entire OS. This is a compromise, but the task of securing OSes and updates to OSes is beyond the scope of this project.
6. Is the crypto design well-documented?
This criterion requires clear and detailed explanations of the cryptography used by the application. Preferably this should take the form of a white-paper written for review by an audience of professional cryptographers. This must provide answers to the following questions:
● Which algorithms and parameters (such as key sizes or elliptic curve groups) are used in every step of the encryption and authentication process
● How keys are generated, stored, and exchanged between users
● The life-cycle of keys and the process for users to change or revoke their key
● A clear statement of the properties and protections the software aims to provide (implicitly, this tends to also provide a threat model, though it's good to have an explicit threat model too). This should also include a clear statement of scenarios in which the protocol is not secure.
7. Has there been an independent security audit?
This criterion requires that an independent security review has been performed within the 12 months prior to evaluation. This review must cover both the design and the implementation of the app and must be performed by a named auditing party that is independent of the tool's main development team. Audits by an independent security team within a large organization are sufficient. Recognizing that unpublished audits can be valuable, we do not require that the results of the audit have been made public, only that a named party is willing to verify that the audit took place.
Take a look at the scorecard below...
Read More [via MacRumors]