Category Archives: Encryption
Key Features to Look for When Buying Encryption Software
Performance

If your encryption software is difficult to use, you may not use it at all. The programs we reviewed are simple and intuitive, particularly Folder Lock and Secure IT; both guide you through the encryption and decryption processes step by step. Secure IT integrates with Windows, so all you have to do is right-click a file and choose to encrypt it from the menu.
We found that programs typically compress files as they encrypt them, though only to a small degree, for example, from 128MB down to 124MB. This can make a difference when you encrypt large data files, so programs that both protect and compress are preferable.

Security

Encryption software uses different types of ciphers to scramble your data, and each has its own benefits. Advanced Encryption Standard, or 256-bit key AES, is used by the U.S. government, including the National Security Agency (NSA), and is one of the strongest ciphers available. Blowfish and Twofish, the latter being a newer version of the former, are encryption algorithms that use block ciphers: they scramble blocks of text, or several bits of information at once, rather than one bit at a time.
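The mild compression noted earlier in this section can be sketched with Python's zlib module; the data and the resulting ratio here are illustrative, not a benchmark of any reviewed product.

```python
import zlib

# Repetitive data compresses well; typical user files shrink only
# modestly, much as the review observes (e.g., 128MB down to 124MB).
data = b"example log line: status=ok\n" * 10_000

compressed = zlib.compress(data, level=6)
print(f"original:   {len(data):>7} bytes")
print(f"compressed: {len(compressed):>7} bytes "
      f"({len(compressed) / len(data):.1%} of original)")
```

A product that compresses before encrypting follows the same pattern: compress first, then encrypt the compressed bytes, since encrypted output is effectively incompressible.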
The main differences between these algorithms are performance and speed, and the average user won't notice those disparities. Although any of these ciphers could be broken given enough time and computing power, they are considered practically unbreakable. AES has long been recognized as the superior algorithm, so we preferred programs that use it.

Version Compatibility

If your computer runs an older version of Windows, such as Vista or XP, make sure the encryption program supports your operating system. On the flip side, you need to make sure you choose software that has changed with the times and supports the latest versions of Windows, like 7, 8 and 10.
While all the programs we tested are compatible with every version of Windows, we feel that SensiGuard is a good choice for older computers because it has only the most essential tools and won't bog down your PC. Plus, it is easy to move to a new computer if you choose to upgrade. However, it takes a while to encrypt and decrypt files.
If you have a Mac, you need a program designed specifically for that operating system; none of the programs we tested are compatible with both Windows and Mac machines. We believe Concealer is the best option for Macs, but Espionage 3 is also a good choice.
Mac encryption programs don't have as many extra security features as Windows programs. They typically lack virtual keyboards, self-extracting file creators and password recovery tools. Mac programs also take a lot more time to secure files compared to Windows software.
Read the rest here:
The Best Encryption Software – TopTenReviews
Most sensitive web transactions are protected by public-key cryptography, a type of encryption that lets computers share information securely without first agreeing on a secret encryption key.
Public-key encryption protocols are complicated, and in computer networks, they're executed by software. But that won't work in the internet of things, an envisioned network that would connect many different sensors, embedded in vehicles, appliances, civil structures, manufacturing equipment, and even livestock tags, to online servers. Embedded sensors that need to maximize battery life can't afford the energy and memory space that software execution of encryption protocols would require.

MIT researchers have built a new chip, hardwired to perform public-key encryption, that consumes only 1/400 as much power as software execution of the same protocols would. It also uses about 1/10 as much memory and executes 500 times faster. The researchers describe the chip in a paper they're presenting this week at the International Solid-State Circuits Conference.
Like most modern public-key encryption systems, the researchers' chip uses a technique called elliptic-curve encryption. As its name suggests, elliptic-curve encryption relies on a type of mathematical function called an elliptic curve. In the past, researchers, including the same MIT group that developed the new chip, have built chips hardwired to handle specific elliptic curves or families of curves. What sets the new chip apart is that it is designed to handle any elliptic curve.
"Cryptographers are coming up with curves with different properties, and they use different primes," says Utsav Banerjee, an MIT graduate student in electrical engineering and computer science and first author on the paper. "There is a lot of debate regarding which curve is secure and which curve to use, and there are multiple governments with different standards coming up that talk about different curves. With this chip, we can support all of them, and hopefully, when new curves come along in the future, we can support them as well."
Joining Banerjee on the paper are his thesis advisor, Anantha Chandrakasan, dean of MIT's School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science; Arvind, the Johnson Professor in Computer Science and Engineering; and Andrew Wright and Chiraag Juvekar, both graduate students in electrical engineering and computer science.
To create their general-purpose elliptic-curve chip, the researchers decomposed the cryptographic computation into its constituent parts. Elliptic-curve cryptography relies on modular arithmetic, meaning that the values of the numbers that figure into the computation are assigned a limit. If the result of some calculation exceeds that limit, it's divided by the limit, and only the remainder is preserved. The difficulty of reversing such modular operations helps ensure cryptographic security.
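The modular reduction described above can be sketched in a few lines of Python; the modulus here is a small illustrative prime, not the roughly 256-bit prime a real curve would use.

```python
p = 2**13 - 1  # small illustrative prime modulus (8191)

a, b = 5000, 7000
product = (a * b) % p  # keep only the remainder after dividing by p

# The reduced result always stays below the modulus.
assert product < p
print(product)
```

A hardware modular multiplier performs exactly this multiply-then-reduce step, only on 256-bit operands and in a single dedicated circuit.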
One of the computations to which the MIT chip devotes a special-purpose circuit is thus modular multiplication. But because elliptic-curve cryptography deals with large numbers, the chip's modular multiplier is massive. Typically, a modular multiplier might be able to handle numbers with 16 or maybe 32 binary digits, or bits. For larger computations, the results of discrete 16- or 32-bit multiplications would be integrated by additional logic circuits.

The MIT chip's modular multiplier, however, can handle 256-bit numbers. Eliminating the extra circuitry for integrating smaller computations both reduces the chip's energy consumption and increases its speed.
Another key operation in elliptic-curve cryptography is called inversion. Inversion is the calculation of a number that, when multiplied by a given number, will yield a modular product of 1. In previous chips dedicated to elliptic-curve cryptography, inversions were performed by the same circuits that did the modular multiplications, saving chip space. But the MIT researchers instead equipped their chip with a special-purpose inverter circuit. This increases the chip's surface area by 10 percent, but it cuts the power consumption in half.
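The inversion operation can be sketched with Python's built-in modular inverse (pow with exponent -1, available since Python 3.8); again, the prime is a small illustrative one.

```python
p = 2**13 - 1  # small illustrative prime modulus
a = 1234

inv = pow(a, -1, p)  # modular inverse of a mod p

# Multiplying the number by its inverse yields a modular product of 1.
assert (a * inv) % p == 1
print(inv)
```

A dedicated inverter circuit computes this same relation in hardware instead of reusing the multiplier circuit many times over.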
The most common encryption protocol to use elliptic-curve cryptography is called the datagram transport layer security protocol, which governs not only the elliptic-curve computations themselves but also the formatting, transmission, and handling of the encrypted data. In fact, the entire protocol is hardwired into the MIT researchers' chip, which dramatically reduces the amount of memory required for its execution.

The chip also features a general-purpose processor that can be used in conjunction with the dedicated circuitry to execute other elliptic-curve-based security protocols. But it can be powered down when not in use, so it doesn't compromise the chip's energy efficiency.
"They move a certain amount of functionality that used to be in software into hardware," says Xiaolin Lu, director of the internet of things (IoT) lab at Texas Instruments. "That has advantages that include power and cost. But from an industrial IoT perspective, it's also a more user-friendly implementation. For whoever writes the software, it's much simpler."
Android 7.0 and later supports file-based encryption (FBE). File-based encryption allows different files to be encrypted with different keys that can be unlocked independently.

This article describes how to enable file-based encryption on new devices and how system applications can be updated to take full advantage of the new Direct Boot APIs and offer users the best, most secure experience possible.

Warning: File-based encryption cannot currently be used together with adoptable storage. On devices using file-based encryption, new storage media (such as an SD card) must be used as traditional storage.

File-based encryption enables a new feature introduced in Android 7.0 called Direct Boot. Direct Boot allows encrypted devices to boot straight to the lock screen. Previously, on encrypted devices using full-disk encryption (FDE), users needed to provide credentials before any data could be accessed, preventing the phone from performing all but the most basic of operations. For example, alarms could not operate, accessibility services were unavailable, and phones could not receive calls but were limited to only basic emergency dialer operations.

With the introduction of file-based encryption (FBE) and new APIs to make applications aware of encryption, it is possible for these apps to operate within a limited context. This can happen before users have provided their credentials while still protecting private user information.
On an FBE-enabled device, each user of the device has two storage locations available to applications: Credential Encrypted (CE) storage, which is available only after the user has unlocked the device, and Device Encrypted (DE) storage, which is available both during Direct Boot mode and after the user has unlocked the device.
This separation makes work profiles more secure because it allows more than one user to be protected at a time, as the encryption is no longer based solely on a boot-time password.
The Direct Boot API allows encryption-aware applications to access each of these areas. There are changes to the application lifecycle to accommodate the need to notify applications when a user's CE storage is unlocked in response to first entering credentials at the lock screen or, in the case of a work profile, providing a work challenge. Devices running Android 7.0 must support these new APIs and lifecycles regardless of whether or not they implement FBE; without FBE, however, DE and CE storage will always be in the unlocked state.
A complete implementation of file-based encryption on an ext4 file system is provided in the Android Open Source Project (AOSP) and needs only be enabled on devices that meet the requirements. Manufacturers electing to use FBE may wish to explore ways of optimizing the feature based on the system on chip (SoC) used.

All the necessary packages in AOSP have been updated to be direct-boot aware. However, where device manufacturers use customized versions of these apps, they will want to ensure at a minimum there are direct-boot aware packages providing the following services:
Android provides a reference implementation of file-based encryption, in which vold (system/vold) provides the functionality for managing storage devices and volumes on Android. The addition of FBE provides vold with several new commands to support key management for the CE and DE keys of multiple users. In addition to the core changes to use the ext4 encryption capabilities in the kernel, many system packages, including the lock screen and the SystemUI, have been modified to support the FBE and Direct Boot features. These include:
* System applications that use the defaultToDeviceProtectedStorage manifest attribute

More examples of applications and services that are encryption aware can be found by running the command mangrep directBootAware in the frameworks or packages directory of the AOSP source tree.

To use the AOSP implementation of FBE securely, a device needs to meet the following dependencies:

Note: Storage policies are applied to a folder and all of its subfolders. Manufacturers should limit the contents that go unencrypted to the OTA folder and the folder that holds the key that decrypts the system. Most contents should reside in credential-encrypted storage rather than device-encrypted storage.
First and foremost, apps such as alarm clocks, the phone, and accessibility features should be made android:directBootAware according to the Direct Boot developer documentation.
The AOSP implementation of file-based encryption uses the ext4 encryption features in the Linux 4.4 kernel. The recommended solution is to use a kernel based on 4.4 or later. Ext4 encryption has also been backported to a 3.10 kernel in the Android common repositories and for the supported Nexus kernels.

The android-3.10.y branch in the AOSP kernel/common git repository may provide a good starting point for device manufacturers that want to import this capability into their own device kernels. However, it is necessary to apply the most recent patches from the latest stable Linux kernel (currently linux-4.6) of the ext4 and jbd2 projects. The Nexus device kernels already include many of these patches.

Note that each of these kernels uses a backport to 3.10. The ext4 and jbd2 drivers from Linux 3.18 were transplanted into existing kernels based on 3.10. Due to interdependencies between parts of the kernel, this backport breaks support for a number of features that are not used by Nexus devices. These include:

In addition to functional support for ext4 encryption, device manufacturers may also consider implementing cryptographic acceleration to speed up file-based encryption and improve the user experience.
FBE is enabled by adding the flag fileencryption=contents_encryption_mode[:filenames_encryption_mode] to the final column of the fstab line for the userdata partition. The contents_encryption_mode parameter defines which cryptographic algorithm is used for the encryption of file contents, and filenames_encryption_mode which is used for the encryption of filenames. contents_encryption_mode can only be aes-256-xts. filenames_encryption_mode has two possible values: aes-256-cts and aes-256-heh. If filenames_encryption_mode is not specified, the aes-256-cts value is used.
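The flag's syntax can be sketched with a small parser that follows the rules above (contents mode fixed at aes-256-xts, filenames mode defaulting to aes-256-cts); the function name is illustrative, not part of any Android tool.

```python
def parse_fileencryption(flag: str) -> tuple[str, str]:
    """Parse a fileencryption=contents_mode[:filenames_mode] fstab flag."""
    _, _, value = flag.partition("=")
    contents, _, filenames = value.partition(":")
    if contents != "aes-256-xts":
        raise ValueError("contents_encryption_mode can only be aes-256-xts")
    if filenames and filenames not in ("aes-256-cts", "aes-256-heh"):
        raise ValueError(f"unknown filenames_encryption_mode: {filenames}")
    return contents, filenames or "aes-256-cts"  # default when unspecified

print(parse_fileencryption("fileencryption=aes-256-xts"))
print(parse_fileencryption("fileencryption=aes-256-xts:aes-256-heh"))
```

The first call shows the default being applied; the second shows an explicit filenames mode being accepted.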
Whilst testing the FBE implementation on a device, it is possible to specify the following flag instead: forcefdeorfbe=
This sets the device up with FDE but allows conversion to FBE for developers. By default, this behaves like forceencrypt, putting the device into FDE mode. However, it will expose a debug option allowing a device to be put into FBE mode, as is the case in the developer preview. It is also possible to enable FBE from fastboot using this command:

This is intended solely for development purposes as a platform for demonstrating the feature before actual FBE devices are released. This flag may be deprecated in the future.

The generation of keys and management of the kernel keyring is handled by vold. The AOSP implementation of FBE requires that the device support Keymaster HAL version 1.0 or later. There is no support for earlier versions of the Keymaster HAL.
On first boot, user 0's keys are generated and installed early in the boot process. By the time the on-post-fs phase of init completes, the Keymaster must be ready to handle requests. On Nexus devices, this is handled by having a script block:
Note: All encryption is based on AES-256 in XTS mode. Due to the way XTS is defined, it needs two 256-bit keys; so in effect, both CE and DE keys are 512-bit keys.
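The note above can be illustrated directly: each CE or DE key is a 512-bit value that XTS mode treats as two 256-bit AES keys. The byte-level split below is a sketch of that arithmetic, not of how the kernel stores keys.

```python
import os

xts_key = os.urandom(64)                 # one 512-bit CE or DE key
key1, key2 = xts_key[:32], xts_key[32:]  # the two 256-bit AES keys XTS needs

print(len(xts_key) * 8, len(key1) * 8, len(key2) * 8)  # 512 256 256
```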
Ext4 encryption applies the encryption policy at the directory level. When a device's userdata partition is first created, the basic structures and policies are applied by the init scripts. These scripts trigger the creation of the first user's (user 0's) CE and DE keys as well as define which directories are to be encrypted with these keys. When additional users and profiles are created, the necessary additional keys are generated and stored in the keystore; their credential- and device-encrypted storage locations are created, and the encryption policy links these keys to those directories.
In the current AOSP implementation, the encryption policy is hardcoded into this location:
It is possible to add exceptions in this file to prevent certain directories from being encrypted at all by adding them to the directories_to_exclude list. If modifications of this sort are made, then the device manufacturer should include SELinux policies that only grant access to the applications that need to use the unencrypted directory. This should exclude all untrusted applications.
The only known acceptable use case for this is in support of legacy OTAcapabilities.
To facilitate rapid migration of system apps, there are two new attributes that can be set at the application level. The defaultToDeviceProtectedStorage attribute is available only to system apps. The directBootAware attribute is available to all.
The directBootAware attribute at the application level is shorthand for markingall components in the app as being encryption aware.
The defaultToDeviceProtectedStorage attribute redirects the default app storage location to point at DE storage instead of pointing at CE storage. System apps using this flag must carefully audit all data stored in the default location, and change the paths of sensitive data to use CE storage. Device manufacturers using this option should carefully inspect the data that they are storing to ensure that it contains no personal information.
When running in this mode, the following System APIs are available to explicitly manage a Context backed by CE storage when needed, which are equivalent to their Device Protected counterparts.

Each user in a multi-user environment gets a separate encryption key. Every user gets two keys: a DE and a CE key. User 0 must log into the device first as it is a special user. This is pertinent for Device Administration uses.

Crypto-aware applications interact across users in this manner: INTERACT_ACROSS_USERS and INTERACT_ACROSS_USERS_FULL allow an application to act across all the users on the device. However, those apps will be able to access only CE-encrypted directories for users that are already unlocked.

An application may be able to interact freely across the DE areas, but one user unlocked does not mean that all the users on the device are unlocked. The application should check this status before trying to access these areas.
Each work profile user ID also gets two keys: DE and CE. When the work challenge is met, the profile user is unlocked and the Keymaster (in the TEE) can provide the profile's TEE key.
The recovery partition is unable to access the DE-protected storage on the userdata partition. Devices implementing FBE are strongly recommended to support OTA using A/B system updates. As the OTA can be applied during normal operation, there is no need for recovery to access data on the encrypted drive.

When using a legacy OTA solution, which requires recovery to access the OTA file on the userdata partition:

To ensure the implemented version of the feature works as intended, employ the many CTS encryption tests.

Once the kernel builds for your board, also build for x86 and run under QEMU in order to test with xfstest by using:

In addition, device manufacturers may perform these manual tests. On a device with FBE enabled:

Additionally, testers can boot a userdebug instance with a lock screen set on the primary user. Then adb shell into the device and use su to become root. Make sure /data/data contains encrypted filenames; if it does not, something is wrong.

This section provides details on the AOSP implementation and describes how file-based encryption works. It should not be necessary for device manufacturers to make any changes here to use FBE and Direct Boot on their devices.
The AOSP implementation uses ext4 encryption in kernel and is configured to:
Disk encryption keys, which are 512-bit AES-XTS keys, are stored encrypted by another key (a 256-bit AES-GCM key) held in the TEE. To use this TEE key, three requirements must be met:
The auth token is a cryptographically authenticated token generated by Gatekeeper when a user successfully logs in. The TEE will refuse to use the key unless the correct auth token is supplied. If the user has no credential, then no auth token is used or needed.
The stretched credential is the user credential after salting and stretching with the scrypt algorithm. The credential is actually hashed once in the lock settings service before being passed to vold for passing to scrypt. This is cryptographically bound to the key in the TEE with all the guarantees that apply to KM_TAG_APPLICATION_ID. If the user has no credential, then no stretched credential is used or needed.
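The salting and stretching step can be sketched with Python's hashlib.scrypt; the credential and the cost parameters (n, r, p) below are illustrative, not the values vold actually uses.

```python
import hashlib
import os

credential = b"1234"   # illustrative user credential (e.g., a PIN)
salt = os.urandom(16)  # random per-user salt

# Stretching makes each password guess expensive, slowing brute force.
stretched = hashlib.scrypt(credential, salt=salt, n=2**14, r=8, p=1, dklen=32)

print(len(stretched), stretched.hex())
```

The same credential with the same salt always stretches to the same value, which is what lets the TEE bind the key to it.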
The secdiscardable hash is a 512-bit hash of a random 16 KB file stored alongside other information used to reconstruct the key, such as the seed. This file is securely deleted when the key is deleted, or it is encrypted in a new way; this added protection ensures an attacker must recover every bit of this securely deleted file to recover the key. This is cryptographically bound to the key in the TEE with all the guarantees that apply to KM_TAG_APPLICATION_ID. See the Keystore Implementer's Reference.
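The shape of the secdiscardable computation can be sketched as follows; the 16 KB blob is generated in memory here rather than read from the real on-disk file, which is an implementation detail of vold.

```python
import hashlib
import os

secdiscardable = os.urandom(16 * 1024)           # random 16 KB blob
digest = hashlib.sha512(secdiscardable).digest()

# A 512-bit hash: losing even one bit of the blob changes the digest,
# so securely deleting the blob makes the bound key unrecoverable.
print(len(digest) * 8)  # 512
```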
Secure recorded delivery and response
Mailock employs a unique process allowing you to authenticate the identity of your intended recipient before granting them access; only when they have proven their identity to you is access permitted to any of the message content.
We call this ‘Identity Assured Communication’.
But being able to read your secure emails is only half of the story; with Mailock, your customers are also able to reply securely, thus ensuring that conversations containing sensitive details remain protected, secure and private.
We know that it is important to reach your audience so Mailock has been designed to allow just that. Whether your customer reads your secure email in a web browser, on a mobile device or from within their existing desktop email system, we have all the bases covered.
Using a unique light touch registration and challenge process, Mailock allows your customers to authenticate their identity and read your secure email within seconds of receiving it.
Mobile Apps for iPhone and Android and plug-ins for email programs such as Microsoft Outlook and Apple Mail are all freely available for download from the App Stores and our website to create a truly easy to use and integrated user experience.
The storage location and control of confidential customer data is crucial to organisations seeking to meet regulatory requirements. With Mailock, your encrypted email data may be held in data stores owned and managed by you and our unique challenge process allows you to control when this data is released and to whom.
The Mailock system is free to all recipients, and consumers of the service are encouraged to link their Mailock identity to both business and personal email addresses. Through return data, this provides a ground-breaking opportunity to assist in the maintenance of your important contact data, meaning that you need never lose the ability to stay in touch with your customers again.
At Mailock, we know that regulatory compliance is of paramount importance to your business and its customers and we have designed the system so that it may be readily integrated with your incumbent systems.
Contact us for further details of how this may be achieved with your existing tools and processes.
We all have a duty to reduce our carbon footprint and Mailock offers an unprecedented opportunity to cut cost whilst improving operating efficiencies and reducing emissions.
Every secure Mailock message delivered provides you, the business user, with a targeted marketing message opportunity. Contact us for further details of how Mailock can spread the word for your business and enhance your promotional activities.
Read the original here:
Beyond Encryption | Secure Enterprise email using existing …
Enterprise security requires a comprehensive approach for defense in depth. Effective immediately, Azure Search now supports encryption at rest for all incoming data indexed on or after January 24, 2018, in all regions and SKUs including shared (free) services. With this announcement, encryption now extends throughout the entire indexing pipeline from connection, through transmission, and down to indexed data stored in Azure Search.
At query time, you can implement user-identity access controls that trim search results of documents that the requestor is not authorized to see. Enhancements to filters enable integration with third-party authentication providers, as well as integration with Azure Active Directory.
All indexing includes encryption on the backend automatically with no measurable impact on indexing workloads or size. This applies to newly indexed documents only. For existing content, you have to re-index to gain encryption. Encryption status of any given index is not visible in the portal, nor available through the API. However, if you indexed after January 24, 2018, data is already encrypted.
In the context of Azure Search, all aspects of encryption, decryption, and key management are internal. You cannot turn it on or off, manage or substitute your own keys, or view encryption settings in the portal or programmatically. Internally, encryption is based on Azure Storage Service Encryption, using 256-bit AES encryption, one of the strongest block ciphers available.
Read the original:
Azure Search enterprise security: Data encryption and user …
Since its inception, Skype has been notable for its secretive, proprietary algorithm. It’s also long had a complicated relationship with encryption: encryption is used by the Skype protocol, but the service has never been clear exactly how that encryption was implemented or exactly which privacy and security features it offers.
That changes today in a big way. The newest Skype preview now supports the Signal protocol: the end-to-end encrypted protocol already used by WhatsApp, Facebook Messenger, Google Allo, and, of course, Signal. Skype Private Conversations will support text, audio calls, and file transfers, with end-to-end encryption that Microsoft, Signal, and, it’s believed, law enforcement agencies cannot eavesdrop on.
Presently, Private Conversations are only available in the Insider builds of Skype. Naturally, the Universal Windows Platform version of the app (the preferred version on Windows 10) isn't yet supported. In contrast, the desktop version of the app, along with the iOS, Android, Linux, and macOS clients, all have compatible Insider builds. Private Conversations aren't the default and don't appear to yet support video calling. The latter limitation shouldn't be insurmountable (Signal's own app offers secure video calling). We hope to see the former change once updated clients are stable and widely deployed.
We’ve criticized Skype’s failure to provide this kind of security in the past. Skype still has valuable features, such as its interoperability with traditional phone networks and additional tools for TV and radio broadcasters. But its tardiness at adopting this kind of technology left Skype behind its peers. The adoption of end-to-end security is very welcome, and the decision to do so using the Signal protocol, rather than yet another proprietary Skype protocol, marks a change from the product’s history.
Although Skype remains widely used, mobile-oriented upstarts like WhatsApp and Facebook Messenger rapidly surpassed it. Becoming secure and trustworthy is a necessary development, but whether or not it’s going to be sufficient to reinvigorate the application is far from clear.
Wray urged the private sector to work with the government in finding "a way forward quickly," insisting that the FBI isn't interested in peeking into ordinary citizens' devices; the bureau just wants access to the ones owned by suspects. That pretty much echoes Comey's position during his time: if you'll recall, the FBI asked tech titans to create a backdoor into their software and phones in order to give authorities a way to open them during investigations. Apple chief Tim Cook said the request had "chilling" and "dangerous" implications, warning that companies wouldn't be able to control how that backdoor is used.
Wray told the audience at the event that authorities face an increasing number of cases that rely on electronic evidence. He doesn't buy companies' claims that it's impossible to find a way for encryption to be more law-enforcement-friendly, so to speak. Not that the FBI can't do anything if it absolutely has to: when Apple refused to cooperate with authorities to unlock the San Bernardino shooter's iPhone, the agency paid a third party almost a million dollars to get the job done.
See original here:
FBI chief says phone encryption is a ‘major public safety issue’
On August 15, 2017 the Wassenaar Arrangement 2016 Plenary Agreements Implementation was published in the Federal Register.
Here is a summary of the changes made to Category 5, Part 2.
The U.S. Commerce Control List (CCL) is broken into 10 categories, 0 through 9 (see Supplement No. 1 to part 774 of the EAR). Encryption items fall under Category 5, Part 2 for Information Security. Cat. 5, Part 2 covers:
1) Cryptographic Information Security (e.g., items that use cryptography);
2) Non-cryptographic Information Security (5A003); and
3) Defeating, Weakening, or Bypassing Information Security (5A004).
You can find a Quick Reference Guide to Cat. 5, Part 2 here.
The controls in Cat. 5, Part 2 include multilateral and unilateral controls. The multilateral controls in Cat. 5, Part 2 of the EAR (e.g., 5A002, 5A003, 5A004, 5B002, 5D002, 5E002) come from the Wassenaar Arrangement List of Dual Use Goods and Technologies. Changes to the multilateral controls are agreed upon by the participating members of the Wassenaar Arrangement. Unilateral controls in Cat. 5, Part 2 (e.g., 5A992.c, 5D992.c, 5E992.b) of the EAR are decided on by the United States.
The main license exception that is used for items in Cat. 5, Part 2 is License Exception ENC (Section 740.17). License exception ENC provides a broad set of authorizations for encryption products (items that implement cryptography) that vary depending on the item, the end-user, the end-use, and the destination. There is no “unexportable” level of encryption under license exception ENC. Most encryption products can be exported to most destinations under license exception ENC, once the exporter has complied with applicable reporting and classification requirements. Some items going to some destinations require licenses.
This guidance does not apply to items subject to the exclusive jurisdiction of another agency. For example, ITAR USML Categories XI(b),(d), and XIII(b), (l) control software, technical data, and other items specially designed for military or intelligence applications.
The following 2 flowcharts lay out the analysis to follow for determining if and how the EAR and Cat.5 Part 2 apply to a product incorporating cryptography:
Flowchart 1: Items Designed to Use Cryptography, Including Items NOT Controlled under Category 5, Part 2 of the EAR
Flowchart 2: Classified in Category 5, Part 2 of the EAR
Similarly, the following written outline provides the analysis to follow for determining if and how the EAR and Cat.5 Part 2 apply to a product incorporating cryptography. Although Category 5 Part 2 controls more than just cryptography, most items that are in Category 5 Part 2 fall under 5A002.a, 5A002.b, 5A004, or 5A992 or their software and technology equivalents.
1. Encryption items that are NOT subject to the EAR (publicly available)
2. Items subject to Cat. 5, Part 2:
a. 5A002.a (and equivalent software under 5D002 c.1) applies to items that:
i. Use cryptography for data confidentiality; and
ii. Have in excess of 56 bits of symmetric key length, or equivalent; and
iii. Have cryptography described in i and ii above that is usable without cryptographic activation or has already been activated; and
iv. Are described under 5A002.a.1 through a.4; and
v. Are not described by Decontrol notes.
b. 5A992.c (and equivalent software controlled under 5D992.c) is also known as mass market. These items meet all of the criteria described above under 5A002.a and Note 3 to Category 5, Part 2. See the MASS MARKET section for more information.
c. 5A002.b (and equivalent software controlled under 5D002.b) applies to items designed or modified to enable, by means of cryptographic activation, an item to achieve or exceed the controlled performance levels for functionality specified by 5A002.a that would not otherwise be enabled (e.g., a license key that enables cryptography).
d. 5A004 (and equivalent software controlled under 5D002.c.3) applies to items designed or modified to perform cryptanalytic functions including by means of reverse engineering.
e. The following are less commonly used entries:
3. License Exception ENC and mass market
If you’ve gone through the steps above and your product is controlled in Cat. 5, Part 2 under an ECCN other than 5A003 (and equivalent or related software and technology), then it is eligible for at least some part of license exception ENC. The next step is to determine which part of License Exception ENC the product falls under. Knowing which part of ENC the product falls under will tell you what you need to do to make the item eligible for ENC, and where the product can be exported without a license.
Types of authorization available for license exception ENC:
a. Mass Market b. 740.17(a) c. 740.17(b)(2) d. 740.17(b)(3)/Mass market e. 740.17(b)(1)/Mass market
4. Once you determine what authorization applies to your product, then you may have to file a classification request, annual self-classification report, and/or semi-annual sales report. The links below provide instructions on how to submit reports and Encryption Reviews:
a. How to file an Annual Self-Classification Report b. How to file a Semi-annual Report c. How to Submit an ENC or Mass market classification review
5. After you have submitted the appropriate classification and/or report, there may be some instances in which a license is still required. Information on when a license is required, types of licenses available, and how to submit are below:
a. When a License is Required b. Types of licenses available c. How to file a license application
6. FAQs
7. Contact us
Read the original here:
Encryption and Export Administration Regulations (EAR)
In cryptography, a key is a piece of information (a parameter) that determines the functional output of a cryptographic algorithm. For encryption algorithms, a key specifies the transformation of plaintext into ciphertext, and vice versa for decryption algorithms. Keys also specify transformations in other cryptographic algorithms, such as digital signature schemes and message authentication codes.
In designing security systems, it is wise to assume that the details of the cryptographic algorithm are already available to the attacker. This is known as Kerckhoffs’ principle (“only secrecy of the key provides security”), or, reformulated as Shannon’s maxim, “the enemy knows the system”. The history of cryptography provides evidence that it can be difficult to keep the details of a widely used algorithm secret (see security through obscurity). A key is often easier to protect than an encryption algorithm (it’s typically a small piece of information), and easier to change if compromised. Thus, the security of an encryption system in most cases relies on some key being kept secret.
Trying to keep keys secret is one of the most difficult problems in practical cryptography; see key management. An attacker who obtains the key (by, for example, theft, extortion, dumpster diving, assault, torture, or social engineering) can recover the original message from the encrypted data, and issue signatures.
Keys are generated to be used with a given suite of algorithms, called a cryptosystem. Encryption algorithms which use the same key for both encryption and decryption are known as symmetric-key algorithms. A newer class of “public key” cryptographic algorithms was invented in the 1970s. These asymmetric-key algorithms use a pair of keys, or keypair: a public key and a private one. Public keys are used for encryption or signature verification; private ones decrypt and sign. The design is such that deducing the private key is extremely difficult, even if the corresponding public key is known. Because asymmetric operations involve lengthy computations, a keypair is often used only to exchange an on-the-fly symmetric key, which is then used for the current session. RSA and DSA are two popular public-key cryptosystems; DSA keys can only be used for signing and verifying, not for encryption.
Part of the security brought about by cryptography concerns confidence about who signed a given document, or who replies at the other side of a connection. Assuming that keys are not compromised, that question consists of determining the owner of the relevant public key. To be able to tell a key’s owner, public keys are often enriched with attributes such as names, addresses, and similar identifiers. The packed collection of a public key and its attributes can be digitally signed by one or more supporters. In the PKI model, the resulting object is called a certificate and is signed by a certificate authority (CA). In the PGP model, it is still called a “key”, and is signed by various people who personally verified that the attributes match the subject.
In both PKI and PGP models, compromised keys can be revoked. Revocation has the side effect of disrupting the relationship between a key’s attributes and the subject, which may still be valid. In order to have a possibility to recover from such disruption, signers often use different keys for everyday tasks: Signing with an intermediate certificate (for PKI) or a subkey (for PGP) facilitates keeping the principal private key in an offline safe.
Deleting a key on purpose to make the data inaccessible is called crypto-shredding.
For the one-time pad system the key must be at least as long as the message. In encryption systems that use a cipher algorithm, messages can be much longer than the key. The key must, however, be long enough so that an attacker cannot try all possible combinations.
A key length of 80 bits is generally considered the minimum for strong security with symmetric encryption algorithms. 128-bit keys are commonly used and considered very strong. See the key size article for a more complete discussion.
The keys used in public key cryptography have some mathematical structure. For example, public keys used in the RSA system are the product of two prime numbers. Thus public key systems require longer key lengths than symmetric systems for an equivalent level of security. 3072 bits is the suggested key length for systems based on factoring and integer discrete logarithms which aim to have security equivalent to a 128-bit symmetric cipher. Elliptic curve cryptography may allow smaller keys for equivalent security, but these algorithms have only been known for a relatively short time, and current estimates of the difficulty of searching for their keys may not survive advances in cryptanalysis. As of 2004, a message encrypted using a 109-bit-key elliptic curve algorithm had been broken by brute force. The current rule of thumb is to use an ECC key twice as long as the symmetric key security level desired. Except for the random one-time pad, the security of these systems has not (as of 2008) been proven mathematically, so a theoretical breakthrough could make everything one has encrypted an open book. This is another reason to err on the side of choosing longer keys.
To prevent a key from being guessed, keys need to be generated truly randomly and contain sufficient entropy. The problem of how to safely generate truly random keys is difficult, and has been addressed in many ways by various cryptographic systems. There is an RFC on generating randomness (RFC 4086, Randomness Requirements for Security). Some operating systems include tools for “collecting” entropy from the timing of unpredictable operations such as disk drive head movements. For the production of small amounts of keying material, ordinary dice provide a good source of high-quality randomness.
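As a concrete illustration, modern operating systems expose their entropy pool through a cryptographically secure random number generator, which Python wraps in the standard-library `secrets` module. A minimal sketch of generating a symmetric key this way:

```python
import secrets

# A 256-bit (32-byte) symmetric key drawn from the OS's CSPRNG, which is
# seeded from unpredictable events such as interrupt and disk timings.
key = secrets.token_bytes(32)

print(len(key))    # 32
print(key.hex())   # hex form, convenient for storage or display
```

Using the OS facility rather than an ordinary PRNG (like `random`) matters here: ordinary PRNGs are seeded predictably and are not suitable for keying material.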
For most computer security purposes and for most users, “key” is not synonymous with “password” (or “passphrase”), although a password can in fact be used as a key. The primary practical difference between keys and passwords is that the latter are intended to be generated, read, remembered, and reproduced by a human user (although nowadays the user may delegate those tasks to password management software). A key, by contrast, is intended for use by the software that is implementing the cryptographic algorithm, and so human readability etc. is not required. In fact, most users will, in most cases, be unaware of even the existence of the keys being used on their behalf by the security components of their everyday software applications.
If a password is used as an encryption key, then in a well-designed crypto system it would not be used as such on its own. This is because passwords tend to be human-readable and, hence, may not be particularly strong. To compensate, a good crypto system will use the password-acting-as-key not to perform the primary encryption task itself, but rather as an input to a key derivation function (KDF). That KDF uses the password as a starting point from which it generates the actual secure encryption key. Various methods such as adding a salt and key stretching may be used in the generation.
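The KDF step described above can be sketched with PBKDF2, a standard key derivation function available in Python's `hashlib`. The password, salt size, and iteration count below are illustrative values, not recommendations:

```python
import hashlib, hmac, os

# Derive a 256-bit encryption key from a human-chosen password.
password = b"correct horse battery staple"   # example password
salt = os.urandom(16)                        # random per-user salt
iterations = 200_000                         # slows brute-force attempts

key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)

# The same password + salt + iteration count always yields the same key,
# so only the salt and iteration count need to be stored with the data.
key_again = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
assert hmac.compare_digest(key, key_again)
```

The derived key is what feeds the cipher; the password itself never touches the encryption algorithm directly.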
Key (cryptography) – Wikipedia
Well, you could look it up in Wikipedia… But since you want an explanation, I’ll do my best here:
Hash functions provide a mapping between an arbitrary-length input and a (usually) fixed-length (or smaller) output. It can be anything from a simple crc32 to a full-blown cryptographic hash function such as MD5 or SHA1/2/256/512. The point is that there’s a one-way mapping going on. It’s always a many:1 mapping (meaning there will always be collisions), since every function produces a smaller output than it’s capable of inputting (if you feed every possible 1MB file into MD5, you’ll get a ton of collisions).
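The fixed-length property is easy to see directly. A small sketch using MD5 from Python's `hashlib`:

```python
import hashlib

# Inputs of wildly different lengths map to the same fixed 128-bit output.
short = hashlib.md5(b"a").hexdigest()
long_ = hashlib.md5(b"a" * 1_000_000).hexdigest()

print(len(short), len(long_))   # 32 32  (32 hex chars = 128 bits each)
```

Since a 1MB input has vastly more possible values than a 128-bit output, collisions must exist by the pigeonhole principle.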
The reason they are hard (or in practice impossible) to reverse is because of how they work internally. Most cryptographic hash functions iterate over the input set many times to produce the output. If we look at each fixed-length chunk of input (which is algorithm-dependent), the hash function calls that the current state. It then iterates over the state, changes it to a new one, and uses that as feedback into itself (MD5 does this 64 times for each 512-bit chunk of data). It then somehow combines the resultant states from all these iterations back together to form the resultant hash.
Now, if you wanted to decode the hash, you’d first need to figure out how to split the given hash into its iterated states (1 possibility for inputs smaller than the size of a chunk of data, many for larger inputs). Then you’d need to reverse the iteration for each state.

To explain why this is VERY hard, imagine trying to deduce a and b from the following formula: 10 = a + b. There are 11 non-negative integer combinations of a and b that can work. Now loop over that a bunch of times: tmp = a + b; a = b; b = tmp. For 64 iterations, you’d have over 10^64 possibilities to try. And that’s just a simple addition where some state is preserved from iteration to iteration.

Real hash functions do a lot more than one operation (MD5 does about 15 operations on 4 state variables). And since the next iteration depends on the state of the previous, and the previous is destroyed in creating the current state, it’s all but impossible to determine the input state that led to a given output state (for each iteration, no less). Combine that with the large number of possibilities involved, and decoding even an MD5 will take a near-infinite (but not infinite) amount of resources. So many resources that it’s actually significantly cheaper to brute-force the hash if you have an idea of the size of the input (for smaller inputs) than it is to even try to decode the hash.
Encryption functions, by contrast, provide a 1:1 mapping between an arbitrary-length input and output. And they are always reversible. The important thing to note is that it’s reversible using some method. And it’s always 1:1 for a given key. Now, there are multiple input:key pairs that might generate the same output (in fact there usually are, depending on the encryption function). Good encrypted data is indistinguishable from random noise. This is different from a good hash output, which is always of a consistent format.
Use a hash function when you want to compare a value but can’t store the plain representation (for any number of reasons). Passwords fit this use-case very well since you don’t want to store them plain-text for security reasons (and shouldn’t). But what if you wanted to check a filesystem for pirated music files? It would be impractical to store 3MB per music file. So instead, take the hash of the file and store that (MD5 would store 16 bytes instead of 3MB). That way, you just hash each file and compare to the stored database of hashes. (This doesn’t work as well in practice because of re-encoding, changing file headers, etc., but it’s an example use-case.)
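That file-comparison use-case might be sketched like this in Python; the function name is illustrative, and the chunked read keeps memory use constant regardless of file size:

```python
import hashlib

def file_digest(path, chunk_size=65536):
    """Hash a file in chunks so large files never sit fully in memory."""
    h = hashlib.md5()   # 16-byte digest regardless of file size
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Two files can then be compared with `file_digest(a) == file_digest(b)` instead of comparing their full contents.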
Use a hash function when you’re checking validity of input data. That’s what they are designed for. If you have 2 pieces of input, and want to check to see if they are the same, run both through a hash function. The probability of a collision is astronomically low for small input sizes (assuming a good hash function). That’s why it’s recommended for passwords. For passwords up to 32 characters, md5 has 4 times the output space. SHA1 has 6 times the output space (approximately). SHA512 has about 16 times the output space. You don’t really care what the password was, you care if it’s the same as the one that was stored. That’s why you should use hashes for passwords.
Use encryption whenever you need to get the input data back out. Notice the word need. If you’re storing credit card numbers, you need to get them back out at some point, but don’t want to store them plain text. So instead, store the encrypted version and keep the key as safe as possible.
Hash functions are also great for signing data. For example, if you’re using HMAC, you sign a piece of data by taking a hash of the data concatenated with a known but not transmitted value (a secret value). So, you send the plain-text and the HMAC hash. Then, the receiver simply hashes the submitted data with the known value and checks to see if it matches the transmitted HMAC. If it’s the same, you know it wasn’t tampered with by a party without the secret value. This is commonly used in secure cookie systems by HTTP frameworks, as well as in message transmission of data over HTTP where you want some assurance of integrity in the data.
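The HMAC scheme described above is in Python's standard library. A minimal sketch (the secret and message are placeholders):

```python
import hmac, hashlib

secret = b"server-side secret, never transmitted"   # known to both parties

def sign(message: bytes) -> str:
    """Produce an HMAC-SHA256 tag for the message."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # compare_digest avoids timing side channels when checking the tag
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"user_id=42")
assert verify(b"user_id=42", tag)      # untampered message verifies
assert not verify(b"user_id=1", tag)   # altered message fails
```

The sender transmits the plain-text message plus the tag; a receiver without the secret cannot forge a valid tag for a modified message.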
A key feature of cryptographic hash functions is that they should be very fast to create, and very difficult/slow to reverse (so much so that it’s practically impossible). This poses a problem with passwords. If you store sha512(password), you’re not doing a thing to guard against rainbow tables or brute force attacks. Remember, the hash function was designed for speed. So it’s trivial for an attacker to just run a dictionary through the hash function and test each result.
Adding a salt helps matters since it adds a bit of unknown data to the hash. So instead of finding anything that matches md5(foo), they need to find something that when added to the known salt produces md5(foo.salt) (which is very much harder to do). But it still doesn’t solve the speed problem since if they know the salt it’s just a matter of running the dictionary through.
So, there are ways of dealing with this. One popular method is called key strengthening (or key stretching). Basically, you iterate over a hash many times (thousands usually). This does two things. First, it slows down the runtime of the hashing algorithm significantly. Second, if implemented right (passing the input and salt back in on each iteration) actually increases the entropy (available space) for the output, reducing the chances of collisions. A trivial implementation is:
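The code sample from the original answer did not survive extraction; a minimal Python sketch of the idea (function name and round count are illustrative) might look like this:

```python
import hashlib

def stretched_hash(password: bytes, salt: bytes, rounds: int = 100_000) -> str:
    """Naive key stretching: re-feed the password and salt on every round."""
    digest = b""
    for _ in range(rounds):
        digest = hashlib.sha512(digest + password + salt).digest()
    return digest.hex()
```

Each round re-introduces the password and salt, which is the detail the next section explains in depth.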
There are other, more standard implementations such as PBKDF2, BCrypt. But this technique is used by quite a few security related systems (such as PGP, WPA, Apache and OpenSSL).
The bottom line: hash(password) is not good enough. hash(password + salt) is better, but still not good enough… Use a stretched hash mechanism to produce your password hashes…
Do not under any circumstances feed the output of one hash directly back into the hash function:
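The original answer's code sample here was also lost; the anti-pattern it warned about looks like this sketch (the function name is illustrative):

```python
import hashlib

# ANTI-PATTERN -- do not use. Each round hashes only the previous digest,
# so any collision produced in round N is carried into every later round.
def bad_stretch(password: bytes, rounds: int = 1000) -> str:
    digest = hashlib.sha1(password).hexdigest()
    for _ in range(rounds):
        digest = hashlib.sha1(digest.encode()).hexdigest()
    return digest
```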
The reason for this has to do with collisions. Remember that all hash functions have collisions because the possible output space (the number of possible outputs) is smaller than the input space. To see why, let’s look at what happens. To preface this, let’s make the assumption that there’s a 0.001% chance of collision from sha1() (it’s much lower in reality, but for demonstration purposes).
Now, hash1 has a probability of collision of 0.001%. But when we do the next hash2 = sha1(hash1), all collisions of hash1 automatically become collisions of hash2. So now we have hash1’s rate at 0.001%, and the second sha1() call adds to that: hash2 has a probability of collision of 0.002%. That’s twice as many chances! Each iteration adds another 0.001% chance of collision to the result. So, with 1000 iterations, the chance of collision jumps from a trivial 0.001% to 1%. Now, the degradation is linear, and the real probabilities are far smaller, but the effect is the same. (An estimate of the chance of a single MD5 collision is about 1/(2^128), or roughly 1/(3×10^38). While that seems small, thanks to the birthday attack it’s not really as small as it seems.)
Instead, by re-appending the salt and password each time, you’re re-introducing data back into the hash function. So any collisions of any particular round are no longer collisions of the next round. So:
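The lost code block here showed the correct pattern; a Python sketch of it (function name and round count illustrative):

```python
import hashlib

def good_stretch(password: bytes, salt: bytes, rounds: int = 1000) -> str:
    # Re-introduce the salt and password on every round, so a collision
    # in one round does not propagate to the next.
    digest = hashlib.sha512(salt + password).digest()
    for _ in range(rounds):
        digest = hashlib.sha512(digest + salt + password).digest()
    return digest.hex()
```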
That approach has the same chance of collision as the native sha512 function, which is what you want. Use that instead.