Category Archives: Encryption
Since its inception, Skype has been notable for its secretive, proprietary algorithm. It’s also long had a complicated relationship with encryption: encryption is used by the Skype protocol, but the service has never been clear exactly how that encryption was implemented or exactly which privacy and security features it offers.
That changes today in a big way. The newest Skype preview now supports the Signal protocol: the end-to-end encrypted protocol already used by WhatsApp, Facebook Messenger, Google Allo, and, of course, Signal. Skype Private Conversations will support text, audio calls, and file transfers, with end-to-end encryption that Microsoft, Signal, and, it’s believed, law enforcement agencies cannot eavesdrop on.
Presently, Private Conversations are only available in the Insider builds of Skype. Naturally, the Universal Windows Platform version of the app (the preferred version on Windows 10) isn’t yet supported. In contrast, the desktop version of the app, along with the iOS, Android, Linux, and macOS clients, all have compatible Insider builds. Private Conversations aren’t the default and don’t appear to yet support video calling. The latter limitation shouldn’t be insurmountable (Signal’s own app offers secure video calling). We hope to see the former change once updated clients are stable and widely deployed.
We’ve criticized Skype’s failure to provide this kind of security in the past. Skype still has valuable features, such as its interoperability with traditional phone networks and additional tools for TV and radio broadcasters. But its tardiness at adopting this kind of technology left Skype behind its peers. The adoption of end-to-end security is very welcome, and the decision to do so using the Signal protocol, rather than yet another proprietary Skype protocol, marks a change from the product’s history.
Although Skype remains widely used, mobile-oriented upstarts like WhatsApp and Facebook Messenger rapidly surpassed it. Becoming secure and trustworthy is a necessary development, but whether or not it’s going to be sufficient to reinvigorate the application is far from clear.
On August 15, 2017 the Wassenaar Arrangement 2016 Plenary Agreements Implementation was published in the Federal Register.
Here is a summary of the changes made to Category 5, Part 2.
The U.S. Commerce Control List (CCL) is broken into 10 categories, 0 through 9 (see Supplement No. 1 to part 774 of the EAR). Encryption items fall under Category 5, Part 2, Information Security. Cat. 5, Part 2 covers:
1) Cryptographic Information Security (e.g., items that use cryptography);
2) Non-cryptographic Information Security (5A003); and
3) Defeating, Weakening, or Bypassing Information Security (5A004)
You can find a Quick Reference Guide to Cat. 5, Part 2 here.
The controls in Cat. 5, Part 2 include multilateral and unilateral controls. The multilateral controls in Cat. 5, Part 2 of the EAR (e.g., 5A002, 5A003, 5A004, 5B002, 5D002, 5E002) come from the Wassenaar Arrangement List of Dual Use Goods and Technologies. Changes to the multilateral controls are agreed upon by the participating members of the Wassenaar Arrangement. Unilateral controls in Cat. 5, Part 2 (e.g., 5A992.c, 5D992.c, 5E992.b) of the EAR are decided on by the United States.
The main license exception that is used for items in Cat. 5, Part 2 is License Exception ENC (Section 740.17). License exception ENC provides a broad set of authorizations for encryption products (items that implement cryptography) that vary depending on the item, the end-user, the end-use, and the destination. There is no “unexportable” level of encryption under license exception ENC. Most encryption products can be exported to most destinations under license exception ENC, once the exporter has complied with applicable reporting and classification requirements. Some items going to some destinations require licenses.
This guidance does not apply to items subject to the exclusive jurisdiction of another agency. For example, ITAR USML Categories XI(b),(d), and XIII(b), (l) control software, technical data, and other items specially designed for military or intelligence applications.
The following 2 flowcharts lay out the analysis to follow for determining if and how the EAR and Cat.5 Part 2 apply to a product incorporating cryptography:
Flowchart 1: Items Designed to Use Cryptography, Including Items NOT Controlled under Category 5, Part 2 of the EAR
Flowchart 2: Classified in Category 5, Part 2 of the EAR
Similarly, the following written outline provides the analysis to follow for determining if and how the EAR and Cat.5 Part 2 apply to a product incorporating cryptography. Although Category 5 Part 2 controls more than just cryptography, most items that are in Category 5 Part 2 fall under 5A002.a, 5A002.b, 5A004, or 5A992 or their software and technology equivalents.
1. Encryption items that are NOT subject to the EAR (publicly available)
2. Items subject to Cat. 5, Part 2:
a. 5A002.a (and equivalent software under 5D002 c.1) applies to items that:
i. Use cryptography for data confidentiality; and
ii. Have in excess of 56 bits of symmetric key length, or equivalent; and
iii. Have cryptography described in i and ii above that is usable without cryptographic activation or has already been activated; and
iv. Are described under 5A002 a.1 through a.4; and
v. Are not described by Decontrol notes.
b. 5A992.c (and software equivalents controlled under 5D992.c) is also known as mass market. These items meet all of the criteria described under 5A002.a and Note 3 to Category 5, Part 2. See the MASS MARKET section for more information.
c. 5A002.b (and software equivalents controlled under 5D002.b) applies to items designed or modified to enable, by means of cryptographic activation, an item to achieve/exceed the controlled performance levels for functionality specified by 5A002.a not otherwise enabled (e.g., license key to enable cryptography).
d. 5A004 (and equivalent software controlled under 5D002.c.3) applies to items designed or modified to perform cryptanalytic functions including by means of reverse engineering.
e. The following are less commonly used entries:
3. License Exception ENC and mass market
If you’ve gone through the steps above and your product is controlled in Cat. 5, Part 2 under an ECCN other than 5A003 (and equivalent or related software and technology), then it is eligible for at least some part of license exception ENC. The next step is to determine which part of License Exception ENC the product falls under. Knowing which part of ENC the product falls under will tell you what you need to do to make the item eligible for ENC, and where the product can be exported without a license.
Types of authorization available for license exception ENC:
a. Mass Market
b. 740.17(a)
c. 740.17(b)(2)
d. 740.17(b)(3)/Mass market
e. 740.17(b)(1)/Mass market
4. Once you determine what authorization applies to your product, then you may have to file a classification request, annual self-classification report, and/or semi-annual sales report. The links below provide instructions on how to submit reports and Encryption Reviews:
a. How to file an Annual Self-Classification Report
b. How to file a Semi-annual Report
c. How to Submit an ENC or Mass market classification review
5. After you have submitted the appropriate classification and/or report, there may be some instances in which a license is still required. Information on when a license is required, types of licenses available, and how to submit are below:
a. When a License is Required
b. Types of licenses available
c. How to file a license application
6. FAQs
7. Contact us
Read the original here:
Encryption and Export Administration Regulations (EAR)
In cryptography, a key is a piece of information (a parameter) that determines the functional output of a cryptographic algorithm. For encryption algorithms, a key specifies the transformation of plaintext into ciphertext, and vice versa for decryption algorithms. Keys also specify transformations in other cryptographic algorithms, such as digital signature schemes and message authentication codes.
In designing security systems, it is wise to assume that the details of the cryptographic algorithm are already available to the attacker. This is known as Kerckhoffs’ principle “only secrecy of the key provides security”, or, reformulated as Shannon’s maxim, “the enemy knows the system”. The history of cryptography provides evidence that it can be difficult to keep the details of a widely used algorithm secret (see security through obscurity). A key is often easier to protect (it’s typically a small piece of information) than an encryption algorithm, and easier to change if compromised. Thus, the security of an encryption system in most cases relies on some key being kept secret.
Trying to keep keys secret is one of the most difficult problems in practical cryptography; see key management. An attacker who obtains the key (by, for example, theft, extortion, dumpster diving, assault, torture, or social engineering) can recover the original message from the encrypted data, and issue signatures.
Keys are generated to be used with a given suite of algorithms, called a cryptosystem. Encryption algorithms which use the same key for both encryption and decryption are known as symmetric key algorithms. A newer class of “public key” cryptographic algorithms was invented in the 1970s. These asymmetric key algorithms use a pair of keys, or keypair: a public key and a private one. Public keys are used for encryption or signature verification; private ones decrypt and sign. The design is such that finding out the private key is extremely difficult, even if the corresponding public key is known. As that design involves lengthy computations, a keypair is often used to exchange an on-the-fly symmetric key, which will only be used for the current session. RSA and DSA are two popular public-key cryptosystems; DSA keys can only be used for signing and verifying, not for encryption.
Part of the security brought about by cryptography concerns confidence about who signed a given document, or who replies at the other side of a connection. Assuming that keys are not compromised, that question consists of determining the owner of the relevant public key. To be able to tell a key’s owner, public keys are often enriched with attributes such as names, addresses, and similar identifiers. The packed collection of a public key and its attributes can be digitally signed by one or more supporters. In the PKI model, the resulting object is called a certificate and is signed by a certificate authority (CA). In the PGP model, it is still called a “key”, and is signed by various people who personally verified that the attributes match the subject.
In both PKI and PGP models, compromised keys can be revoked. Revocation has the side effect of disrupting the relationship between a key’s attributes and the subject, which may still be valid. In order to have a possibility to recover from such disruption, signers often use different keys for everyday tasks: Signing with an intermediate certificate (for PKI) or a subkey (for PGP) facilitates keeping the principal private key in an offline safe.
Deleting a key on purpose to make the data inaccessible is called crypto-shredding.
For the one-time pad system the key must be at least as long as the message. In encryption systems that use a cipher algorithm, messages can be much longer than the key. The key must, however, be long enough so that an attacker cannot try all possible combinations.
A key length of 80 bits is generally considered the minimum for strong security with symmetric encryption algorithms. 128-bit keys are commonly used and considered very strong. See the key size article for a more complete discussion.
The keys used in public key cryptography have some mathematical structure. For example, public keys used in the RSA system are the product of two prime numbers. Thus public key systems require longer key lengths than symmetric systems for an equivalent level of security. 3072 bits is the suggested key length for systems based on factoring and integer discrete logarithms which aim to have security equivalent to a 128 bit symmetric cipher. Elliptic curve cryptography may allow smaller-size keys for equivalent security, but these algorithms have only been known for a relatively short time and current estimates of the difficulty of searching for their keys may not survive. As of 2004, a message encrypted using a 109-bit key elliptic curve algorithm had been broken by brute force. The current rule of thumb is to use an ECC key twice as long as the symmetric key security level desired. Except for the random one-time pad, the security of these systems has not (as of 2008[update]) been proven mathematically, so a theoretical breakthrough could make everything one has encrypted an open book. This is another reason to err on the side of choosing longer keys.
To prevent a key from being guessed, keys need to be generated truly randomly and contain sufficient entropy. The problem of how to safely generate truly random keys is difficult, and has been addressed in many ways by various cryptographic systems. There is a RFC on generating randomness (RFC 4086, Randomness Requirements for Security). Some operating systems include tools for “collecting” entropy from the timing of unpredictable operations such as disk drive head movements. For the production of small amounts of keying material, ordinary dice provide a good source of high quality randomness.
For most computer security purposes and for most users, “key” is not synonymous with “password” (or “passphrase”), although a password can in fact be used as a key. The primary practical difference between keys and passwords is that the latter are intended to be generated, read, remembered, and reproduced by a human user (although nowadays the user may delegate those tasks to password management software). A key, by contrast, is intended for use by the software that is implementing the cryptographic algorithm, and so human readability etc. is not required. In fact, most users will, in most cases, be unaware of even the existence of the keys being used on their behalf by the security components of their everyday software applications.
If a password is used as an encryption key, then in a well-designed crypto system it would not be used as such on its own. This is because passwords tend to be human-readable and, hence, may not be particularly strong. To compensate, a good crypto system will use the password-acting-as-key not to perform the primary encryption task itself, but rather to act as an input to a key derivation function (KDF). That KDF uses the password as a starting point from which it will then generate the actual secure encryption key itself. Various methods such as adding a salt and key stretching may be used in the generation.
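As a sketch of this idea, Python’s standard library exposes PBKDF2 directly; the password, salt size, and iteration count below are illustrative choices, not recommendations from the text:

```python
import hashlib
import os

# Derive an encryption key from a password with a KDF rather than
# using the password bytes directly. The salt is random per user.
password = b"correct horse battery staple"   # made-up example password
salt = os.urandom(16)

# 200,000 iterations of PBKDF2-HMAC-SHA256, producing a 32-byte key
key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=32)

print(key.hex())  # 32-byte key suitable for use with a symmetric cipher
```

The same password and salt always derive the same key, which is how the system later reproduces the key without storing it.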
Key (cryptography) – Wikipedia
Well, you could look it up in Wikipedia… But since you want an explanation, I’ll do my best here:
They provide a mapping between an arbitrary length input and a (usually) fixed length (or smaller length) output. It can be anything from a simple crc32 to a full blown cryptographic hash function such as MD5 or SHA1/2/256/512. The point is that there’s a one-way mapping going on. It’s always a many:1 mapping (meaning there will always be collisions) since every function produces an output smaller than the full range of inputs it can accept (if you feed every possible 1 MB file into MD5, you’ll get a ton of collisions).
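A quick way to see the fixed-length, many-to-one behavior described above (using SHA-256 from Python’s standard library as the illustration):

```python
import hashlib

# However large or small the input, the digest length stays constant.
short_digest = hashlib.sha256(b"hi").hexdigest()
long_digest = hashlib.sha256(b"x" * 1_000_000).hexdigest()

print(len(short_digest), len(long_digest))  # both 64 hex characters (256 bits)
```

Since a 256-bit output cannot distinguish every possible megabyte-sized input, collisions must exist, even though finding one deliberately is designed to be infeasible.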
The reason they are hard (or impossible in practicality) to reverse is because of how they work internally. Most cryptographic hash functions iterate over the input set many times to produce the output. So if we look at each fixed length chunk of input (which is algorithm dependent), the hash function will call that the current state. It will then iterate over the state and change it to a new one and use that as feedback into itself (MD5 does this 64 times for each 512bit chunk of data). It then somehow combines the resultant states from all these iterations back together to form the resultant hash.
Now, if you wanted to decode the hash, you’d first need to figure out how to split the given hash into its iterated states (1 possibility for inputs smaller than the size of a chunk of data, many for larger inputs). Then you’d need to reverse the iteration for each state. Now, to explain why this is VERY hard, imagine trying to deduce a and b from the following formula: 10 = a + b. There are 10 positive combinations of a and b that can work. Now loop over that a bunch of times: tmp = a + b; a = b; b = tmp. For 64 iterations, you’d have over 10^64 possibilities to try. And that’s just a simple addition where some state is preserved from iteration to iteration. Real hash functions do a lot more than 1 operation (MD5 does about 15 operations on 4 state variables). And since the next iteration depends on the state of the previous and the previous is destroyed in creating the current state, it’s all but impossible to determine the input state that led to a given output state (for each iteration no less). Combine that, with the large number of possibilities involved, and decoding even an MD5 will take a near infinite (but not infinite) amount of resources. So many resources that it’s actually significantly cheaper to brute-force the hash if you have an idea of the size of the input (for smaller inputs) than it is to even try to decode the hash.
They provide a 1:1 mapping between an arbitrary length input and output. And they are always reversible. The important thing to note is that it’s reversible using some method. And it’s always 1:1 for a given key. Now, there are multiple input:key pairs that might generate the same output (in fact there usually are, depending on the encryption function). Good encrypted data is indistinguishable from random noise. This is different from a good hash output which is always of a consistent format.
Use a hash function when you want to compare a value but can’t store the plain representation (for any number of reasons). Passwords should fit this use-case very well since you don’t want to store them plain-text for security reasons (and shouldn’t). But what if you wanted to check a filesystem for pirated music files? It would be impractical to store 3 mb per music file. So instead, take the hash of the file, and store that (md5 would store 16 bytes instead of 3mb). That way, you just hash each file and compare to the stored database of hashes (This doesn’t work as well in practice because of re-encoding, changing file headers, etc, but it’s an example use-case).
Use a hash function when you’re checking validity of input data. That’s what they are designed for. If you have 2 pieces of input, and want to check to see if they are the same, run both through a hash function. The probability of a collision is astronomically low for small input sizes (assuming a good hash function). That’s why it’s recommended for passwords. For passwords up to 32 characters, md5 has 4 times the output space. SHA1 has 6 times the output space (approximately). SHA512 has about 16 times the output space. You don’t really care what the password was, you care if it’s the same as the one that was stored. That’s why you should use hashes for passwords.
Use encryption whenever you need to get the input data back out. Notice the word need. If you’re storing credit card numbers, you need to get them back out at some point, but don’t want to store them plain text. So instead, store the encrypted version and keep the key as safe as possible.
Hash functions are also great for signing data. For example, if you’re using HMAC, you sign a piece of data by taking a hash of the data concatenated with a known but not transmitted value (a secret value). So, you send the plain-text and the HMAC hash. Then, the receiver simply hashes the submitted data with the known value and checks to see if it matches the transmitted HMAC. If it’s the same, you know it wasn’t tampered with by a party without the secret value. This is commonly used in secure cookie systems by HTTP frameworks, as well as in message transmission of data over HTTP where you want some assurance of integrity in the data.
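A minimal sketch of the HMAC scheme described above, using Python’s standard library; the secret and message values are made up for illustration:

```python
import hashlib
import hmac

# Sender and receiver share a secret; the tag travels with the message
# so any tampering can be detected by recomputing it.
secret = b"shared-secret-value"           # hypothetical shared secret
message = b"user_id=42&role=admin"        # hypothetical cookie payload

tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares in constant time.
ok = hmac.compare_digest(
    tag, hmac.new(secret, message, hashlib.sha256).hexdigest()
)
print(ok)  # True for an untampered message
```

Note the use of `hmac.compare_digest` rather than `==`: constant-time comparison avoids leaking information through timing differences.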
A key feature of cryptographic hash functions is that they should be very fast to create, and very difficult/slow to reverse (so much so that it’s practically impossible). This poses a problem with passwords. If you store sha512(password), you’re not doing a thing to guard against rainbow tables or brute force attacks. Remember, the hash function was designed for speed. So it’s trivial for an attacker to just run a dictionary through the hash function and test each result.
Adding a salt helps matters since it adds a bit of unknown data to the hash. So instead of finding anything that matches md5(foo), they need to find something that when added to the known salt produces md5(foo.salt) (which is very much harder to do). But it still doesn’t solve the speed problem since if they know the salt it’s just a matter of running the dictionary through.
So, there are ways of dealing with this. One popular method is called key strengthening (or key stretching). Basically, you iterate over a hash many times (thousands usually). This does two things. First, it slows down the runtime of the hashing algorithm significantly. Second, if implemented right (passing the input and salt back in on each iteration) actually increases the entropy (available space) for the output, reducing the chances of collisions. A trivial implementation is:
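The code sample that originally followed appears to have been lost in extraction; a minimal reconstruction of such a trivial stretching loop might look like the sketch below (illustrative only; as the next paragraph notes, standard constructions like PBKDF2 or BCrypt are what real systems should use):

```python
import hashlib

def stretch_hash(password: str, salt: str, rounds: int = 5000) -> str:
    """Toy key-stretching loop for illustration, not production use."""
    digest = ""
    for _ in range(rounds):
        # Re-introducing the password and salt on every round keeps
        # collisions from one round from propagating to the next,
        # while the repetition slows down brute-force attempts.
        digest = hashlib.sha512(
            (digest + password + salt).encode()
        ).hexdigest()
    return digest
```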
There are other, more standard implementations such as PBKDF2, BCrypt. But this technique is used by quite a few security related systems (such as PGP, WPA, Apache and OpenSSL).
The bottom line, hash(password) is not good enough. hash(password + salt) is better, but still not good enough… Use a stretched hash mechanism to produce your password hashes…
Do not under any circumstances feed the output of one hash directly back into the hash function:
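The snippet being warned against was also lost in extraction; the anti-pattern presumably looked something like this:

```python
import hashlib

# ANTI-PATTERN: feeding each digest straight back into the hash with
# no fresh input. Collisions accumulate round after round. Don't do this.
def naive_stretch(password: str, rounds: int = 5000) -> str:
    digest = hashlib.sha1(password.encode()).hexdigest()
    for _ in range(rounds):
        digest = hashlib.sha1(digest.encode()).hexdigest()  # no new data!
    return digest
```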
The reason for this has to do with collisions. Remember that all hash functions have collisions because the possible output space (the number of possible outputs) is smaller than the input space. To see why, let’s look at what happens. To preface this, let’s make the assumption that there’s a 0.001% chance of collision from sha1() (it’s much lower in reality, but for demonstration purposes).
Now, hash1 has a probability of collision of 0.001%. But when we do the next hash2 = sha1(hash1);, all collisions of hash1 automatically become collisions of hash2. So now, we have hash1’s rate at 0.001%, and the 2nd sha1() call adds to that. So now, hash2 has a probability of collision of 0.002%. That’s twice as many chances! Each iteration will add another 0.001% chance of collision to the result. So, with 1000 iterations, the chance of collision jumped from a trivial 0.001% to 1%. Now, the degradation is linear, and the real probabilities are far smaller, but the effect is the same (an estimation of the chance of a single collision with md5 is about 1/(2^128) or 1/(3×10^38). While that seems small, thanks to the birthday attack it’s not really as small as it seems).
Instead, by re-appending the salt and password each time, you’re re-introducing data back into the hash function, so any collisions of one particular round are no longer collisions of the next round. The result has the same chance of collision as the native sha512 function, which is what you want. Use that approach instead.
Encryption is the mathematical science of codes, ciphers, and secret messages. Throughout history, people have used encryption to send messages to each other that (hopefully) couldn’t be read by anyone besides the intended recipient.
Today, we have computers that are capable of performing encryption for us. Digital encryption technology has expanded beyond simple secret messages; today, encryption can be used for more elaborate purposes, for example to verify the author of messages or to browse the Web anonymously with Tor.
Under some circumstances, encryption can be fairly automatic and simple. But there are ways encryption can go wrong, and the more you understand it, the safer you will be against such situations.
One of the most important concepts to understand in encryption is a key. Common types of encryption include a private key, which is kept secret on your computer and lets you read messages that are intended only for you. A private key also lets you place unforgeable digital signatures on messages you send to other people. A public key is a file that you can give to others or publish that allows people to communicate with you in secret, and check signatures from you. Private and public keys come in matched pairs, like the halves of a rock that has been split into two perfectly matching pieces, but they are not the same.
Another extremely valuable concept to understand is a security certificate. The Web browser on your computer can make encrypted connections to sites using HTTPS. When they do that, they examine certificates to check the public keys of domain names (like http://www.google.com, http://www.amazon.com, or ssd.eff.org). Certificates are one way of trying to determine if you know the right public key for a person or website, so that you can communicate securely with them.
From time to time, you will see certificate-related error messages on the Web. Most commonly, this is because a hotel or cafe network is trying to break your secret communications with the website. It is also common to see an error because of a bureaucratic mistake in the system of certificates. But occasionally, it is because a hacker, thief, police agency, or spy agency is breaking the encrypted connection.
Unfortunately, it is extremely difficult to tell the difference between these cases. This means you should never click past a certificate warning if it relates to a site where you have an account, or are reading any sensitive information.
The word “fingerprint” means lots of different things in the field of computer security. One use of the term is a “key fingerprint,” a string of characters like “342e 2309 bd20 0912 ff10 6c63 2192 1928” that should allow you to uniquely and securely check that someone on the Internet is using the right private key. If you check that someone’s key fingerprint is correct, that gives you a higher degree of certainty that it’s really them. But it’s not perfect, because if the keys are copied or stolen someone else would be able to use the same fingerprint.
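To illustrate, a key fingerprint is typically just a digest of the key material, rendered in short groups for easy comparison by eye; this sketch uses SHA-256 over made-up key bytes:

```python
import hashlib

# Hypothetical public key material; real fingerprints are computed
# over the actual encoded key bytes.
public_key_bytes = b"-----BEGIN PUBLIC KEY----- ...example..."

digest = hashlib.sha256(public_key_bytes).hexdigest()
# Render the first 128 bits in groups of four hex characters,
# in the style of the "342e 2309 bd20 0912 ..." string above.
fingerprint = " ".join(digest[i:i + 4] for i in range(0, 32, 4))
print(fingerprint)
```

Two people can read such strings to each other over the phone; if the groups match, they are very likely looking at the same key.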
Go here to read the rest:
What Is Encryption? | Surveillance Self-Defense
Comodo Disk Encryption is a reliable application that protects your sensitive data by encrypting your drives using complex algorithms.
It provides you with two different methods of securing your information. Either you encrypt any drive partition that contains personal information using combinations of different hashing and encryption algorithms or simply mount the virtual partitions in your hard drive, then save your data.
Since the encryption process can be carried out with two different authentication types, namely Password and USB Stick, the application helps you to add an extra layer of security, thus protecting your critical data from unauthorized users.
When you launch Comodo Disk Encryption for the first time, you will notice that all your drives are automatically recognized (after a restart has been performed). When you click on a random partition, detailed information such as file system, free space, encryption method and total size are displayed in the bottom pane of the program.
The right-click menu enables you to easily encrypt or decrypt the selected partition, edit the available settings, as well as format it by modifying the file system to NTFS, FAT32 or FAT and the allocation unit size.
By accessing the Encrypt option, you are able to choose one of the available authentication types, then set properties such as the hash algorithm and password according to your preferences.
The ‘Virtual Drives’ tab enables you to view all the mounted drives in your system and create, mount, remove or unmount them, as well as edit the encryption settings effortlessly.
In case you want to decrypt a drive, you will just have to choose the proper option from the context menu and bring back the partition to its original form so that the drive becomes accessible for any user.
Overall, Comodo Disk Encryption keeps all your sensitive data protected from hackers, thieves and online scammers by encrypting your hard disks with ease.
Encryption allows information to be hidden so that it cannot be read without special knowledge (such as a password). This is done with a secret code or cypher. The hidden information is said to be encrypted.
Decryption is a way to change encrypted information back into plaintext; the result is the decrypted form. The study of encryption is called cryptography, and the study of breaking cyphers is called cryptanalysis. Cryptanalysis can be done by hand if the cypher is simple; complex cyphers need a computer to search for possible keys. Cryptanalysis is a field of computer science and mathematics that looks at how difficult it is to break a cypher.
A simple kind of encryption for words is ROT13. In ROT13, letters of the alphabet are changed with each other using a simple pattern. For example, A changes to N, B changes to O, C changes to P, and so on. Each letter is “rotated” by 13 spaces. Using the ROT13 cipher, the words Simple English Wikipedia become Fvzcyr Ratyvfu Jvxvcrqvn. The ROT13 cipher is very easy to decrypt. Because there are 26 letters in the English alphabet, if a letter is rotated twice by 13 letters each time, the original letter is obtained; applying the ROT13 cipher a second time brings back the original text. When he communicated with his army, Julius Caesar sometimes used what is known today as the Caesar cipher. This cipher works by shifting the position of letters: each letter is rotated by 3 positions.
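This rotation is easy to demonstrate; Python’s standard library even ships a ROT13 codec:

```python
import codecs

# ROT13 rotates each letter 13 places; applying it twice restores the text.
encoded = codecs.encode("Simple English Wikipedia", "rot13")
print(encoded)                          # Fvzcyr Ratyvfu Jvxvcrqvn
print(codecs.decode(encoded, "rot13"))  # Simple English Wikipedia
```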
Most kinds of encryption are made more complex so cryptanalysis will be difficult. Some are made only for text. Others are made for binary computer files like pictures and music. Today, many people use the asymmetric encryption system called RSA. Any computer file can be encrypted with RSA. AES is a common symmetric algorithm.
Most types of encryption can theoretically be cracked: an enemy might be able to decrypt a message without knowing the password, if he has clever mathematicians, powerful computers and lots of time. The one-time pad is special because, if it is used correctly, it is impossible to crack. There are three rules that must be followed: the key must be at least as long as the message; the key must be truly random; and the key must never be used more than once.
If these three rules are obeyed, then it is impossible to read the secret message without knowing the secret key. For this reason, during the Cold War, embassies and large military units often used one-time pads to communicate secretly with their governments. They had little books (“pads”) filled with random letters or random numbers. Each page from the pad could only be used once: this is why it is called a “one-time pad”.
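The pad operation itself is just an XOR of message bytes against key bytes; a sketch (with a made-up message) of the rules in action:

```python
import secrets

# One-time pad: the key is as long as the message, truly random,
# and must never be reused for another message.
message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # rule: key length == message length

# Encrypt and decrypt are the same XOR operation.
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

print(recovered)  # b'ATTACK AT DAWN'
```

Because every possible plaintext of the same length corresponds to some key, the ciphertext alone reveals nothing about the message.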
Encryption is often used on the Internet, as many web sites use it to protect private information. On the Internet, several encryption protocols are used, such as Secure Sockets Layer (SSL), IPsec, and SSH. They use the RSA encryption system and others. The protocol for protected web browsing is called HTTPS. Various algorithms are used on the Internet depending upon the need.
Read the original post:
Encryption – Simple English Wikipedia, the free encyclopedia
BitLocker Drive Encryption is a data protection feature available in Windows Server 2008 R2 and in some editions of Windows 7. Having BitLocker integrated with the operating system addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers.
Data on a lost or stolen computer is vulnerable to unauthorized access, either by running a software-attack tool against it or by transferring the computer’s hard disk to a different computer. BitLocker helps mitigate unauthorized data access by enhancing file and system protections. BitLocker also helps render data inaccessible when BitLocker-protected computers are decommissioned or recycled.
BitLocker provides the most protection when used with a Trusted Platform Module (TPM) version 1.2. The TPM is a hardware component installed in many newer computers by the computer manufacturers. It works with BitLocker to help protect user data and to ensure that a computer has not been tampered with while the system was offline.
On computers that do not have a TPM version 1.2, you can still use BitLocker to encrypt the Windows operating system drive. However, this implementation will require the user to insert a USB startup key to start the computer or resume from hibernation, and it does not provide the pre-startup system integrity verification offered by BitLocker with a TPM.
In addition to the TPM, BitLocker offers the option to lock the normal startup process until the user supplies a personal identification number (PIN) or inserts a removable device, such as a USB flash drive, that contains a startup key. These additional security measures provide multifactor authentication and assurance that the computer will not start or resume from hibernation until the correct PIN or startup key is presented.
BitLocker can use a TPM to verify the integrity of early boot components and boot configuration data. This helps ensure that BitLocker makes the encrypted drive accessible only if those components have not been tampered with and the encrypted drive is located in the original computer.
BitLocker helps ensure the integrity of the startup process by taking the following actions:
To use BitLocker, a computer must satisfy certain requirements:
BitLocker is installed automatically as part of the operating system installation. However, BitLocker is not enabled until it is turned on by using the BitLocker setup wizard, which can be accessed from either the Control Panel or by right-clicking the drive in Windows Explorer.
At any time after installation and initial operating system setup, the system administrator can use the BitLocker setup wizard to initialize BitLocker. There are two steps in the initialization process:
When a local administrator initializes BitLocker, the administrator should also create a recovery password or a recovery key. Without a recovery key or recovery password, all data on the encrypted drive may be inaccessible and unrecoverable if there is a problem with the BitLocker-protected drive.
For detailed information about configuring and deploying BitLocker, see the Windows BitLocker Drive Encryption Step-by-Step Guide (http://go.microsoft.com/fwlink/?LinkID=140225).
BitLocker can use an enterprise’s existing Active Directory Domain Services (AD DS) infrastructure to remotely store recovery keys. BitLocker provides a wizard for setup and management, as well as extensibility and manageability through a Windows Management Instrumentation (WMI) interface with scripting support. BitLocker also has a recovery console integrated into the early boot process to enable the user or helpdesk personnel to regain access to a locked computer.
For more information about writing scripts for BitLocker, see Win32_EncryptableVolume (http://go.microsoft.com/fwlink/?LinkId=85983).
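As a purely illustrative sketch (the helper name and query-building approach are ours, not from the article), here is how a script might compose a WQL query against the Win32_EncryptableVolume class before handing it to a WMI client on a Windows machine:

```python
# Sketch of composing a WQL query for the Win32_EncryptableVolume WMI class.
# On Windows the query would be executed through a WMI client library;
# here we only build and inspect the query string itself.

# Win32_EncryptableVolume lives in this (non-default) WMI namespace.
WMI_NAMESPACE = r"root\CIMV2\Security\MicrosoftVolumeEncryption"

def volume_query(drive_letter: str) -> str:
    # Win32_EncryptableVolume exposes one instance per BitLocker-capable
    # volume; DriveLetter is one of its properties, so we filter on it.
    letter = drive_letter.rstrip(":").upper()
    return (f"SELECT * FROM Win32_EncryptableVolume "
            f"WHERE DriveLetter = '{letter}:'")

query = volume_query("c")
print(query)  # -> SELECT * FROM Win32_EncryptableVolume WHERE DriveLetter = 'C:'
```

The instance returned by such a query is the object on which the scripting methods documented in the Win32_EncryptableVolume reference are invoked.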
Many personal computers today are reused by people other than the computer’s initial owner or user. In enterprise scenarios, computers may be redeployed to other departments, or they might be recycled as part of a standard computer hardware refresh cycle.
On unencrypted drives, data may remain readable even after the drive has been formatted. Enterprises often make use of multiple overwrites or physical destruction to reduce the risk of exposing data on decommissioned drives.
BitLocker can help create a simple, cost-effective decommissioning process. By leaving data encrypted by BitLocker and then removing the keys, an enterprise can permanently reduce the risk of exposing this data. It becomes nearly impossible to access BitLocker-encrypted data after removing all BitLocker keys because this would require cracking 128-bit or 256-bit AES encryption.
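Back-of-the-envelope arithmetic shows why "nearly impossible" is fair: even granting a hypothetical attacker the (very generous) ability to test 10^18 keys per second, exhausting a 128-bit AES keyspace takes astronomically long.

```python
# Rough arithmetic behind "nearly impossible": the size of the AES keyspace.
# Assumes an optimistic attacker testing one quintillion (1e18) keys/second.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

keys_128 = 2 ** 128                 # number of possible 128-bit AES keys
guesses_per_second = 10 ** 18       # hypothetical attack rate

years = keys_128 // (guesses_per_second * SECONDS_PER_YEAR)
print(f"{years:.2e} years")         # on the order of 10**13 years
```

A 256-bit key multiplies this figure by another factor of 2^128, which is why deleting the keys is treated as equivalent to destroying the data.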
BitLocker cannot protect a computer against all possible attacks. For example, if malicious users, or programs such as viruses or rootkits, have access to the computer before it is lost or stolen, they might be able to introduce weaknesses through which they can later access encrypted data. And BitLocker protection can be compromised if the USB startup key is left in the computer, or if the PIN or Windows logon password are not kept secret.
The TPM-only authentication mode is easiest to deploy, manage, and use. It might also be more appropriate for computers that are unattended or must restart while unattended. However, the TPM-only mode offers the least amount of data protection. If parts of your organization have data that is considered highly sensitive on mobile computers, consider deploying BitLocker with multifactor authentication on those computers.
For more information about BitLocker security considerations, see Data Encryption Toolkit for Mobile PCs (http://go.microsoft.com/fwlink/?LinkId=85982).
For servers in a shared or potentially non-secure environment, such as a branch office location, BitLocker can be used to encrypt the operating system drive and additional data drives on the same server.
By default, BitLocker is not installed with Windows Server 2008 R2. Add BitLocker from the Windows Server 2008 R2 Server Manager page. You must restart after installing BitLocker on a server. Using WMI, you can enable BitLocker remotely.
BitLocker is supported on Extensible Firmware Interface (EFI) servers that use a 64-bit processor architecture.
After the drive has been encrypted and protected with BitLocker, local and domain administrators can use the Manage BitLocker page in the BitLocker Drive Encryption item in Control Panel to change the password to unlock the drive, remove the password from the drive, add a smart card to unlock the drive, save or print the recovery key again, automatically unlock the drive, duplicate keys, and reset the PIN.
An administrator may want to temporarily disable BitLocker in certain scenarios, such as:
These scenarios are collectively referred to as the computer upgrade scenario. BitLocker can be enabled or disabled through the BitLocker Drive Encryption item in Control Panel.
The following steps are necessary to upgrade a BitLocker-protected computer:
Forcing BitLocker into disabled mode will keep the drive encrypted, but the drive master key will be encrypted with a symmetric key stored unencrypted on the hard disk. The availability of this unencrypted key disables the data protection offered by BitLocker but ensures that subsequent computer startups succeed without further user input. When BitLocker is enabled again, the unencrypted key is removed from the disk and BitLocker protection is turned back on. Additionally, the drive master key is re-keyed and re-encrypted.
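Why disabled mode offers no protection can be sketched with a toy key-wrapping example. The XOR-with-hash scheme below is illustrative only, not BitLocker's actual algorithm; the point is that when the wrapping key sits unencrypted next to the wrapped master key, anyone who can read the disk can unwrap it.

```python
import hashlib, secrets

def wrap(master_key: bytes, wrapping_key: bytes) -> bytes:
    # Illustrative key wrap: XOR against a hash-derived keystream.
    # (Stands in for real key wrapping; XOR makes wrap == unwrap.)
    stream = hashlib.sha256(wrapping_key).digest()
    return bytes(a ^ b for a, b in zip(master_key, stream))

master_key = secrets.token_bytes(32)   # stands in for the drive master key
clear_key = secrets.token_bytes(32)    # stored UNENCRYPTED on the disk

wrapped = wrap(master_key, clear_key)  # what "disabled mode" leaves on disk

# Anyone who reads the disk also reads clear_key, so unwrapping is trivial:
recovered = wrap(wrapped, clear_key)
print(recovered == master_key)  # -> True
```

Re-enabling BitLocker deletes the clear key and re-keys the master key, closing this deliberate, temporary gap.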
Moving the encrypted drive (that is, the physical disk) to another BitLocker-protected computer does not require any additional steps because the key protecting the drive master key is stored unencrypted on the disk.
For detailed information about disabling BitLocker, see Windows BitLocker Drive Encryption Step-by-Step Guide (http://go.microsoft.com/fwlink/?LinkID=140225).
A number of scenarios can trigger a recovery process, for example:
An administrator can also trigger recovery as an access control mechanism (for example, during computer redeployment). An administrator may decide to lock an encrypted drive and require that users obtain BitLocker recovery information to unlock the drive.
Using Group Policy, an IT administrator can choose which recovery methods to require, deny, or make optional for users who enable BitLocker. The recovery password can be stored in AD DS, and the administrator can make this option mandatory, prohibited, or optional for each user of the computer. Additionally, the recovery data can be stored on a USB flash drive.
The recovery password is a 48-digit, randomly generated number that can be created during BitLocker setup. If the computer enters recovery mode, the user will be prompted to type this password by using the function keys (F1 through F10, where F10 represents the digit 0). The recovery password can be managed and copied after BitLocker is enabled. Using the Manage BitLocker page in the BitLocker Drive Encryption item in Control Panel, the recovery password can be printed or saved to a file for future use.
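The 48 digits have internal structure that tools can use to catch typos early. The rules applied below (eight groups of six digits, each group a multiple of 11 whose quotient fits in 16 bits) come from third-party documentation of the format and are an assumption on our part, not something stated in the overview above.

```python
# Illustrative validity check for a 48-digit BitLocker recovery password,
# based on third-party documentation of the format (assumption, see above):
# eight six-digit groups, each divisible by 11, each quotient < 2**16.

def looks_like_recovery_password(groups: list[str]) -> bool:
    if len(groups) != 8:
        return False
    for g in groups:
        if len(g) != 6 or not g.isdigit():
            return False
        value = int(g)
        if value % 11 != 0 or value // 11 > 0xFFFF:
            return False
    return True

sample = ["000011", "000022", "111111", "123453",
          "720885", "000000", "000044", "000055"]
print(looks_like_recovery_password(sample))  # -> True
```

A check like this rejects most single-digit typing mistakes before the password is ever tried against the drive.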
A domain administrator can configure Group Policy to generate recovery passwords automatically and back them up to AD DS as soon as BitLocker is enabled. The domain administrator can also choose to prevent BitLocker from encrypting a drive unless the computer is connected to the network and AD DS backup of the recovery password is successful.
The recovery key can be created and saved to a USB flash drive during BitLocker setup; it can also be managed and copied after BitLocker is enabled. If the computer enters recovery mode, the user will be prompted to insert the recovery key into the computer.
Read the original here:
BitLocker Drive Encryption Overview – technet.microsoft.com
By Roberta Bragg
An Overview of the Encrypting File System
What EFS Is
Basic How-tos
Planning for and Recovering Encrypted Files: Recovery Policy
How EFS Works
Key Differences Between EFS on Windows 2000, Windows XP, and Windows Server 2003
Misuse and Abuse of EFS and How to Avoid Data Loss or Exposure
Remote Storage of Encrypted Files Using SMB File Shares and WebDAV
Best Practices for SOHO and Small Businesses
Enterprise How-tos
Troubleshooting
Radical EFS: Using EFS to Encrypt Databases and Using EFS with Other Microsoft Products
Disaster Recovery
Overviews and Larger Articles
Summary
The Encrypting File System (EFS) is a component of the NTFS file system on Windows 2000, Windows XP Professional, and Windows Server 2003. (Windows XP Home doesn’t include EFS.) EFS enables transparent encryption and decryption of files by using advanced, standard cryptographic algorithms. Any individual or program that doesn’t possess the appropriate cryptographic key cannot read the encrypted data. Encrypted files can be protected even from those who gain physical possession of the computer that the files reside on. Even persons who are authorized to access the computer and its file system cannot view the data. While other defensive strategies should be used, and encryption isn’t the correct countermeasure for every threat, encryption is a powerful addition to any defensive strategy. EFS is the built-in file encryption tool for Windows file systems.
However, every defensive weapon, if used incorrectly, carries the potential for harm. EFS must be understood, implemented appropriately, and managed effectively to ensure that your experience, the experience of those to whom you provide support, and the data you wish to protect aren’t harmed. This document will
Provide an overview and pointers to resources on EFS.
Point to implementation strategies and best practices.
Name the dangers and counsel mitigation and prevention from harm.
Many online and published resources on EFS exist. The major sources of information are the Microsoft resource kits, product documentation, white papers, and Knowledge Base articles. This paper provides a brief overview of major EFS issues. Wherever possible, it doesn’t rework existing documentation; rather, it provides links to the best resources. In short, it maps the list of desired knowledge and instruction to the actual documents where they can be found. In addition, the paper catalogs the key elements of large documents so that you’ll be able to find the information you need without having to work your way through hundreds of pages of information each time you have a new question.
The paper discusses the following key EFS knowledge areas:
What EFS is
Basic how-tos, such as how to encrypt and decrypt files, recover encrypted files, archive keys, manage certificates, and back up files, and how to disable EFS
How EFS works and EFS architecture and algorithms
Key differences between EFS on Windows 2000, Windows XP, and Windows Server 2003
Misuse and abuse of EFS and how to avoid data loss or exposure
Remote storage of encrypted files using SMB file shares and WebDAV
Best practices for SOHO and small businesses
Enterprise how-tos: how to implement data recovery strategies with PKI and how to implement key recovery with PKI
Radical EFS: using EFS to encrypt databases and using EFS with other Microsoft products
Where to download EFS-specific tools
Using EFS requires only a few simple bits of knowledge. However, using EFS without knowledge of best practices and without understanding recovery processes can give you a mistaken sense of security, as your files might not be encrypted when you think they are, or you might enable unauthorized access by having a weak password or having made the password available to others. It might also result in a loss of data, if proper recovery steps aren’t taken. Therefore, before using EFS you should read the information links in the section “Misuse and Abuse of EFS and How to Avoid Data Loss or Exposure.” The knowledge in this section warns you where lack of proper recovery operations or misunderstanding can cause your data to be unnecessarily exposed. To implement a secure and recoverable EFS policy, you should have a more comprehensive understanding of EFS.
You can use EFS to encrypt files stored in the file system of Windows 2000, Windows XP Professional, and Windows Server 2003 computers. EFS isn’t designed to protect data while it’s transferred from one system to another. EFS uses symmetric (one key is used to encrypt the files) and asymmetric (two keys are used to protect the encryption key) cryptography. An excellent primer on cryptography is available in the Windows 2000 Resource Kit as is an introduction to Certificate Services. Understanding both of these topics will assist you in understanding EFS.
A solid overview of EFS and a comprehensive collection of information on EFS in Windows 2000 are published in the Distributed Systems Guide of the Windows 2000 Server Resource Kit. This information, most of which resides in Chapter 15 of that guide, is published online at http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/default.mspx. (On this site’s page, use the TOC to go to the Distributed Systems Guide, Distributed Security, Encrypting File System.)
There are differences between EFS in Windows 2000, Windows XP Professional, and Windows Server 2003. The Windows XP Professional Resource Kit explains the differences between Windows 2000 and Windows XP Professional’s implementation of EFS, and the document “Encrypting File System in Windows XP and Windows Server 2003” (http://www.microsoft.com/technet/prodtechnol/winxppro/deploy/cryptfs.mspx) details Windows XP and Windows Server 2003 modifications. The section below, “Key Differences between EFS on Windows 2000, Windows XP, and Windows Server 2003,” summarizes these differences.
The following are important basic facts about EFS:
EFS encryption doesn’t occur at the application level but rather at the file-system level; therefore, the encryption and decryption process is transparent to the user and to the application. If a folder is marked for encryption, every file created in or moved to the folder will be encrypted. Applications don’t have to understand EFS or manage EFS-encrypted files any differently than unencrypted files. If a user attempts to open a file and possesses the key to do so, the file opens without additional effort on the user’s part. If the user doesn’t possess the key, they receive an “Access denied” error message.
File encryption uses a symmetric key, which is then itself encrypted with the public key of a public key encryption pair. The related private key must be available in order for the file to be decrypted. This key pair is bound to a user identity and made available to the user who has possession of the user ID and password. If the private key is damaged or missing, even the user that encrypted the file cannot decrypt it. If a recovery agent exists, then the file may be recoverable. If key archival has been implemented, then the key may be recovered, and the file decrypted. If not, the file may be lost. EFS is an excellent file encryption system: there is no “back door.”
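This envelope design can be sketched as follows. The toy stream cipher and key wrap below are illustrative stand-ins, not EFS's real DESX/3DES/AES algorithms; the point is that one File Encryption Key (FEK) encrypts the file, while separately wrapped copies of the FEK (the Data Decryption Field for the user, the Data Recovery Field for the recovery agent) control who can recover it.

```python
import hashlib, secrets

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher (NOT the real EFS algorithms): XOR the data
    # with a SHA-256-derived keystream. Encrypt and decrypt are identical.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, out))

# One random File Encryption Key (FEK) encrypts the file contents...
fek = secrets.token_bytes(32)
ciphertext = xor_stream(b"quarterly payroll figures", fek)

# ...and the FEK itself is wrapped once per authorized key pair: one copy
# for the user (Data Decryption Field, DDF) and one for the recovery agent
# (Data Recovery Field, DRF). Here the "private keys" are random secrets.
user_key, recovery_key = secrets.token_bytes(32), secrets.token_bytes(32)
header = {"DDF": xor_stream(fek, user_key),
          "DRF": xor_stream(fek, recovery_key)}

# If the user's key is lost, the recovery agent's copy still unwraps the FEK:
recovered_fek = xor_stream(header["DRF"], recovery_key)
print(xor_stream(ciphertext, recovered_fek))  # -> b'quarterly payroll figures'
```

Losing every wrapped copy of the FEK, as the "Misuse and Abuse" section below warns, leaves the ciphertext with no path back to plaintext.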
File encryption keys can be archived (e.g. exported to a floppy disk) and kept in a safe place to ensure recovery should keys become damaged.
EFS keys are protected by the user’s password. Any user who can obtain the user ID and password can log on as that user and decrypt that user’s files. Therefore, a strong password policy as well as strong user education must be a component of each organization’s security practices to ensure the protection of EFS-encrypted files.
EFS-encrypted files don’t remain encrypted during transport if saved to or opened from a folder on a remote server. The file is decrypted, traverses the network in plaintext, and, if saved to a folder on the local drive that’s marked for encryption, is encrypted locally. EFS-encrypted files can remain encrypted while traversing the network if they’re being saved to a Web folder using WebDAV. This method of remote storage isn’t available for Windows 2000.
EFS uses FIPS 140-evaluated Microsoft Cryptographic Service Providers (CSPs, components that contain encryption algorithms for Microsoft products).
EFS functionality is straightforward, and you can find step-by-step instructions in many documents online. Links to specific articles for each possible EFS function, as well as some documents which summarize multiple functionality, follow. If the document is a Knowledge Base article, the Knowledge Base number appears in parentheses after the article title.
Encrypting and Decrypting
The process of encrypting and decrypting files is very straightforward, but it’s important to decide what to encrypt and to note differences in EFS based on the operating system.
Sharing Encrypted Files
The GUI for sharing encrypted files is available only in Windows XP and Windows Server 2003.
A recovery policy can be an organization’s security policy instituted to plan for proper recovery of encrypted files. It’s also the policy enforced by Local Security Policy Public Key Policy or Group Policy Public Key Policy. In the latter, the recovery policy specifies how encrypted files may be recovered should the user private key be damaged or lost and the encrypted file unharmed. Recovery certificate(s) are specified in the policy. Recovery can be either data recovery (Windows 2000, Windows XP Professional, and Windows Server 2003) or key recovery (Windows Server 2003 with Certificate Services). Windows 2000 EFS requires the presence of a recovery agent (no recovery agent, no file encryption), but Windows XP and Windows Server 2003 don’t. By default, Windows 2000 and Windows Server 2003 have default recovery agents assigned. Windows XP Professional doesn’t.
The data recovery process is simple. The user account bound to the recovery agent certificate is used to decrypt the file. The file should then be delivered in a secure manner to the file owner, who may then encrypt the file. Recovery via automatically archived keys is available only with Windows Server 2003 Certificate Services. Additional configuration beyond the installation of Certificate Services is required. In either case, it’s most important that a written policy and procedures for recovery are in place. These procedures, if well written and if followed, can ensure that recovery keys and agents are available for use and that recovery is securely carried out. Keep in mind that there are two definitions for “recovery policy.” The first definition refers to a written recovery policy and procedures that describe the who, what, where, and when of recovery, as well as what steps should be taken to ensure recovery components are available. The second definition, which is often referred to in the documents below, is the Public Key Policy that’s part of the Local Security Policy on stand-alone systems, or Group Policy in a domain. It can specify which certificates are used for recovery, as well as other aspects of Public Key Policies in the domain. You can find more information in the following documents:
Disabling or Preventing Encryption
You may decide that you don’t wish users to have the ability to encrypt files. By default, they do. You may decide that specific folders shouldn’t contain encrypted files. You may also decide to disable EFS until you can implement a sound EFS policy and train users in proper procedures. There are different ways of disabling EFS depending on the operating system and the desired effect:
System folders cannot be marked for encryption. EFS keys aren’t available during the boot process; thus, if system files were encrypted, the system couldn’t boot. To prevent other folders from being marked for encryption, you can mark them as system folders. If this isn’t possible, then a method to prevent encryption within a folder is defined in “Encrypting File System.”
NT 4.0 doesn’t have the ability to use EFS. If you need to disable EFS for Windows 2000 computers joined to a Windows NT 4.0 domain, see “Need to Turn Off EFS on a Windows 2000-Based Computer in Windows NT 4.0-Based Domain” (288579). The registry key mentioned can also be used to disable EFS in Windows XP Professional and Windows Server 2003.
Disabling EFS for Windows XP Professional can also be done by clearing the checkbox for the property page of the Local Security Policy Public Key Policy. EFS can be disabled in XP and Windows Server 2003 computers joined in a Windows Server 2003 domain by clearing the checkbox for the property pages of the domain or organizational unit (OU) Group Policy Public Key Policy.
“HOW TO: Disable/Enable EFS on a Stand-Alone Windows 2000-Based Computer” (243035) details how to save the recovery agent’s certificate and keys when disabling EFS so that you can enable EFS at a future date.
“HOW TO: Disable EFS for All Computers in a Windows 2000-Based Domain” (222022) provides the best instruction set and clearly defines the difference between deleted domain policy (an OU-based policy or Local Security Policy can exist) versus Initialize Empty Policy (no Windows 2000 EFS encryption is possible throughout the domain).
Let enough people look at anything, and you’ll find there are questions that just aren’t answered by existing documentation or options. A number of these issues, third-party considerations, and post-introduction issues can be resolved by reviewing the following articles.
Specifications for the use of a third-party Certification Authority (CA) can be found at “Third-Party Certification Authority Support for Encrypting File System” (273856). If you wish to use third-party CA certificates for EFS, you should also investigate certificate revocation processing. Windows 2000 EFS certificates aren’t checked for revocation. Windows XP and Windows Server 2003 EFS certificates are checked for revocation in some cases, and third-party certificates may be rejected. Information about certificate revocation handling in EFS can be found in the white paper “Encrypting File System in Windows XP and Windows Server 2003”.
When an existing plaintext file is marked for encryption, it’s first copied to a temporary file. When the process is complete, the temporary file is marked for deletion, which means portions of the original file may remain on the disk and could potentially be accessible via a disk editor. These bits of data, referred to as data shreds or remanence, may be permanently removed by using a revised version of the cipher.exe tool. The tool is part of Service Pack 3 (SP3) for Windows 2000 and is included in Windows Server 2003. Instructions for using the tool, along with the location of a downloadable version, can be found in “HOW TO: Use Cipher.exe to Overwrite Deleted Data in Windows” (315672) and in “Cipher.exe Security Tool for the Encrypting File System” (298009).
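The idea behind this cleanup can be illustrated with a small overwrite-before-delete sketch. This is not how cipher.exe works internally; it simply shows why overwriting matters before a file containing remanent plaintext is unlinked.

```python
import os, secrets, tempfile

def shred(path: str, passes: int = 3) -> None:
    # Illustrative cleanup: overwrite the file's bytes in place before
    # unlinking, so the old contents don't linger in the freed clusters.
    # A real wipe must also consider filesystem journaling and SSD wear
    # leveling, which this sketch deliberately ignores.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"remnant of a plaintext temporary file")

shred(path)
print(os.path.exists(path))  # -> False
```

Note that cipher.exe’s /W switch operates on a volume’s free space rather than on individual files, which is the right scope for shreds left behind by past encryption operations.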
How to make encrypted files display in green in Windows Explorer is explained in “HOW TO: Identify Encrypted Files in Windows XP” (320166).
“How to Enable the Encryption Command on the Shortcut Menu” (241121) provides a registry key to modify for this purpose.
You may wish to protect printer spool files or hard copies of encrypted files while they’re printing. Encryption is transparent to the printing process. If you have the right (possess the key) to decrypt the file and a method exists for printing files, the file will print. However, two issues should concern you. First, if the file is sensitive enough to encrypt, how will you protect the printed copy? Second, the spool file resides on disk in the printer spool folder, which typically isn’t encrypted, so a plaintext copy of the document may linger there.
To understand EFS, and therefore anticipate problems, envision potential attacks, and troubleshoot and protect EFS-encrypted files, you should understand the architecture of EFS and the basic encryption, decryption, and recovery algorithms. Much of this information is in the Windows 2000 Resource Kit Distributed Systems Guide, the Windows XP Professional Resource Kit, and the white paper, “Encrypting File System in Windows XP and Windows Server 2003.” Many of the algorithms are also described in product documentation. The examples that follow are from the Windows XP Professional Resource Kit:
A straightforward discussion of the components of EFS, including the EFS service, EFS driver, and the File System Run Time Library, is found in “Components of EFS,” a subsection of Chapter 17, “Encrypting File System” in the Windows XP Professional Resource Kit.
A description of the encryption, decryption, and recovery algorithms EFS uses is in the Resource Kit section “How Files Are Encrypted.” This section includes a discussion of the file encryption keys (FEKs) and file Data Recovery Fields and Data Decryption Fields used to hold FEKs encrypted by user and recovery agent public keys.
“Working with Encryption” includes how-to steps that define the effect of decisions made about changing the encryption properties of folders. The table defines what happens for each file (present, added later, or copied to the folder) for the choice “This folder only” or the option “This folder, subfolders and files.”
“Remote EFS Operations on File Shares and Web Folders” defines what happens to encrypted files and how to enable remote storage.
EFS was introduced in Windows 2000. However, there are differences when compared with Windows XP Professional EFS and Windows Server 2003 EFS, including the following:
You can authorize additional users to access encrypted files (see the section “Sharing Encrypted Files”, above). In Windows 2000, you can implement a programmatic solution for the sharing of encrypted files; however, no interface is available. Windows XP and Windows Server 2003 have this interface.
Offline files can be encrypted. See “HOW TO: Encrypt Offline Files to Secure Data in Windows XP.”
Data recovery agents are recommended but optional. XP doesn’t automatically include a default recovery agent. XP will take advantage of an existing Windows 2000 domain-level recovery agent if one is present, but the lack of a domain recovery agent won’t prevent encryption of files on an XP system. A self-signed recovery agent certificate can be requested by using the cipher /R:filename command, where filename is the name that will be used to create a *.cer file to hold the certificate and a *.pfx file to hold the certificate and private key.
The Triple DES (3DES) encryption algorithm can be used to replace Data Encryption Standard X (DESX), and after XP SP1, Advanced Encryption Standard (AES) becomes the default encryption algorithm for EFS.
For Windows XP and Windows Server 2003 local accounts, a password reset disk can be used to safely reset a user’s password. (Domain passwords cannot be reset using the disk.) If an administrator uses the “reset password” option from the user’s account in the Computer Management console users container, EFS files won’t be accessible. If users change the password back to the previous password, they can regain access to encrypted files. To create a password reset disk and for instructions about how to use a password reset disk, see product documentation and/or the article “HOW TO: Create and Use a Password Reset Disk for a Computer That Is Not a Domain Member in Windows XP” (305478).
Encrypted files can be stored in Web folders. The Windows XP Professional Resource Kit section “Remote EFS Operations in a Web Folder Environment” explains how.
Windows Server 2003 incorporates the changes introduced in Windows XP Professional and adds the following:
A default domain Public Key recovery policy is created, and a recovery agent certificate is issued to the Administrator account.
Certificate Services include the ability for customization of certificate templates and key archival. With appropriate configuration, archival of user EFS keys can be instituted and recovery of EFS-encrypted files can be accomplished by recovering the user’s encryption keys instead of decrypting via a file recovery agent. A walk-through providing a step-by-step configuration of Certificate Services for key archival is available in “Certificate Services Example Implementation: Key Archival and Recovery.”
Windows Server 2003 enables users to back up their EFS key(s) directly from the command line and from the details property page by clicking a “Backup Keys” button.
Unauthorized persons may attempt to obtain the information encrypted by EFS. Sensitive data may also be inadvertently exposed. Two possible causes of data loss or exposure are misuse (improper use of EFS) or abuse (attacks mounted against EFS-encrypted files or systems where EFS-encrypted files exist).
Inadvertent Problems Due to Misuse
Several issues can cause problems when using EFS. First, when improperly used, sensitive files may be inadvertently exposed. In many cases this is due to improper or weak security policies and a failure to understand EFS. The problem is made all the worse because users think their data is secure and thus may not follow usual precautionary methods. This can occur in several scenarios:
If, for example, users copy encrypted files to FAT volumes, the files will be decrypted and thus no longer protected. Because the user has the right to decrypt files that they encrypted, the file is decrypted and stored in plaintext on the FAT volume. Windows 2000 gives no warning when this happens, but Windows XP and Windows Server 2003 do provide a warning.
If users provide others with their passwords, these people can log on using these credentials and decrypt the user’s encrypted files. (Once a user has successfully logged on, they can decrypt any files the user account has the right to decrypt.)
If the recovery agent’s private key isn’t archived and removed from the recovery agent profile, any user who knows the recovery agent credentials can log on and transparently decrypt any encrypted files.
By far, the most frequent problem with EFS occurs when EFS encryption keys and/or recovery keys aren’t archived. If keys aren’t backed up, they cannot be replaced when lost. If keys cannot be used or replaced, data can be lost. If Windows is reinstalled (perhaps as the result of a disk crash) the keys are destroyed. If a user’s profile is damaged, then keys are destroyed. In these, or in any other cases in which keys are damaged or lost and backup keys are unavailable, then encrypted files cannot be decrypted. The encryption keys are bound to the user account, and a new iteration of the operating system means new user accounts. A new user profile means new user keys. If keys are archived, or exported, they can be imported to a new account. If a recovery agent for the files exists, then that account can be used to recover the files. However, in many cases in which keys are destroyed, both user and recovery keys are absent and there is no backup, resulting in lost data.
Additionally, many other smaller things may render encrypted files unusable or expose some sensitive data, such as the following:
Finally, keeping data secure takes more than simply encrypting files; a systems-wide approach to security is necessary. You can find several articles addressing best practices for systems security on the TechNet Best Practices page at http://www.microsoft.com/technet/archive/security/bestprac/bpent/sec2/secentbb.mspx.
Attacks and Countermeasures: Additional Protection Mechanisms for Encrypted Files
Any user of encrypted files should recognize potential weaknesses and avenues of attack. Just as it’s not enough to lock the front door of a house without considering back doors and windows as avenues for a burglar, encrypting files alone isn’t enough to ensure confidentiality.
Use defense in depth and use file permissions. The use of EFS doesn’t obviate the need to use file permissions to limit access to files. File permissions should be used in addition to EFS. If users have obtained encryption keys, they can import them to their account and decrypt files. However, if the user accounts are denied access to the file, the users will be foiled in their attempts to gain this sensitive information.
Use file permissions to deny delete. Encrypted files can be deleted. If attackers cannot decrypt the file, they may choose to simply delete it. While they don’t have the sensitive information, you don’t have your file.
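As a sketch, a deny entry can be added with cacls.exe; the file and account names are illustrative. (cacls /e /d denies all access to the named account; a per-right deny-delete entry requires the Security tab in Explorer or xcacls.exe from the resource kit.)

```
REM Edit the existing ACL (/e) and deny access (/d) to the account
cacls C:\Secret\payroll.xls /e /d DOMAIN\TempWorker
```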
Protect user credentials. If an attacker can discover the identity and password of a user who can decrypt a file, the attacker can log on as that user and view the files. Protecting these credentials is therefore paramount: once a user has successfully logged on, they can decrypt any files the user account has the right to decrypt. The best defense is a strong password policy, user training in devising strong passwords, and sound practices for protecting credentials. An excellent best-practices approach to password policy can be found in the Windows Server 2003 product documentation.
Protect recovery agent credentials. Similarly, if an attacker can log on as a recovery agent, and the recovery agent private key hasn’t been removed, the attacker can read the files. Best practices dictate the removal of the recovery agent keys, the restriction of this account’s usage to recovery work only, and the careful protection of credentials, among other recovery policies. The sections about recovery and best practices detail these steps.
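On Windows XP and Windows Server 2003, a recovery agent key pair can be generated with cipher.exe so that the private key never has to live on a working machine; the output path is illustrative:

```
REM Generate a recovery agent certificate and key pair
cipher /r:C:\RAKeys\efs-ra
REM efs-ra.cer -> add to the EFS recovery policy
REM efs-ra.pfx -> the private key; remove it from the machine
REM              and store it offline under lock and key
```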
Seek out and manage areas where plaintext copies of the encrypted files or parts of the encrypted files may exist. If attackers have possession of, or access to, the computer on which encrypted files reside, they may be able to recover sensitive data from these areas, including the following:
Data shreds (remanence) that exist after encrypting a previously unencrypted file (see the “Special Operations” section of this paper for information about using cipher.exe to remove them)
The paging file (see “Increasing Security for Open Encrypted Files,” an article in the Windows XP Professional Resource Kit, for instructions and additional information about how to clear the paging file on shutdown)
Hibernation files (see “Increasing Security for Open Encrypted Files” at http://technet.microsoft.com/library/bb457116.aspx)
Temporary files (determine where applications store temporary files and encrypt those folders as well)
Printer spool files (see the “Special Operations” section)
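The data shreds in the first item can be removed with the /w switch of cipher.exe, which overwrites all deallocated space on a volume; the drive letter is illustrative:

```
REM Wipe free space so remnants of previously plaintext files
REM cannot be recovered; this can take a long time on large volumes
cipher /w:C:\
```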
Provide additional protection by using the System Key. Syskey provides additional protection for password values and for values protected in the Local Security Authority (LSA) Secrets, such as the master key used to protect users’ cryptographic keys. Read the article “Using the System Key” in the Windows 2000 Resource Kit’s Encrypting File System chapter. A discussion of Syskey usage, along with possible attacks against a Syskey-protected Windows 2000 computer and their countermeasures, can be found in the article “Analysis of Alleged Vulnerability in Windows 2000 Syskey and the Encrypting File System.”
If your policy requires that data be stored on file servers rather than on desktop systems, you will need to choose a strategy for doing so. Two possibilities exist: storage in normal shared folders on file servers, or the use of Web folders. Both methods require configuration, and you should understand their benefits and risks.
If encrypted files are going to be stored on a remote server, the server must be configured to do so, and an alternative method, such as IP Security (IPSec) or Secure Sockets Layer (SSL), should be used to protect the files during transport. Instructions for configuring the server are discussed in “Recovery of Encrypted Files on a Server” (283223) and “HOW TO: Encrypt Files and Folders on a Remote Windows 2000 Server” (320044). However, the latter doesn’t mention a critical step, which is that the remote server must be trusted for delegation in Active Directory. Quite a number of articles can be found, in fact, that leave out this step. If the server isn’t trusted for delegation in Active Directory, and a user attempts to save the file to the remote server, an “Access Denied” error message will be the result.
If you need to store encrypted files on a remote server in plaintext (local copies are kept encrypted), you can. The server must, however, be configured to make this happen. You should also realize that once the server is so configured, no encrypted files can be stored on it. See the article “HOW TO: Prevent Files from Being Encrypted When Copied to a Server” (302093).
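Per KB 302093, this is done by setting the NtfsDisableEncryption registry value on the server and rebooting. The reg.exe invocation below assumes Windows XP or Server 2003; on Windows 2000, reg.exe ships with the resource kit, or the value can be set with regedit:

```
REM Disable EFS on this server; files copied here can no longer
REM be stored encrypted (a reboot is required to take effect)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsDisableEncryption /t REG_DWORD /d 1 /f
```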
You can store encrypted files in Web folders when using Windows XP or Windows Server 2003. The Windows XP Professional Resource Kit section “Remote EFS Operations in a Web Folder Environment” explains how.
If your Web applications need to require authentication to access EFS files stored in a Web folder, the code for using a Web folder to store EFS files and require authentication to access them is detailed in “HOW TO: Use Encrypting File System (EFS) with Internet Information Services” (243756).
Once you know the facts about EFS and have decided how you are going to use it, you should use these documents as a checklist to determine that you have designed the best solution.
By default, EFS certificates are self-signed; that is, the user doesn’t need to obtain a certificate from a certification authority (CA). When a user first encrypts a file, EFS looks for an existing EFS certificate. If one isn’t found, it looks for a Microsoft Enterprise CA in the domain. If a CA is found, a certificate is requested from it; if not, a self-signed certificate is created and used. However, more granular control of EFS, including EFS certificates and EFS recovery, can be established if a CA is present. You can use Windows 2000 or Windows Server 2003 Certificate Services.
Troubleshooting EFS is easier if you understand how EFS works, and many of the common problems that arise have well-known causes. Here are a few common problems and their solutions:
You changed your user ID and password and can no longer decrypt your files. There are two possible approaches to this problem, depending on what you did. First, if the user account was simply renamed and the password reset, the problem may be that you’re using XP and this response is expected. When an administrator resets an XP user’s account password, the account’s association with the EFS certificate and keys is removed. Changing the password back to the previous password can reestablish your ability to decrypt your files. For more information, see “User Cannot Gain Access to EFS Encrypted Files After Password Change or When Using a Roaming Profile” (331333), which explains how XP Professional encrypted files cannot be decrypted, even by the original account, if an administrator has changed the password. Second, if you truly have a completely different account (your account was damaged or accidentally deleted), then you must either import your keys (if you’ve exported them) or ask an administrator to use recovery agent keys (if implemented) to recover the files. Restoring keys is detailed in “HOW TO: Restore an Encrypting File System Private Key for Encrypted Data Recovery in Windows 2000” (242296). How to use a recovery agent to recover files is covered in “Five-Minute Security Advisor: Recovering Encrypted Data Using EFS.”
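If the keys were exported, importing them into the new account can be as simple as opening the backed-up .pfx file, which launches the Certificate Import Wizard; the path is illustrative, and you will be prompted for the password chosen at export time:

```
REM Open the backed-up key file in the Certificate Import Wizard
REM and import it into the new account's Personal store
start C:\Backup\efs-backup.pfx
```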
Read this article:
The Encrypting File System – technet.microsoft.com
The Federal Bureau of Investigation has not been able to break the encryption on the phone owned by a gunman who killed 26 people in a Texas church on Sunday.
“We are unable to get into that phone,” FBI Special Agent Christopher Combs said in a press conference yesterday (see video).
Combs declined to say what kind of phone was used by gunman Devin Kelley, who killed himself after the mass shooting. “I’m not going to describe what phone it is because I don’t want to tell every bad guy out there what phone to buy, to harass our efforts on trying to find justice here,” Combs said.
The phone is an iPhone, The Washington Post reported today:
After the FBI said it was dealing with a phone it couldn’t open, Apple reached out to the bureau to learn if the phone was an iPhone and if the FBI was seeking assistance. Late Tuesday an FBI official responded, saying it was an iPhone but the agency was not asking anything of the company at this point. That’s because experts at the FBI’s lab in Quantico, Va., are trying to determine if there are other methods to access the phone’s data, such as through cloud storage backups or linked laptops, these people said.
The US government has been calling on phone makers to weaken their devices’ security, but companies have refused to do so. Last year, Apple refused to help the government unlock and decrypt the San Bernardino gunman’s iPhone, but the FBI ended up paying hackers for a vulnerability that it used to access data on the device.
Deliberately weakening the security of consumer devices would help criminals target innocent people who rely on encryption to ensure their digital safety, Apple and others have said.
“With the advance of the technology in the phones and the encryptions, law enforcement, whether it’s at the state, local, or the federal level, is increasingly not able to get into these phones,” Combs said yesterday.
Combs said he has no idea how long it will take before the FBI can break the encryption. “I can assure you we are working very hard to get into the phone, and that will continue until we find an answer,” he said. The FBI is also examining “other digital media” related to the gunman, he said.
There are currently “thousands of seized devices sit[ting] in storage, impervious to search warrants,” Deputy Attorney General Rod Rosenstein said last month.