Thursday, July 21, 2016

Password Hashing

I have an online account that uses multifactor authentication.  If you're not familiar with multifactor authentication systems, then a brief introduction is in order:

A multifactor authentication system is one that requires more than one method of authentication, each sourced from independent categories of credentials.  These categories are divided into the following three domains:
  1. Something you know
  2. Something you have
  3. Something you are
Something you know is a password or pass-phrase.
Something you have is a token generating key-fob or a key card with a magnetic stripe.
Something you are is a fingerprint, voice print, or retinal scan.

The first two factors can be easily revoked; the last factor - something you are - cannot. That's why - along with its false positive and false negative rates - something you are should never, ever be used as the primary method of proving identity: it must always be combined with another factor.

Back to my online account.

I was attempting to log into this account one day when it refused to accept my password.  I reset the password 3 or 4 times - using a strong password each time - and finally called the support desk when I had successfully pulled out the remaining bits of my hair.  They gave me a reset password.  I tried using that password while on the phone with them: no go.

Then they said, 
"Did you also input your token from your authentication app?"
"Why, no.  I did not.  How do I do that?" 
"Enter the password, then follow the password with the authenticator passcode."
So I did just that, and presto: I was authenticated into the system.  Pretty neat.  Until you think about how it could be broken.

How Password Management Should Work

Storing passwords has always been an Achilles' heel of Information Security. A lot of developers get it wrong. It's why companies like LinkedIn get pwn'd. In order to do it right, you have to implement several things.

Add some Salt

First, you must always employ a salt. IMO, a salt is at least 128 bits (16 bytes) of cryptographically secure random data. In other words, using srand() and rand() is right out. On Linux'ish systems you must use /dev/random or /dev/urandom (see here regarding the Myths about /dev/urandom).

Windows has its own way of providing cryptographically secure random numbers (Google is your friend).
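To make that concrete, here is a minimal sketch in Python (my choice of language here is purely for illustration). The standard library's secrets module pulls from the operating system's cryptographically secure source on both Linux and Windows, so srand()/rand() never enter the picture:

import secrets

# 128 bits (16 bytes) of cryptographically secure random data for the salt.
# Under the hood this uses the OS entropy source (e.g., /dev/urandom on
# Linux, the platform CSPRNG on Windows).
salt = secrets.token_bytes(16)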

Choose Speed or Memory

And by that I mean, never use SHA[what-ever] to hash your passwords.  Always use one of the following:
  1. bcrypt - time intensive
  2. scrypt - memory intensive
  3. PBKDF2 - time intensive
If you use anything other than those functions, you're doing it wrong.
If you don't add a unique salt to each password, you're doing it wrong.
If your salt is compiled into your program, you're doing it wrong.
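For illustration, here's a minimal sketch using PBKDF2 from Python's standard library (hashlib.pbkdf2_hmac). The iteration count shown is purely illustrative, not a recommendation - benchmark and tune it for your own hardware:

import hashlib
import secrets

password = b"correct horse battery staple"   # the user-supplied secret
salt = secrets.token_bytes(16)               # unique, per-password salt
iterations = 200_000                         # illustrative only; tune to your hardware

# PBKDF2-HMAC-SHA256: the iteration count is what makes brute force expensive.
pw_hash = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

# Persist pw_hash, salt, and iterations together; never persist the password itself.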

Save the Result

It's really that simple: save the hash and the salt right next to each other if you want to, but you must save both.  Never save a password, even if you encrypt it with AES.  You must, however, employ proper security controls around the password store.  While encryption is always a preferred control when storing information, there are customary ways to make sure files are available only to the entity that needs them.  Specifically, file ownership and read/write/modify permissions.  On a Linux-ish system, that generally means a unique, hidden folder accessible only by the entity, and a file accessible only by the entity.  E.g.,
$ mkdir  .my_hidden_dir
$ touch  .my_hidden_dir/my_file
$ chown  -R user:group .my_hidden_dir
$ chmod  u=rwx,go-rwx .my_hidden_dir
$ chmod  u=rw,go-rwx .my_hidden_dir/my_file
When the user enters their password, grab the salt, add it to the password, execute your hashing algorithm and compare.  If you get the stored hash, you have the correct password.
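Continuing the PBKDF2 sketch from above (again, only an illustration), the comparison step might look something like this; hmac.compare_digest keeps the check constant-time:

import hashlib
import hmac

def verify_password(candidate, stored_hash, salt, iterations):
    # Re-derive the hash from the candidate password using the stored salt
    # and iteration count...
    candidate_hash = hashlib.pbkdf2_hmac("sha256", candidate, salt, iterations)
    # ...then compare in constant time to avoid leaking information via timing.
    return hmac.compare_digest(candidate_hash, stored_hash)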

How It Could be Done Wrong

Let's recall the facts: my online provider requires me to enter my password and my token at the same time, one appended to the other, like this:
  • [password][token]
The problem is this: how does my provider know the length of my password?  In order to calculate the hash of my password, they need the salt and the password.  Since the information is input as one string, there are only a few ways that my password can be extracted from it:
  1. They're saving the length of my password with my password hash
    • This is bad.  If the password database is exposed, then my hash, salt and password length are all exposed.  This reduces the time necessary to crack the password, since an attacker only needs to build a dictionary of hashes for passwords of that exact length - assuming the attacker also knows the iteration count of the hash algorithm (i.e., the provider is using a proper iterated algorithm rather than a bare SHA[what-ever]). 
  2. They're saving my password (not hashed)
    • Wrong all the way around, even if it's encrypted.  Don't do it.
  3. The token is a fixed length
    • A likely possibility.  But this breaks when the token generator changes the length of the token.
  4. The password + token is not hashed before it reaches the web server
    • This sounds dangerous, but actually the converse is more dangerous:
      • If the password is hashed in the client (assuming a browser interface), then the attacker has full knowledge of the hashing algorithm used by the password management system.  Once that system is compromised and the password hashes are exposed, they'll all be cracked in short order using a dictionary attack.
    • Never hash your passwords in the client

Conclusion

A proper implementation should request the one-time token from the generator and match that pattern in the string provided by the user.  The position where the match begins is one character past the end of the user's password.  From there, it's simple string splitting, hashing and comparing.  This solves the problem of variable-length tokens from the one-time-token generator.
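Here's a hedged sketch of that splitting logic; get_expected_token() is a hypothetical stand-in for however the provider asks its one-time-token generator for the currently valid code:

def split_password_and_token(combined, expected_token):
    # The token is appended to the password, so the expected token should
    # appear at the very end of the combined string.
    if not combined.endswith(expected_token):
        return None  # token doesn't match; reject the attempt
    # Everything before the token is the password, regardless of the token's length.
    password = combined[:len(combined) - len(expected_token)]
    return password, expected_token

# expected_token = get_expected_token()   # hypothetical call to the token generator
# result = split_password_and_token(user_input, expected_token)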

Doing security correctly is not as easy as it looks or sounds.  We must employ the proper tools to ensure our designs and architectures are sound.  As for my online provider, I can only hope they're using strong hashing algorithms.  But given that they're requiring something I have, it's very, very unlikely that an attacker will randomly provide the correct password+token.

Tuesday, July 5, 2016

Requirements and Use Cases

Many times, requirements and use cases are not things that the average Joe (with apologies to my friend Joe) believes warrant security attention. But Architecture and Design is where it all begins; it is in that phase of development that requirements and use cases are developed. If you miss the security boat there, then you've set the Rube Goldberg machine of hack, exploit, pay-the-lawyers, fix and patch into motion.

The problem is that the Average Developer doesn't really care too much about Secure Design, Threat Modeling and Attacker Stories. That stuff just gets in the way of fun (writing code) and profits (selling code) - though we might like to assert that we don't see much of the profits being six or seven layers deep from the President & CEO. So let's explore the Attacker Story by first understanding exactly what a Use Case represents.

Requirements


The study of Software Requirements is a college-level course in and of itself. So I'm not going to spend too much time on the topic except to say this: building security and constraints into your Software Requirements phase alone doesn't help a whole lot. If it did, we wouldn't have insecure software and buffer overruns. On top of Software Requirements we must add use cases and attacker stories (or abuse cases). Both of these inform development and quality engineering. They help development consider the activities and actions of both the user and the attacker. On the QE side, these tools help testers verify that the implementation functions as designed and is resilient to attack.

Use Cases


A use case is a technique that helps us express functional requirements in a developer- and tester-friendly manner. But a use case is not a substitute for documenting specific requirements. Neither is it intended to cover all subject-object interactions. A use case should be specific enough to supplement design requirements so that the correct design is implemented and sufficient tests are constructed to verify functionality. Use case modeling expresses the intended system behavior for specific actors.

Consider a simple banking application use case for making a transfer:
As a user of the Banking App, I want to transfer funds out of my account into another account in order to meet my personal financial goals.

This use case helps us think about what a specific actor wants to do with a given resource and for a specific purpose.

Abuse Cases (Attacker Stories)


An Attacker Story or Abuse Case is exactly like a use case, except it is constructed from the perspective of an attacker. It may be based upon a specific use case, but it can also be based upon well-known threat model attributes, such as those associated with STRIDE or DREAD. Using the example above, we can develop abuse cases as follows:
As an attacker, I can access a user's account and transfer funds to other accounts for the purpose of stealing money.
As an attacker, I cannot steal money by accessing a user's account and transferring money to other accounts.
Just as in the use case, the abuse case causes an attentive developer and tester to ask questions:
  • How does the user access the account?
  • How do we prevent the attacker from gaining unauthorized access?
  • How do we assert that the user has permission to transfer funds?
  • How do we prevent an attacker who has gained access to an account from stealing money?
Therefore, the Attacker Story helps us express the emergent security requirements from which we can design and implement controls and tests to prove the controls work. In the best situation, each use case should have one or more abuse cases.

We can (and many times should) create diagrams explaining both the use case and the abuse case.  While the example below is patently simple, it does express the idea that we don't want an unauthenticated user performing the same functions as an authenticated user.

Nevertheless, in both instances we consider different aspects of the same usage.  Perhaps one of the first questions we investigate is, "how does an attacker become an authenticated user without proper credentials?"  Perhaps session hijacking is exploited, or perhaps we use a form of multi-factor authentication and the attacker has performed this SIM attack or this one.

[Use Case diagram]

[Abuse Case diagram]

In summary,

Software Security is not an afterthought. If you're not considering how your information system can be attacked when you're in the architect and design phases, then you'll be playing catch-up for the entire lifecycle. Security is like a feature: it must be designed into the product and it must be testable. Security is an emergent property of software, just like any other feature or quality.

Sunday, June 26, 2016

Secure Development Lifecycle - The Basics

A secure and stable system, whether you're designing a Point of Sale terminal, a Space Shuttle or a Mobile Device Management system, always begins with the development system: not the software. It is the software development system that results in either sound or harmful software. Quality and Security are not things that can be bolted on at the end of development or when the vehicle is being deployed.

In the same vein, the absence of prior failure does not provide an indicator of future success. It was precisely this belief that helped doom the Challenger STS-51-L mission:
We have also found that certification criteria used in Flight Readiness Reviews often develop a gradually decreasing strictness. The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again and again, sometimes without a sufficiently serious attempt to remedy them, or to delay a flight because of their continued presence. (Rogers Commission, STS-51-L)
This is not to say we should not implement controls in the environment where the system is deployed in order to provide safety and security. Rather, those controls cannot account for or reverse bad planning, design, or implementation.

It is the responsibility of those performing the implementation - of any system - to ensure that the system under design is developed using a Secure Development Lifecycle.

For Software Development, there are six touch-points, each with associated activities, that must be performed to address security requirements:

  1. Requirements and Use Cases 
    • Abuse Cases / Attacker Stories 
    • Security Requirements 
  2. Architecture and Design 
    • Risk Analysis 
  3. Test Plans 
    • Risk Analysis 
    • Risk-Based Security Tests 
  4. Code 
    • Code Reviews 
  5. Tests and Results 
    • Risk Analysis 
    • Penetration Testing 
  6. Feedback from Consumers (the field, customers) 
    • Penetration Testing 
    • Security Operations
In future blogs, I'll address each of these topics.