Probabilistic Password Cracker

Overview:

This is a major area of research for me, and something that I truly believe in. For years we have been applying probability models to help speed up brute force attacks (aka letter frequency analysis and Markov models). At the same time, though, our approach to dictionary based attacks has been fairly ad-hoc. John the Ripper's default (and single mode) rules, while built from their creators' experiences with cracking passwords, are still extremely subjective. For example, I've found very few passwords in my cracking attacks that were created by reversing an input dictionary word. Cain and Abel, while a great product, probably has the most bizarre rule selection in that it focuses on capitalization mangling at the expense of just about everything else (though it will also add two numbers to the end of words and replace letters with numbers). AccessData orders its default rule set not by how effective the rules are but by how large the search space is for each rule. This is not a slam on these approaches, but I do think that as passwords become stronger and stronger (either through user training or password creation policies), we need to improve how we generate and use word mangling rules.

The main goal of this project is to see if we can assign probabilities to different word mangling rules and then generate password guesses in probability order. There are several advantages I feel this approach offers us. First, by designing a way to measure the probability of word mangling rules, we can quickly generate new rules by training our password cracker on known passwords that we feel are similar to the target. This way, we will be able to train our cracker to go against English speakers, Russian speakers, passwords created for social networking sites, passwords created with a strong password creation strategy, etc. If you've ever spent time editing a John the Ripper config file, you know that the ability to automatically generate rules is very nice. Second, it allows us to more effectively target strong passwords. Just as with letter frequency analysis, where the letter "z" may be uncommon but the string "aaaaz" may still be more probable than the string "dfttp" because the probability of every letter is taken into account, a probability model of how passwords are created lets us better balance the order in which multiple word mangling rules are applied to password guesses. For example, the guess "$$password63" may be more probable than "!*password12". Not only does this technique apply to word mangling rules, but also to the input words themselves. We know that the word "password" is more probable than the word "zebra". Using a probabilistic approach gives us a framework to make use of this knowledge.
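
To make the "probability order" idea concrete, below is a minimal sketch of the approach in Python. It is purely illustrative (it is not the actual pcfg_manager code, and every probability in it is made up): a guess's probability is the probability of its base structure multiplied by the probabilities of the terminals filling each slot, and a heap then pops guesses from most to least probable. The real guess generator is considerably smarter about memory, since it does not enumerate every guess up front.

import heapq
from itertools import product

# Toy grammar with made-up probabilities standing in for values a
# trainer would learn from a real password set.
structures = {                # base structures: W = dictionary-word slot,
    ("W", "D"): 0.5,          # D = digit-string slot, S = special-char slot
    ("S", "W", "D"): 0.3,
    ("W",): 0.2,
}
terminals = {
    "W": {"password": 0.02, "zebra": 0.001},
    "D": {"12": 0.1, "63": 0.08},
    "S": {"$$": 0.04, "!*": 0.02},
}

def guesses_in_probability_order():
    heap = []  # max-heap via negated probabilities
    for struct, p_struct in structures.items():
        pools = [terminals[slot].items() for slot in struct]
        for combo in product(*pools):
            prob = p_struct
            for _, p in combo:
                prob *= p
            guess = "".join(value for value, _ in combo)
            heapq.heappush(heap, (-prob, guess))
    while heap:
        neg_prob, guess = heapq.heappop(heap)
        yield guess, -neg_prob

for guess, prob in guesses_in_probability_order():
    print(f"{prob:.6f}  {guess}")  # "$$password63" outranks "!*password12"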

Special Thanks and Acknowledgments:

The original version of the program was written by Bill Glodek, another graduate student at Florida State University. The original idea for using probabilistic context free grammars to represent how people create passwords was Dr. Sudhir Aggarwal's and Professor Breno de Medeiros's. Basically I was lucky enough to come in at the right time to assist with the start of the program and help carry it on once Bill graduated.

Theory and Experimental Results:

For the theory behind our approach, an overview of the basic algorithm, and some of the results of using an early version of our password cracker against real passwords, please see the included PDF of the paper we presented at the IEEE Security & Privacy conference, "Password Cracking Using Probabilistic Context-Free Grammars." I'll try to post additional results to the website, but the short preview is that the current version (as feature incomplete as it is) helped out quite a bit with cracking the phpBB list and was one of the main tools that I used.

To Do List:

The version included on the CD is still very much in the beta phase, since I'm right in the middle of completely rewriting the training program to add new features. The main limitations include:

  1. The training program has "issues" when being trained on passwords that include non-ASCII characters. There's a workaround, but it's really messy.
  2. Currently the training program doesn't learn case information from the training password set.
  3. The training program doesn't learn letter replacement rules from the training password set.
  4. Need to integrate CUPP support (from the remote-exploit team) so we can better make use of collected information (birthdays, children's names, etc.) in our password cracker.
  5. It would be nice to integrate dictionary evaluation into the training program instead of having to use a different program (aka passPattern).
  6. Add support so the trainer can create multiple named rulesets automatically (rather than overwriting the last ruleset, which forces the user to manually back it up).
  7. General performance improvements in the password guess generator itself.
  8. Saving session state information from the guess generator so it can be stopped/restarted.
  9. Add the ability to detect user input and output the current guess/status to stderr.

Use and operation: TRAINING:

The version on the DVD is already trained on the MySpace password set, which we've found to be fairly effective (this was the list of MySpace passwords that was stolen by phishers several years ago). Please note that when you train the password cracker on a new list it will overwrite the previous rules. To train the password cracker on a new list, just run:

./process.py <training list>
The training list should be a newline-separated file of the raw password values, e.g.
password1
password2
2password
...
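
For intuition about what training produces, here is a rough sketch of the kind of analysis a trainer performs. This is illustrative Python only, not the actual process.py (the filename "training_list.txt" and the printed output format are just placeholders): each password is split into runs of letters (L), digits (D), and special characters (S), and the trainer counts how often each base structure and each digit/special string appears.

import re
from collections import Counter

# Split a password into runs of letters (L), digits (D), and special
# characters (S); exactly one named group matches per run.
TOKEN = re.compile(r"(?P<L>[a-zA-Z]+)|(?P<D>[0-9]+)|(?P<S>[^a-zA-Z0-9]+)")

structures, digits, specials = Counter(), Counter(), Counter()
with open("training_list.txt", encoding="utf-8", errors="replace") as f:
    for line in f:
        password = line.rstrip("\n")
        if not password:
            continue
        parts = [(m.lastgroup, m.group()) for m in TOKEN.finditer(password)]
        # The base structure records category and run length,
        # e.g. "2password" -> D1 L8
        structures[" ".join(f"{cat}{len(text)}" for cat, text in parts)] += 1
        for cat, text in parts:
            if cat == "D":
                digits[text] += 1
            elif cat == "S":
                specials[text] += 1

# Relative frequencies become the grammar's probabilities.
total = sum(structures.values())
for struct, count in structures.most_common(5):
    print(f"{struct}\t{count / total:.4f}")
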
Use and operation: CREATING PASSWORD GUESSES:

First you need to build the password cracker from the source files. It has been tested on Ubuntu Desktop and Mac OS X (with Xcode installed). To build the executable, simply type:

./make

Once the executable has compiled (hopefully without errors), it is ready to run. Currently it is fairly stupid (I'll blame the program rather than my programming skills), so it needs to be run from the directory it is installed in (I need to strip the path info off from the command line and use it in future versions). The training grammar (aka the rules) also needs to be installed in that same directory. You should have three folders, "./grammar", "./digits", and "./special", that represent the full training grammar. These folders will be filled with files with names such as "1.txt" or "structures.txt", which contain the probability information gathered from the passwords the cracker was trained on.
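
The exact layout of those files isn't documented here, so as an illustration only, here is how a consumer might read one of them. The tab-separated "value<TAB>probability" line format below is my assumption, not the tool's documented format (pcfg_manager itself parses these files in C):

# Hypothetical reader for one grammar file; the real on-disk layout
# of ./digits/1.txt, ./special/2.txt, etc. may differ from this guess.
def load_terminals(path):
    terminals = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            value, prob = line.rsplit("\t", 1)
            terminals.append((value, float(prob)))
    return terminals

# e.g. all one-character digit strings seen in training:
for value, prob in load_terminals("digits/1.txt"):
    print(value, prob)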

To run the program, type:

./pcfg_manager <options>

Options:

-dname[0-9] <dictionary name>
    <REQUIRED>: The input dictionary name.
    Example: -dname0 common_words.txt

-dprob[0-9] <dictionary probability>
    <OPTIONAL>: The input dictionary's probability. If not specified, it is set to 1.0.
    Example: -dprob0 0.75

-removeUpper
    <OPTIONAL>: Don't use words from the input dictionary that contain uppercase characters.

-removeSpecial
    <OPTIONAL>: Don't use words from the input dictionary that contain special characters.

-removeDigits
    <OPTIONAL>: Don't use words from the input dictionary that contain digits.

Generally I've found it more effective to use the -remove options and let the program, rather than the input dictionary, determine where to use special characters and digits. The one exception is the -removeUpper option: since the current trainer doesn't learn case information, you can attack case-mangled passwords by leaving that option off and using an input dictionary that contains words to which different case mangling rules have already been applied.

The -dprob option ranges from 1.0 to 0.00 and should be set based on what percentage of the target passwords you expect the input dictionary to crack. For example, a very large input dictionary might have a probability of 0.78. Note that the program is smart enough to assign a final probability by taking into account both the probability set by the -dprob option and the size of the input dictionary. This way, even though a very small but highly effective input dictionary might have a -dprob setting of 0.05 and a much larger input dictionary might have a -dprob setting of 0.40, words from the smaller dictionary will still be used more often. Also, duplicate words are removed across multiple dictionaries, with each duplicated word being assigned to the dictionary that gives it the highest final probability. For example, the word "football" may show up in several of the input dictionaries being used, but the password guess generator will place it in the most probable input dictionary to maximize the number of times it is used.
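
As a rough model of the weighting just described (dividing the -dprob value evenly across a dictionary's words is my assumption based on the text above; pcfg_manager's exact formula may differ):

# Hypothetical illustration of the dictionary weighting described above.
def word_probabilities(dictionaries):
    """dictionaries: list of (dprob, word_list) pairs."""
    best = {}
    for dprob, words in dictionaries:
        unique = set(words)
        per_word = dprob / len(unique)  # assumed: -dprob spread over the words
        for word in unique:
            # A duplicated word keeps whichever dictionary scores it highest.
            if per_word > best.get(word, 0.0):
                best[word] = per_word
    return best

small = (0.05, ["password", "football", "123456"])                # 3 words
large = (0.40, ["football"] + [f"word{i}" for i in range(9999)])  # 10,000 words
probs = word_probabilities([small, large])
print(probs["football"])  # 0.05/3 beats 0.40/10000, so "football" stays
                          # with the small, highly effective dictionary

Under this model the small dictionary's words each get probability 0.05/3 (about 0.017), while the large dictionary's words each get 0.40/10000 (0.00004), which is exactly why the smaller dictionary's words end up being tried more often.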

An additional program for evaluating the probability of input dictionaries: 

I included an additional program, passPattern, which measures the effectiveness of input dictionaries using the idea of edit distance. It can be used in conjunction with our probabilistic password cracker to assign a value for the -dprob option. In addition, this program creates a "golden dictionary" of all the words found that would have cracked passwords in the training set, which allows you to combine several input dictionaries into a single best-of-breed dictionary. It also stores all passwords that would not have been cracked in the file "unclassified.txt". I have found this very useful for narrowing down the number of passwords I need to manually inspect when looking for new mangling rules. Finally, it records the word mangling rules used, and how often they were used, in the file "editFile.txt". To run this program, first use the make option to compile it and then type

./passPattern <dictionary> <password list>

The options should be fairly self-explanatory.
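
To illustrate the edit-distance idea, here is a conceptual Python sketch (not the actual passPattern source; the matching threshold and the exact bookkeeping are my assumptions): a password counts as "cracked" if some dictionary word is within a small number of single-character edits, in which case that word belongs in the golden dictionary; otherwise the password would be written to unclassified.txt.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def evaluate(dictionary, passwords, max_edits=3):
    # max_edits is an illustrative cutoff, not passPattern's actual rule.
    golden, unclassified, cracked = set(), [], 0
    for pw in passwords:
        # Find the dictionary word needing the fewest edits to reach pw.
        word, dist = min(((w, edit_distance(w, pw.lower())) for w in dictionary),
                         key=lambda pair: pair[1])
        if dist <= max_edits:
            cracked += 1
            golden.add(word)
        else:
            unclassified.append(pw)
    return cracked / len(passwords), golden, unclassified

coverage, golden, leftover = evaluate(
    ["password", "monkey"], ["password12", "Password!", "xkjq42pt"])
print(f"suggested -dprob: {coverage:.2f}")  # fraction this dictionary would crack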
