Overview:

This is a major area of research for me, and something that I truly believe in. For years we have been applying probability models to help speed up brute force attacks (e.g. letter frequency analysis and Markov models). At the same time, though, our approach to dictionary-based attacks has been fairly ad hoc. John the Ripper's default (and single-mode) rules, while built from their creators' experience cracking passwords, are still extremely subjective. For example, I've found very few passwords in my cracking attacks that were created by reversing an input dictionary word. Cain and Abel, while a great product, probably has the most bizarre rule selection in that it focuses on capitalization mangling at the expense of just about everything else (though it will also add two numbers to the end of words and replace letters with numbers). AccessData orders its default rule set not by how effective the rules are but by how large the search space is for each rule. This is not a slam on these approaches, but I do think that as passwords become stronger and stronger (whether through user training or password creation policies), we need to improve how we generate and use word mangling rules.

The main goal of this project is to see if we can assign probabilities to different word mangling rules and then generate password guesses in probability order. I feel this approach offers several advantages. First, by designing a way to measure the probability of word mangling rules, we can quickly generate new rules by training our password cracker on known passwords that we feel are similar to the target. This way we can train the cracker to go against English speakers, Russian speakers, passwords created for social networking sites, passwords created under a strong password creation policy, etc. If you've ever spent time editing a John the Ripper config file, you know that the ability to automatically generate rules is very nice.
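The main goal above, generating guesses in probability order, can be sketched with a toy grammar in a few lines of Python. Every number and table name below is made up purely for illustration; the real tool trains these values from password lists:

```python
from itertools import product

# Made-up illustrative probabilities, NOT trained values.
structures = {("word", "digits"): 0.6, ("special", "word", "digits"): 0.4}
tables = {
    "word":    {"password": 0.7, "zebra": 0.3},
    "digits":  {"12": 0.5, "63": 0.3, "99": 0.2},
    "special": {"$$": 0.7, "!*": 0.3},
}

def all_guesses():
    """Yield (guess, probability): P(structure) times P(each filled-in part)."""
    for struct, p_struct in structures.items():
        for parts in product(*(tables[name].items() for name in struct)):
            guess = "".join(text for text, _ in parts)
            prob = p_struct
            for _, p in parts:
                prob *= p
            yield guess, prob

# Emit guesses highest-probability first.  (A real cracker cannot afford to
# enumerate and sort everything up front; it generates guesses lazily in
# probability order, but the resulting ordering is the same idea.)
for guess, prob in sorted(all_guesses(), key=lambda g: g[1], reverse=True):
    print(f"{prob:.4f}  {guess}")
```

Because each guess's probability is the product of its parts, a common word with a less common digit string can still outrank a rarer word with a common digit string, which is exactly the balancing act described here.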
Second, it allows us to more effectively target strong passwords. Just as with letter frequency analysis, the letter "z" may be uncommon, but the string "aaaaz" may be more probable than the string "dfttp" because the probability of all the letters is taken into account. Likewise, by using a probability model of how passwords are created, we can better balance the order in which multiple word mangling rules are applied to password guesses. For example, the guess "$$password63" may be more probable than "!*password12". This technique applies not only to word mangling rules but also to the input words themselves. We know that the word "password" is more probable than the word "zebra". Using a probabilistic approach gives us a framework to make use of this knowledge.

Special Thanks and Acknowledgments:

The original version of the program was written by Bill Glodek, another graduate student at Florida State University. The original idea of using probabilistic context-free grammars to represent how people create passwords was Dr. Sudhir Aggarwal's and Professor Breno de Medeiros's. Basically, I was lucky enough to come in at the right time to assist with the start of the program and to help carry it on once Bill graduated.

Theory and Experimental Results:

For the theory behind our approach, an overview of the basic algorithm, and some results from running an early version of our password cracker against real passwords, please see the included PDF of the paper we presented at the IEEE Security & Privacy conference, "Cracking Passwords Using a Probabilistic Context Free Grammar." I'll try to post additional results to the website, but the short preview is that the current version (as feature-incomplete as it is) helped out quite a bit with cracking the phpBB list and was one of the main tools I used.

To Do List:
Use and operation: TRAINING:

The version on the DVD is already trained on the MySpace password set, which we've found to be fairly effective (this was the list of MySpace passwords that was stolen by phishers several years ago). Please note that when you train the password cracker on a new list, it will overwrite the previous rules. To train the password cracker on a new list, just run:

./process.py <training list>

Use and operation: CREATING PASSWORD GUESSES:

First you need to build the password cracker from the source files. It has been tested on Ubuntu Desktop and Mac OS X (with Xcode installed). To build the executable, simply type:

./make

Once the executable has compiled (hopefully without errors), it is ready to run. Currently it is fairly stupid (I'll blame the program vs. my programming skills), so it needs to be run from the directory it is installed in (I need to strip the path info off the command line and use it in future versions). The training grammar (aka the rules) also needs to be in the same directory. You should have three folders, "./grammar", "./digits", and "./special", that represent the full training grammar. These folders will be filled with files such as "1.txt" or "structures.txt", which contain the probability information gathered from the passwords the cracker was trained upon. To run the program, type:

./pcfg_manager <options>

Generally I've found it more effective to use the -remove options and let the program, rather than the input dictionary, determine where to use special characters and digits. The one exception is the -removeUpper option, which can allow you to attack case-mangled words by using an input dictionary containing words to which different case mangling rules have already been applied. The -dprob option ranges from 1.0 to 0.00 and should be set based on what percentage of the passwords you expect the input dictionary to crack.
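One plausible way to picture how -dprob interacts with dictionary size (the division below is my own assumption; the text only says both factors go into the final probability) is to spread the -dprob weight over the dictionary's word count:

```python
# Assumption: per-word probability = -dprob weight / number of words.
small_dprob, small_size = 0.05, 1_000      # small but highly effective dictionary
large_dprob, large_size = 0.40, 100_000    # much larger dictionary

p_small_word = small_dprob / small_size    # 5.0e-05 per word
p_large_word = large_dprob / large_size    # 4.0e-06 per word

# Each word in the small dictionary ends up more probable, so it is tried
# more often despite the smaller overall -dprob setting.
assert p_small_word > p_large_word
```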
For example, a very large input dictionary might have a probability of 0.78. Note that the program is smart enough to assign a final probability by taking into account both the probability set by the -dprob option and the size of the input dictionary. This way, even though a very small but highly effective input dictionary might have a -dprob setting of 0.05 and a much larger input dictionary might have a -dprob setting of 0.40, the smaller dictionary will be used more often. Also, duplicate words are removed across multiple input dictionaries, with each duplicated word being assigned to the dictionary with the highest final probability. For example, the word "football" may show up in several of the input dictionaries used, but the password guess generator will place it in the most probable input dictionary to maximize the number of times it is used.

An additional program for evaluating the probability of input dictionaries:
./passPattern <dictionary> <password list>

The options should be fairly self-explanatory.
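The text doesn't describe the metric passPattern computes, so the following is only a plausible stand-in of my own, not the actual algorithm: estimate a dictionary's -dprob by the fraction of passwords in a known list whose alphabetic runs are all covered by the dictionary.

```python
import re

def dictionary_coverage(dictionary, passwords):
    """Fraction of passwords whose alphabetic runs all appear in the
    dictionary -- a rough, hypothetical stand-in for a -dprob estimate."""
    words = {w.strip().lower() for w in dictionary}
    hits = sum(
        1 for pw in passwords
        if (runs := re.findall(r"[a-zA-Z]+", pw))
        and all(r.lower() in words for r in runs)
    )
    return hits / len(passwords)

rate = dictionary_coverage(["password", "monkey"],
                           ["password12", "!!MONKEY", "zebra99", "123456"])
print(rate)  # 0.5 -> suggests a -dprob around 0.5 for this password list
```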
Probabilistic Password Cracker
Download: Probabilistic Password Cracker and related files (567k, v. 2, Jul 29, 2009, 9:06 PM, Matt Weir)