Friday, May 16, 2014

Full Disk Encrypted Kali™ AWS Instance


This post details how to create a full-disk-encrypted Kali instance running on AWS EC2.
The gist of what we'll do is spin up a Kali instance from Offensive Security's official AWS AMI (unencrypted) as a build server, optionally install the tools we want, copy the "/" partition, then run some scripts from the build server. These scripts launch a separate EC2 instance with the "/" partition fully encrypted. The "/boot" partition is not encrypted; via an initramfs hook and script combo it launches a web server that runs from /boot, listens on that instance's external IP address, and accepts the disk decryption password. Once you browse to the web server and enter your password, the "/" partition is unlocked and the boot process continues.
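Conceptually, the unlock step is a tiny web form sitting in front of cryptsetup. The sketch below is my own illustration, not the actual encroot code: the handler, the form field name, and the callback are all made up, and in the real setup the passphrase would be piped to "cryptsetup luksOpen" against the root volume.

```python
# Conceptual sketch of the initramfs-stage unlock listener (NOT the
# encroot implementation): accept a passphrase over HTTP and hand it
# to a callback that would, on a real system, unlock the LUKS volume.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

def make_handler(on_passphrase):
    class UnlockHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            fields = parse_qs(self.rfile.read(length).decode())
            # Real setup: pipe this to "cryptsetup luksOpen" on "/".
            on_passphrase(fields.get("passphrase", [""])[0])
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Unlocking...")

        def log_message(self, *args):
            pass  # keep the boot console quiet

    return UnlockHandler
```

In encroot's actual implementation this listener lives in /boot, is started by an initramfs hook, and serves over HTTPS on the instance's external address.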

For the sake of this tutorial it will be assumed that the reader has a solid understanding of using AWS and starting instances, etc. This is not a tutorial on how to launch instances, or an introduction to using AWS EC2.

The Tools and Prep

  1. You need to create an AWS EC2 build instance based on the official Kali AMI by Offensive Security. You can use the free tier (t1.micro).
  2. Log in to your build instance (don't forget the default username is "kali") and install cryptsetup and some necessary tools:
    apt-get update && apt-get install cryptsetup perl curl kali-linux
    It's simpler to use the handy Kali metapackages (e.g. kali-linux) to install common tools. I accept all the default values and just hit enter on the prompts for locale info when installing cryptsetup. NOTE: You can ignore the warnings from cryptsetup about not reading fstab (more info here).

  3. You'll need a fantastic tool called encroot that a co-worker of mine (thanks Matt!) brought to my attention. The author of encroot has written his own Perl wrapper for AWS API calls. NOTE: I've been in touch with the author of encroot, and he may release an update that gives users more flexibility. In the meantime we'll work with this rough write-up of mine. From /root I run
    wget && tar -xvzf encroot_2013-06-20.tgz
  4. Follow the instructions provided with the encroot tools in the README.txt file (and the SSL.txt file if you need it). These files are very well written and clear, so for brevity's sake we won't repeat the steps in this post. If you're not familiar with the AWS API, one step that may stump you is step 4 in README.txt: creating the "/root/.awsapirc" file, which contains the credentials to access your AWS account. You can learn how to create the access ID and secret key here if you haven't already done so. NOTE: You only get one chance to copy down the secret key and access ID (once you create the user), so be sure to do so; you cannot retrieve them after the fact. Also, be sure to grant the user the necessary permissions on your AWS account.

  5. I have modified two files within the encroot suite to produce a Kali instance rather than Ubuntu. There are enough modifications that it makes more sense for you to just use the two files; I found it's safest to pull them down with git, to ensure spacing is maintained.
    git clone
  6. Copy the two .sh files beneath the sunera-ap-team/encroot_kali dir to the same directory where you extracted the other encroot tools. If you're curious about the changes, feel free to diff them against the originals.

  7. Allocate an Elastic IP address. If you're using a VPC (Virtual Private Cloud, which I recommend), be sure to allocate the Elastic IP for use in that VPC. I like to use VPCs for AWS networking because, in my opinion, they allow more flexibility and control.

Do It

Since this is Kali, and we know what we're doing, run all of this as root on your EC2 instance, or use sudo.
  1. On your build server create a directory called "/kali" then rsync the entire "/" to it:
    mkdir /kali
    rsync -avh --exclude={kcore,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found,/root/encroot*,/kali} / /kali
  2. Now run the script with the appropriate arguments. This is all detailed in the README.txt file and the encroot man page. You can find the ID values for the switches below in your AWS dashboard. The IP address in the example below is the un-associated Elastic IP address you provisioned earlier. The "group" option is the ID of a security group (think firewall rules); make sure you have one that allows TCP 22 and 443 inbound. The "subnet" option is the subnet ID you want to use, and the "vpc" option is the VPC ID. Again, you can get all of these from your AWS dashboard. Consult the encroot man page if you need to customize the command line to match your setup.
    ./ --group "abcd123" --subnet "abcd123" --vpc "abcd123"
  3. Now just follow the prompts. You should be prompted for passwords for keyslots 0 and 1 for the LUKS volume.

  4. When it's done you'll have a new instance waiting for you. Simply browse to the IP address using https and enter your disk decryption password.

  5. If anything goes wrong, the Cleanup() function will be called, which removes the newly created volumes. The most common issue I've seen is the build server missing a tool or program needed by one of the encroot scripts. I had issues with initramfs hooks not running properly when I hadn't installed the myriad tools the kali-linux metapackage offers. When I had installed only the bare minimum tools I thought were needed (e.g. curl, wget, perl, cryptsetup, git), the encroot scripts would fail; only after installing more tools (and their various dependencies) would the scripts run. I found it easier to simply install the kali-linux metapackage than to figure out what was missing.

Thursday, May 8, 2014

Starting an Active Directory Password Auditing Program

Introduction and Background

This post provides an overview of the systemic issue of weak password requirements within organizations, including a tutorial detailing the steps to begin an enterprise Active Directory (AD) password auditing program. The goal of AD password auditing is to provide usable metrics and actionable insight to help organizations strengthen their password policies, which in turn raises their overall security posture.

In our experience as consultants we assess many different types of networks, which vary by size, industry, complexity, and age. However, one of the most common issues we find in need of improvement is passwords.

AD is commonly used as an authentication source for organizations. Users authenticate to AD to log into their workstations, to access email or VPNs, and to access internal web applications. As such, it is an extremely critical piece of most environments and of high-value to attackers.

A corporate culture with an over-reliance on meeting the bare minimum requirements of compliance standards or other regulatory guidance has led organizations to implement password policies that do not provide sufficient protection from attack.

Online Attacks Against Passwords

One of the primary ways weak password requirements affect an organization's security posture is by allowing users to choose passwords that can be brute-forced, or put more simply: guessed.

Passwords like Welcome1, Happy123, or Password1 are still in use today. Obtaining a list of users is not overly complex, and attempting a handful of historically weak passwords against each discovered user keeps the number of authentication attempts per account low (avoiding lockouts) while often yielding successful logins.
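Part of the problem is that these passwords often satisfy typical complexity policies. As a rough illustration, here is my own approximation of the Windows "complexity" rule (minimum length plus three of four character categories); it is a sketch, not a Microsoft API:

```python
import string

def meets_ad_complexity(password, min_length=8):
    """Rough approximation of the default Windows complexity rule:
    minimum length, plus characters from at least three of the four
    categories (uppercase, lowercase, digit, symbol)."""
    if len(password) < min_length:
        return False
    categories = [
        any(c in string.ascii_uppercase for c in password),
        any(c in string.ascii_lowercase for c in password),
        any(c in string.digits for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(categories) >= 3

# Each of these trivially guessable passwords passes the policy check.
for weak in ("Welcome1", "Happy123", "Password1"):
    assert meets_ad_complexity(weak)
```

The point: "complexity-compliant" and "hard to guess" are not the same thing, which is why guessing attacks with a short list of candidates keep working.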

Offline Attacks Against Passwords

AD stores passwords as hashes, one-way transformations of the plaintext, rather than in a recoverable encrypted form. If an attacker retrieves a password hash, he or she can perform offline attacks against it in an attempt to discover the plaintext value. The only factor keeping attackers from recovering the plaintext is the strength and complexity of the password, and the advent of GPU-enabled cracking tools has made it possible to test billions of candidate passwords per second.
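To put "billions per second" into perspective, here is a back-of-the-envelope calculation; the guess rate is an assumed round number for illustration, not a benchmark of any particular hardware:

```python
# Time to exhaust the keyspace of an 8-character password drawn from
# the 95 printable ASCII characters, at an assumed 10 billion guesses
# per second (a round illustrative number, not a measured rate).
charset = 95
length = 8
rate = 10_000_000_000  # guesses per second (assumption)

keyspace = charset ** length
seconds = keyspace / rate
print(f"keyspace: {keyspace:.2e} candidates")
print(f"worst case to exhaust: {seconds / 86400:.1f} days")
```

Under those assumptions an 8-character password falls in about a week of worst-case search, and a real attack with wordlists and rules finishes far sooner for human-chosen passwords.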

Chances are, if your organization uses laptops and those laptops run the Windows operating system, user password hashes have been exposed to unknown entities. If your organization has not implemented multi-factor authentication for remote access, and the password hash is successfully "cracked," then an unauthorized entity may have direct access into your environment.

Why Audit AD?

Performing AD audits can help identify areas that need improvement in your organization's password policy. It can also show where passwords are re-used, or bring to light systemic failures to follow existing policies or guidelines. Performing AD password audits can also provide metrics that can be used to measure the success of password policy enhancement efforts.

Technical Steps and Tools


This tutorial assumes the user will use a Linux operating system with Python installed. Any virtual machine (e.g. VMware or VirtualBox) or live-boot instance of a current Linux distribution will do. For example, a popular IT security-centric Linux distribution is Kali Linux.

When referencing commands that should be entered in a prompt or terminal window the text will look like the sample below:
Hello World!

The following two packages should be installed on the system:

libesedb - Joachim Metz's library and scripts for interacting with the Extensible Storage Engine (ESE) Database File (EDB) format.

ntdsxtract - Csaba Barta's framework for working with NT Directory Information Tree (NTDS.dit) files containing AD objects.


1. Gather NTDS.dit file and System registry hive from the Domain Controller.

This is all done from the domain controller. You will need two files: %SystemRoot%\ntds\NTDS.dit and %SystemRoot%\System32\config\System.

Start the Volume Shadow Copy Service (VSS).
net start vss

Create volume shadow copy of drive containing the target files.
vssadmin create shadow /for=c: 

List  the volume shadow copies (to make it easier to copy/paste the path).
vssadmin list shadows

Copy the SYSTEM and NTDS.dit files off; in this example I'm copying to the T: drive.
copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\windows\ntds\ntds.dit T:\

copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\windows\system32\config\SYSTEM T:\

2.  Extract data from NTDS.dit.
This is all done from your Linux system (not the domain controller).

Extract the libesedb tarball.
tar -xvzf libesedb-alpha-20120102.tar.gz

Enter the extracted dir and compile libesedb tools.
cd libesedb-20120102 && ./configure && make && sudo make install

Navigate to the libesedb-20120102/esedbtools directory and run the esedbexport script with appropriate arguments.

./esedbexport -t /home/demo/AD /home/demo/samba/ntds.dit

This will extract data from the NTDS.dit file and, using the example above, place it in the "/home/demo/AD.export" folder. The resulting output will need to be processed with the NTDSXtract framework. We care about the datatable and link_table objects.

3. Extract password hashes with NTDSXtract.

Extract Barta's NTDSXtract framework.

I prefer using a Python script by LaNMaSteR53, which outputs the extracted hashes in pwdump format. Run it (or the script that comes with the framework) from the directory where you extracted NTDSXtract.

./ /home/demo/AD.export/datatable.3 /home/demo/AD.export/link_table.5 --passwordhashes /home/demo/samba/SYSTEM
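The resulting file contains one pwdump-format record per line (username:RID:LM hash:NT hash:::). If you want to post-process the output yourself, a short Python sketch will do; the sample record below is fabricated, and the NT hash shown is the well-known hash of "password":

```python
def nt_hashes(pwdump_lines):
    """Extract (username, NT hash) pairs from pwdump-format lines:
    username:RID:LM hash:NT hash:::"""
    pairs = []
    for line in pwdump_lines:
        fields = line.strip().split(":")
        if len(fields) >= 4:
            pairs.append((fields[0], fields[3]))
    return pairs

# Fabricated sample record; the NT hash is the well-known hash of
# the string "password".
sample = ["jdoe:1104:aad3b435b51404eeaad3b435b51404ee:"
          "8846f7eaee8fb117ad06bdd830b7586c:::"]
print(nt_hashes(sample))
```

This kind of filtering is handy for feeding only the NT hashes to a cracker, or for de-duplicating accounts that share a hash (i.e. share a password).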

4. Crack the hashes.

Now you can use a password auditing program like John the Ripper (John) to attempt to determine the cleartext values of the password hashes. Password cracking is both an art and a science, and you will find many tutorials and methods. What follows is an example to get started, using the "rockyou.txt" file as the wordlist for a dictionary-based attack against the password hashes.

Install John.

git clone
cd JohnTheRipper/src && make clean generic

Now run john.

./john --pot=demo.pot --wordlist=rockyou.txt --rules --format=nt2 /home/demo/ad_hashes.txt

You can use john's "--show" option to display the results.

./john --pot=demo.pot --show --format=nt2 /home/demo/ad_hashes.txt

This will output the cracked passwords and usernames for your input file of hashes.

As your password auditing program matures you may want to use GPU-enabled cracking and tools like oclHashcat. However, this should be enough to get you started auditing your AD environment.

5. Analyze and Present the Data. 

Now we need to present highlights of the data. We use a tool called pipal, a Ruby script that takes an input file of cracked passwords and returns useful metrics that can easily be incorporated into reports or presentations.

Get pipal.

git clone

cd pipal && ruby pipal.rb demo_pipal_in.txt >> pipal_out.txt
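If you want a couple of quick numbers programmatically, alongside pipal's fuller report, the same kinds of highlights are easy to compute by hand; this sketch is my own and not part of pipal:

```python
from collections import Counter

def password_stats(passwords):
    """Pipal-style highlights: length distribution and most common
    values from a list of cracked cleartext passwords."""
    lengths = Counter(len(p) for p in passwords)
    top = Counter(passwords).most_common(3)
    return lengths, top

# Fabricated sample of cracked passwords for illustration.
cracked = ["Password1", "Password1", "Welcome1", "Summer2014", "Happy123"]
lengths, top = password_stats(cracked)
print("length distribution:", dict(lengths))
print("most common:", top)
```

Metrics like these (shortest length in use, most reused password) translate directly into the actionable findings you present back to the organization.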

Here are some examples of pipal output:

6. Clean up.

Ensure that the output of the AD password audit, the NTDS.dit file, SYSTEM registry hive, and resultant password hashes and retrieved passwords are securely deleted. The data should be treated as sensitive information and protected and disposed of appropriately.

On Linux systems the "shred" command can be used on files left over from the audit.


The information provided above is not an all-encompassing password auditing program. The steps outlined should lay the groundwork to help organizations formulate a password policy enhancement strategy, and further strengthen their overall security posture.



Wednesday, February 12, 2014

Matt Wood & Nick Popovich at BSides Tampa

Sunera's penetration testing team members Matt Wood and Nick Popovich will both be presenting at this week's BSides Tampa security conference!

Nick Popovich is a Senior Consultant on the A&P team, and recently presented at the Shmoocon conference. His talk, Enterprise Active Directory Password Auditing, will be at 2:30 PM in Track 2. Here is the summary:

Most organizations enforce some form of password complexity requirements for their Active Directory (AD) users. They may be required by a compliance vertical, or they are attempting to employ an industry best practice. However, as security consultants, we have observed that not many organizations take the time to audit their Active Directory passwords, and therefore are unaware if their password policy is being enforced, or if it requires enhancement. This talk will detail the process and steps necessary to audit AD passwords using publicly available tools, and provide metrics that can be used to identify common weaknesses in passwords.

Matt Wood is a Manager on the A&P team, and is a veteran presenter from conferences such as BlackHat, Source, RSA, and OWASP. His presentation, What's lurking inside the "Real-Time Web"?, will be in Track 2 at 4:30 PM. The talk summary is below:

Increasingly, "real-time" web applications are utilizing new protocols implemented by HTTP clients and servers, such as WebSockets and SPDY. This presentation will demonstrate how these new capabilities permit attackers to establish bidirectional communication with compromised hosts more effectively and more stealthily, bypassing outbound connection restrictions in the process. We will cover the theory, historical techniques, defensive methodologies, and new techniques throughout the presentation.

At the heart of these techniques is the ability to establish bidirectional communication channels on top of HTTP connections, which is in stark contrast to the original intent of HTTP. These new channels defeat even the best DMZ traffic policies, which generally disallow all connectivity outbound from the DMZ and only allow certain ports (80, 443) inbound. Attackers have known for many years how to abuse the trusted relationship between web servers (or any exposed service!) and perimeter firewalls (inbound ports). Generally, though, these tricks come at a price, and the way such applications functioned meant a vigilant security team could detect them.

We will discuss how attackers can easily bypass outbound firewall rules, the history of these methodologies, and common defensive techniques combating this threat. Furthermore, new techniques will be described that utilize "real-time" protocols; specifically, how these new techniques can create back-channels while hiding from those vigilant security teams, increase the throughput and reliability of an attacker's "VPN", and arbitrarily direct traffic from the Internet into a DMZ environment.

Thursday, January 16, 2014

Sunera's Nick Popovich Speaking at ShmooCon 2014

Nick Popovich, senior consultant on the Sunera Attack & Penetration team, will be speaking at this week's ShmooCon 2014 conference.

Nick will be speaking about a recent research project which brought to light an information exposure vulnerability at a major US-based ISP. What began as simple curiosity about the inner workings of an application led to the ability to list wireless network names and wireless encryption keys (among other things), armed only with a WAN IP address.

His research also shows that coordinated disclosure can go right.

Nick will be in the "Bring it On!" track on Saturday at 11 AM.

For the full abstract or more information, please see the link below, and look here for more information following the conference.