Friday, July 19, 2013

Enumerating web services with classify.webbies.py

classify.webbies.py is a Python script that captures and presents a high-level overview of all the web listeners within a defined scope. This allows the user to spot the more interesting web targets efficiently, no matter how many web services were discovered. The script enumerates each web listener to determine whether the service uses SSL, the banner of the web service, the title of the web application, and whether the application has interactive components such as forms and logins. Finally, the script can also take a screenshot of the web application.

Throughout our engagements, we have often used Nessus or Nmap to scan targets for open ports and the services possibly listening on them. However, for web services, Nessus does not indicate whether a service is HTTP or HTTPS, and neither tool says much about the web applications that may be running on the discovered ports. When confronted with thousands of web listeners, manually inspecting each service is a soul-draining task. I find it especially frustrating when the bulk of the listeners merely redirect to a previously inspected host or to a site that is out of scope.

To automate the task of inspecting the discovered web listeners, I put together a Python script to classify them. At the time of writing, the accepted input types are Nessus .nbe files, Nmap .gnmap files, and text files listing hosts and ports in "host:port" format. With this input, the script attempts to connect to each web listener and determine several details:
  • whether the service is HTTP or HTTPS
  • the web service banner
  • whether form elements are present
  • whether a login is present
The results are printed in CSV form for easy grep-fu and import into other tools; a rough sketch of the classification logic follows below.
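
To make the classification step concrete, here is a minimal sketch of how a single host:port pair might be probed. This is an illustration of the approach, not the script's actual code; the classify() helper, the regex heuristics, and the CSV field names are my own assumptions.

import csv
import re
import socket
import ssl
import sys
import urllib.error
import urllib.request

def classify(host, port, timeout=5):
    """Probe host:port, trying HTTPS first and falling back to HTTP.

    Returns a dict with the scheme, Server banner, page title, and
    simple form/login heuristics, or None if nothing answered.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE        # self-signed certs are common in scans
    for scheme in ("https", "http"):
        url = "%s://%s:%d/" % (scheme, host, port)
        try:
            resp = urllib.request.urlopen(url, timeout=timeout, context=ctx)
            body = resp.read(65536).decode("utf-8", "replace")
        except (urllib.error.URLError, socket.error, ssl.SSLError):
            continue
        title = re.search(r"<title[^>]*>(.*?)</title>", body, re.I | re.S)
        return {
            "host": host,
            "port": port,
            "scheme": scheme,
            "banner": resp.headers.get("Server", ""),
            "title": title.group(1).strip() if title else "",
            "forms": bool(re.search(r"<form\b", body, re.I)),
            "login": bool(re.search(r"type=[\"']?password", body, re.I)),
        }
    return None

if __name__ == "__main__":
    # Input: a text file of host:port lines; output: CSV on stdout.
    fields = ["host", "port", "scheme", "banner", "title", "forms", "login"]
    writer = csv.DictWriter(sys.stdout, fieldnames=fields)
    writer.writeheader()
    with open(sys.argv[1]) as listing:
        for line in listing:
            host, _, port = line.strip().rpartition(":")
            result = classify(host, int(port))
            if result:
                writer.writerow(result)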

When enumerating a web service, the script follows a predetermined number of redirects as long as the redirection stays within the defined scope. Additionally, the script is threaded, which makes classification extremely fast, and it works through proxychains. To top it off, it has a screen capture option that renders a PNG screenshot of each web application that responded with a 200 HTTP status code. Unfortunately, the screen capture feature is not threaded and will take some time to complete. The original screen capture class was written by plumo and can be found here: http://webscraping.com/blog/Webpage-screenshots-with-webkit/. I have modified that version by adding a spoofed user-agent and disabling JavaScript.
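
For illustration, scope-limited redirect handling could look something like the sketch below. The follow_in_scope() helper and the hop limit are assumptions made for this example, not the script's internals.

import http.client
import ssl
from urllib.parse import urljoin, urlparse

def follow_in_scope(url, in_scope_hosts, max_hops=5):
    """Follow HTTP redirects manually, giving up when a hop leaves scope
    or the hop limit is reached. Returns the final in-scope URL or None."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE       # internal hosts rarely have valid certs
    for _ in range(max_hops):
        parts = urlparse(url)
        if parts.scheme == "https":
            conn = http.client.HTTPSConnection(parts.hostname, parts.port,
                                               timeout=5, context=ctx)
        else:
            conn = http.client.HTTPConnection(parts.hostname, parts.port, timeout=5)
        conn.request("GET", parts.path or "/")
        resp = conn.getresponse()
        if resp.status not in (301, 302, 303, 307, 308):
            return url                    # no more redirects: final destination
        url = urljoin(url, resp.getheader("Location", ""))
        if urlparse(url).hostname not in in_scope_hosts:
            return None                   # redirect left the defined scope
    return None                           # redirect loop or too many hops

Using urljoin() handles relative Location headers, and comparing only the hostname against the scope list keeps ports and paths flexible.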

We have found this Python script to be useful and hope the community does as well.

The script can be downloaded from here:
https://sunera-ap-team.googlecode.com/git/classify_webbies/classify.webbies.py

Version 3.1 Change notes:

  • Removed Qt; it was too buggy and unreliable for our purposes. PhantomJS is now used instead. 
  • Added -A, which analyzes the webbies and groups them by similarity. This generates graphs as .ps files that can later be converted to JPG, PNG, etc. 
  • Generates pickle files of the webbies when given the debug flag or the analyze option. These can be reloaded into classify.webbies.py with the -P option to reanalyze or re-screenshot hosts without re-crawling. 
  • Fixed a ton of random bugs.


Tuesday, May 21, 2013

Download Multiple Nessus Reports via the Nessus XML-RPC API

Several months back I began to look at various ways to automate some of the common tasks that are usually performed within the Nessus GUI. I was familiar with nessuscmd, and had leveraged that tool within some scripts, but it didn't fit the bill for a lot of the administrative activity that I thought could be automated, or at least made more efficient.

The catalyst for my digging into this was that I wanted to be able to download multiple reports from a Nessus scanner (or scanners) en masse, without having to log in and manually download them. I didn't have access to SecurityCenter, and I'm not even sure whether it offers that feature. I started looking into the Nessus API documentation and found that I could interact with Nessus and reproduce almost all of the GUI's features via HTTP POSTs and GETs.
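
As a taste of what that looks like, here is a minimal login sketch in Python. The /login endpoint and the <token> element follow the XML-RPC API documentation of that era, but treat the details as assumptions that may vary across Nessus versions; the server URL is hypothetical.

import ssl
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def nessus_login(server, login, password):
    """POST credentials to /login and return the session token.

    `server` is e.g. "https://scanner.example.com:8834" (hypothetical host).
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # scanners typically use self-signed certs
    data = urllib.parse.urlencode({"login": login, "password": password}).encode()
    resp = urllib.request.urlopen(server + "/login", data, context=ctx)
    # The reply is XML; the session token lives in a <token> element.
    return ET.fromstring(resp.read()).findtext(".//token")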

I coded up a Python script that connects to a Nessus scanner, prompts you for a report-name query (accepting common wildcard characters such as * and ?), lists the matching reports, and then downloads them. You can specify the file type to download (.nessus or .nbe), or request both formats. Leaving the query string blank downloads every report on the scanner. A rough sketch of the listing-and-download loop appears below.
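
The sketch reuses the token from the login example above. The /report/list and /file/report/download endpoints and the <readableName> field come from the old XML-RPC API docs; verify them against your scanner version, and note that the .nbe conversion step is omitted here.

import fnmatch
import ssl
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def download_reports(server, token, pattern=""):
    """Fetch every report whose readable name matches the wildcard pattern.

    An empty pattern downloads everything, mirroring the blank-query behavior.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    data = urllib.parse.urlencode({"token": token}).encode()
    resp = urllib.request.urlopen(server + "/report/list", data, context=ctx)
    for report in ET.fromstring(resp.read()).iter("report"):
        uuid = report.findtext("name")                 # the report's UUID
        name = report.findtext("readableName") or uuid
        if not fnmatch.fnmatch(name, pattern or "*"):
            continue
        query = urllib.parse.urlencode({"token": token, "report": uuid})
        dl = urllib.request.urlopen(server + "/file/report/download?" + query,
                                    context=ctx)
        with open(name + ".nessus", "wb") as out:
            out.write(dl.read())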

I've found this script to be useful, especially when downloading 100+ scan files from multiple scanners.

You can download the script here: https://code.google.com/p/sunera-ap-team/downloads/list