Friday, July 19, 2013

This post presents a Python script that captures and presents a high-level overview of all the web listeners within a defined scope. This allows the user to spot the more interesting web targets with efficiency and relative ease, regardless of the number of discovered web services. The script enumerates each web listener to determine whether the service uses SSL, record the banner of the web service and the title of the web application, and detect whether the application has any interactive components such as forms and logins. Lastly, the script can also take a screenshot of the web application.

Throughout our engagements, we have often used Nessus or nmap to scan targets to find open ports and the possible services listening on those ports. However, with regard to web services, Nessus does not list whether the service is http or https, and neither tool gives much information about the possible web applications running on the discovered ports. When confronted with thousands of web listeners, it can be a soul-draining task to manually inspect each service. I find it especially frustrating when the bulk of web listeners merely redirect to either a previously inspected host or a site that is out of scope.

To automate the task of inspecting the discovered web listeners, I put together a Python script to classify them. At the time of writing, the accepted input types are Nessus nbe files, nmap gnmap files, or text files containing a list of hosts and ports in "host:port" format. With this input, the script will attempt to connect to each of the web listeners and determine several details:
  • http or https
  • web service banner
  • are there form elements
  • is a login present
The result will be printed out in CSV form for easy grep-fu and importation into other tools.
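The overall flow can be sketched in a few self-contained helpers. This is a minimal illustration, not the script itself: the function names are my own, the HTML checks are simple regexes standing in for whatever detection the script actually performs, and only the "host:port" input format is shown.

```python
import csv
import io
import re

def parse_targets(text):
    """Parse lines in "host:port" format (one of the accepted input types)."""
    targets = []
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        host, _, port = line.rpartition(":")
        if port.isdigit():
            targets.append((host, int(port)))
    return targets

def classify_html(body):
    """Pull out the details the script records: page title, forms, login hints."""
    m = re.search(r"<title[^>]*>(.*?)</title>", body, re.I | re.S)
    title = m.group(1).strip() if m else ""
    has_form = bool(re.search(r"<form\b", body, re.I))
    has_login = bool(re.search(r'type=["\']?password|login', body, re.I))
    return title, has_form, has_login

def to_csv_row(host, port, scheme, banner, body):
    """Emit one CSV record per listener for easy grep-fu."""
    title, has_form, has_login = classify_html(body)
    buf = io.StringIO()
    csv.writer(buf).writerow([host, port, scheme, banner, title, has_form, has_login])
    return buf.getvalue().strip()
```

Feeding a fetched response body through `to_csv_row` yields one line per listener, ready to be sorted or pulled into a spreadsheet.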

When enumerating the web service, the script will follow a predetermined number of redirects as long as the redirection stays within the defined scope. Additionally, the script is threaded to make classifying extremely fast, and it can work through proxychains. To top it off, it has a screen capture option that will render a PNG screenshot of each web application that responded with a 200 HTTP status code. Unfortunately, the screen capture feature is not threaded and will take some time to complete. The original screen capture class was written by plumo and can be found here: I have modified this version by adding a spoofed user-agent and disabling JavaScript.
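The scope-bounded redirect logic can be illustrated as follows. This is a sketch under assumptions: the redirect cap of 5 and the function names are mine, and the `redirects` dict stands in for live `Location` headers so the example stays offline.

```python
from urllib.parse import urlparse

MAX_REDIRECTS = 5  # illustrative cap, standing in for the script's predetermined limit

def in_scope(url, scope_hosts):
    """Only follow a redirect whose host stays inside the defined scope."""
    return urlparse(url).hostname in scope_hosts

def follow(start_url, redirects, scope_hosts):
    """Walk a {url: location} map, stopping at the cap or at the scope edge."""
    url, hops = start_url, 0
    while url in redirects and hops < MAX_REDIRECTS:
        nxt = redirects[url]
        if not in_scope(nxt, scope_hosts):
            break  # the chain left the scope: stop, keep the last in-scope URL
        url, hops = nxt, hops + 1
    return url
```

Stopping at the scope edge is what keeps the tool from wandering into previously inspected hosts or out-of-scope sites.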

We have found this python script to be useful and hope the community does as well.

The script can be downloaded from here:

Version 3.1 Change notes:

  • Removed QT. It was too buggy and unreliable for our uses. Instead, phantomjs is now used. 
  • Added -A, which analyzes the webbies and groups them according to similarity. This method generates graphs in the form of .ps files to be later converted into jpg, png, etc. 
  • Generates pickle files of webbies when given the Debug flag or Analyze option. These can be reloaded with the -P option to reanalyze or re-screenshot the hosts without crawling. 
  • Fixed a ton of random bugs.
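The pickle round-trip behind the -P option can be sketched like this. The `Webby` class and its field names are illustrative stand-ins for the script's per-host records, not its actual data model.

```python
import os
import pickle
import tempfile

class Webby:
    """Illustrative per-host record; the real script's fields may differ."""
    def __init__(self, host, port, scheme, title):
        self.host, self.port, self.scheme, self.title = host, port, scheme, title

def save_webbies(webbies, path):
    # Serialize the crawl results so a later run can skip the crawl entirely.
    with open(path, "wb") as f:
        pickle.dump(webbies, f)

def load_webbies(path):
    # Reload the saved records for reanalysis or re-screenshotting.
    with open(path, "rb") as f:
        return pickle.load(f)
```

Persisting the crawled state this way means an expensive enumeration only has to happen once per engagement.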