Tuesday, December 18, 2012

Choose the 2012 Toolsmith Tool of the Year

Merry Christmas and Happy New Year! It's that time again.
Please vote below to choose the best of 2012, the 2012 Toolsmith Tool of the Year.
We covered some outstanding information security-related tools in ISSA Journal's toolsmith during 2012; which one do you believe is the best?
I appreciate you taking the time to make your choice.
Review all 2012 articles here for a refresher on any of the tools listed in the survey.
You can vote through January 31, 2013. Results will be announced February 1, 2013.


Tuesday, December 04, 2012

toolsmith: ModSecurity for IIS



Part 2 of 2 - Web Application Security Flaw Discovery and Prevention

Prerequisites/dependencies
Windows OS with IIS (Win2k8 used for this article)
SQL Server 2005 Express SP4 and Management Studio Express for vulnerable web app
.NET Framework 4.0 for ModSecurity IIS

Introduction

December’s issue continues where we left off in November with part two in our series on web application security flaw discovery and prevention. In November we discussed Arachni, the high-performance, modular, open source web application security scanning framework. This month we’ll follow the logical work flow from Arachni’s distributed, high-performance scan results to how to use the findings as part of mitigation practices. One of Arachni’s related features is WAF Realtime Virtual Patching.
Trustwave Spider Lab’s Ryan Barnett has discussed the concept of dynamic application scanning testing (DAST) data that can be imported into a web application firewall (WAF) for targeted remediation. This discussion included integrating export data from Arachni into ModSecurity, the cross–platform, open source WAF for which he is the OWASP ModSecurity Core Rule Set (CRS) project leader. I reached out to Ryan for his feedback with particular attention to ModSecurity for IIS, Microsoft’s web server.
He indicated that WAF technology has gained traction as a critical component of protecting live web applications for a number of key reasons, including:
1)      Gaining insight into HTTP transactional data that is not provided by default web server logging
2)      Utilizing Virtual Patching to quickly remediate identified vulnerabilities
3)      Addressing PCI DSS Requirement 6.6
The ModSecurity project is just now a decade old (first released in November 2002), has matured significantly over the years, and is the most widely deployed WAF in existence, protecting millions of websites. “Until recently, ModSecurity was only available as an Apache web server module. That changed, however, this past summer when Trustwave collaborated with the Microsoft Security Response Center (MSRC) to bring the ModSecurity WAF to both the Internet Information Services (IIS) and nginx web server platforms. With support for these platforms, ModSecurity now runs on approximately 85% of internet web servers.”
Among the features that make ModSecurity so popular, there are a few key capabilities that make it extremely useful:
·         It has an extensive audit engine which allows the user to capture the full inbound and outbound HTTP data. This is not only useful when reviewing attack data but is also extremely valuable for web server administrators who need to troubleshoot errors.
·         It includes a powerful, event-driven rules language which allows the user to create very specific and accurate filters to detect web-based attacks and vulnerabilities.
·         It includes an advanced Lua API which provides the user with a full-blown scripting language to define complex logic for attack and vulnerability mitigation.
·         It also includes the capability to manipulate live transactional data.  This can be used for a variety of security purposes including setting hacker traps, implementing anti-CSRF tokens, or Cryptographic HASH tokens to prevent data manipulation.
In short, Ryan states that ModSecurity is extremely powerful and provides a very flexible web application defensive framework that allows organizations to protect their web applications and quickly respond to new threats.
I also sought details from Greg Wroblewski, Microsoft’s lead developer for ModSecurity IIS.
“As ModSecurity was originally developed as an Apache web server module, it was technically challenging to bring together two very different architectures. The team managed to accomplish that by creating a thin layer abstracting ModSecurity for Apache from the actual server API. During the development process it turned out that the new layer is flexible enough to create another ModSecurity port for the nginx web server. In the end, the security community received a new cross-platform firewall, available for the three most widely used web servers.
The current ModSecurity development process (still open, recently migrated to GitHub) preserves compatibility of features among the three ported versions. For the IIS version, only features that rely on specific web server behavior show functional differences from the Apache version, while the nginx version currently lacks some of the core features (like response scanning and content injection) due to limited extensibility of the server. Most ModSecurity configuration files can be used without any modifications between Apache and IIS servers. The upcoming release of the RTM version for IIS will include a sample of the ModSecurity OWASP Core Rule Set in the installer.”

Installing ModSecurity for IIS

In order to test the full functionality of ModSecurity for IIS I needed to create an intentionally vulnerable web application, and did so following guidelines provided by Metasploit Unleashed. The author wrote these guidelines for Windows XP SP2; I chose Windows Server 2008 just to be contrarian. I first established a Win2k8 virtual machine, enabled the IIS role, downloaded and installed SQL Server 2005 Express SP4, .NET Framework 4.0, and SQL Server 2005 Management Studio Express, then downloaded the ModSecurity IIS 2.7.1 installer. We’ll configure ModSecurity IIS after building our vulnerable application. When configuring SQL Server 2005 Express, ensure you enable SQL Server Authentication and set the password to something you’ll use in the connection string established in Web.config. I used p@ssw0rd1 to meet the required complexity. Note: it’s “easier” to build a vulnerable application using SQL Server 2005 Express rather than 2008 or later; for time’s sake and reduced troubleshooting, just work with 2005. We’re in test mode here, not production. That said, remember: you’re building this application to be vulnerable by design. Conduct this activity only in a virtual environment and do not expose it to the Internet. Follow the Metasploit guidelines carefully, but remember to establish a proper connection string in the Web.config (line 4) and build it from this sample I’m hosting for you rather than the one included with the guidelines. As an example, I needed to establish my actual server name rather than localhost, I defined my database name as crapapp instead of WebApp per the guidelines, and I used p@ssw0rd1 instead of password1 as described:
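As a sketch of what that connection string might look like (the appSettings key name and server name here are illustrative placeholders; the database name and password match the values used in this walkthrough):

```xml
<configuration>
  <appSettings>
    <!-- SQL Server Authentication against the local SQL Express instance.
         YOURSERVER\SQLEXPRESS and the key name are placeholders;
         crapapp and p@ssw0rd1 are the values used in this article. -->
    <add key="ConnectionString"
         value="server=YOURSERVER\SQLEXPRESS;database=crapapp;uid=sa;password=p@ssw0rd1" />
  </appSettings>
</configuration>
```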
I also utilized configurations recommended for the pending ModSecurity IIS install so go with my version.
Once you’re finished with your vulnerable application build you should browse to http://localhost and first pass credentials that you know will fail to ensure database connectivity. Then test one of the credential pairs established in the users table, admin/s3cr3t as an example. If all has gone according to plan you should be treated to a successful login message as seen in Figure 1.
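The login form is exploitable because it builds its SQL query by string concatenation. A minimal sketch of the vulnerable pattern (hypothetical Python for illustration, not the app's actual ASP.NET code):

```python
def build_login_query(login, password):
    # Vulnerable pattern: user input concatenated straight into the SQL text
    return ("SELECT * FROM users WHERE username = '" + login +
            "' AND password = '" + password + "'")

# Normal use: both values land inside quoted string literals
print(build_login_query("admin", "s3cr3t"))

# Injected input: the leading quote closes the username literal,
# OR 1=1 is always true, and -- comments out the password check
print(build_login_query("' OR 1=1--", "anything"))
```

This is exactly the class of input a WAF in front of the application can refuse before it ever reaches the database.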

FIGURE 1: A successful login to CrapApp
ModSecurity IIS installation details are available via TechNet but I’ll walk you through a bit of it to help overcome some of the tuning issues I ran into. Make sure you have the full version of .NET 4.0 installed and patch it in full before you execute the ModSecurity IIS installer you downloaded earlier.
Download the ModSecurity OWASP Core Rule Set (CRS) and, as a starting point, copy the files from base_rules to the crs directory you create in C:\inetpub\wwwroot. Also put the test.conf file I’m hosting for you in C:\inetpub\wwwroot. This will call the CRS that Ryan maintains and also allow you to drop any custom rules you may wish to create right in test.conf.
There are a few elements to be comfortable with here. Watch the Windows Application logs via Event Viewer both to debug any errors you receive and to see ModSecurity alerts once properly configured. I’m hopeful that the debugging time I spent will help save you a few hours, but watch those logs regardless. Also make regular use of the Internet Information Services (IIS) Manager to refresh the DefaultAppPool under Application Pools, as well as to restart the IIS instance after you make config changes. Finally, this experimental installation, intended to help get you started, is running in active mode versus passive. It will both detect and block what the CRS notes as malicious. As such, you’ll want to initially comment out all the HTTP Policy rules in order to play with the CrapApp we built above. To do so, open modsecurity_crs_30_http_policy.conf in the crs directory and comment out all lines that start with SecRule. Again, we’re in experiment mode here. Don’t deploy ModSecurity in production with the SecDefaultAction directive set to "block" without a great deal of testing in passive mode first or you’ll likely blackhole known good traffic.
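For orientation, test.conf is simply a ModSecurity configuration file. A minimal sketch of what it might contain is below; the Include path matches this walkthrough’s layout, while the custom rule, its id, and its message are arbitrary illustrations, not part of the hosted file:

```apacheconf
# Enable the rule engine; while tuning, DetectionOnly is the safer choice
SecRuleEngine On

# Load the CRS base rules copied into the crs directory
Include c:\inetpub\wwwroot\crs\*.conf

# Example custom rule (illustrative id/msg): deny a quote character
# in the CrapApp login parameter
SecRule ARGS:txtLogin "@rx ['\"]" \
    "id:900001,phase:2,t:none,deny,log,msg:'Quote in txtLogin'"
```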

Using ModSecurity and virtual patching to protect applications

Now that we’re fully configured, I’ll show you the results of three basic detections, then close with a bit of virtual patching for your automated web application protection pleasure. Figure 2 is a mashup of a login attempt via our CrapApp with a path traversal attack and the resulting detection and block as noted in the Windows Application log.

FIGURE 2: Path traversal attack against CrapApp denied
Similarly, a simple SQL injection such as '1=1-- against the same form field results in the following Application log entry snippet:
[msg "SQL Injection Attack: Common Injection Testing Detected"] [data "Matched Data: ' found within ARGS:txtLogin: '1=1--"] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.6"] [maturity "9"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"]

Note the various tags including a match to the appropriate OWASP Top 10 entry as a well as the relevant section of the PCI DSS.
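Those bracketed key/value pairs are easy to mine programmatically, which is handy if you want to report on compliance mappings across many alerts. A small sketch (a hypothetical helper, Python for illustration) that pulls the fields from an entry like the one above:

```python
import re

def parse_modsec_fields(entry):
    """Collect the [key "value"] pairs from a ModSecurity alert entry.
    Repeated keys (tag, for instance) are gathered into lists."""
    fields = {}
    for key, value in re.findall(r'\[(\w+) "([^"]*)"\]', entry):
        fields.setdefault(key, []).append(value)
    return fields

entry = ('[msg "SQL Injection Attack: Common Injection Testing Detected"] '
         '[severity "CRITICAL"] [tag "OWASP_TOP_10/A1"] [tag "PCI/6.5.2"]')

fields = parse_modsec_fields(entry)
print(fields["severity"])
print(fields["tag"])
```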
Ditto if we pop in a script tag via the txtLogin parameter:
[data "Matched Data: "] [ver "OWASP_CRS/2.2.6"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/XSS"] [tag "WASCTC/WASC-8"] [tag "WASCTC/WASC-22"] [tag "OWASP_TOP_10/A2"] [tag "OWASP_AppSensor/IE1"] [tag "PCI/6.5.1"]
    
Finally, we’re ready to connect our Arachni activities in Part 1 of this campaign to our efforts with ModSecurity IIS. There are a couple of ways to look at virtual patching, as amply described by Ryan. His latest focus has been more on dynamic application scanning testing as actually triggered via ModSecurity. There is now Lua scripting that integrates ModSecurity and Arachni over RPC, where a specific signature hit from ModSecurity will contact the Arachni service and kick off a targeted scan. At last check this code was still experimental and likely to be challenging with the IIS version of ModSecurity. That said, we can direct our focus in the opposite direction and utilize Ryan’s automated virtual patching script, arachni2modsec.pl, where we gather Arachni scan results and automatically convert the XML export into rules for ModSecurity. These custom rules will then protect the vulnerabilities discovered by Arachni while you haggle with the developers over how long it’s going to take them to actually fix the code.
To test this functionality I scanned the CrapApp from the Arachni instance on the Ubuntu VM I built for last month’s article. I also set the SecDefaultAction directive to "pass" in my test.conf file to ensure the scanner is not blocked while it discovers vulnerabilities. Currently the arachni2modsec.pl script writes rules specifically for SQL Injection, Cross-site Scripting, Remote File Inclusion, Local File Inclusion, and HTTP Response Splitting. The process is simple: assuming the results file is results.xml, arachni2modsec.pl -f results.xml will create modsecurity_crs_48_virtual_patches.conf. On my ModSecurity IIS VM I’d then copy modsecurity_crs_48_virtual_patches.conf into the C:\inetpub\wwwroot\crs directory and refresh the DefaultAppPool. Figure 3 gives you an idea of the resulting rule.
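The idea behind the conversion is straightforward: each scanner finding names a vulnerable parameter, and that is enough to emit a targeted rule. Here is a toy stand-in for the concept (not the actual arachni2modsec.pl logic; the XML element and attribute names are illustrative assumptions, not Arachni's real report schema):

```python
import xml.etree.ElementTree as ET

# Illustrative report fragment, not Arachni's actual XML schema
REPORT = """<arachni_report>
  <issue type="sql_injection" url="http://target/login.aspx" var="txtLogin"/>
</arachni_report>"""

def virtual_patch(issue, rule_id):
    # Deny a quote character in the specific parameter the scanner flagged
    return ('SecRule ARGS:%s "@rx [\'\\"]" '
            '"id:%d,phase:2,deny,log,msg:\'Virtual patch: %s\'"'
            % (issue.get("var"), rule_id, issue.get("type")))

root = ET.fromstring(REPORT)
rules = [virtual_patch(issue, 48000 + n)
         for n, issue in enumerate(root.iter("issue"), start=1)]
print(rules[0])
```

The appeal of the real script is the same as this sketch: the rule constrains only the parameter the scanner proved vulnerable, so the rest of the application is untouched.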

FIGURE 3: arachni2modsec script creates rule for ModSecurity IIS
Note how the rule closely resembles the alert spawned when I passed the simple SQL injection attack to CrapApp earlier in the article. Great stuff, right?

In Conclusion

What a great way to wrap up 2012 with the conclusion of this two-part series on Web Application Security Flaw Discovery and Prevention. I’m thrilled with the performance of ModSecurity for IIS and really applaud Ryan and Greg for their efforts. There are a number of instances where I intend to utilize the ModSecurity port for IIS and will share feedback as I gather data. Please let me know how it’s working for you as well should you choose to experiment and/or deploy.
Good luck and Merry Christmas.
Stay tuned to vote for the 2012 Toolsmith Tool of the year starting December 15th.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Ryan Barnett, Trustwave Spider Labs, Security Researcher Lead
Greg Wroblewski, Microsoft, Senior Security Developer

Sunday, November 11, 2012

CTIN Digital Forensics Conference - No fluff, all forensics

For those of you in the Seattle area (or willing to travel) who are interested in digital forensics, there is a great opportunity to learn and socialize coming up in March.
The CTIN Digital Forensics Conference will be March 13 through 15, 2013 at the Hilton Seattle Airport & Conference Center. CTIN, the Computer Technology Investigators Network, is a non-profit, free-membership organization comprised of public- and private-sector computer forensic examiners and investigators focused on high-tech security and the investigation and prosecution of high-tech crimes.

Topics slated for the conference agenda are many, with great speakers to discuss them in depth:
Windows Time Stamp Forensics, Incident Response Procedures, Tracking USB Devices, Timeline Analysis with EnCase, Internet Forensics, Placing the Suspect Behind the Keyboard, Social Network Investigations, Triage, Live CDs (WinFE & Linux), F-Response and Intella, Lab - Hard Drive Repair, Mobile Device Forensics, Windows 7/8 Forensics, Child Pornography, Legal Update, Counter-Forensics, Linux Forensics, X-Ways Forensics, Expert Testimony, ProDiscover, Live Memory Forensics, EnCase, Open Source Forensic Tools, Cell Phone Tower Analysis, Mac Forensics, Registry Forensics, Malware Analysis, iPhone/iPad/other Apple products, Imaging Workshop, Paraben Forensics, and Virtualization Forensics.


Register before 1 DEC 2012 for $295; $350 thereafter.

While you don't have to be a CTIN member to attend, I strongly advocate joining and supporting CTIN.

Friday, November 02, 2012

toolsmith: Arachni - Web Application Security Scanner



Part 1 of 2 - Web Application Security Flaw Discovery and Prevention


Prerequisites/dependencies
Ruby 1.9.2 or higher in any *nix environment

Introduction
This month’s issue kicks off a two-part series on web application security flaw discovery and prevention, beginning with Arachni. As this month’s topic is another case of mailing lists facilitating great toolsmith topics, I’ll begin by recommending a few you should join if you haven’t already. The Web Application Security Consortium mailing list is a must, as are the SecurityFocus lists. I favor their Penetration Testing and Web Application Security lists, but they have many others as well. As you can imagine, these two make sense for me given my focus on web application security and penetration testing, and it was via SecurityFocus that I received news of the latest release of Arachni. Arachni is a high-performance, modular, open source web application security scanning framework written in Ruby. It was refreshing to discover a web app scanner I had not yet tested. I spend a lot of time with the likes of Burp, ZAP, and Watobo but strongly advocate expanding the arsenal.
Arachni’s developer/creator is Tasos "Zapotek" Laskos, who kindly provided details on this rapidly maturing tool and project.
Via email, Tasos indicated that to date, Arachni's role has been that of an experiment/learning-exercise hybrid, mainly focused on doing things a little bit differently. He’s glad to say that the fundamental project goals have been achieved; Arachni is fast, relatively simple, quite accurate, open source and quite flexible in the ways which it can be deployed. In addition, as of late, stability and testing have been given top priority in order to ensure that the framework won't exhibit performance degradation as the code-base expands.
With a strong foundation laid and a clear road map, future plans for Arachni continue to push the envelope: version 0.4.2 will include improved distributed, high-performance scan features such as a new distributed crawler (under current development), a new, cleaner, more stable and attractive web user interface, and general code clean-up.
Version 0.5 is where a lot of interesting work will take place as the Arachni team will be attempting to break some new ground with native DOM and JavaScript support, with the intent of allowing a depth/level of analysis beyond what's generally possible today, from either open source or commercial systems. According to Tasos, most, if not all, current scanners rely on external browser engines to perform their duties bringing with them a few penalties (performance hits, loss of control, limited inspection capabilities, design compromises, etc.), which Arachni will be able to avoid. This kind of functionality, especially from an open and flexible system, will be greatly beneficial to web application testing in general, and not just in a security-related context.

Arachni success stories include incredibly cool features such as WAF Realtime Virtual Patching. At OWASP AppSec DC 2012, Trustwave SpiderLabs’ Ryan Barnett discussed the concept of dynamic application scanning testing (DAST) exporting data that is then imported into a web application firewall (WAF) for targeted remediation. In addition to stating that the Arachni scanner is an “absolutely awesome web application scanner framework,” Ryan describes how to integrate export data from Arachni with ModSecurity, the WAF for which he is the OWASP ModSecurity Core Rule Set (CRS) project leader. Take note here, as next month in toolsmith we’re going to discuss ModSecurity for IIS as part two of this series and will follow Ryan’s principles for DAST to WAF.
Other Arachni successes include highly customized scripted audits and easy incorporation into testing platforms (by virtue of its distributed features). Tasos has received a lot of positive feedback and has been pleasantly surprised that there has not been one unsatisfied user, even in Arachni’s early, immature phases. Many users come to Arachni out of frustration with the currently available tools and are quite happy with the results after giving it a try, as Arachni offers a decent alternative while simplifying web application security assessment tasks.
Arachni benefits from excellent documentation and support via its wiki; be sure to give it a good once-over before beginning installation and use.

Installing Arachni

On an Ubuntu 12.10 instance, I first made sure I had all dependencies met via sudo apt-get install build-essential libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev libyaml-dev zlib1g-dev ruby1.9.1-dev ruby1.9.1.
For developer’s sake, this includes Gem support so thereafter one need only issue sudo gem install arachni to install Arachni. However, the preferred method is use of the appropriate system packages from the latest downloads page.
While Arachni features robust CLI use, for presentation’s sake we’ll describe Arachni use with the Web UI. Start it via arachni_web_autostart which will initiate a Dispatcher and the UI server. The last step is to point your browser to http://localhost:4567, accept the default settings and begin use.

Arachni in use

Of interest as you begin Arachni use is the dispatcher which spawns RPC instances and allows you to attach to, pause, resume, and shutdown Arachni instances. This is extremely important for users who wish to configure Arachni instances in a high performance grid (think a web application security scanning cluster with a master and slave configuration). Per the wiki, “this allows scan-time to be severely decreased, by as much as n times less under ideal circumstances, where n equals the number of running instances.”   
You can configure Arachni’s web UI to run under SSL and provide HTTP Basic authentication if you wish to lock use down. Refer to the wiki entry for the web user interface for more details.
Before beginning a simple scan (one Dispatcher), let’s quickly review Arachni’s modules and plugins. Each has a tab in Arachni’s primary UI view. The 45 modules are divided into Audit (22) and Recon (23) options, where the audit modules actively test the web application via inputs such as parameters, forms, cookies and headers, while the recon modules passively test the web application, focusing on server configuration, responses and specific directories and files. I particularly like the additional SSN and credit card number disclosure modules as they are helpful for OSINT, as well as the Backdoor module, which looks to determine if the web application you’re assessing is already owned. Of note from the Audit options is the Trainer module, which probes all inputs of a given page in order to uncover new input vectors and trains Arachni by analyzing the server responses. Arachni modules are all enabled by default. Arachni plugins offer preconfigured auto-logins (great when spidering), proxy settings, and notification options, along with some pending plugins supported in the CLI version but not yet ready for the Web UI as of v0.4.1.1.
To start a scan, navigate to the Start a scan tab and confirm that a Dispatcher is running. You should see the likes of @localhost:7331 (host and port) along with the number of running scans, as well as RAM and CPU usage. Then paste a URL into the URL form and select Launch Scan as seen in Figure 1.
 
Figure 1: Launching an Arachni scan

While the scan is running you can monitor the Dispatcher status via the Dispatchers tab as seen in Figure 2.

Figure 2: Arachni Dispatcher status
From the Dispatchers view you can choose to Attach to the running Instance (there will be multiples if you’ve configured a high performance grid), which will give a real-time view of the scan statistics, percentage of completion for the running instance, scanner output, and results for findings discovered, as seen in Figure 3. Dispatchers provide Instances; Instances perform the scans.

Figure 3: Arachni scan status
Once the scan is complete, as you might imagine, the completed results report will be available to you in the Reports tab. As an example I chose the HTML output but realize that you can also select JSON, text, YAML, and XML as well as binary output such as Metareport, Marshal report, and even Arachni Framework Reporting. Figure 4 represents the HTML-based results of a scan against NOWASP Mutillidae.

Figure 4: HTML Arachni results
Even the reports are feature-rich: a summary tab with graphs and issues, remedial guidance, plugin results, along with a sitemap and configuration settings.
The results are accurate too; in my preliminary testing I found very few false positives. When Arachni isn’t definitive about results, it even goes so far as to label the result “untrusted (and may in fact be false positives) because at the time they were identified the server was exhibiting some kind of anomalous behavior or there was 3rd part interference (like network latency for example).” Nice, I love truth and transparency in my test results.
I am really excited to see Arachni work at scale. I intend to test it very broadly on large applications using a high performance grid. This is definitely one project I’ll keep squarely on my radar screen as it matures through its 0.4.2 and 0.5 releases.

In Conclusion

Join us again next month as we resume this discussion and take Arachni results and leverage them for Realtime Virtual Patching with ModSecurity for IIS. By then I will have tested Arachni’s clustering capabilities as well, so we should have some real benefit to look forward to next month. Please feel free to seek support via the support portal, file a bug report via the issue tracker, or reach out to Tasos via Twitter or email as he looks forward to feedback and feature requests.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Tasos "Zapotek" Laskos, Arachni project lead

Monday, October 01, 2012

toolsmith: Network Security Toolkit (NST) - Packet Analysis Personified





Prerequisites
Virtualization software if you don’t wish to run NST as a LiveCD or install to dedicated hardware.

Introduction
As I write this I’m on the way back from SANS Network Security in Las Vegas, where I’d spent two days deeply entrenched analyzing packet captures during the lab portion of the GSE exam. During preparation for this exam I’d used a variety of VM-based LiveCD distributions to study and practice, amongst them Security Onion. There are three distributions I run as VMs that are always on immediate standby in my toolkit. They are, in no particular order, Doug Burks’s brilliant Security Onion, Kevin Johnson’s SamuraiWTF, and BackTrack 5 R3. Security Onion and SamuraiWTF have both been toolsmith topics for good reason; I’ve not covered BackTrack only because it would seem so cliché. I will tell you that I am extremely fond of Security Onion and consider it indispensable. As such, I hesitated to cover the Network Security Toolkit (NST) when I first learned of it while preparing for the lab, feeling as if it might violate some code of loyalty I felt to Doug and Security Onion. Weird, I know, and the truth is Doug would be one of the first to tell you that the more tools made available to defenders the better. NST represents a number of core principles inherent to toolsmith and the likes of Security Onion. NST is comprehensive and convenient and allows the analyst almost immediate and useful results. NST is an excellent learning tool and allows beginners and experts much success in discovering more about their network environments. NST is also an inclusive, open project that grows with help from an interested and engaged community. The simple truth is Security Onion and NST represent different approaches to complex problems. We all have a community to serve and the same goals at heart, so I got over my hesitation and reached out to the NST project leads.
The Network Security Toolkit is the brainchild of Paul Blankenbaker and Ron Henderson and is a Linux distribution that includes a vast collection of best-of-breed open source network security applications useful to the network security professional. In the early days of NST Paul and Ron found that they needed a common user interface and unified methodology for ease of access and efficiency in automating the configuration process. Ron’s background in network computing and Paul’s in software development led to what is now referred to as the NST WUI (Web User Interface). Given the wide range of open source networking tools with corresponding command line interfaces that differ from one application to the next, this was no small feat. The NST WUI now provides a means to allow easy access and a common look-and-feel for many popular network security tools, giving the novice the ability to point and click while also providing advanced users (security analysts, ethical hackers) options to work directly with command line console output.
According to Ron, one of the most beneficial tool enhancements that NST has to offer for the network and security administrator is the Single-Tap and Multi-Tap Network Packet Capture interface. Essentially, adding a web-based front-end to Wireshark, Tcpdump, and Snort for packet capture analysis and decode has made it easy to perform these tasks using a web browser. With the new NST v2.16.0-4104 release they took it a step forward and integrated CloudShark technology into the NST WUI for collaborative packet capture analysis, sharing and management.
Ron is also fond of the Network Interface Bandwidth Monitor, an interactive, dynamic SVG/AJAX-enabled application integrated into the NST WUI for monitoring network bandwidth usage on each configured network interface in pseudo real-time. He designed this application with the controls of a standard digital oscilloscope in mind.
Ron is also proud of NST’s ability to geolocate network entities. We’ll further explore NST’s current repertoire of network entities that can be geolocated with their associated applications, as well as Ron’s other favorites mentioned above.
Paul also shared something I enjoyed, as acronyms are so common in our trade. He mentioned that the NST distribution can be used in many situations. One of his personal favorites is related to the FIRST Robotics Competition (FRC) which occurs each year. FIRST for Paul is For Inspiration and Recognition of Science and Technology, where I am more accustomed to its use as Forum for Incident Response and Security Teams. Paul mentors FIRST team 868, the TechHounds at Carmel High School in Indiana, where in FRC competitions teams have used (or could use) NST during a hectic FRC build season:
·      Quickly identify which network components involved with operating the robot are "alive"
o   From the WUI menu: Security -> Active Scanners -> ARP Scan (arp-scan)
·         Observe how much network traffic increases or decreases as we adjust the IP based robot camera settings
o   From the WUI menu: Network -> Monitors -> Network Interface Bandwidth Monitor
·         Capture packets between the robot and the controlling computer
·         Scan the area for WIFI traffic and use this information to pick frequencies for robot communications that are not heavily used
·         Set up a Subversion and Trac server for managing source code through the build season.
o   From the WUI menu: System -> File System Management -> Subversion Manager
·         Teach the benefits of scripting and automating tasks
·         Provide an environment that can be expanded and customized
While Paul and team have used NST for robotics, it’s quite clear how their use case bullet list applies to the incident responder and network security analyst. 

Installing NST

NST, as an ISO, can be run as LiveCD, installed to dedicated hardware, and also as a virtual machine. If you intend to take advantage of the Multi-Tap Network Packet Capture interface feature with your NST installation set up as a centralized, aggregating sensor then you’ll definitely want to utilize dedicated hardware with multiple network interfaces. As an example, Figure 1 displays using NST to capture network and port address translation traffic across a firewall boundary.

Figure 1: Multi-Tap Network Packet Capture Across A Firewall Boundary - NAT/PAT Traffic
Once booted into NST, you can navigate Applications -> System Tools -> Install NST to Hard Drive to execute a dedicated installation.
Keep in mind that when virtualizing you could enable multiple virtual NICs to leverage multi-tap, but performance will be limited if the host system has only one physical NIC.

Using NST

NST use centers around the WUI; access it via Firefox on the NST installation at http://127.0.0.1/nstwui/main.cgi. 
The first time you log in, you'll be immediately reminded to change the default password (nst2003). After doing so, log back in and select Tools -> Network Widgets -> IPv4 Address. Once you know the IP address, you can opt to use the NST WUI from another browser; my session, as an example: https://192.168.153.132/nstwui/index.cgi.
Per Ron’s above-mentioned tool enhancements, let’s explore Single-Tap Network Packet Capture (I’m running NST as a VM). Click Network -> Protocol Analyzers -> Single-Tap Network Packet Capture and you’ll be presented with a number of options for configuring the capture. You can define limits such as duration, file size, and packet count, or select a predefined short or long capture session as seen in Figure 2.

Figure 2: Configure a Single-Tap capture with NST
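Under the hood, those three limits map neatly onto the autostop options of a command-line capture tool such as tshark. The builder below is a hypothetical sketch of that translation (the option names are tshark's real `-a`/`-c` flags; the function and file names are my own):

```python
def build_capture_cmd(interface, outfile,
                      duration=None, filesize_kb=None, packet_count=None):
    """Translate WUI-style capture limits into a tshark command line."""
    cmd = ["tshark", "-i", interface, "-w", outfile]
    if duration is not None:
        cmd += ["-a", f"duration:{duration}"]     # stop after N seconds
    if filesize_kb is not None:
        cmd += ["-a", f"filesize:{filesize_kb}"]  # stop after N kilobytes
    if packet_count is not None:
        cmd += ["-c", str(packet_count)]          # stop after N packets
    return cmd

# A "short" predefined session might look something like:
print(" ".join(build_capture_cmd(
    "eth0", "/var/nst/wuiout/wireshark/short.pcap",
    duration=60, packet_count=10000)))
```

Whichever limit trips first ends the capture, which is the same "duration, file size, or packet count" behavior the WUI exposes.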
If you accepted the default capture storage location, you can click Browse and find the results of your efforts in /var/nst/wuiout/wireshark. Now here’s where the cool comes in. CloudShark (yep, Wireshark in the cloud) allows you to “secure, share, and analyze capture files anywhere, on any device” via either cloudshark.org or a CloudShark appliance. Please note that capture files uploaded to cloudshark.org are not secured by default and can be viewed by anyone who knows the correct URL; you’ll need an appliance or CloudShark Enterprise to secure and manage captures. That aside, the premise of CloudShark is appealing, and NST integrates it directly. From the Tools menu select Network Widgets, then CloudShark Upload Manager. I’d already uploaded malicious.pcap, as seen in Figure 3.

Figure 3: CloudShark tightly integrated with NST
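NST's upload manager handles the transfer for you, but CloudShark also exposes an HTTP upload API you can drive yourself. The sketch below assumes the v1 endpoint shape (https://host/api/v1/&lt;token&gt;/upload, multipart field "file"); the token is a placeholder and the helper names are my own:

```python
import uuid
import urllib.request

API_TOKEN = "your-api-token"  # placeholder: use your own CloudShark token

def upload_url(token, host="www.cloudshark.org"):
    """Assumed shape of CloudShark's v1 capture upload endpoint."""
    return f"https://{host}/api/v1/{token}/upload"

def multipart_body(filename, data, boundary):
    """Minimal multipart/form-data encoding of the single 'file' field."""
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + data + tail

def upload(path, token=API_TOKEN):
    """POST a capture file; the response body identifies the new capture."""
    boundary = uuid.uuid4().hex
    with open(path, "rb") as f:
        body = multipart_body(path.rsplit("/", 1)[-1], f.read(), boundary)
    req = urllib.request.Request(
        upload_url(token), data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"})
    return urllib.request.urlopen(req)

print(upload_url("example-token"))
```

Remember the caveat above: on cloudshark.org, anything you upload this way is reachable by anyone holding the resulting URL.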
Users need only click View Network Packet Captures in the upload manager and they’ll be directed right to the CloudShark instance of their uploaded capture, as seen in Figure 4.

Figure 4: Capture results displayed via CloudShark
Many of the features you’d expect from a local instance of Wireshark are available to the analyst, including graphs, conversations, protocol decodes, and follow stream.

NST also includes the Network Interface Bandwidth Monitor. Select Network -> Monitors -> Network Interface Bandwidth Monitor. A bandwidth monitor for any interface present on your NST instance will be available to you (eth0 and lo on my VM) as seen in Figure 5.

Figure 5: NST’s Network Interface Bandwidth Monitor
As an example, you can see the 100+ kbps spikes I generated against eth0 with a quick Nmap scan.

NST’s geolocation capabilities are many, but be sure to set up the NST system to geolocate data first. I uploaded a multiple-host PCAP (P2P traffic) via the Network Packet Capture Manager, clicked the A (attach) button under Action, and was then redirected back to Network -> Protocol Analyzers -> Single-Tap Network Packet Capture. I then chose the Text-Based Protocol Analyzer Decode option as described on the NST Wiki and clicked the Hosts – Google Maps button. This particular capture gave NST a lot of work to do, as it includes thousands of IPs, but the resulting geolocated visualization, seen in Figure 6, is well worth it.

Figure 6: P2P bot visually geolocated via NST
If we had page space available to show you the whole world you’d see that the entire globe is represented by this bot, but I’m only showing you North America and Europe.
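Before any host can be pinned to the map, the decode text has to be reduced to a unique host list. That reduction is a few lines of Python; the decode sample below is invented, standing in for NST's text-based analyzer output:

```python
import re

# Invented stand-in for a few lines of text-based protocol analyzer decode.
decode_sample = """
12:00:01 10.0.0.5 -> 203.0.113.9 UDP len=58
12:00:02 198.51.100.23 -> 10.0.0.5 TCP len=74
12:00:03 10.0.0.5 -> 203.0.113.9 UDP len=60
"""

# Pull every dotted-quad out of the decode and de-duplicate; these are the
# hosts a geolocation step would then look up and plot.
ips = sorted(set(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", decode_sample)))
print(ips)
```

Against the thousands of addresses in the P2P capture, this de-duplication is exactly why the map step takes NST a while to chew through.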

As discussed in recent OSINT-related toolsmiths, NST even bundles theHarvester, found under Security -> Information Search -> theHarvester. Information gathering with theHarvester includes e-mail accounts, user names, hostnames, and domains from various public Internet sources.
So many features, so little time. Pick an item from the menu and drill in. There’s a ton of documentation under the Docs menu too, including the NST Wiki, so you have no excuses not to jump in head first.

In Conclusion

NST is one of those offerings where the few pages dedicated to it in toolsmith don’t do it justice. It is incredibly feature rich and practically invites the user to explore while the hours sneak by unnoticed. The NST WUI has created a learning environment I will be incorporating into my network security analysis teaching regimens. Whether you're new to network security analysis or a salty old hand, NST is a worthy addition to your tool collection.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Paul Blankenbaker and Ron Henderson, NST project leads
