Sunday, July 10, 2016

Toolsmith Release Advisory: Steph Locke's HIBPwned R package

I'm a bit slow on this one but better late than never. Steph dropped her HIBPwned R package on CRAN at the beginning of June, and it's well worth your attention. HIBPwned is an R package that wraps Troy Hunt's HaveIBeenPwned.com API, useful for checking whether any of your accounts have been compromised in a data breach. As one who has been "pwned" no less than three times via three different accounts, thanks to LinkedIn, Patreon, and Adobe, I love Troy's site and have visited it many times.

When I spotted Steph's wrapper on R-Bloggers, I was quite happy.
Steph built HIBPwned to allow users to:
  • Set up your own notification system for account breaches of myriad email addresses & user names that you have
  • Check for compromised company email accounts from within your company Active Directory
  • Analyse past data breaches and produce reports and visualizations
I installed it from Visual Studio with R Tools via install.packages("HIBPwned", repos="http://cran.rstudio.com/", dependencies=TRUE).
You can also use devtools to install directly from the Censornet Github:
if(!require("devtools")) install.packages("devtools")
# Get or upgrade from github
devtools::install_github("censornet/HIBPwned")
Source is available on the Censornet Github, as is recommended usage guidance.
As you run any of the HIBPwned functions, be sure to have called the library first: library("HIBPwned").

As mentioned, I've seen my share of pwnage, luckily to no real impact, but annoying nonetheless, and well worth constant monitoring.
I first combined my accounts into a vector and confirmed what I've already mentioned, popped thrice:
account_breaches(c("rmcree@yahoo.com","holisticinfosec@gmail.com","russ@holisticinfosec.org"), truncate = TRUE)
$`rmcree@yahoo.com`
   Name
1 Adobe

$`holisticinfosec@gmail.com`
      Name
1 LinkedIn

$`russ@holisticinfosec.org`
     Name
1 Patreon

You may want to pull specific details about each breach to learn more; continuing with my scenario, that's easily done with breached_site() for the company name, or breached_sites() for its domain.
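A minimal sketch of that step (the function names come straight from the package; the exact argument forms are my assumption):

library("HIBPwned")
# details for a named breach, e.g. LinkedIn
breached_site("LinkedIn")
# all breaches recorded against a given domain
breached_sites("linkedin.com")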
Breached
You may also be interested to see if any of your PII has landed on a paste site (Pastebin, etc.). The pastes() function is the most recent addition Steph has made to HIBPwned.
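Reusing the account vector from above, a quick hedged example (I'm assuming pastes() accepts the same vector of accounts that account_breaches() does):

# check the same accounts against known paste dumps
pastes(c("rmcree@yahoo.com","holisticinfosec@gmail.com","russ@holisticinfosec.org"))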

Pasted
Uh oh, I'm on the list here too; not quite sure how I ended up in this dump of "Egypt gov stuff". According to PK1K3, who "got pissed of at the Egypt gov", his is a "list of account the egypt govs is spying on if you find your email/number here u are rahter with them or slaves to them." Neither is true, but fascinating regardless.

Need some simple markdown to run every so often and keep an eye on your accounts? Try HIBPwned.Rmd. Download the file, open it in RStudio, swap out my email addresses for yours, then select Knit HTML. You can also produce Word or PDF output if you'd prefer.
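If you'd rather sketch your own before grabbing the file, the core chunk of such a report might look something like this (my approximation, not the literal contents of HIBPwned.Rmd):

library("HIBPwned")
accounts <- c("you@example.com", "you.too@example.org")  # swap in your addresses
breaches <- account_breaches(accounts, truncate = TRUE)
exposure <- pastes(accounts)
# knitr then renders these objects into the HTML, Word, or PDF output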

Report
Great stuff from Steph, and of course Troy. Use this wrapper to your advantage, and keep an eye out for other related work on itsalocke.com.

Wednesday, June 22, 2016

Toolsmith Tidbit: XssPy

You've likely seen chatter recently regarding the pilot Hack the Pentagon bounty program that just wrapped up, as facilitated by HackerOne. It should come as no surprise that the most common vulnerability reported was cross-site scripting (XSS). I was invited to participate in the pilot, and yes, I found and submitted an XSS bug, but sadly it was a duplicate of one already reported. Regardless, it was a great initiative by DoD, SecDef, and the Defense Digital Service, and I'm proud to have been asked to participate. I've spent my share of time finding XSS bugs and had some success, so I'm always happy when a new tool comes along to discover and help eliminate these bugs when responsibly reported.
XssPy is just such a tool.
A description, as paraphrased from its Github page:
XssPy is a Python tool for finding Cross Site Scripting vulnerabilities. XssPy traverses websites to find all the links and subdomains first, then scans each and every input on each and every page discovered during traversal.
XssPy uses small yet effective payloads to search for XSS vulnerabilities.
The tool has been tested in parallel with commercial vulnerability scanners, most of which failed to detect vulnerabilities that XssPy was able to find. While most paid tools typically scan only one site, XssPy first discovers sub-domains, then scans all links.
XssPy includes:
1) Short Scanning
2) Comprehensive Scanning
3) Subdomain discovery
4) Comprehensive input checking
XssPy has discovered cross-site scripting vulnerabilities in the websites of MIT, Stanford, Duke University, Informatica, FormAssembly, ActiveCampaign, Volcanicpixels, Oxford, Motorola, Berkeley, and many more.

Install as follows:
git clone https://github.com/faizann24/XssPy/ /opt/xsspy
Python 2.7 is required and you should have mechanize installed. If mechanize is not installed, type pip install mechanize in the terminal.

Run as follows:
python XssPy.py website.com (no http:// or www).

Let me know what successes you have via email or Twitter and let me know if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next time.

Wednesday, June 08, 2016

Toolsmith Feature Highlight: Autopsy 4.0.0's case collaboration

First, here be changes.
After nearly ten years of writing toolsmith exactly the same way once a month, now for the 117th time, it's time to mix things up a bit.
1) Tools follow release cycles, and often may add a new feature that could be really interesting, even if the tool has been covered in toolsmith before.
2) Sometimes there may not be a lot to say about a tool if its usage and feature set are simple and easy, yet useful to us.
3) I no longer have an editor or publisher that I'm beholden to, so there's no reason to land toolsmith content only once a month at the same time.
Call it agile toolsmith. If there's a good reason for a short post, I'll publish one immediately, such as for a new release or feature, and every so often, when warranted, I'll do a full-coverage analysis of a really strong offering.
For tracking purposes, I'll use title tags (I'll use these on Twitter as well):
  • Toolsmith Feature Highlight
    • new feature reviews
  • Toolsmith Release Advisory
    • heads up on new releases
  • Toolsmith Tidbit
    • infosec tooling news flashes
  • Toolsmith In-depth Analysis
    • the full monty
That way you get the tl;dr so you know what you're in for.

On to our topic.
This is definitely in the "in case you missed it" category, and I was clearly asleep at the wheel: Autopsy 4.0.0 was released in November 2015. The major highlight of this release is the ability to set up a multi-user environment, including "multi-user cases supported that allow collaboration using network-based services." Just in case you aren't current on free and open source DFIR tools, "Autopsy® is a digital forensics platform and graphical interface to The Sleuth Kit® and other digital forensics tools." Thanks to my crew: Luiz Mello for pointing the v4 release out to me, and Mike Fanning for a perfect small pwned system to test v4 with.

Autopsy 4.0.0 case creation walk-through

I tested the latest Autopsy with an .e01 image I'd created from a 2TB victim drive, as well as against a mounted VHD.

Select the new case option via the opening welcome splash (green plus), the menu bar via File | New Case, or Ctrl+N:
New case
Populate your case number and examiner:
Case number and examiner
Point Autopsy at a data source. In this case I refer to my .e01 file, but I also mounted a VHD as a local drive during testing (an option under the Select source type drop-down).
Add data source
Determine which ingest modules you'd like to use. As I examined both a large ext4 filesystem as well as a Windows Server VHD, I turned off Android Analyzer...duh. :-)
Ingest modules
After the image or drive goes through initial processing you'll land on the Autopsy menu. The Quick Start Guide will get you off to the races.

The real point of our discussion here is the new Autopsy 4.0.0 case collaboration feature, as pulled directly from Autopsy User Documentation: Setting Up Multi-user Environment

Multi-user Installation

Autopsy can be setup to work in an environment where multiple users on different computers can have the same case open at the same time. To set up this type of environment, you will need to configure additional (free and open source) network-based services.

Network-based Services

You will need the following that all Autopsy clients can access:

  • Centralized storage that all clients running Autopsy have access to. The central storage should be either mounted at the same Windows drive letter or UNC paths should be used everywhere. All clients need to be able to access data using the same path.
  • A central PostgreSQL database. A database will be created for each case and will be stored on the local drive of the database server. Installation and configuration is explained in Install and Configure PostgreSQL.
  • A central Solr text index. A Solr core will be created for each case and will be stored in the case folder (not on the local drive of the Solr server). We recommend using Bitnami Solr. This is explained in Install and Configure Solr.
  • An ActiveMQ messaging server to allow the various clients to communicate with each other. This service has minimal storage requirements. This is explained in Install and Configure ActiveMQ.

When you setup the above services, securely document the addresses, user names, and passwords so that you can configure each of the client systems afterwards.

The Autopsy team recommends using at least two dedicated computers for this additional infrastructure. Spreading the services out across several machines can improve throughput. If possible, place Solr on a machine by itself, as it utilizes the most RAM and CPU among the servers.

Ensure that the central storage and PostgreSQL servers are regularly backed up.

Autopsy Clients

Once the infrastructure is in place, you will need to configure Autopsy to use them.

Install Autopsy on each client system as normal using the steps from Installing Autopsy.
Start Autopsy and open the multi-user settings panel from "Tools", "Options", "Multi-user". As shown in the screenshot below, you can then enter all of the address and authentication information for the network-based services. Note that in order to create or open Multi-user cases, "Enable Multi-user cases" must be checked and the settings below must be correct.

Multi-user settings
In closing

Autopsy use is very straightforward and well documented. As of version 4.0.0, the ability to utilize a multi-user environment is a highly beneficial feature for larger DFIR teams. Forensicators and responders alike should be able to put it to good use.
Ping me via email or Twitter if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next time.

Sunday, May 08, 2016

toolsmith #116: vFeed & vFeed Viewer

Overview

In case you haven't guessed by now, I am an unadulterated tools nerd. Hopefully, ten years of toolsmith have helped you come to that conclusion on your own. I rejoice when I find like-minded souls, and I found one in Nabil (NJ) Ouchn (@toolswatch), he of Black Hat Arsenal and toolswatch.org fame. In addition to those valued and well-executed community services, NJ also spends a good deal of time developing and maintaining vFeed. vFeed includes a Python API and the vFeed SQLite database, now with support for MongoDB. It is, for all intents and purposes, a correlated community vulnerability and threat database. I've been using vFeed for quite a while now, having learned about it when writing about FruityWifi a couple of years ago.
NJ fed me some great updates on this constantly maturing product.
Having achieved compatibility certifications (CVE, CWE and OVAL) from MITRE, the vFeed Framework (API and Database) has gained real appreciation from the information security community: users, CERTs, and penetration testers. NJ draws strength from this to add more features now and in the future. The current vFeed roadmap is huge. It varies from adding new sources such as security advisories from industrial control system (ICS) vendors, to supporting other standards such as STIX, to importing/enriching scan results from 3rd party vulnerability and threat scanners such as Nessus, Qualys, and OpenVAS.
A number of articles have highlighted impressive vFeed use cases.
Needless to say, some fellow security hackers and developers have included vFeed in their toolkit, including Faraday (March 2015 toolsmith), Kali Linux, and more (FruityWifi as mentioned above).

The upcoming version of vFeed will introduce support for CPE 2.3, CVSS 3, and new reference sources. A proof of concept to access the vFeed database via a RESTful API is in testing as well; NJ is fine-tuning his Flask skills before releasing it. :) NJ does not consider himself a skilled Python programmer (humble, but unwarranted). Luckily, Python is the ideal programming language for someone like him to express his creativity.
I'll show you all about woeful programming here in a bit when we discuss the vFeed Viewer I've written in R.

First, a bit more about vFeed, from its Github page:
The vFeed Framework is CVE, CWE and OVAL compatible and provides structured, detailed third-party references and technical details for CVE entries via an extensible XML/JSON schema. It also improves the reliability of CVEs by providing a flexible and comprehensive vocabulary for describing the relationship with other standards and security references.
vFeed utilizes XML-based and JSON-based formatted output to describe vulnerabilities in detail. This output can be leveraged as input by security researchers, practitioners and tools as part of their vulnerability analysis practice in a standard syntax easily interpreted by both human and machine.
The associated vFeed.db (The Correlated Vulnerability and Threat Database) is a detective and preventive security information repository useful for gathering vulnerability and mitigation data from scattered internet sources into a unified database.
vFeed's documentation is now well populated in its Github wiki, and should be read in its entirety:
  1. vFeed Framework (API & Correlated Vulnerability Database)
  2. Usage (API and Command Line)
  3. Methods list
  4. vFeed Database Update Calendar
vFeed features include:
  • Easy integration within security labs and other pentesting frameworks 
  • Easily invoked via API calls from your software, scripts or from command-line. A proof of concept python api_calls.py is provided for this purpose
  • Simplify the extraction of related CVE attributes
  • Enable researchers to conduct vulnerability surveys (tracking vulnerability trends regarding a specific CPE)
  • Help penetration testers analyze CVEs and gather extra metadata to help shape attack vectors to exploit vulnerabilities
  • Assist security auditors in reporting accurate information about findings during assignments. vFeed is useful in describing a vulnerability with attributes based on standards and third-party references (vendors or companies involved in the standardization effort)
vFeed installation and usage

Installing vFeed is easy, just download the ZIP archive from Github and unpack it in your preferred directory or, assuming you've got Git installed, run git clone https://github.com/toolswatch/vFeed.git
You'll need a Python interpreter installed, the latest instance of 2.7 is preferred. From the directory in which you installed vFeed, just run python vfeedcli.py -h followed by python vfeedcli.py -u to confirm all is updated and in good working order; you're ready to roll.

You've now read section 2 (Usage) on the wiki, so you don't need a total usage rehash here. We'll instead walk through a few options with one of my favorite CVEs: CVE-2008-0793.

If we invoke python vfeedcli.py -m get_cve CVE-2008-0793, we immediately learn that it refers to a Tendenci CMS cross-site scripting vulnerability. The -m parameter lets you define the preferred method, in this case, get_cve.


Groovy, is there an associated CWE for CVE-2008-0793? But of course. Using the get_cwe method we learn that CWE-79 or "Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')" is our match.


If you want to quickly learn all the available methods, just run python vfeedcli.py --list.
Perhaps you'd like to determine what the CVSS score is, or what references are available, via the vFeed API? Easy, if you run...

from lib.core.methods.risk import CveRisk
# Heartbleed serves as the example CVE here
cve = "CVE-2014-0160"
# CveRisk exposes risk scoring; get_cvss() returns the CVSS data
cvss = CveRisk(cve).get_cvss()
print cvss

You'll retrieve...


For reference material...

from lib.core.methods.ref import CveRef
cve = "CVE-2008-0793"
# CveRef gathers the third-party references correlated to the CVE
ref = CveRef(cve).get_refs()
print ref

Yields...

And now you know...the rest of the story. CVE-2008-0793 is one of my favorites because a) I discovered it, and b) the vendor was one of the best of many hundreds I've worked with to fix vulnerabilities.

vFeed Viewer

If NJ thinks his Python skills are rough, wait until he sees this. :-)
I thought I'd get started on a user interface for vFeed using R and Shiny, appropriately named vFeed Viewer and found on Github. This first version does not allow direct queries of the vFeed database, as I'm still working on SQL injection prevention, but it does allow very granular filtering of key vFeed tables. Once I work out safe queries and sanitization, I'll build in the same full correlation features you enjoy from NJ's Python vFeed client.
You'll need a bit of familiarity with R to make use of this viewer.
First, install R and RStudio. From the RStudio console, to ensure all dependencies are met, run install.packages(c("shinydashboard","RSQLite","ggplot2","reshape2")).
Download and install the vFeed Viewer in the root vFeed directory such that app.R and the www directory sit side by side with vfeedcli.py, etc. This ensures the viewer can read vfeed.db, which it opens directly with dbConnect and dbReadTable from the RSQLite package.
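For the curious, the read itself is plain RSQLite; a minimal sketch (the table name below is a hypothetical placeholder, use dbListTables() to see the real vFeed schema):

library(RSQLite)
con <- dbConnect(SQLite(), "vfeed.db")
dbListTables(con)                  # enumerate the vFeed tables
nvd <- dbReadTable(con, "nvd_db")  # hypothetical table name
dbDisconnect(con)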
Open app.R with RStudio, then click the Run App button. Alternatively, from the command line, assuming R is in your path, you can run R -e "shiny::runApp('~/shinyapp')" where ~/shinyapp is the path to where app.R resides. In my case, on Windows, I ran R -e "shiny::runApp('c:\\tools\\vfeed\\app.R')". Then browse to the localhost address Shiny is listening on. You'll probably find the RStudio process easier and faster.
One important note about R: it's not known for performance, and this app takes about thirty seconds to load. If you use Microsoft (Revolution) R with the MKL library, you can take advantage of multiple cores, but be patient, it all runs in memory. It's fast as heck once loaded, though.
The UI is simple, including an overview.


At present, I've incorporated NVD and CWE search mechanisms that allow very granular filtering.


As an example, using our favorite CVE-2008-0793, we can zoom in via the search field or the CVE ID drop-down menu. Results are returned instantly from 76,123 total NVD entries at present.


From the CWE search you can opt to filter by keywords, such as XSS for this scenario, to find related entries. If you drop cross-site scripting in the search field, you can then filter further via the cwetitle filter field at the bottom of the UI. This is universal to this use of Shiny, and allows really granular results.


You can also get an idea of the number of vFeed entries per vulnerability category. I dropped CPEs, as their number throws the chart off terribly and results in a bad visualization.


I'll keep working on the vFeed Viewer so it becomes more useful and helps serve the vFeed community. It's definitely a work in progress but I feel as if there's some potential here.

Conclusion

Thanks to NJ for vFeed and all his work with the infosec tools community; if you're going to Black Hat, be certain to stop by Arsenal. Make use of vFeed as part of your vulnerability management practice and remember to check for updates regularly. It's a great tool, and getting better all the time.
Ping me via email or Twitter if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next month.

Acknowledgements

Nabil (NJ) Ouchn (@toolswatch)

Saturday, April 09, 2016

toolsmith #115: Volatility Acuity with VolUtility

Yes, we've definitely spent our share of toolsmith time on memory analysis tools such as Volatility and Rekall, but for good reason. I contend that memory analysis is fundamentally one of the most important skills you'll develop and utilize throughout your DFIR career.
By now you should have read The Art of Memory Forensics, if you haven't, it's money well spent, consider it an investment.
If there is one complaint, albeit a minor one, that analysts might raise specific to memory forensics tools, it's that they're very command-line oriented. While I appreciate this for speed and scripting, there are those among us who prefer a GUI. Who are we to judge? :-)
Kevin Breen's (@kevthehermit) VolUtility is a full-function web UI for Volatility, filling a gap that's been on user wishlists for some time now.
When I reached out to Kevin regarding the current state of the project, he offered up a few good tidbits for user awareness.

1. Pull often. The project is still in its early stages and its early life is going to see a lot of tweaks, fixes, and enhancements as he finishes each of them.
2. If there is something that doesn’t work, could be better, or removed, open an issue. Kevin works best when people tell him what they want to see.
3. He's working with SANS to see VolUtility included in the SIFT distribution, and to release a Debian package to make it easier to install. Vagrant and Docker instances are coming soon as well.

The next two major VolUtility additions are:
1. Pre-Select plugins to run on image import.
2. Image Threat Score.

Notifications recently moved from notification bars to the toolbar, and there is now a right click context menu on the plugin output, which adds new features.

Installation

VolUtility installation is well documented on its GitHub site, but for the TLDR readers amongst you, here's the abbreviated version, step by step. This installation guidance assumes Ubuntu 14.04 LTS where Volatility has not yet been installed, nor have tools such as Git or Pip.
Follow this command set verbatim and you should be up and running in no time:
  1. sudo apt-get install git python-dev python-pip
  2. git clone https://github.com/volatilityfoundation/volatility
  3. cd volatility/
  4. sudo python setup.py install
  5. sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
  6. echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
  7. sudo apt-get update
  8. sudo apt-get install -y mongodb-org
  9. sudo pip install pymongo pycrypto django virustotal-api distorm3
  10. git clone https://github.com/kevthehermit/VolUtility
  11. cd VolUtility/
  12. ./manage.py runserver 0.0.0.0:8000
Point your browser to http://localhost:8000 and there you have it.

VolUtility and an Old Friend

I pulled out an old memory image (hiomalvm02.raw) from September 2011's toolsmith, where we first really explored Volatility (version 2.0 back then). :-) This memory image gives us the ability to do a quick comparison of our results from 2011 against a fresh run with VolUtility and Volatility 2.5.

VolUtility will ask you for the path to Volatility plugins and the path to the memory image you'd like to analyze. I introduced my plugins path as /home/malman/Downloads/volatility/volatility/plugins.


The image I'd stashed in Downloads as well, the full path being /home/malman/Downloads/HIOMALVM02.raw.


Upon clicking Submit, cats began loading stuffs. If you enjoy this as much as I do, the Help menu allows you to watch the loading page as often as you'd like.


If you notice any issues such as the image load hanging, check your console, it will have captured any errors encountered.
On my first run, I had not yet installed distorm3, the console view allowed me to troubleshoot the issue quickly.

Now down to business. In our 2011 post using this image, I ran imageinfo, connscan, pslist, pstree, and malfind. I also ran cmdline for good measure via VolUtility. Running plugins in VolUtility is as easy as clicking the associated green arrow for each plugin. The results will accumulate on the toolbar and at the top of the plugin selection pane, while the raw output for each plugin will appear beneath the plugin selection pane when you select View Output under Actions.


Results were indeed consistent with those from 2011 but enhanced by a few features. Imageinfo yielded WinXPSP3x86 as expected, connscan returned 188.40.138.148:80 as our evil IP with the associated suspect process ID of 1512, and pslist and pstree then confirmed parent processes: the evil emanated from an ill-conceived click via explorer.exe. If you'd like to export your results, it's as easy as selecting Export Output from the Action menu. I did so for pstree, as it is that plugin from whom all blessings flow; the results were written to pstree.csv.


We're reminded that explorer.exe (PID 1512) is the parent for cleansweep.exe (PID 3328) and that cleansweep.exe owns no current threads but is likely the original source of pwn. We're thus reminded to explore (hah!) PID 1512 for more information. VolUtility allows you to run commands directly from the Tools Bar; I did so with vol.py -f /home/malman/Downloads/HIOMALVM02.raw malfind -p 1512.


Rather than regurgitate malfind results as noted from 2011 (you can review those for yourself), I instead used the VolUtility Tools Bar feature Yara Scan Memory. Be sure to follow Kevin's Yara installation guidance if you want to use this feature. Also remember to git pull! Kevin updated the Yara capabilities between the time I started this post and when I ran yarascan. Like he said, pull often. There is now a yararules folder in the VolUtility file hierarchy; I added spyeye.yar, created from Jean-Philippe Teissier's rule set. Remember, from the September 2011 post, we know that hiomalvm02.raw was taken from a system infected with SpyEye. I then selected Yara Scan Memory from the Tools Bar, and pointed it to the just-added spyeye.yar file.


The results were immediate, and many, as expected.


You can also use String Search / yara rule from the Tools Bar Search Type field to accomplish similar goals, and you can get very granular with your string searches to narrow down results.
Remember that your sessions will persist thanks to VolUtility's use of MongoDB, so if you pull, then restart VolUtility, you'll be quickly right back where you left off.

In Closing

VolUtility is a great effort, getting better all the time, and I find its convenience irresistible. Kevin's doing fine work here, pull down the project, use it, and offer feedback or contributions. It's a no-brainer for VolUtility to belong in SIFT by default, but as you've seen, building it for yourself is straightforward and quick. Add it to your DFIR utility belt today.
As always, ping me via email or Twitter if you have questions: russ at holisticinfosec dot org or @holisticinfosec.

ACK

Thanks to Kevin (@kevthehermit) Breen for VolUtility and his feedback for this post.

Wednesday, March 09, 2016

toolsmith #114: WireEdit and Deep Packet Modification




PCAPs or it didn't happen, right? 



Introduction
Packet heads, this toolsmith is for you. Social media to the rescue: Packet Watcher (jinq102030) Tweeted using the #toolsmith hashtag to say that WireEdit would make a great toolsmith topic. Right you are, sir! Thank you. Many consider Wireshark the definitive tool for packet analysis; it was only my second toolsmith topic, almost ten years ago in November 2006. I wouldn't dream of conducting network forensic analysis without NetworkMiner (August 2008) or CapLoader (October 2015). Then there's Xplico, Security Onion, NST, Hex; the list goes on and on...
Time to add a new one. Ever want to more easily edit those packets? Me too. Enter WireEdit, a comparatively new player in the space. Michael Sukhar (@wirefloss) wrote and maintains WireEdit, the first universal WYSIWYG (what you see is what you get) packet editor. Michael identifies WireEdit as a huge productivity booster for anybody working with network packets, in a manner similar to other industry-groundbreaking WYSIWYG tools.

In Michael's own words: "Network packets are complex data structures built and manipulated by applying complex but, in most cases, well known rules. Manipulating packets with C/C++, or even Python, requires programming skills not everyone possesses and often lots of time, even if one has to change a single bit value. The other existing packet editors support editing of low stack layers like IPv4, TCP/UDP, etc, because the offsets of specific fields from the beginning of the packet are either fixed or easily calculated. The application stack layers supported by those pre-WireEdit tools are usually the text based ones, like SIP, HTTP, etc. This is because no magic is required to edit text. WireEdit's main innovation is that it allows editing binary encoded application layers in a WYSIWYG mode."

I've typically needed to edit packets to anonymize or reduce captures, but why else would one want to edit packets?
1) Sanitization: Often, PCAPs contain sensitive data. To date, there has been no easy mechanism to “sanitize” a PCAP, which, in turn, makes traces hard to share.
2) Security testing: Engineers often want to vary or manipulate packets to see how the network stack reacts to it. To date, that task is often accomplished via programmatic means.
WireEdit allows you to do so in just a few clicks.
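For contrast, here's a rough sketch of the programmatic route WireEdit spares you, using scapy under Python 2.7 (the filenames and matched bytes are illustrative assumptions, not part of WireEdit):

from scapy.all import rdpcap, wrpcap, IP, TCP, Raw

packets = rdpcap("capture.pcap")
for pkt in packets:
    # crudely match a GET request carrying the parameter of interest
    if pkt.haslayer(TCP) and pkt.haslayer(Raw) and "query=tcpdump" in str(pkt[Raw].load):
        pkt[Raw].load = str(pkt[Raw].load).replace(
            "sec=8", "sec=8%22onmouseover%3Dalert(1337)%2F%2F")
        # delete lengths and checksums so scapy recomputes them on write
        del pkt[IP].len
        del pkt[IP].chksum
        del pkt[TCP].chksum
wrpcap("capture-edited.pcap", packets)

Note that this crude sketch ignores downstream seq/ack bookkeeping entirely; offsets, lengths, and checksums are exactly the chores WireEdit handles for you behind the WYSIWYG curtain.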

Michael describes a demo video he published in April 2015, in which he edits the application layer of the SS7 stack (GSM MAP packets). GSM MAP is the protocol responsible for much of the application logic in "classic" mobile networks, and is still widely deployed. The packet he edits carries an SMS text message, and the layer he edits is way up the stack and binary encoded. Michael notes the message displays as text, but if you looked at the packet's raw binary, you wouldn't find it there due to complex encoding. If you choose to decode in order to edit the text, your only option is to look up the offset of the appropriate bytes in Wireshark or a similar tool, and try to edit the bytes directly.
This often completely breaks the packet, and Michael proudly points out that he's not aware of any other tool that allows such editing in WYSIWYG mode. Nor am I, and I enjoyed putting WireEdit through a quick validation of my own.

Test Plan

I conceived a test plan: use WireEdit to modify a PCAP of normal web browsing traffic, writing web application attacks into the capture. Before editing the capture, I'd run it through a test harness to validate that no rules were triggered resulting in any alerts, thus indicating that the capture was clean. The test harness was a Snort 2.9.8.0 instance I'd implemented on a new Ubuntu 14.04 LTS VM, configured with Snort VRT and Emerging Threats emerging-web_server and emerging-web_specific_apps rules enabled. To keep our analysis all in the family, I took a capture while properly browsing the OpenBSD entry for tcpdump.
A known good request for such a query would, as a URL, look like:
http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/tcpdump.8?query=tcpdump&sec=8
Conversely, if I were to pass a cross-site scripting attack (I did not, you do not) via this same URL and one of the available parameters, in text, it might look something like:
http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/tcpdump.8?query=tcpdump&sec=8%22onmouseover%3Dalert(1337)%2F%2F
Again though, my test plan was one where I wasn't conducting any actual attacks against any website; instead, I used WireEdit to "maliciously" modify the packet capture from a normal browsing session. I would then parse it with Snort to validate that the related web application security rules fired correctly.
This in turn would validate WireEdit's capabilities as a WYSIWYG PCAP editor, as you'll see in the walk-through. Such a testing scenario is a very real method for testing the efficacy of your IDS for web application attack detection, assuming it utilizes a Snort-based rule set.

Testing

On my Ubuntu Snort server VM I ran sudo tcpdump -i eth0 -w toolsmith.pcap while browsing http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/tcpdump.8?query=tcpdump&sec=8.
Next, I ran sudo snort -c /etc/snort/snort.conf -r toolsmith.pcap against the clean, unmodified PCAP to validate that no alerts were received, results noted in Figure 1.

Figure 1: No alerts triggered via initial OpenBSD browsing PCAP
I then dragged the capture (407 packets) over to my Windows host running WireEdit.
Now wait for it, because this is a lot to grasp in a short period of time regarding using WireEdit.
In the WireEdit UI, click the Open icon, then select the PCAP you wish to edit, and...that's it, you're ready to edit. :-) WireEdit tagged packet #9 with the prerequisite GET request marker I was interested in, so I expanded that packet and drilled down to the HTTP: GET descriptor and the Request-URI under Request-Line. More massive complexity; take notes here, because it's gonna be rough. I right-clicked the Request-URI, selected Edit PDU, and edited the PDU with a cross-site scripting (JavaScript) payload (URL encoded) as part of the GET request. I told you, really difficult, right? Figure 2 shows just how easy it really is.

Figure 2: Using WireEdit to modify Request-URI with XSS payload
I then saved the edited PCAP as toolsmithXSS.pcap and dragged it back over to my Snort server and re-ran it through Snort. The once clean, pristine PCAP elicited an entirely different response from Snort this time. Figure 3 tells no lies.

Figure 3: XSS ET Snort alert fires
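As an aside, if you'd rather see alerts echoed straight to the terminal during these replay runs, Snort's console alert mode is handy (the same run, with my preferred switch):

sudo snort -c /etc/snort/snort.conf -r toolsmithXSS.pcap -A console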
Perfect: in what was literally a 30-second edit with WireEdit, I validated that my ten-minute Snort setup catches cross-site scripting attempts with at least one rule. And no websites were actually harmed in the making of this test scenario, just a quick tweak with WireEdit.
That was fun, let's do it again, this time with a SQL injection payload. Continuing with toolsmithXSS.pcap, I jumped to the GET request in frame 203, as it included a request for a different query, and again edited the Request-URI with an attack specific to MySQL, as seen in Figure 4.



I saved this PCAP modification as toolsmithXSS_SQLi.pcap and returned to the Snort server for yet another happy trip down Snort Rule Lane. As Figure 5 represents, we had an even better result this time.


Figure 5: WireEdited PCAP triggers multiple SQL injection alerts
In addition to the initial XSS alert firing again, this time we collected four alerts for:

  • ET WEB_SERVER MYSQL SELECT CONCAT SQL Injection Attempt
  • ET WEB_SERVER SELECT USER SQL Injection Attempt in URI
  • ET WEB_SERVER Possible SQL Injection Attempt UNION SELECT
  • ET WEB_SERVER Possible SQL Injection Attempt SELECT FROM

That's a big fat "hell, yes" for WireEdit.
Still with me that I never actually executed these attacks? I just edited the PCAP with WireEdit and fed it back to the Snort beast. Imagine a PCAP like this being optimized for the OWASP Top 10 and added to your security test library, without your needing to conduct any actual web application attacks. Thanks, WireEdit!

Conclusion

WireEdit is beautifully documented, with a great Quickstart. Peruse the WireEdit website and FAQ, and watch the available videos. The next time you need to edit packets, you'll be well armed and ready to do so with WireEdit, and you won't be pulling your hair out trying to accomplish it quickly, effectively, and correctly. WireEdit made a huge leap from unknown to me to the top five on my favorite tools list. WireEdit is everything it is claimed to be. Outstanding.
Ping me via email or Twitter if you have questions: russ at holisticinfosec dot org or @holisticinfosec.

ACK

Thanks to Michael Sukhar for WireEdit and Packet Watcher for the great suggestion.

Tuesday, February 09, 2016

toolsmith #113: DFIR case management with FIR

#NousSommesUnis #ViveLaFrance

Bonjour! This month we'll explore Fast Incident Response, or FIR, from CERT Societe Generale, the team responsible for information security incident handling and response to cybercrime issues targeting Societe Generale. If you're developing a CERT or incident management team but haven't yet allocated budget for commercial case management tooling such as DFLabs Incman NG or CO3/Resilient (not endorsements), FIR is an immediate solution for your consideration. It's a nice, quick, easy-to-deploy fit for any DFIR team in my opinion. It's built on Django (also one of my favorite movies), the Python Web framework, and leverages virtualenv, a tool to create isolated Python environments.
From their own README: "FIR (Fast Incident Response) is a cybersecurity incident management platform designed with agility and speed in mind. It allows for easy creation, tracking, and reporting of cybersecurity incidents.
FIR is for anyone needing to track cybersecurity incidents (CSIRTs, CERTs, SOCs, etc.). It was tailored to suit our needs and our team's habits, but we put a great deal of effort into making it as generic as possible before releasing it so that other teams around the world may also use it and customize it as they see fit."
I had a quick chat with Gael Muller, who said the story about why they created and open-sourced FIR is on their blog, and that one year later, they do not regret their choice to do the extra work to make FIR generic and release it to the public. "It seems there are plenty of people using and loving it, and we received several contributions, so I guess this is a win/win situation."
FIR offers production and development environments; I tested the development version, running it from my trusty Ubuntu 14.04 LTS VM test instance.
Installation is easy, follow this abridged course of action as pulled from FIR's Setting up a development environment guidance:
  1. sudo apt-get update
  2. sudo apt-get install python-dev python-pip python-lxml git libxml2-dev libxslt1-dev libz-dev
  3. sudo pip install virtualenv
  4. virtualenv env-FIR
  5. source env-FIR/bin/activate
  6. git clone https://github.com/certsocietegenerale/FIR.git
  7. cd FIR
  8. pip install -r requirements.txt
  9. cp fir/config/installed_apps.txt.sample fir/config/installed_apps.txt (enables the Plugins)
  10. ./manage.py migrate
  11. ./manage.py loaddata incidents/fixtures/seed_data.json
  12. ./manage.py loaddata incidents/fixtures/dev_users.json
  13. ./manage.py runserver
If not in Paris (#jesuisParis), you'll want to change the timezone for your location of operation; the default is Europe/Paris. Make the change in FIR/fir/config/base.py. I converted to America/Los_Angeles, as seen in Figure 1.
Figure 1
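The edit itself is a one-line change to the standard Django TIME_ZONE setting (path per the install steps above):

# fir/config/base.py
TIME_ZONE = 'America/Los_Angeles'  # ships as 'Europe/Paris' by default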
Control-C, then re-run ./manage.py runserver after you update base.py.
As you begin to explore the FIR UI, you can log in as admin/admin or dev/dev; I worked from the admin account (change the password if exposed to any active networks). You'll likely want to make some changes to create a test bed that is more relevant to your workflows and business requirements. To do so, click Admin in the upper right-hand corner of the UI; it's a hyperlink to http://127.0.0.1:8000/admin/ as seen in Figure 2.

Figure 2
This is one incredibly flexible, highly configurable, user-friendly and intuitive application. You'll find that the demo configuration options are just that; take the time to tune them to what makes sense for your DFIR and security incident management processes. I created test workflows imagining this instance of FIR was dedicated to CERT activities for a consortium of hospitals; we'll call it Holistic Hospital Alliance. I first modified Business Lines to better align with such a workload. Figure 3 exhibits these options.

Figure 3: Business Lines
Given that we're imagining response in a medical business scenario, I updated Incident Categories to include IoT and Medical Devices, as seen in Figure 4. At times these are arguably one and the same, but imagine all the connected devices, now or in the future, in a hospital that may not be specifically medical devices.

Figure 4: Incident Categories
I also translated (well, I didn't, a search engine did) the French Bale Categories to English (glad to share), as seen in Figure 5.
Figure 5: Bale Categories
The initial Bale Categories are the only remaining feature specific to CERT Societe Generale. The categories provide correspondence between the incident categories they use every day and the categories mentioned in the Basel III regulation. As a CERT for a financial institution, they need to be able to report stats using these categories. According to Gael, most people do not use them or even know they exist, as they are only visible in the "Major Incidents" statistics view. Gael thinks it is better if people simply ignore them, as they are not very useful for most users.

Now to create a few cases and enjoy the resulting dashboard. I added four events, three of which were incidents: a Sev 3 malware incident (in FIR a Sev 4 is the highest severity), a Sev 4 stolen credit card data incident, a Sev 2 vulnerable ICU machine incident, and a Sev 1 vulnerability scanning event, as we see in Figure 6.

Figure 6: Dashboard

Numerous editing options await you, including the ability to define your plan of action and incident confidentiality levels, with granularity per unique incident handler (production version). And I'll bet about now you're saying, "But Russ! What about reporting?" Aye, that's what the Stats page offers: yearly, quarterly, major incidents, and annual comparisons, ready to go. Figure 7 tells the tale.

Figure 7: Stats
You will enjoy FIR, I promise; it's easy to use, well conceived, simple to implement, and as free DFIR case management systems go, you really can't ask for more. Give it a go for sure, and if so possessed, contribute to the FIR project. Vive la FIR et bien fait, CERT Societe Generale! Merci, Gael Muller.
Ping me via email or Twitter if you have questions: russ at holisticinfosec dot org or @holisticinfosec.

Cheers…until next month.