Tuesday, July 01, 2014

toolsmith: ThreadFix - You Found It, Now Fix It



 

Prerequisites
ThreadFix is self-contained and runs on Windows, Mac, and Linux systems
JEE-based; requires Java 7

Introduction
As an incident responder, penetration tester, and web application security assessor, I have long participated in vulnerability finding and reporting. What wasn’t always a big part of my job was seeing the issues remediated, particularly on the process side. Sure, some time later we’d retest the reported issue to ensure that it had been fixed properly, but none of the process in between was in scope for me. Now, as part of a threat intelligence and engineering team, I’ve been enabled to take a much more active role in remediation, often even providing direct solutions to discovered problems. I’m reminded of the London Underground (Tube) warning, an apt analogy for information security gap analysis (that space between find and fix): mind the gap!


But with new responsibilities come new challenges. How best to organize all those discovered issues and see them through to repaired nirvana? As is often the case, I keep an eye on some of my favorite tool sources, and NJ Ouchn’s Toolswatch came through as it often does. There I discovered ThreadFix, developed by Denim Group, a team I was already familiar with thanks to my work with ISSA. When I presented Incident Response in Increasingly Complex Environments to the ISSA Alamo Chapter in San Antonio, TX in 2011, I met Lee Carsten and Dan Cornell of Denim Group. They’ve enjoyed continued growth and success in the three years since, and ThreadFix is part of that success. After pinging Lee regarding ThreadFix for toolsmith, he turned me over to Dan, who has been project lead for ThreadFix from its inception and provided ample insight.

Dan indicated that while working with their clients, they saw two common scenarios – teams just getting started with their software security programs and teams trying to find a way to scale their programs – and that ThreadFix is geared toward helping both groups. They’d seen lots of teams that had just purchased a desktop scanning tool, run some scans, and left the results on a shared drive or in a SharePoint document repository. Dan pointed out that these results were just blobs of data, PDFs emailed around to development teams with no action taken. ThreadFix gives organizations in this situation an opportunity to start treating their scan results as managed data so they can lay out their application portfolio, track the results of scanning over time, and start looking at their software security programs in a much more quantitative manner. Per Dan, this lets them have much more "grown up" conversations with management about application and software risk. A natural byproduct of managed data is that conversations evolve from "Cross-site scripting is scary" to "We've only remediated 50% of the XSS vulnerabilities we've found, and on average it takes us 120 days, which is twice as slow as others in our industry." WHAT!? An informed conversation is more effective than a FUD conversation? Sweet! Dan described how more sophisticated organizations tracking this "mean time to fix" metric better manage their window of exposure, and noted that public data sets, such as those released by Veracode and WhiteHat Security, can provide a basis for benchmarking. Amen, brother. Mean time to remediate is one of my favorite metrics.

While working with bigger organizations, Dan and the Denim team saw teams getting bogged down trying to deal with different technologies across huge portfolios of applications. He cites the example of the Information Security group buying scanner X while the IT Audit group purchased scanning service Y and the QA team was starting to roll out static analysis engine Z. He summed this challenge up best with “The only thing worse than approaching a development team with a 300 page PDF report with a color graph on the front page is approaching them with two or three PDFs and expecting them to take action.” Everyone familiar with Excel hell? That’s where these teams and many like them languish, trying to track mountains of vulnerabilities and making no headway. Dan and Denim intended for ThreadFix to enable these teams to automatically normalize and consolidate the results of different scanning tools, even across dynamic (DAST) and static (SAST) application security testing technologies. This is achieved with Hybrid Analysis Mapping, developed under a contract with the US Department of Homeland Security (DHS). According to Dan, with better data management, security teams can focus on high value tasks such as working with development teams to actually implement training and remediation programs. Security teams can export data from ThreadFix to the defect tracking tools and IDEs developers are already using. This reduces friction in the remediation process and helps them fix more vulnerabilities, faster.
Great stuff from Dan. The drive to remediate has to be the primary goal. The industry has proven its ability to find vulnerabilities; the harder challenge, and the one on which I’m spending the vast majority of my focus, is the remediation work. Threat modeling, security development lifecycles, and secure coding best practices are a great start, but one way to take your program to the next level is tuning your vulnerability data management efforts with ThreadFix. There is a Community Edition, free under the Mozilla Public License (MPL) and our focus here, which includes a central dashboard, SAST and DAST scanner support, defect tracker integration, virtual patching via WAF/IDS/IPS options, trend analysis & reporting, and IDE integration.
If you seek an enterprise implementation, you can upgrade for LDAP & Active Directory integration, role-based user management, scan orchestration, enhanced compliance reporting, and technical support.

Preparing ThreadFix

First, I tested both the 2.0.1 stable version and the 2.1M1 development version and found the bleeding edge perfectly viable. ThreadFix includes a number of plugins, most importantly for our scenario, plugins for OWASP ZAP and Burp Suite Pro. There is also a plugin for Eclipse, though for defect tracking and IDE I’m a Microsoft TFS/Visual Studio guy (shocker!). Under Defect Tracking there is support for TFS, but I can’t wait until Dan and team implement a plugin for VS. :) Getting started is a download-and-run scenario: ThreadFix Community Edition ships as a self-contained .ZIP containing a Tomcat web & servlet engine along with an HSQL database. That said, most production installations of ThreadFix use a MySQL database for scalability; instructions are provided if you wish to do so. As ThreadFix uses Hibernate for data access, other database engines should also be supported.
Once you’ve downloaded ThreadFix, navigate to your installation directory and double-click threadfix.bat on a Windows host or run sh threadfix.sh on *nix systems. Once the server has started, navigate to https://localhost:8443/threadfix/ in a web browser and log in with the username user and the password password. Then immediately proceed to change the password, please.
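Condensed to commands, the *nix flavor looks roughly like the following. This is a sketch; the archive name and the ~/threadfix path are placeholders for your actual download and unzip location:
# unpack the self-contained Community Edition and start the bundled Tomcat
unzip ThreadFix.zip -d ~/threadfix
cd ~/threadfix
sh threadfix.sh
# once Tomcat is up, browse to https://localhost:8443/threadfix/,
# log in as user/password, and change that password immediately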
Click Applications on the ThreadFix menu and add a team, then an application you’ll be assessing and managing. My team is HolisticInfoSec and my application is Mutillidae as it has obvious flaws we can experiment with for remediation tracking.
After you download the appropriate plugins, unpack each (I did so as subdirectories in my ThreadFix path) and fire up the related tool. Big note here: Burp’s and ZAP’s default proxy ports conflict with ThreadFix’s API interface; you’ll have contention for port 8080 if you don’t configure Burp and ZAP to run on different ports. For Burp, click the Extender tab, choose Add, navigate to the Burp plugin path, and select threadfix-release-2.jar. You’ll then see a new ThreadFix tab in your Burp UI which includes Import Endpoints and Export Scan. You’ll need to generate API keys as follows: click the settings gear in the upper right of the menu bar and select API Keys as seen in Figure 1.

FIGURE 1: Create ThreadFix API keys for plugin use
Click Export Scan and paste in the API key you created as mentioned above. Similarly, in ZAP choose File, then Load Add-On File, and select threadfix-release-1.zap. After restarting ZAP you’ll see ThreadFix: Import Endpoints and ThreadFix: Export Scan under Tools.
You may find it just as easy to save scan results from Burp and ZAP in an .xml format and upload them via the ThreadFix UI. Go to Applications, then Expand All, select your Application, and click Upload Scan. You’ll benefit from immediate results, as seen from incomplete Burp and ZAP scans of Mutillidae in Figure 2.

FIGURE 2: Scan results uploaded into ThreadFix
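If you’d rather script uploads than click through the UI, ThreadFix’s RESTful interface accepts the same scan files using the API key generated earlier. Treat the following as a sketch: the endpoint paths reflect my reading of the ThreadFix wiki and may differ by version, the application ID (1) is hypothetical, and -k merely tolerates the default self-signed certificate:
# list teams to confirm the API key works and find your application ID
curl -k "https://localhost:8443/threadfix/rest/teams?apiKey=YOURKEY"
# upload a saved Burp or ZAP .xml scan to application ID 1
curl -k -F "file=@mutillidae-burp.xml" \
  "https://localhost:8443/threadfix/rest/applications/1/upload?apiKey=YOURKEY"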
The ThreadFix dashboard then updated to give me a status overview per Figure 3.

FIGURE 3: ThreadFix dashboard provides application vulnerability status
Drilling into your target via the Application menu will provide even more nuance and detail, with the ability to dig into each vulnerability as seen in Figure 4.

FIGURE 4: ThreadFix vulnerability details
In order to enable IDE support for the likes of Eclipse, you’ll need to take a few steps from here:
  • Have a Team/Application set up in ThreadFix
  • Have the source code for the Application linked in ThreadFix
  • Have a scan for the Application in ThreadFix
  • Have the Application’s scan linked to a Defect Tracker
Once you have it configured, you can select specific vulnerabilities under the Application view, click Action, and submit them directly to your preferred Defect Tracker. This is vital if you’re pushing repairs to the development team via the likes of Jira or TFS.
Additionally, if you’re interested in virtual patching, first create a WAF under Settings and WAFs, where you choose from Big-IP ASM (F5), DenyAll rWeb, Imperva SecureSphere, Snort, and mod_security; I selected mod_security and named it HolisticInfoSec. Click Applications again, drill into the application you’ve added scans for, then click Action and Edit/Delete. The Edit menu allows you to Set WAF; I selected HolisticInfoSec and clicked Add WAF. You can also simply add a new WAF here as well. Regardless, go back to Settings, then WAFs, then choose Rules. I selected HolisticInfoSec/Mutillidae and deny, then clicked Generate WAF Rules. The results, as seen in Figure 5, can then be imported directly into mod_security. Tidy!

FIGURE 5: ThreadFix generates mod_security rules
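Deploying the generated rules is then a copy-and-include exercise. A minimal sketch, assuming a Debian/Ubuntu-style Apache layout; the rules file name and paths are hypothetical, so adjust to your environment:
# stage the rules exported from ThreadFix (Figure 5)
sudo cp threadfix_mutillidae.conf /etc/modsecurity/
# reference them from your mod_security configuration, e.g.:
#   Include /etc/modsecurity/threadfix_mutillidae.conf
# then verify the configuration and reload Apache
sudo apachectl configtest && sudo apachectl graceful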
There are so many other useful features in ThreadFix too. Under Settings and Remote Providers you can configure ThreadFix to integrate with QualysGuard WAS, Veracode, and WhiteHat Sentinel. There are tons of reporting options, including trending, snapshots (point in time), scan comparisons (Burp versus ZAP for this scenario), and vulnerability searching. Try the scan comparisons; you’ll often be surprised, amused, and angry all at the same time. Meanwhile, trending is vital for tracking mitigation performance over time and quite valuable for denoting improvement or decline.

In Conclusion

Make use of the ThreadFix wiki to fill you in on the plethora of detail I didn’t cover here, and really, really consider standing up an instance if you’re at all involved in application security discovery and repair. This tool is one I am absolutely making use of in more than one venue; you should do the same. You’re probably used to me saying it every few months, but I’m really excited about ThreadFix as it is immediately useful in my everyday role. You will likely find it equally useful in your organization as you push harder for find and fix versus find and…
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Dan Cornell, CTO, Denim Group, ThreadFix project lead
Lee Carsten, Senior Manager, Business Development, Denim Group

Wednesday, June 04, 2014

toolsmith: Testing and Research with BlackArch Linux


Introduction
It’s the 24th of May as I write this, just two days prior to Memorial Day. I am reminded, as Wallace Bruce states in his poem of the same name, that “who kept the faith and fought the fight; the glory theirs, the duty ours.” I also write this on the heels of the Department of Justice’s indictment of five members of the Chinese People’s Liberation Army, charging them with hacking and cyber theft. While I will not for a moment draw any discussion of cyber conflict together with Memorial Day, I will say that it is our obligation and duty as network defenders to understand offensive tactics to better prepare ourselves for continued digital conflicts. To that end we’ll focus on BlackArch Linux, “a lightweight expansion to Arch Linux for penetration testers and security researchers.” I was not familiar with Arch Linux prior to discovering BlackArch but found myself immediately intrigued by its declared goals of being lightweight, flexible, simple, and minimalist; worthy goals all. Add a powerful set of information security-related tools, as seen in BlackArch Linux, and you’ve got a top-notch distribution for your tool kit. Any toolsmith reader has likely heard of BackTrack, now Kali, and for good reason, as it set the standard for pentesting distributions, but it’s also refreshing to see other strong contenders emerge. BlackArch is distributed as an unofficial Arch Linux user repository, so you can install it on top of an existing Arch Linux installation, where packages may be installed individually or by category. There is also a live ISO, which I utilized to create a BlackArch virtual machine. Arch Linux, while independently developed, is very UNIX-like and draws inspiration from the likes of Slackware and BSD.
According to Evan Teitelman, the founder and one of the primary developers, BlackArch started out as ArchTrack. ArchTrack was a small collection of PKGBUILD files, mostly collected from the Arch User Repository (AUR), for his own personal use. PKGBUILDs are Arch Linux package build description files (shell scripts) used when creating packages. At some point, Evan created a few metapackages and uploaded them to the AUR; these metapackages allowed people to install packages by category with AUR helpers. He also created an unofficial user repository, but only a few people used it. About six months after ArchTrack began, Evan merged it with a smaller project called BlackArch, which consisted of about 40 PKGBUILD files at the time, while ArchTrack had about 160. The team ultimately decided to use the BlackArch name as it was more favorable and also came with a website and a Twitter handle. The team abandoned the AUR metapackages and put their focus on the unofficial user repository. Over time they picked up a few more contributors, and the original BlackArch contributor left the project to focus elsewhere. Around the same time, noptrix joined the group; he redesigned the website, created the live ISO, and brought in many new packages. Elken and nrz also joined the team and are currently two of the most active members. There are currently about 1200 packages in the BlackArch repository. The team’s goal is to provide as many packages as possible; they see no reason to limit the size of the repository but are considering trimming down the ISO.
If you would like to contribute or report a bug, contact the BlackArch team or send a pull request via GitHub. Evan describes the team as one with little structure and no formal leader or rank; it’s just a group of friends working together who welcome you to join them.

Quick configuration pointers

When booting the ISO in VMware I found making a few tweaks essential. The default display size is 800x600 and can be changed to 1440x900, or your preferred resolution, with the following:
xrandr --output Virtual1 --mode 1440x900
BlackArch configures the network interface via DHCP; if you wish to assign a static address, right-click on the desktop, choose network, then wicd-gtk.
System updates and package installations are handled via pacman. To sync repositories and upgrade out-of-date packages, use pacman -Syyu. To install individual packages, use pacman -S <package name>.
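Category installs work the same way. A quick sketch; the blackarch-forensic group name follows BlackArch’s category naming scheme, so verify what your mirror actually offers:
# list the BlackArch category groups available from the repository
pacman -Sg | grep blackarch
# install an entire category at once, e.g. the forensic tools
pacman -S blackarch-forensic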

Using BlackArch Linux

BlackArch exemplifies ease of use, as intended. Right-click anywhere on the desktop and the menu is immediately presented. Under terminals I prefer the green xterm, as I am in fact writing this from the Nebuchadnezzar while flying through the tunnels under the megacities that existed before the Man–Machine war. :) “You take the blue pill – the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill – you stay in Wonderland, and I show you how deep the rabbit hole goes.” Sorry, unavoidable Matrix digression. Anyway, you’ve got Firefox and Opera under browsers, and we’ve already discussed using network to define settings. It’s under the blackarch menu that the magic begins on your journey down the rabbit hole, as seen in Figure 1.

FIGURE 1: Down the rabbit hole with BlackArch
Pick your poison; what are you in the mood for? The options are clearly many. I was surprised to see Gremwell’s MagicTree under the threat modeling menu, having just discussed threat modeling last month. While not quite classic threat modeling, MagicTree allows penetration testers to organize and query nmap and Nessus data, list all findings by severity (prioritize for ordered mitigation), and generate reports. This activity most assuredly supports both good threat models and penetration testing reporting, the bane of the pentester’s existence. I was even more amused, given our emerging theme for this month, to note that MagicTree includes a Matrix view.
Malware analysts will enjoy an entire section dedicated to their cause under the malware menu, including cuckoo and malwaredetect (which checks VirusTotal results from the command line), as seen in Figure 2. I downloaded a Blackhole payload (a Zbot password stealer) from my malware repository and ran malwaredetect updateflashplayer.exe.

FIGURE 2:  malwaredetect identifies malware
The forensic options are vast and include your regular odds-on favorites such as Maltego and Volatility, as well as hash computation tools such as hashdeep, md5deep, tigerdeep, whirlpooldeep, etc. Tools for the EnCase EWF format are included, such as ewfacquire, ewfdebug, ewfexport, ewfinfo, and others. Snort fans will enjoy the inclusion of u2spewfoo, which I mention purely for the pleasure of the crisp consonance of the tool name. For forensicators investigating Windows systems with Access databases, you can utilize the MDB Tools kit included in BlackArch. To acquire the schema execute mdb-schema access.mdb, to determine the Access version run mdb-ver access.mdb, to dump table names try mdb-tables access.mdb, and if you wish to export a table to CSV use mdb-export access.mdb table > table.txt, all as seen in Figure 3.

FIGURE 3: Carving up Access DBs with MDB Tools
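To dump an entire database rather than one table at a time, a short loop over mdb-tables does the trick. A minimal sketch, with access.mdb again standing in for your evidence file:
# -1 prints one table name per line, which makes iteration safe
for t in $(mdb-tables -1 access.mdb); do
  mdb-export access.mdb "$t" > "$t.csv"
done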
While threat modeling, malware analysis, and Access forensics may be interesting to some or many of you, most anyone interested in BlackArch Linux is probably most interested in the pwn. “Show us some exploit tools already!” Gotcha, will do. In addition to the Metasploit Framework you’ll find Inguma, the KillerBee ZigBee tools, and shellnoob, a shellcode writing toolkit, as well as a plethora of other options.
Under the cracker menu you’ll find the likes of mysql_login, useful for bruteforcing MySQL connections. As seen in Figure 4, the syntax is simple enough. I tested against one of my servers with mysql_login host=192.168.43.147 user=root password=password, which of course failed. You can utilize dictionary lists for usernames and passwords, and define parameters to ignore messages as well.

FIGURE 4: Bruteforcing MySQL connections
In fact, BlackArch includes the whole patator toolkit, the multi-purpose brute-forcer with a modular design and flexible usage, offering login brute-forcers for MS-SQL, Oracle, and Postgres, as well as other non-database options, as seen in Figure 5.
  
FIGURE 5: Patator
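A dictionary-driven run against the same target looks something like the following sketch, using patator’s FILE keyword substitution; users.txt and passwords.txt are hypothetical wordlists:
# FILE0/FILE1 substitute each line of the numbered wordlists;
# the -x rule suppresses the standard failed-login response
mysql_login host=192.168.43.147 user=FILE0 password=FILE1 \
  0=users.txt 1=passwords.txt \
  -x ignore:mesg='Access denied for user'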
For your next penetration testing engagement you definitely want BlackArch Linux in your toolbag. For that matter, incident response and forensics personnel should carry it as well, as it’s useful across the whole spectrum.

In Conclusion

This is one of those “too many tools, not enough time” scenarios. You can and should spend hours leveraging BlackArch across any one of your preferred information security disciplines. Jump in and help the project out if so inclined and keep an eye on the website and Twitter feed for updates and information.
Ping me via email (russ at holisticinfosec dot org) if you have questions or topic suggestions, or hit me on Twitter @holisticinfosec.
Cheers…until next month.

Thursday, May 01, 2014

toolsmith: Microsoft Threat Modeling Tool 2014 - Identify & Mitigate




Prerequisites/dependencies
Windows operating system

Introduction
I’ve long been deeply invested in the performance of threat modeling, with particular attention to doing so in operational environments rather than limiting the practice simply to software. I wrote the IT Infrastructure Threat Modeling Guide for Microsoft in 2009 with the hope of stimulating this activity. In recent months two events have taken place that contribute significantly to the threat modeling community. In February Adam Shostack published his book, Threat Modeling: Designing for Security, and I can say, without hesitation, that it is a gem. I was privileged to serve as the technical proofreader for this book and found it directly applicable to threat modeling across the full spectrum of target opportunities. I strongly recommend you add this book to your library as it is, in and of itself, a tool for threat modelers and those who wish to reduce risk, apply mitigations, and improve security posture. This was followed in mid-April by the release of the Microsoft Threat Modeling Tool 2014. The tool had become a bit stale, and the 2014 release is a refreshing update that includes a number of feature improvements we’ll discuss shortly. We’ll also use the tool to conduct a threat model that envisions the ISSA Journal’s focus for the month of May: Healthcare Threats and Controls.
First, I sought out Adam to provide us with insight regarding his perspective on operational threat modeling. As expected, he indicated that whether you're a system administrator, system architect, site reliability engineer, or IT professional, threat modeling is important and applicable to your job. Adam often asks four related questions:
1) What are you building?
He describes that building an operational system is more likely to mean building additional components on top of an existing system, and that it's therefore important to model both what you have and how it's changing.
2) What can go wrong?
Adam reminds us that you can use any of the threat enumeration techniques, but that STRIDE, in particular, relates closely to the “CIA” set of properties that are desirable for an operational system. I’ll add the OWASP Risk Rating Methodology to the tool’s KB for good measure, given its direct integration of CIA.
3) What are you going to do about it?
Several frameworks can be used here, such as prevent, detect, and respond, as well as available technologies.
4) Did you do a good job at 1-3?
Adam points out that assurance activities (which can include compliance) can help you. More importantly, you can also use approaches such as penetration testing and red teaming to help you determine if you did a good job. I am a strong proponent of this approach. My team at Microsoft includes both threat engineers for threat modeling and assessment as well as penetration testers for discovery and validation of mitigations.
To supplement the commitment to operational threat modeling, I asked Steve Lipner, one of the founding fathers of Microsoft’s Security Development Lifecycle and the Security Response Center (MSRC), for his perspective, which he eloquently provided as follows:
“While threat modeling originated as an approach to evaluating the security of software components, we have found the techniques of security threat modeling to have wide applicability.  Like software components, operational services are targets of attack and can exhibit vulnerabilities.  Threat modeling and STRIDE have proven to be effective for identifying and mitigating vulnerabilities in operational services as well as software products and components.”
With clear alignment around the premise of operational threat modeling let’s take a look at what it means to apply it. 

Identifying Threats and Mitigations with TMT 2014

Emil Karafezov, who is responsible for the Threat Modeling component of the Security Development Lifecycle (SDL) at Microsoft, wrote a useful introduction to the Microsoft Threat Modeling Tool 2014 (TMT). Emil let me know that there are additional details and pointers in the Getting Started Guide and the User Guide, which are part of the Threat Modeling Tool 2014 Principles SDK. You should definitely read the introduction as well as the guides before proceeding here; I will not revisit basic usage of the TMT tool or how to threat model (read the book), and will instead focus more deeply on some key new capabilities. I will do so in the context of a threat model for the operational environment of a fictional medical services company called MEDSRV.
Figure 1 includes a view of the MEDSRV operational environment for its web application and databases implementation.

FIGURE 1: A MEDSRV threat model with TMT 2014
Emil offered some additional pointers not shared in his blog post that we’ll explore further with the MEDSRV threat model specific to data extraction and search capabilities.

Data extraction:
From a workflow perspective, the ability to extract information from the tool for record keeping or bug filing is quite useful. The previous version of the TMT included Product Studio and Visual Studio plugins for bug filing, but Emil describes them as rather rigid templates that were problematic for users syncing with their server. With TMT 2014 there is a simple right-click Copy Threats for each entry that can be pasted into any text editor or bug tracking system. For bulk threat entry manipulation there is another feature, Copy Custom Threat Table, which lets you dump results conveniently into Excel, which in turn can be imported into workflow management systems via automation. When in Analysis View with focus set in the Threat Information list, use the familiar Ctrl+A shortcut to select all threat entries; with right-click you can edit the constants in the Custom Threat Table as seen in Figure 2.

FIGURE 2: TMT 2014’s Copy Custom Threat Table feature
Search for Threat Information:
Emil also pointed out that TMT 2014’s Search for Threat Information area, while seemingly a standard option, is new and worth mentioning. This feature is really important if you have a massive threat model with a plethora of threats; the threat list filter is not always the most efficient way to narrow your criteria. I have found this to be absolutely true during threat modeling sessions for online services at Microsoft, where a large model may include hundreds or thousands of threats. To find threats containing keywords specific to a particular implementation of your mitigations, for example, Search is the way to go. You might be focusing on data store accessibility as seen in Figure 3.

FIGURE 3: Search for threat information
I also asked Ralph Hood, Microsoft Trustworthy Computing’s Group Program Manager for Secure Development Policies & Tools (the group that oversees the TMT), what stood out for him with this version of the tool. He offered two items in particular:
1) Migration capability for models from the old version of the tool
2) The ability to customize threats
Ralph indicated that the TMT tool has not historically supported any kind of migration to newer versions; the ability to migrate models from earlier versions to the 4.1 version is therefore a powerful feature for users who have already conducted numerous threat models with older versions. Threat models should always be considered dynamic (never static), as systems always change and you’ll likely update a model at a later date.
The ability to customize threats is also very important, particularly in the operations space. The ability to change the threat elements and information (mitigation suggestions, threat categories, etc.) for specific environments is of significant importance. Ralph points out as an example that if a specific service or product owner knows that certain threats are assessed differently because of specific characteristics of the service or platform, they can change the related threat information. Threat modelers can do so using a Knowledge Base (KB) created for all related models, so any user going forward can utilize the modified KB rather than having to change threat attributes for each threat manually. According to Ralph, this is important functionality in the operations space, where certain service dependencies and platform benefits and/or downfalls may consistently alter threat information. He’s absolutely right, so I’ll take the opportunity to tweak the imaginary MEDSRV KB here for your consideration using Appendix II of the User Guide (read it). The KB is installed by default in C:\Program Files (x86)\Microsoft Threat Modeling Tool 2014\KnowledgeBase. Do not tweak the original; create a copy and modify that. I called my copy KnowledgeBaseMEDSRV and saved it in C:\tmp. I focused exclusively on ThreatCategories.xml and ThreatTypes.xml, using the OWASP Risk Rating Methodology to add Technical Impact Factors to both. Direct from the OWASP site, “technical impact can be broken down into factors aligned with the traditional security areas of concern: confidentiality, integrity, availability, and accountability. The goal is to estimate the magnitude of the impact on the system if the vulnerability were to be exploited.” The factors and their scores follow, with a quick worked example after the list.
• Loss of confidentiality
  o How much data could be disclosed and how sensitive is it? Minimal non-sensitive data disclosed (2), minimal critical data disclosed (6), extensive non-sensitive data disclosed (6), extensive critical data disclosed (7), all data disclosed (9)
• Loss of integrity
  o How much data could be corrupted and how damaged is it? Minimal slightly corrupt data (1), minimal seriously corrupt data (3), extensive slightly corrupt data (5), extensive seriously corrupt data (7), all data totally corrupt (9)
• Loss of availability
  o How much service could be lost and how vital is it? Minimal secondary services interrupted (1), minimal primary services interrupted (5), extensive secondary services interrupted (5), extensive primary services interrupted (7), all services completely lost (9)
• Loss of accountability
  o Are the threat agents' actions traceable to an individual? Fully traceable (1), possibly traceable (7), completely anonymous (9)
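As a quick worked example (my numbers, purely illustrative): OWASP averages the factor scores, so a flaw rated extensive critical data disclosed (7), extensive seriously corrupt data (7), extensive primary services interrupted (7), and possibly traceable (7) yields (7 + 7 + 7 + 7) / 4 = 7, a HIGH technical impact on OWASP’s 0-9 scale.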
Note: I renamed the original KnowledgeBase to KnowledgeBase.bak, then copied KnowledgeBaseMEDSRV back to the original destination directory and renamed it KnowledgeBase. This prevents corruption of your original files and eliminates the need to reinstall TMT. If you’d like my changes to ThreatCategories.xml and ThreatTypes.xml, hit me up over email or Twitter and I’ll send them to you. That said, following are snippets (Figures 4 & 5) of the changes I made.

FIGURE 4: Additions to ThreatCategories.xml
FIGURE 5: Additions to ThreatTypes.xml
Take notice of a few key elements in the modified XML. I set the new entry’s Id to OTI1 for OWASP Technical Impact, and its short identifier to O for OWASP. :) Remember that each subsequent Id needs to be unique. I declared source is 'GE.P' and (target is 'GE.P' or target is 'GE.DS') and flow crosses 'GE.TB' because GE.P defines a generic process, GE.DS defines a generic data store, and GE.TB defines a generic trust boundary. Therefore, per my modification, data subject to Technical Impact Factors flows across trust boundaries between processes and data stores. Make sense? I used the resulting TMT KB update to provide a threat model of zones defined for MEDSRV as seen in Figure 6.

FIGURE 6: A threat model of MEDSRV zones using Technical Impact Factors
I’m hopeful these slightly more in-depth investigations of TMT 2014 features entice you to utilize the tool and to engage in the practice of threat modeling. No time like the present to get started.

In Conclusion

We’ve learned enough here to conclude that you have two immediate actions. First, purchase Threat Modeling: Designing for Security and begin to read it. Follow this by downloading the Microsoft Threat Modeling Tool 2014 and practice threat modeling scenarios with the tool while you read the book. Conducting these in concert will familiarize you with both the practice of threat modeling and the use of TMT 2014.
Remember that July’s ISSA Journal will be entirely focused on the Practical Use of InfoSec Tools. Send articles or abstracts to editor at issa dot org.
Ping me via email (russ at holisticinfosec dot org) if you have questions or topic suggestions, or hit me on Twitter @holisticinfosec.
Cheers…until next month.

Acknowledgements
Microsoft’s:
Adam Shostack, author, Threat Modeling: Designing for Security & Principal Security PM, TwC Secure Ops
Emil Karafezov, Security PM II, TwC Secure Development Tools and Policies
Ralph Hood, Principal Security GPM, TwC Secure Development Tools and Policies
Steve Lipner, Partner Director, TwC Software Security