Last updated at Fri, 23 Mar 2018 13:39:43 GMT

This is a guest post from a long-time Metasploit contributor and community member. Over the next few months, Rapid7 will be publishing a series of guest posts featuring unique perspectives on Metasploit Framework and highlighting some of our community’s favorite functionality, hidden gems, and backstories.

Want to contribute an idea or a post? Reach out to community[at]

Red team exercises have been around in military contexts for a long time: the idea is to train as one fights, making use of every available resource to breach an adversary’s defenses or achieve another objective (e.g., obtain the colors of the defending unit). While this practice means both units gain invaluable training and experience, it is not without risk: a certain level of improvisation and chaos is inherent to the process, meaning actual enemy agents have an opportunity to take advantage of the confusion. Aided by the troops emulating attackers, they slip into the defenders' cities and drop the bridge, set fire to things, and so on. Worse, they may disperse into the city, blending with the population until they expose themselves by hostile action, while the training session is lauded as a success by people watching the show.

People whose national or physical security depends on the integrity of their day-to-day operations maintain OPSEC, plan their exits, keep rear guard and overwatch, and generally think through the implications of the unorthodox nature of their actions because failure to do so produces literally terminal outcomes. The first lackadaisical move is a critical failure due to the scope of potential exposure.

Pen testers can take a page out of the books written by these folks: the use of cleartext shells and unencrypted payloads (especially over WAN connections) in the modern era of cyber espionage is as irresponsible as playing football without a helmet. How can we use the machinery inside Metasploit to protect our operations while we continue to pen test and improve the security posture of the systems we assess? The groundwork is all there, and coupled with rational practices, it allows us to tighten our OPSEC and reduce collateral damage during operations.

The Metasploit Framework implements both transport security in-session and payload-level protections in a modular, layered manner, allowing red teams to ensure basic OPSEC and to extend it according to their use cases. While Metasploit modules and handlers expose the derived implementations of these functions, the magic behind the modules lives in /lib.

Simple command shells are generally some form of loop: read attacker input from a source (say, a file descriptor), execute it, and send the result to a destination (the same FD, or an arbitrary IO construct). Achieving a reverse shell session involves communicating with the remote host to deliver the exploit and affect the target process, deliver the payload itself, and establish one or more communication channels to handle the IO. The exploit module implements this by creating a Rex::Socket::Tcp connection to the destination host (the first point where subversion may occur) and sending the exploit and the payload. The handler module catching the reverse shell starts a Rex::Socket::TcpServer, accepts the inbound TCP connection from the payload (the second point of subversion), and creates a session object handling the MSF side of the IO loop, which continues to read from and write to the socket (the third point). In the case of staged payloads, the session handler sends a stage back to the target system for execution and establishment of the final session: Meterpreter/Mettle (the third/fourth targets).
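The read-execute-write loop described above can be sketched in a few lines of plain Ruby. This is an illustrative toy, not Metasploit's implementation: a UNIX socket pair stands in for the TCP connection, one end plays the payload and the other the handler's session object.

```ruby
require 'socket'
require 'open3'

# Minimal sketch of a command-shell session loop (illustrative only): one end
# plays the "payload", reading commands from a file descriptor, executing
# them, and writing results back; the other end plays the handler. A UNIX
# socket pair stands in for the reverse TCP connection.
handler_io, payload_io = UNIXSocket.pair

# "Payload" side: the classic loop -- read a line, execute it, return output.
shell = Thread.new do
  while (cmd = payload_io.gets)
    cmd = cmd.chomp
    break if cmd == 'exit'
    out, _status = Open3.capture2e(cmd)   # execute, capture stdout+stderr
    payload_io.write(out)
  end
  payload_io.close
end

# "Handler" side: send a command and read the result, as a session would.
handler_io.puts('echo pwned')
result = handler_io.gets
handler_io.puts('exit')
shell.join
puts result   # => "pwned\n"
```

Everything here crosses the socket in cleartext, which is exactly the property the following paragraphs attack.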

What if an attacker has a MITM position on the wire between the red team and the target host, or achieves one during the session? They can modify the initial payload sent to the target system, inserting their own IP address instead of the practitioner’s to hijack the shell. They can intercept the command session and modify its contents (waiting for the red team to privesc before taking control), and they can intercept the stage, decode it (if encoded), and insert their own code, which likewise hijacks control or brings "other guests" to the party. What if the red team test is on a critical facility? Imagine bad guys with wire control somewhere in the datapath, and the red team handing them root past the inner cordon... The concern around unauthenticated bind handlers is obvious in this context, so a simple mention of it should suffice.
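The callback-hijack case is trivially cheap for the attacker. The sketch below (with made-up addresses and a fake stager blob) shows why: a cleartext payload carries its callback address at a predictable location, so wire control plus a single substitution redirects the session.

```ruby
# Illustrative sketch: why cleartext payloads invite hijacking. The "stager"
# and both addresses are invented for the example; real stagers embed the
# LHOST/LPORT as packed bytes at known offsets, which is just as matchable.
stager = "\x90\x90CONNECT 203.0.113.10:4444\x90\x90".b   # practitioner's LHOST

# A MITM with wire control needs only a pattern match and a substitution:
hijacked = stager.sub('203.0.113.10:4444', '198.51.100.9:4444')

puts hijacked.include?('198.51.100.9')   # => true; the shell now calls the attacker
```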

The wire-level MITM concerns can be allayed by the use of encryption in transit: focusing exploits against HTTPS services, hitting SSH instead of telnet when we have credentials, using TLS for session communication (in the command shell case), and replacing Rex::Socket::Tcp/TcpServer with their TcpSsl/TcpSslServer counterparts. In fact, that's exactly what we did a while back by implementing lib/msf/core/handler/reverse_tcp_ssl on the server side, along with a bunch of shell payloads using language-runtime TLS interfaces, OpenSSL binaries, socat, 'telnet -z', and other weird obscurities—even stages for Python and PHP Meterpreters that establish the initial socket over TLS. Adding certificate validation where it’s not present is up to the user, and hopefully also those who push their code back upstream for refinement and use by the community. Operators seeking different wire-level or encryption properties can look to mihi's AES socket wrappers in /lib/msf/core/socket (which could also be sensibly moved to Rex::Socket) for a taste of how custom ciphers can be arbitrarily applied, avoiding all the wire signatures of a common DH/EC handshake. This part was actually done to accommodate the absorption of Michael's excellent JavaPayload code into Java Meterpreter, but it’s a great working reminder to all that anything that speaks both ends of the transport semantics flies. [Insert children's book reference about the power of imagination here, and go code something.]
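To make the TcpSsl idea and the "add certificate validation yourself" point concrete, here is a plain-Ruby loopback sketch (not Rex, not Metasploit code) of wrapping the same byte stream in TLS, with the payload side pinning the handler's certificate so a MITM can't simply terminate the handshake itself. The certificate is generated in-process purely for the demo.

```ruby
require 'socket'
require 'openssl'

# Demo-only self-signed certificate for the "handler".
key  = OpenSSL::PKey::RSA.new(2048)
cert = OpenSSL::X509::Certificate.new
cert.version = 2
cert.serial  = 1
cert.subject = cert.issuer = OpenSSL::X509::Name.parse('/CN=handler.example')
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 3600
cert.sign(key, OpenSSL::Digest.new('SHA256'))

server_ctx = OpenSSL::SSL::SSLContext.new
server_ctx.cert = cert
server_ctx.key  = key

client_ctx = OpenSSL::SSL::SSLContext.new
# Pin the handler's certificate and actually verify it -- the step the post
# notes is often left to the user in TLS shell payloads.
store = OpenSSL::X509::Store.new
store.add_cert(cert)
client_ctx.cert_store  = store
client_ctx.verify_mode = OpenSSL::SSL::VERIFY_PEER

tcp_server = TCPServer.new('127.0.0.1', 0)
port = tcp_server.addr[1]

srv = Thread.new do
  ssl = OpenSSL::SSL::SSLSocket.new(tcp_server.accept, server_ctx)
  ssl.accept
  ssl.puts('id')   # handler sends a command over the encrypted channel
  ssl.close
end

ssl_client = OpenSSL::SSL::SSLSocket.new(TCPSocket.new('127.0.0.1', port), client_ctx)
ssl_client.hostname = 'handler.example'   # set SNI
ssl_client.connect                        # raises if the pinned cert check fails
cmd = ssl_client.gets
srv.join
puts cmd   # => "id\n"
```

The session semantics are unchanged; only the transport wrapping differs, which is why swapping Tcp for TcpSsl is so cheap inside the framework.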

In the case of staged reverse_tcp payloads, we are transmitting significant amounts of binary content to be injected into memory/executed on target over cleartext. If sent unencoded, the content of these stages is very easy to pick up via pattern matching on the wire, and it enables others to perform arbitrary substitutions at known offsets to inject their code into the code we're injecting into the client's code. Encoders and custom binaries help, but they're delay tactics that are overcome by automated analysis tools. Over the years, mihi and max3raza have both stepped up with Windows assembly blocks for x86 and x64, respectively, which we wired into Metasm to produce dynamic stages capable of decrypting subsequent stages prior to use. These code blocks live in /lib/msf/core/payload/windows and are interpolated into the assembly strings, which are fed into Metasm to build payloads for delivery on target. Attackers intercepting both stage0 and stage1 will get access to the key material required to decrypt and perform their modifications, but separate delivery can prevent this entirely, eliminating that leg of the msf <-> target interaction as a viable attack domain for the real adversary. Added security can be implemented in a number of ways: using context data to decrypt (no key transmitted); obfuscating the keys and decryption stubs in the Metasm composition/build phases; extending key length to deter brute-force, or implementing entirely different ciphers capable of ensuring secrecy for a while to come (if the contents of stage1 contain things we don’t want to leak even in hindsight, especially for some AV engine to send overseas for analysis).
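The encrypted-stage idea above can be reduced to a toy model. The sketch below uses a throwaway XOR transform purely to illustrate the flow (the real dynamic stagers use proper cipher stubs emitted as assembly via Metasm); the point is that stage1 crosses the wire opaque, and the key can travel out-of-band or be derived from context rather than riding along in stage0.

```ruby
# Toy model of the encrypted-stage flow. XOR stands in for the real
# decryption stubs; the stage bytes and key are placeholders.
def xor_bytes(data, key)
  data.bytes.each_with_index.map { |b, i| b ^ key.bytes[i % key.bytesize] }.pack('C*')
end

stage1 = 'meterpreter-stage-bytes'   # placeholder for the real stage
key    = 'context-derived-key'       # e.g. derived from target-side data,
                                     # so no key material crosses the wire

wire_stage1 = xor_bytes(stage1, key) # what an on-path observer sees
raise 'stage leaked in clear' if wire_stage1.include?('meterpreter')

# The decryption stub on target reverses the transform before execution:
recovered = xor_bytes(wire_stage1, key)
puts recovered == stage1   # => true
```

An attacker who captures only the wire traffic sees `wire_stage1`; pattern matching and known-offset substitution both fail unless they also obtain the key, which is the leg the paragraph above proposes removing from the attack surface.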

Meterpreter TLVs can theoretically be sent over any number of synchronous or asynchronous channels, which may themselves provide a cryptographic secrecy layer. We used to carry OpenSSL with us on target, but the weight the library added to Meterpreter was deemed too high an overhead (along with some technical debt it brought that we're still digging through), so TheColonial implemented layer 7 encryption in the TLV structure itself. Other Meterpreter implementations are adopting the protocol change as well, moving us closer to being able to rely on a known baseline of content security in our own layer 7 implementation, and offering safer operation over inherently unencrypted transports.
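The value of encrypting at the TLV layer is that the packet protects itself regardless of transport. The sketch below shows the shape of the idea; the packet layout and cipher choice here are illustrative, not Meterpreter's actual wire format.

```ruby
require 'openssl'

# Illustrative TLV packing: 4-byte length | 4-byte type | value.
# Not Meterpreter's real wire format.
def build_tlv(type, value)
  [value.bytesize + 8, type].pack('NN') + value
end

key = OpenSSL::Random.random_bytes(32)   # session key, however negotiated

cipher = OpenSSL::Cipher.new('aes-256-cbc').encrypt
cipher.key = key
iv = cipher.random_iv
plaintext_tlv = build_tlv(1, 'core_machine_id')
# What crosses the wire: IV plus ciphertext -- opaque even over cleartext TCP.
encrypted = iv + cipher.update(plaintext_tlv) + cipher.final

# Receiver side: strip the IV and recover the TLV.
decipher = OpenSSL::Cipher.new('aes-256-cbc').decrypt
decipher.key = key
decipher.iv  = encrypted[0, 16]
recovered = decipher.update(encrypted[16..]) + decipher.final

length, type = recovered.unpack('NN')
puts [length, type, recovered[8..]].inspect   # => [23, 1, "core_machine_id"]
```

With this in place, the transport below (TCP, HTTP, DNS, whatever) only ever carries opaque blobs, which is what makes the unencrypted-transport cases survivable.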

Pen testers, especially those in critical or compliance-beholden environments, should keep tabs on which payload type is actually providing them with secrecy, and which isn’t: there may be grave repercussions associated with transmitting an engagement flag bearing real informational value over unencrypted comms. It would be great if the TLVCrypt implementation could be extended to take external key inputs, thus avoiding vectors around negotiation and enforcing use of encryption. Work on this is ongoing.

Finally, the best way to remain untouched is to go unseen. Protocol encoders that permit us to tunnel communications over layer 7 are useful to avoid defensive measures that watch for abnormalities or signatures. Automated malicious actor tools are doing the same: sniffing traffic, looking for something to mess with. By increasing the protocols supported by asynchronous handlers such as reverse_http/s and the work being done for reverse_dns, we can avoid detection in the first place and protect our content from abuse at layer 7 with TLVCrypt. Defense-in-depth applies to red team just the same.

Contributors, coders, practitioners, curious minds who want to know how things work: go take a look into /lib/msf/core/handler and core/payload, and learn how payloads, transports, and the session context are composed. Please extend the work already there. The more of this stuff that makes it upstream, the harder it is for everyone (defenders and real adversaries alike) to detect and attack our operations. Stay safe, keep pwning, keep coding, and please check in for our next installment, "The compiler of my enemy is my friend," regarding some little-known PowerShell code the Metasploit community snuck into /lib a while back for advanced autonomous post exploitation.