Researchers uncover Python package targeting crypto wallets with malicious code
Welcome to the latest edition of Chainmail: Software Supply Chain Security News, which brings you the latest software security headlines from around the world, curated by the team at ReversingLabs.
This week: Researchers uncover CryptoAITools, a PyPI package targeting crypto wallets with malware. Also: Researchers circumvent Microsoft Azure AI Content Safety.
This Week’s Top Story
Researchers uncover Python package targeting crypto wallets with malicious code
Back in late September, ReversingLabs threat researchers discovered a new malicious package on the Python Package Index (PyPI), CryptoAITools, which was disguised as “a Python toolkit to create and manage crypto trading bots” but actually embedded downloader functionality that ran once deployed on victims’ systems. This week, Checkmarx published an in-depth analysis of that malware, finding that the package steals sensitive data from victims on both Windows and macOS and drains assets from their crypto wallets, as reported by The Hacker News.
Checkmarx researchers found that the malware authors went to great lengths to disguise their malware, presenting “a deceptive graphical user interface (GUI)” to distract victims while the malware is downloaded to target systems. The attackers also designed the code so that, upon initial installation, the package's "__init__.py" file detects which operating system the target is running. Once the OS is identified, the malware executes its malicious functions in a multi-stage infection process. The payloads are downloaded from a fake website disguised as a legitimate cryptocurrency trading bot service, making it easier for the attackers not only to evade detection but also to modify the malware's capabilities.
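As a rough illustration of the pattern described above (this is not the actual CryptoAITools code; the function and stage names are hypothetical), an import-time OS check in a package's "__init__.py" can be as simple as:

```python
# Illustrative sketch only: how code run at package import time can branch
# on the victim's operating system. Names below are hypothetical.
import platform

def _select_payload_stage():
    os_name = platform.system()   # "Windows", "Darwin" (macOS), or "Linux"
    if os_name == "Windows":
        return "windows_stage"    # hypothetical stage identifier
    if os_name == "Darwin":
        return "macos_stage"
    return None                   # other platforms ignored

# In the malicious package, a check like this decided which second-stage
# payload to fetch; here it only returns a label.
STAGE = _select_payload_stage()
```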
Researchers assert that the malware’s primary goal is “to gather any data that could aid the attacker in stealing cryptocurrency assets." That includes data from a variety of crypto wallets, such as Bitcoin, Ethereum, and Exodus, in addition to sensitive victim information like passwords, SSH keys, assorted files, financial information, and Telegram communications. Everything harvested is uploaded to the gofile[.]io file transfer service used by the attackers, and the local copies on the victim’s machine are then deleted.
The malicious PyPI package was downloaded 1,300 times before being taken down from the repository, researchers noted. The attackers also extended the campaign across multiple platforms, posting the same malware to a GitHub repository named “Meme Token Hunter Bot.” According to Checkmarx, "this multi-platform approach allows the attacker to cast a wide net, potentially reaching victims who might be cautious about one platform but trust another.”
Detailed information about CryptoAITools can be found on Spectra Assure Community, RL’s free resource for finding malicious open source packages.
This Week’s Headlines
Security researchers circumvent Microsoft Azure AI Content Safety
Researchers at Mindgard uncovered two security vulnerabilities in Azure AI Content Safety, Microsoft’s filter system for its AI platform. The vulnerabilities create a potential means for attackers to bypass content safety guardrails when pushing malicious content onto a protected large language model (LLM) instance.
CSO reached out to Microsoft about the discovery, and the company acknowledged it in a response to the news outlet. However, CSO judged that the response “downplayed the seriousness of the problem”: Microsoft argued that what the researchers found are ‘techniques’ rather than vulnerabilities, limited to the user’s individual session and posing no security risk to other users. Despite Microsoft’s mild assessment, Mindgard maintains that the gen AI firewall built into Azure would neither block the generation of harmful content nor offer a reliable defense against jailbreak attacks. (CSO)
Hackers steal 15,000 cloud credentials from exposed Git config files
Researchers at Sysdig have been tracking an ongoing software secrets exposure campaign they’ve dubbed “EmeraldWhale,” in which threat actors scan for exposed Git configuration files and steal the credentials they contain. The campaign has so far collected more than 15,000 cloud account credentials from thousands of private repositories. The attackers pulled this off using automated tooling that scans IP ranges for the exposed files, which may include authentication tokens, researchers noted. The stolen tokens are then abused to download the associated private repositories across GitHub, GitLab, and Bitbucket, which the attackers scan for still more credentials. (BleepingComputer)
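For defenders, the misconfiguration EmeraldWhale hunts for is straightforward to audit on infrastructure you own. Below is a minimal defensive sketch, assuming a Python environment with the requests library; the URL is a placeholder:

```python
# Defensive check: does a web server you operate expose its Git config file?
# This is the kind of exposure the EmeraldWhale tooling scans for.
import requests

def git_config_exposed(base_url: str) -> bool:
    """Return True if <base_url>/.git/config is publicly readable."""
    resp = requests.get(f"{base_url.rstrip('/')}/.git/config", timeout=10)
    # An exposed Git config typically starts with an INI-style "[core]" section.
    return resp.status_code == 200 and "[core]" in resp.text

if __name__ == "__main__":
    print(git_config_exposed("https://example.com"))  # placeholder host
```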
BeaverTail malware resurfaces in malicious npm packages targeting developers
The Datadog Security Research team has been monitoring a new malicious campaign on npm consisting of three malicious packages that deliver the BeaverTail malware to victims. The malware strain is a JavaScript downloader and information stealer linked to an ongoing North Korean campaign tracked as Contagious Interview (aka Tenacious Pungsan, CL-STA-0240, Famous Chollima). All of the packages have been taken down from the repository, but the discovery is part of a broader, year-long campaign sponsored by the North Korean government (the Democratic People’s Republic of Korea, DPRK) that aims to trick job-seeking developers into downloading malicious packages. ReversingLabs discovered a similar incident back in September tied to the North Korean Lazarus Group. (The Hacker News)
Researchers uncover OS downgrade vulnerability targeting Microsoft Windows Kernel
SafeBreach researcher Alon Leviev has discovered a new attack technique that can be used to bypass Microsoft's Driver Signature Enforcement (DSE) on fully patched Windows systems, enabling OS downgrade attacks. In a downgrade attack, an attacker rolls a system back to an older, less secure version of the OS or its components in order to carry out malicious activities. An earlier discovery from Leviev, published back in August, included two privilege escalation vulnerabilities in the Windows update process that could be used to force such a downgrade. His newest technique uses those vulnerabilities to revert the fix for the "ItsNotASecurityBoundary" DSE bypass on a fully updated Windows 11 system. (The Hacker News)
Personal liability: A new trend in cybersecurity compliance?
This article from CIO details two regulations brought forth by the European Union (EU) that include provisions for holding individuals such as chief information security officers (CISOs) and other IT leaders accountable if the organization they work for suffers a cybersecurity breach. The regulations are the second version of the Network and Information Security Directive (NIS2) and the Digital Operational Resilience Act (DORA). Generally speaking, both pieces of legislation allow regulators to hold cybersecurity leaders personally liable if they assess that leadership has been “grossly negligent” in the wake of a cybersecurity incident. However, the article warns that personal liability would apply only in “cases of extreme or willful negligence” and will not be commonplace. (CIO)
For more insights on software supply chain security, see RL Blog.
The Best of RL
Blog | CISO Survival Guide: Commercial Software Supply Chain Risk
Today's enterprises run on commercial-off-the-shelf (COTS) software for nearly every critical function, from payroll and human resources to IT infrastructure - all provided by trusted vendors. But do you know how to manage all the risk that comes with that? Continue reading to learn how. (Read It Here)
Webinar | The MLephant in the Room: How to Detect ML Malware
November 6 at 12 pm ET
As the demand for AI capabilities grows, LLMs and other machine learning (ML) models are increasingly included in the software that we develop and consume. However, their popularity has attracted the eye of threat actors, and ML models have become a new attack vector of choice. Join this live session to see how RL has developed the ability to detect malicious ML models, which can help protect your enterprise. [Register Here]
Webinar | YARA Rules 101
November 14 at 11 am ET
For over a decade, RL experts have been identifying key threats in the wild and creating open-source YARA rules to spot them. Only rules that meet the strictest criteria make it into RL’s GitHub repository, where anyone in the community can use them. In this webinar, hear directly from our experts about the role YARA rules can play in identifying threats, what distinguishes high-quality YARA rules from lower-quality ones, and how to write and use them to meet your threat hunting and detection needs. [Register Here]
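For readers who have not worked with YARA before, a rule is essentially a named set of strings or byte patterns plus a condition. The sketch below is a hypothetical example (the rule name, strings, and scanned file path are illustrative only, not one of RL's published rules), compiled and run with the yara-python library:

```python
# Compile and run a simple, hypothetical YARA rule with yara-python.
import yara

RULE_SOURCE = r"""
rule Suspicious_Gofile_Uploader
{
    meta:
        description = "Example rule: flags scripts referencing a gofile.io upload endpoint"
    strings:
        $url   = "gofile.io/uploadFile" ascii wide
        $token = "uploadFile" ascii
    condition:
        any of them
}
"""

rules = yara.compile(source=RULE_SOURCE)

# Scan a file and print the names of any rules that match.
for match in rules.match("suspicious_package.py"):  # placeholder path
    print(f"Matched rule: {match.rule}")
```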
For more great conversations to watch, see RL’s on-demand webinar library.