Merlin Labs Memo -- Week of May 8-12


Disruptive & Sophisticated Tech Drives 2023 Cybersecurity Trends

Technology is advancing at an astonishing rate, impacting us in both positive and negative ways. With mobile computing, artificial intelligence/machine learning, 5G, IoT, and quantum computing disrupting IT as we’ve known it, a new cybersecurity era has emerged. The attack surface continues to morph and expand, with new gaps, vulnerabilities, and exploitable opportunities increasingly leveraged by both nation-state and private adversaries. Headlining 2023’s disruptive cybersecurity trends is AI, with tools like ChatGPT enabling AI-driven code and generative AI social engineering tactics to permeate the threat landscape. According to Forbes, global cyber-attacks increased by 7% in Q1 2023, with organizations experiencing an average of 1,248 attacks per week. “It is estimated that 560,000 new pieces of malware are detected every day and that there are now more than 1 billion malware programs circulating. This translates to four companies falling victim to ransomware attacks every minute.” Data security issues, healthcare cyber-attacks, supply-chain attacks, phishing campaigns, and ransomware attacks top the list of 2023 cybersecurity attack trends, along with a tendency for organizations to hide evidence of attacks against them, robbing the broader population of the chance to benefit from lessons learned. -- Via: Forbes

Our Take: The degree of sophistication in IT is exploding, and we are all along for the ride, ready or not. I’d prefer to be ready. AI means smarter attacks, 5G means faster attacks, IoT means more distributed attacks, and quantum computing means traditional encryption techniques will be rendered ineffective if not useless. I’ll get straight to the point: we can no longer assume we can keep the bad guys out. They are already here, lurking around or hanging out within our networks, and yesterday’s solutions won’t work for today’s technologies. It’s time to change our mentality around cybersecurity with an approach that assumes a “when rather than if” posture toward attacks. Zero trust is a mindset and an approach that assumes anyone and everyone is a potential adversary; extends blanket trust to no user or asset, instead offering granularly authorized sessions for explicit purposes; protects data anywhere and everywhere from unauthorized access; and ensures that cybersecurity controls permeate all components of an IT system down to the user, asset, and session level. While there are many great and even critical tools and technologies on the market that combat trending cybersecurity attack vectors and deliver elements important to zero trust, only those deployed and implemented within the context of well-designed, end-to-end zero trust tenets will stand a chance against today’s adversarial threat landscape. -- Sarah Hensley
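To make the “no blanket trust” tenet concrete, here is a minimal sketch of per-session, single-purpose authorization. Everything in it (the policy table, the names, the 15-minute TTL) is hypothetical and illustrative, not any particular vendor’s API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool  # e.g., endpoint posture check passed
    resource: str           # the one asset this session may touch
    action: str             # the one verb this session may perform

@dataclass
class SessionGrant:
    resource: str
    action: str
    expires: datetime

# Hypothetical policy table: (user, resource) -> explicitly allowed actions.
POLICY = {
    ("shensley", "payroll-db"): {"read"},
}

def authorize(req: AccessRequest, ttl_minutes: int = 15) -> SessionGrant | None:
    """Grant a short-lived, single-purpose session -- or nothing at all."""
    if not req.device_compliant:
        return None  # deny by default; no user or asset is implicitly trusted
    if req.action not in POLICY.get((req.user, req.resource), set()):
        return None
    return SessionGrant(
        resource=req.resource,
        action=req.action,
        expires=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

# Each request is evaluated on its own merits; no session inherits trust.
print(authorize(AccessRequest("shensley", True, "payroll-db", "read")))
print(authorize(AccessRequest("shensley", True, "payroll-db", "write")))  # None
```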


PaperCut Exploitation: Those Without Unpatched Servers Can Cast the First Stone

Servers running PaperCut are currently under attack from ransomware gangs. The fix is straightforward: apply the recently announced patch available from the vendor. But while the fix is straightforward, it is not an easy one. Ask anyone who’s tried to keep current on patching why it’s not an easy job.

Our Take: PaperCut is a paperwork-reducing software solution popular with schools and colleges, organizations famous for not having much money to spend on anything, including cybersecurity and skilled cybersecurity staff. For that patch to get applied in time, the person responsible for patching needs to be subscribed to notifications about the product; not miss the notification in a sea of emails that aren’t spammy enough to be caught in a filter but still overload our inboxes; not be distracted by any number of other responsibilities; and then get the change approved, with a change window soon enough to be relevant. For those of us who’ve been there, we know that’s not a likely convergence of events.

It’s not enough to dismiss the concerns with a cry of “Automation!” We have to be open to more vendors taking the Microsoft approach: automatic pushing out of patches, with critical ones getting high priority and urgent prompts to apply and reboot. That disrupts the change cycle, but you know what? Ransomware attacks disrupt the change cycle even worse than emergency, automated patching does. I know that “five nines of uptime” is a cherished goal of executives and managers everywhere, but that kind of toxic perfectionism has to subside if we want to properly secure our servers with vendor-pushed updates. -- Dean Webb
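For organizations that cannot accept vendor-pushed updates, the next best thing is to automate the notification step itself: poll the vendor’s advisory feed on a schedule and alert when the installed version falls behind a critical fix. Here is a minimal sketch of that pattern; the advisory URL, feed format, and version numbers are all hypothetical:

```python
import json
import urllib.request

# Hypothetical advisory endpoint and installed version -- substitute the
# vendor's real feed. The point is the pattern: poll, compare, alert.
ADVISORY_URL = "https://example.com/papercut/advisories.json"
INSTALLED_VERSION = "22.0.4"

def parse(version: str) -> tuple[int, ...]:
    """Turn "22.0.9" into (22, 0, 9) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def check_for_urgent_patch() -> None:
    with urllib.request.urlopen(ADVISORY_URL, timeout=10) as resp:
        # Assumed shape: [{"fixed_in": "22.0.9", "severity": "critical"}, ...]
        advisories = json.load(resp)
    for adv in advisories:
        if adv["severity"] == "critical" and parse(INSTALLED_VERSION) < parse(adv["fixed_in"]):
            # In production: page someone, open a ticket, or trigger the
            # patch pipeline -- don't leave it to an overloaded inbox.
            print(f"URGENT: upgrade to {adv['fixed_in']} or later")

if __name__ == "__main__":
    check_for_urgent_patch()
```

Run from cron or a scheduler, this removes the “missed the notification” failure mode described above.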


ESXi Continues to Be the Achilles’ Heel

“Three security vulnerabilities affecting VMware's vRealize Log Insight platform now have public exploit code circulating, offering a map for cybercriminals to follow to weaponize them. These include two critical unauthenticated remote code execution (RCE) bugs.” -- Via: DarkReading

It is the second time in 2023 that the software has been exposed to a pre-auth RCE vulnerability, with security researchers at Horizon3.ai in January of this year publishing a vRealize Log Insight exploit PoC that chained three vulnerabilities to remotely achieve root on the platform, even in default settings. -- Via: The Stack

Our Take: It’s been said many times before: VMware is an essential piece of the infrastructure’s fabric, playing one of the most important roles within the environment. However, ESXi is a locked-down OS that doesn’t allow third-party security tools to be installed, so we are entirely reliant upon VMware’s own security. The number of CVEs that amount to full remote code execution (RCE) exploits of the hypervisor is simply staggering. And updating ESXi is a serious undertaking: consider how many virtual machines may be running on a single host, how extensively updates must be tested, and the remaining chance of bricking the machine.

Even without direct admin credentials, there are a number of exploits that can take advantage of the vulnerabilities that have been widely written about.

How do we protect the infrastructure in this case? In theory, if zero trust principles have been adopted, then possible lateral movement is limited to just that machine’s VMs. Traffic monitoring, anomaly detection, and tight access controls around the ESXi environment must be strictly enforced to detect (or prevent) cross-infection; see the sketch below for the monitoring piece. The VMs deployed must also be tightly controlled to prevent or resist takeover from the host machine.
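On the monitoring side, even something as simple as flagging connections to ESXi management ports from outside a dedicated management subnet catches a great deal. A minimal sketch, where the subnet, the flow-log format, and the file name are all hypothetical:

```python
import csv
import ipaddress

# ESXi management-plane ports worth watching (SSH, HTTPS/API, host agent).
MGMT_PORTS = {22, 443, 902}

# Hypothetical: only this dedicated management subnet should ever touch
# the ports above.
ALLOWED = ipaddress.ip_network("10.10.50.0/24")

def flag_suspect_flows(log_path: str) -> list[dict]:
    """Return flows hitting ESXi management ports from outside the allow-list."""
    suspects = []
    with open(log_path, newline="") as fh:
        # Assumed flow-log columns: src_ip, dst_ip, dst_port
        for flow in csv.DictReader(fh):
            if (int(flow["dst_port"]) in MGMT_PORTS
                    and ipaddress.ip_address(flow["src_ip"]) not in ALLOWED):
                suspects.append(flow)
    return suspects

for flow in flag_suspect_flows("flows.csv"):
    print(f"ALERT: {flow['src_ip']} -> {flow['dst_ip']}:{flow['dst_port']}")
```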

As usual, cyber hygiene is the ideal solution, but just because a patch exists doesn’t mean that an organization can immediately apply it. It will be the supporting security that mitigates the risk of attack and its severe consequences.

Good Luck. -- Jeremy Newberry


AI Push or Pause: CIOs Speak Out on the Best Path Forward

Recent advances have highlighted AI’s incomparable potential and not yet fully fathomed risks, placing CIOs in the hot seat for figuring out how best to leverage this increasingly controversial technology in business. Here’s what they have to say about it.

With the AI hype cycle and subsequent backlash both in full swing, IT leaders find themselves at a tenuous inflection point regarding use of artificial intelligence in the enterprise.

Following stern warnings from Elon Musk and revered AI pioneer Geoffrey Hinton, who recently left Google and is broadcasting AI’s risks and a call to pause, IT leaders are reaching out to institutions, consulting firms, and attorneys across the globe to get advice on the path forward. 

“The recent cautionary remarks of tech CEOs such as Elon Musk about the potential dangers of artificial intelligence demonstrate that we are not doing enough to mitigate the consequences of our innovation,” says Atti Riazi, SVP and CIO of Hearst. “It is our duty as innovators to innovate responsibly and to understand the implications of technology on human life, society, and culture.”

That sentiment is echoed by many IT leaders, who believe innovation in a free market society is inevitable and should be encouraged, especially in this era of digital transformation — but only with the right rules and regulations in place to prevent corporate catastrophe or worse.

“I agree a pause may be appropriate for some industries or certain high-stake use cases but in many other situations we should be pushing ahead and exploring at speed what opportunities these tools provide,” says Bob McCowan, CIO at Regeneron Pharmaceuticals. 

“Many board members are questioning if these technologies should be adopted or are they going to create too many risks?” McCowan adds. “I see it as both. Ignore it or shut it down and you will be missing out on significant opportunity, but giving unfettered access [to employees] without controls in place could also put your organization at risk.” -- Via: CIO

Our Take: While there is a great deal of activity at the government level to determine how to best regulate powerful new generative AI technology, the most practical solutions will likely come from IT leaders themselves, across business and government. As this article discusses, they are focused on how to purposely gain advantages from the use of AI in a manner that is both safe and responsible. The IT leaders quoted are recommending an incremental approach, determining ethical and legal considerations in advance of testing; experimenting, but not rushing into investments; and considering the implications from the customer point of view.

Technology leaders will provide the guidance and will adapt, and government regulators who don't fully understand the technology would be wise to listen to their advice. The process of establishing guardrails is already happening. For example, data privacy vendor Private AI has launched a redaction tool aimed at reducing companies' risk of inadvertently exposing customer and employee data. Private AI's new PrivateGPT platform integrates with OpenAI's high-profile chatbot, automatically redacting 50+ types of personally identifiable information (PII) in real time as users enter ChatGPT prompts.
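The underlying pattern, scrubbing PII before a prompt ever leaves the organization, is easy to illustrate. What follows is a toy regex version of that pattern, not Private AI's actual product, which uses trained models and covers far more entity types:

```python
import re

# Two illustrative patterns only; a real redaction layer covers dozens of
# entity types and uses ML models rather than regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable PII with placeholder tags before the prompt is sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

safe = redact("Email jane.doe@example.com, SSN 123-45-6789, about her claim.")
print(safe)  # Email [EMAIL], SSN [SSN], about her claim.
```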

Another example is how ERP provider SAP is partnering with IBM to exploit the natural language processing (NLP) capabilities of Watson AI, integrating it into the digital assistant inside SAP Start. Purposeful collaborations like this will enhance the trustworthiness of the overall solution while limiting the risks and intellectual property concerns of AI.

For its part, the Federal government is focused on managing the risks of AI. To that end, the Office of Management and Budget will release draft guidance this summer on the use of AI systems within the Federal government. In addition, the White House says AI developers — including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI — will participate in a public evaluation of their generative AI systems to examine how they meet the standards outlined in the National Institute of Standards and Technology’s AI Risk Management Framework and the White House OSTP’s Blueprint for an AI Bill of Rights. As a senior administration official acknowledged, this draft guidance reflects the reality that “AI is coming into virtually every part of the public mission.”

History shows that the technology community adapts to new technologies and their risks, and is very good at fixing the problems in its path. IT leaders will continue to be at the forefront of responsibly harnessing this powerful new technology. -- Joe DiMarcantonio, PMP


Readers of our Newsletter: What’s working, what’s not, and what’s on your mind? Leave a comment below or email labs@merlincyber.com. Thank you!  
