In the Weeds: Social Engineers, Red Teams and Blindness
A couple of years ago, at a conference in Texas, I heard the phrase "in the weeds," and it brought to mind why so many great internal pen testing teams are hamstrung before they even start.
The phrase means being so immersed in the details, "the weeds," that it is hard to get any kind of productive perspective or unbiased view. The English equivalent would be "can't see the wood for the trees," and it reminds me of an old director of mine who could spot a mistake in a spreadsheet of hundreds of columns and rows almost instantly, while the poor person who had spent the last three weeks of their life perfecting the document was blind to the errors. That person, often myself, was "in the weeds": so familiar with the numbers and the document that they had become blind to the errors a fresh pair of eyes could see instantly.
So being "in the weeds" is all about the blindness of familiarity: the tendency for everyday things not to really register in our consciousness. Like a "mind the gap" sign, or a pothole in a road we drive down every day, we don't see them because we always see them. In a way, those mental "weeds" are a sedative, and we are asleep when we should be awake.
Pondering this, I heard about a presentation an internal red team had given on their in-house social engineering prevention program: the phishing attacks, the simulations, the pretexting, the massive effort spent on trying to reinforce the "human element" of their business against social engineers, and all of the security issues that arise when people are involved at any level.
The problem with using internal teams is that unless they have been specifically trained to think like social engineers then they simply can’t and don’t replicate a social engineering attack. What you have is, at best, a group of employees trying to avoid the weeds that come with their day job, and simulate an attack that would, in reality, be carried out by someone who effectively has no weeds to worry about.
A social engineer looks at a target in the same way that director looked at my spreadsheets. He could see every flaw and every mistake because he had fresh eyes, was unencumbered by any consequence of things not being right, and was, at heart, calculating and merciless in his criticism. A social engineer just doesn't see what an employee sees, ever. They see only the errors, the weaknesses, the problems to be exploited; that is the entire point. For company employees, even in a huge global firm, trying to think like an attacker doesn't really work a lot of the time because they have insider knowledge, are wrapped up at some level with the values, beliefs and politics of the business and, to put it bluntly, are still employees.
In-house teams clearly do great work and go some way towards helping and educating their colleagues, but the social engineering perspective tends to be wrong, because they are part of a wider team. If you work for the man, then at some level the man gets your loyalty, calls the shots, and can call a day of reckoning. No genuine social engineer ever worried about which font to use in the presentation explaining their actions!
Doesn't happen. Different mindset. To borrow another phrase, "there is no 'I' in team," and that is exactly how most social engineers prefer to work.
No team. No rules. No trust. Good.
For internal staff, getting past the "team" mentality and the employee mindset is extremely challenging psychologically, and is, if anything, reinforced by the language and mindset of most "tiger" or "red" teams, who work together, plan their "exploits" together and report back together.
They celebrate their successes, support each other and present their results as a team. They have each other’s back, and exhibit all the normal behaviours and psychology of a close-knit group. You couldn’t get behaviour more remote from the way most social engineers and indeed hackers work.
The truth is, many social engineers are unlikely to consider themselves "team players," as this is simply not productive for the role. The job does not inspire or benefit from being especially trusting, and the sort of people who do it tend to prefer, or at least not mind, working alone or with just one or two others. A social engineer has to take risks, make mistakes and take wrong turnings, and in this lies the reality, and also the effectiveness, or not, of the attack. It's a different, malicious perspective, and it relies heavily on not being tethered by restrictions, on a lack of loyalty and, most of all, on the absence of a regular pay cheque, especially from those we would be looking to breach.
So, whilst I admire and applaud the efforts of internal teams, it is important to recognize their limitations, because it is so very hard to get into the right mindset for an attack on your employer and colleagues. The secret lies, as always, in the right leadership, and in the sort of irreverent attitude to rules and disregard for any kind of procedure, loyalty or inclusiveness that is almost impossible to find within a company…
It’s not always wise to chop down weeds, especially if you live in the pond…
First published February 2015
Sounds like a TV show...get a likeable, scary smart and fatally flawed loner being all hacky/cracky/testy; then watch him on his journey as a cowboy in the wild cyber lands. Your point is good: Being too close is a problem. Being too close to one culture is a problem. Getting your butter where you get your bread is a problem.
Not normally one to post on LinkedIn, I thought I would share my 2 pence worth here :). In as much as I agree with what you have written, Jenny, I think it speaks more to the ongoing reality that the various forms of cyber testing are only as good as the people and the relationship with the organisation being tested. I think you are hinting that an external group or individual is in a better position to simulate an offensive scenario, but really the only groups or individuals who don't have to answer to anyone are the real threat; anything else is a compromise of sorts. You mention that an internal testing team has an employer to answer to, but so does the external consultant: you are not going to do anything to jeopardise your credibility, future work and reputation, perhaps more so than from within.

In my opinion, internal or external, it requires a few things to achieve validity and success with any testing, be that social engineering or any other. The key component is TRUST. Trust is what is going to allow the simulated threat to push the boundaries of realism; from here you can provide comfort and reassurance in the right areas (employer, HR, legal, privacy, etc.) that even though what you are doing is scary, they can have trust in your professionalism, your ethical and moral principles and, especially with social engineering, a level of empathy for the impacting results. Once you have this, the talent, creativity, ingenuity and ability to think outside the box will be the bottleneck in what can be achieved. I agree that an internal team may be blinkered by position and knowledge (we can discuss insider threat another day), but again, to me this says more about the people and the team not being effective in their capability; not everyone can think and act offensively, but with the right people and mindset it can be done.
A key problem with social engineering comes back to something I have discussed before around the different types of social engineers that exist, and I think what you are talking about is the more "Opportunist" or "Natural Confidence" social engineer who hops away from doing spreadsheets when the need for a cyber exercise arises :). Unless you are the "real" threat adversary, there are always going to be rules and regulations you have to work within; you can push, manipulate and manoeuvre around them, but the need to collect a pay cheque keeps you in check. The real bad guys are not tied by such concerns.