Cyber criminals have many tricks up their sleeves when it comes to compromising sensitive data. They don’t always rely on system vulnerabilities and sophisticated hacks. They’re just as likely to target an organisation’s employees.
The attack methods they use to do this are known as social engineering.
What is social engineering?
Social engineering is a collective term for ways in which fraudsters manipulate people into performing certain actions.
It’s generally used in an information security context to refer to the tactics crooks use to trick people into handing over sensitive information or exposing their devices to malware.
This often comes in the form of phishing scams – messages that are supposedly from a legitimate sender that ask the recipient to download an attachment or follow a link that directs them to a bogus website.
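One tell-tale sign of such messages is a mismatch between the display name and the actual sender address. The short Python sketch below illustrates the idea; the header values and the `spoofed_sender` helper are invented for this example, and real mail filters rely on far stronger signals (SPF, DKIM and DMARC results, URL reputation and so on):

```python
from email.utils import parseaddr

def spoofed_sender(from_header: str, expected_domain: str) -> bool:
    """Return True when the address in a From header is not on the
    domain the display name implies -- a common phishing tell."""
    _, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    expected = expected_domain.lower()
    # Accept the exact domain or any of its subdomains.
    return domain != expected and not domain.endswith("." + expected)

# A lookalike domain dressed up with a convincing display name:
print(spoofed_sender('"Netflix Support" <help@netf1ix-billing.example>',
                     "netflix.com"))  # True

# A legitimate subdomain of the real brand:
print(spoofed_sender('"Netflix" <info@mailer.netflix.com>',
                     "netflix.com"))  # False
```

This single check is easily fooled on its own, which is exactly why the scams described in this article keep working on humans as well as filters.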
However, social engineering isn’t always malicious. For example, say you need someone to do you a favour, but you’re unsure that they’ll agree if you ask them apropos of nothing.
You might grease the wheels by offering to do something for them first, making them feel obliged to say yes when you ask them to return the favour.
That’s a form of social engineering. You’re performing an action that will compel the person to do something that will benefit you.
Understanding social engineering in this context helps you see that social engineering isn’t simply an IT problem. It’s a vulnerability in the way we make decisions and perceive others – something we delve into more in the next section.
Why social engineering works
Think of the human brain as a security network and its susceptibility to being fooled as a system vulnerability. That makes social engineering the exploit that fraudsters use to take advantage of that vulnerability.
But instead of malware injection or credential stuffing, criminals use rhetorical devices – ways of speaking that persuade us to follow their direction.
For an idea of how they do this, let’s take a look at Robert Cialdini’s six principles of persuasion:
Reciprocity
This is the notion that, when you do something for someone, they feel obliged to return the favour.
Cialdini uses the example of a waiter or waitress in a restaurant giving you a small gift with your bill, such as a mint or a fortune cookie.
This gesture has been shown to increase the tip customers leave by as much as 14% – and when the item is presented as a special reward (“For you nice people, here’s an extra mint”), the tip increases by 23%.
Reciprocity can be particularly dangerous in a cyber security context because it shows how rarely we think about the motives behind supposedly generous acts – or, if we are aware of them, how we stick to our social obligations anyway.
Scarcity
This principle states that people are likely to want something if they know there’s a finite supply. It works particularly well when the person or organisation providing the service announces a reduction, emphasising how scarce the service is.
For example, when British Airways announced that it would be cutting back on its London–New York Concorde service due to a lack of customers, ticket sales jumped.
Nothing about the service had changed, nor had the price dropped. British Airways hadn’t reinforced the benefits of flying by Concorde or announced that it would be stopping the service altogether.
But what it did was imply that the service might not be available in the future.
This technique can also be seen when organisations market their product as “while stocks last”. The aim is to create a sense of urgency, forcing people to act now for fear of missing out.
Authority
This is the concept that people are more likely to trust experts in their fields – particularly when they can back up their expertise with evidence.
Cialdini notes, for example, that we’re more likely to follow a medical professional’s advice if we’re aware of their credentials.
By highlighting their expertise – whether that’s by displaying their qualifications on the wall, referring to themselves as ‘Doctor’ or listing their professional experience – they assure the patient that they are trustworthy.
Commitment and consistency
This principle exploits people’s unwillingness to be hypocritical. The social engineer nudges the victim into a seemingly harmless opinion or act, then uses that logic to force them into a larger, more consequential position.
Cialdini cites the example of homeowners who had agreed to place a small postcard in the front windows of their homes that supported a Drive Safely campaign.
A few weeks later, those people were far more likely to agree to erect a large, unsightly billboard in their gardens displaying the same message when compared to a neighbourhood that hadn’t first been asked to display postcards.
In another example, a health centre found that patients were 18% less likely to miss an appointment if they wrote down the details themselves rather than having a receptionist do it for them.
The simple act of writing down the appointment details reinforced the fact that it was the patient’s obligation to turn up.
This technique can also be seen in the likes of the sunk cost fallacy – in which someone continues to spend time, money or effort on something because they don’t want to waste their investment or accept that they made a mistake.
You see this error often in cyber crime. Once a scammer has someone on the hook, they will have a much easier time persuading the victim to comply with requests.
Liking
The fifth principle – that people are more likely to agree to something when asked by someone they like – is just as likely to occur accidentally as it is deliberately.
After all, some people are simply likeable, and through no conscious effort on their part, they find that others are more willing to do them favours.
But what makes a person likeable? Cialdini says that there are three important factors: we like people who are similar to us, who pay us compliments and who cooperate towards mutual goals.
Cialdini refers to a study in which a group of business students were almost twice as successful in a sales negotiation when they shared some personal information with the prospective investor and found something the two parties had in common before getting down to business.
However, there’s another factor at play in this example. It’s not just that the students asked the right questions; it’s the way those questions were asked.
Perhaps the most important thing that makes someone likeable is if they appear genuine. People are generally good at spotting when someone is disingenuous, so it can be very hard to affect likeability in face-to-face interactions.
But via email, we have the time to curate what we say – something that’s particularly true for scammers, who sometimes spend hours crafting templates for their emails.
Consensus
The final principle is consensus, which states that when people are unsure what to do, they follow the actions and behaviours of others.
Cialdini uses the example of a study in which hotels tried to get guests to reuse towels and linens.
It found that the most effective way of doing this wasn’t to highlight the benefits of reusing towels (such as it being environmentally friendly) but simply to state that the majority of guests already do so.
At first it seems counterintuitive that we’re more effectively persuaded by an argument that’s essentially ‘everyone else is doing it’ than by evidence, but it aligns with many of our other behaviours.
Consider the last time you were in an unfamiliar environment; did you not look at how others were acting and follow their lead?
The principle of consensus demonstrates that people don’t need to be given a reason to comply with a request; rather, they can be influenced by pointing to the actions of those around them.
Common social engineering attack techniques
Pretexting
This technique provides the context of social engineering scams, referring to the pretext – or false scenario – that scammers use to contact victims.
In a typical social engineering scam, the pretext might be that there has been suspicious activity on your Netflix account or that you need to confirm your payment card details for an Amazon order.
Baiting
This is a specific type of phishing scam in which the scammers claim they have something beneficial for victims if they follow their instructions.
Whereas the examples we listed above use fear as a motivator – ‘someone is trying to break into your account’, ‘your package won’t arrive’ – baiting relies on curiosity and desire.
For example, a scam might direct the victim towards a website where they can supposedly download music, TV series or films. However, that website is designed to capture personal information or trick people into downloading infected files.
Baiting has also been used in physical attacks, with scammers leaving infected USB drives lying around conspicuously, waiting for someone to pick them up thinking that there might be something interesting on them.
Quid pro quo
Similar to baiting, quid pro quo attacks claim to help the victim – usually by offering a service – in exchange for information. The difference is that these types of attacks are supposedly mutually beneficial.
The prototypical quid pro quo attack was the Nigerian prince scam: the attacker has vast sums of money they need help transferring, and if you give them the cash to do this, you’ll be recompensed.
Attacks have become more credible since then. For example, an attacker might phone up employees claiming to be from technical support.
Eventually they’ll reach someone who is genuinely waiting for assistance and who will let the scammer do whatever they want to their computer, believing them to be a colleague solving the issue.
Scareware
This attack is designed to trick people into buying unnecessary software. It begins with a pop-up ad – generally imitating a Windows error message or antivirus program – claiming that the victim’s computer has been infected with malware.
Alongside the message, the ad will claim that you need to purchase or upgrade your software to fix the issue.
Those who comply end up installing bogus software that appears to scan their system but in fact either does nothing or installs malware.
Angler phishing
This is a specific type of phishing in which scammers pose as customer service representatives on social media.
They create accounts that imitate an official brand and wait for someone to post a complaint about that organisation on Facebook or Twitter.
The scammer will respond in one of two ways. They might link to an official complaint channel or offer the victim something by way of an apology, such as a discount on their next purchase.
Both these approaches are designed to steal the victim’s personal information.
How to protect yourself from social engineering
There are many ways you can protect yourself from social engineering attacks. For example, you should treat unsolicited messages with suspicion, verify unexpected requests through a separate, trusted channel before acting on them, and never hand over sensitive information or credentials in response to an email, phone call or social media message.
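Part of that scrutiny is checking where a link actually leads before following it, because scammers favour lookalike domains that differ from the real one by a character or two. The Python sketch below illustrates the pattern; the trusted-domain list and the 0.85 threshold are invented for the example, not a real allow-list:

```python
from difflib import SequenceMatcher

# Illustrative allow-list; a real deployment would use the
# organisation's own list of known-good domains.
TRUSTED_DOMAINS = ["netflix.com", "amazon.com", "paypal.com"]

def closest_trusted(domain: str) -> tuple[str, float]:
    """Return the trusted domain the input most resembles and the
    similarity ratio (1.0 means identical)."""
    domain = domain.lower()
    best = max(TRUSTED_DOMAINS,
               key=lambda t: SequenceMatcher(None, domain, t).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

def is_suspicious(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that are very similar to, but not exactly,
    a trusted domain -- the classic lookalike pattern."""
    best, score = closest_trusted(domain)
    return domain.lower() != best and score >= threshold

print(is_suspicious("netflix.com"))   # False: exact match, not flagged
print(is_suspicious("netfliix.com"))  # True: one extra letter, flagged
print(is_suspicious("example.org"))   # False: unrelated domain
```

A simple edit-distance check like this misses many tricks (homoglyphs, subdomain abuse), which is why the advice above stresses verifying requests out of band rather than relying on inspection alone.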
Organisations that want to address the threat of social engineering should conduct regular staff awareness training and test employees’ susceptibility with a social engineering penetration test.
With this service, one of our experts will try to trick your employees into handing over sensitive information and monitor how they respond.
Do they fall right into the trap right away? Do they recognise that it’s a scam and ignore it? Do they contact a senior colleague to warn them?
With this information, along with a detailed report containing our findings and guidance, you can pinpoint your security weaknesses and fix them before you’re targeted for real.