Social Engineering

9 min read·Updated 2026-04-26
TL;DR

Social engineering is the manipulation of people into acting against their own or their organisation's interest. Phishing is one form of it. Pretexting, baiting, vishing, tailgating, and watering hole attacks are others. The recent breaches at Twitter, Caesars, MGM, Snowflake, and Coinbase all started with social engineering, not technical exploitation. Annual click-through training does not move the needle. Training that builds reporting habits and verification reflexes does.

What it is

Social engineering is the practice of manipulating people into doing something the attacker wants. That something might be giving up a password, approving an MFA push, holding a door open, transferring money, sharing a sensitive document, or revealing information that enables a later attack.

The umbrella covers many specific techniques:

  • Phishing. Email-based deception, the most prolific variant by volume.
  • Vishing. Voice phishing, conducted over a phone call. Often used against IT help desks or finance staff.
  • Smishing. Phishing over SMS, increasingly common as people respond faster to texts than to email.
  • Pretexting. Building a false context (a "pretext") that justifies the request. The attacker is not pretending to be a system or a brand. They are pretending to be a specific person with a specific reason for asking.
  • Baiting. Offering something attractive (a free download, a found USB stick, an unexpected gift) that contains the attack payload.
  • Quid pro quo. Offering a service or favour in exchange for information or access. "I'm from IT, I can fix your printer issue, just give me your password."
  • Tailgating (or piggybacking). Physically following an authorised person into a restricted area. Holding the door is a social courtesy that defeats badge access.
  • Watering hole attacks. Compromising a website the target community is known to visit, then using it to deliver malware or credential capture pages. Less interactive than the other variants, but still social engineering in the sense that it exploits the trust the target places in a familiar site.
  • Shoulder surfing. Looking over someone's shoulder in a coffee shop, on a train, or in an open office. Low-tech but effective for capturing PINs, passwords, and sensitive screen content.

The common thread is that the technical layer is incidental. The exploit is in the human's response.

Why it matters

Robert Cialdini's research on influence identified six principles that consistently move people to compliance, and every social engineering attack exploits at least one of them:

Authority. People comply more readily with requests from perceived authority. A caller who sounds like IT, a fake email from the CEO, an attacker in a high-visibility vest carrying a clipboard, all exploit this. The Milgram experiments showed how far this goes when the authority feels legitimate.

Scarcity. Time pressure and limited availability shut down careful evaluation. "This offer expires in two hours." "Only twenty people are getting this access." Urgency in a phishing email is the textbook scarcity play.

Social proof. People look to others for cues about how to behave. "Everyone in the finance team has already filled out this form." Fake reviews, fake testimonials, and fake colleague references are all social proof attacks.

Commitment and consistency. Once someone has agreed to a small thing, they are more likely to agree to a larger thing in the same direction. Vishing attacks often start with a question the target will answer "yes" to, then escalate.

Liking. People are more likely to comply with people they like. Attackers research targets on LinkedIn, find shared interests, and open conversations on common ground before pivoting to the request.

Reciprocity. A small favour creates pressure to return one. The attacker offers help, information, or a gift, then asks for something in return.

These principles are not exotic. They describe ordinary human behaviour that works fine in normal social contexts and fails badly when an attacker weaponises them.

The breaches that prove the point are not subtle:

  • Twitter (July 2020). Attackers vished employees, manipulated them into sharing credentials and MFA approvals, and gained access to internal admin tools. Multiple high-profile accounts (Obama, Musk, Apple) were used to push a Bitcoin scam. The attackers were teenagers.
  • Caesars and MGM (September 2023). Both casino operators were attacked by Scattered Spider, a group that excels at vishing IT help desks. In each case, the help desk reset MFA on a privileged account based on a phone call. MGM's losses exceeded $100 million.
  • Snowflake (2024). Customer environments at Ticketmaster, AT&T, Santander, and many others were breached through stealer-log credentials, but the lateral expansion involved coordinated social engineering against IT and security staff at multiple organisations.
  • Coinbase (May 2025). The exchange disclosed a contractor-led incident where overseas support staff were socially engineered or bribed into providing customer information that fuelled targeted phishing of high-net-worth account holders.
  • Uber (2022). An attacker MFA-bombed a contractor, then messaged them on WhatsApp claiming to be IT support and asking them to approve the prompt to make it stop. They did. The attacker had Uber-wide access shortly afterward.

The pattern is consistent. Technical controls were in place, and the attackers went around them by targeting people directly.

How attackers exploit it

The lifecycle of a social engineering attack follows a few common stages.

  1. Target selection. Most attacks pick targets based on access. IT help desk staff have password reset authority. Finance staff have payment authority. Executive assistants have inbox access and meeting visibility. Contractors often have less training than employees but similar access levels.
  2. Reconnaissance. LinkedIn, company websites, breach data, social media, and conference talks all provide background. The attacker learns names, reporting structures, projects, jargon, and personal details.
  3. Pretext development. A specific story that justifies the request. The pretext has to match what the target expects to hear. Calling IT and asking about a router will not work. Calling and saying "this is Maria from accounting, I'm locked out of my MFA, my laptop crashed and I have a board presentation in twenty minutes" might.
  4. Initial contact. Email, phone, in-person, or through a chat platform. The first contact is often low-commitment, designed to build rapport and gather more information rather than to extract value immediately.
  5. Escalation. The attacker uses the foothold to ask for the real target: a password reset, an MFA approval, an MFA device transfer, a wire transfer, building access, or sensitive information.
  6. Use. The access or information gets used directly or fed into a follow-on attack.

Modern social engineering increasingly uses AI to scale personalisation. A vishing call in 2026 can use a voice-cloned version of a real executive, trained on a few minutes of their public speaking. A phishing email can match the target's writing style by analysing their public posts. A LinkedIn message can engage in a months-long fake business relationship before any malicious request arrives.

How to detect it

Detection is harder than for technical attacks, because social engineering does not produce malware signatures or unusual network traffic. The signals are behavioural:

  • Help desk and IT support call patterns. Sudden requests for MFA resets, password changes from unusual locations, or claims of locked-out devices that turn out not to match the user's actual situation.
  • Unusual access from real accounts. A real account doing things the legitimate user would not normally do, especially shortly after a help desk interaction or a reported "support call".
  • Out-of-channel communications. Internal requests arriving through WhatsApp, personal email, or SMS instead of the usual corporate channel. This is often a tell that the request is coming from someone outside the organisation.
  • Sudden changes in user behaviour. Logins from new devices, OAuth grants to unfamiliar apps, mailbox forwarding rules added, MFA devices replaced. Each is an indicator that account control may have been transferred to someone else.
  • Reports from observant employees. A culture that encourages reporting "this felt off" without fear of overreaction catches social engineering attacks earlier than any technical control.
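Several of the signals above become actionable when correlated: an MFA reset followed shortly by a new-device login or a mailbox forwarding rule is far more suspicious than either event alone. The sketch below illustrates that correlation with hypothetical event records; the event names and data shapes are assumptions, not tied to any specific SIEM or identity provider.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    # Hypothetical normalised log record; a real deployment would build
    # these from help desk and identity provider logs.
    user: str
    kind: str          # e.g. "mfa_reset", "new_device_login", "forwarding_rule_added"
    timestamp: datetime

# Post-reset actions that suggest account control may have changed hands.
RISKY_FOLLOW_ONS = {"new_device_login", "forwarding_rule_added", "oauth_grant"}

def flag_suspicious_resets(events, window=timedelta(hours=24)):
    """Flag users whose MFA reset is followed by risky activity within the window."""
    flagged = set()
    for reset in (e for e in events if e.kind == "mfa_reset"):
        for e in events:
            if (e.user == reset.user
                    and e.kind in RISKY_FOLLOW_ONS
                    and timedelta(0) <= e.timestamp - reset.timestamp <= window):
                flagged.add(reset.user)
    return flagged
```

A rule this simple will have false positives (legitimate users also enrol new devices after a reset), so in practice the output feeds a triage queue rather than an automatic lockout.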

Threat intelligence on active campaigns also matters. Scattered Spider, for example, has predictable tradecraft. Knowing which sectors they are targeting and how they open conversations gives defenders an early-warning advantage.

How to remediate

When social engineering is confirmed:

  1. Identify the affected accounts and access. Whose credentials were reset? Which MFA devices were transferred? Which accounts logged in afterward?
  2. Lock down the accounts. Reset credentials, invalidate all sessions, re-enrol MFA devices, audit OAuth grants.
  3. Pause the relevant business process. If finance was the target, pause outbound payments while you assess what is in flight. If IT support was the vector, restrict help desk authority temporarily.
  4. Investigate lateral movement. Once an attacker has one account, they typically move quickly. Check for new accounts created, new devices enrolled, new tenants added, new federation trusts established.
  5. Communicate to staff. A clear message about what happened reduces follow-on success. Attackers often run multiple attempts.
  6. Update training based on the actual technique used. Generic awareness training does not address the specific tradecraft you just saw. A short, specific message reaches further than a quarterly module.
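The account lockdown in step 2 benefits from a fixed order: cut access first, rotate what the attacker may hold, then audit what they changed. The sketch below captures that sequencing. Every `idp` method name here is invented to stand in for your identity provider's API; none refers to a real library.

```python
# Hypothetical runbook for containing a single compromised account.
# The idp object is a stand-in for an identity provider client; all
# method names on it are assumptions for illustration.

def contain_account(idp, user):
    """Lock down an account: cut access, rotate credentials, then audit."""
    idp.disable_sign_in(user)        # stop new sessions immediately
    idp.revoke_all_sessions(user)    # invalidate tokens already issued
    idp.reset_password(user)         # rotate the credential
    idp.remove_mfa_devices(user)     # force re-enrolment via identity proofing
    # Collect what the attacker may have changed, for the
    # lateral-movement investigation in step 4.
    return {
        "oauth_grants": idp.list_oauth_grants(user),
        "forwarding_rules": idp.list_mail_rules(user),
        "recent_devices": idp.list_devices(user, days=7),
    }
```

The order matters: revoking sessions before resetting the password closes the window in which the attacker's existing tokens would survive the rotation.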

Best practices

  • Train for verification reflexes, not just recognition. Recognition is "did this email look phishy?" Verification is "I'm going to call back on a known number before I do anything." The second is teachable; the first is unreliable.
  • Eliminate help desk authority that bypasses identity proofing. A help desk that can reset MFA on a privileged account based on a phone call is a help desk that will get socially engineered. Identity proofing has to be hard to spoof. Some organisations combine video verification with security questions only the legitimate user could answer, plus manager approval for sensitive changes.
  • Drop annual click-through training. The data on this is unkind. A one-hour module once a year does not change behaviour. Sustained, short, targeted communications based on real attempts that hit your organisation do.
  • Make reporting easy and rewarding. A button in the email client that reports phishing in one click. A clear "this looked weird" channel for non-email cases. Public recognition for employees who flag real attacks. Punishment-free reporting even when the user clicked first.
  • Run realistic exercises. A red team that includes social engineering, conducted with care for participants, reveals what training and policies actually catch. The goal is learning, not blame.
  • Limit exposed personal information. What attackers cannot find about your executives, finance staff, and IT admins, they cannot easily weaponise. Reducing the public footprint of high-risk roles is meaningful.
  • Plan for AI-augmented social engineering. Voice cloning, deepfaked video, and AI-generated written impersonation are all in the wild. Process controls (callback verification, multi-person approval, out-of-band confirmation) hold up against these in ways that pure recognition does not.
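The process controls mentioned above (callback verification, multi-person approval) reduce to simple, auditable predicates. A toy sketch, with every name and number invented for illustration:

```python
# Toy process-control checks for sensitive requests (wire transfers,
# MFA resets, federation changes). All names and values are invented.

DIRECTORY_OF_RECORD = {
    # Maintained independently of any incoming request.
    "maria@corp.example": "+44 20 7946 0001",
}

def callback_number(requester_email):
    """Only ever call back on the directory-of-record number.

    Contact details inside the request itself are attacker-controlled,
    so they are deliberately never consulted here.
    """
    return DIRECTORY_OF_RECORD.get(requester_email)

def may_proceed(callback_verified, approvers, required=2):
    """Require a successful out-of-band callback AND sign-off by a
    minimum number of distinct people before the change is actioned."""
    return callback_verified and len(set(approvers)) >= required
```

The value of encoding these checks is not the code itself but the fact that they do not rely on human judgement under pressure, which is exactly what AI-augmented impersonation attacks.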

Why human nature is not the problem

There is a recurring tendency to blame users for falling for social engineering. The framing is wrong. Humans are exploitable in the ways Cialdini described because those traits are useful in normal social functioning. Treating the user as the broken component leads to controls that do not work.

The right framing is that humans are part of the system. Their cognitive limits are predictable. Defences should account for them, not pretend they will not exist. Process controls, technical controls, and well-designed training together produce results. Training alone, especially the punitive kind, does not.

ScruteX surfaces the brand impersonation, lookalike domains, and exposed personal information that fuel social engineering attacks against your organisation.
