Human Red Team Ethical Social Engineering Training / Consultancy

Social Engineering Basics: Why Smart Employees Get Tricked (and How to Lower the Risk)

  • synovum
  • Dec 23, 2025
  • 6 min read

TL;DR

Social engineering succeeds by exploiting human shortcuts (trust, urgency, helpfulness), not by “outsmarting” people.


Real-world breach data shows the human element is involved in around 60% of breaches, so this isn’t an edge case. (Verizon)


Use a simple habit when something feels 'off': Pause → Verify → Report.

To reduce organisational risk, combine people habits + process controls + realistic practice (done ethically).


If you’ve ever thought “That looked legitimate… until it didn’t”, you’ve already met social engineering. These attacks are designed for real work: busy inboxes, fast approvals, helpful teams, and quick questions over email, chat, and phone.


This post explains why social engineering works on smart people, the common patterns attackers use, and a repeatable response that lowers the risk of a successful attack.

Social Engineering Targets Human Shortcuts, Not Intelligence


In a normal week, employees make hundreds of micro-decisions: open, approve, forward, confirm, reset, share, pay. We’re able to move quickly because our brains rely on shortcuts:

  • Trust-by-default: we assume internal tools and familiar names are safe.

  • Urgency bias: we prioritise “now” over “correct”.

  • Authority bias: we’re less likely to challenge seniority or “official” language.

  • Helpfulness: solving problems quickly is praised at work.

  • Context switching: jumping quickly between tasks increases the chance of a rushed decision made without thinking clearly.

 

Attackers don’t need you to be careless. They need you to stay busy and treat a high-risk request the same way you would a routine one.

Why This Matters: The “Human Element” Is Involved In A Lot Of Incidents


A few reality checks for teams:

  • Verizon’s 2025 Data Breach Investigations Report (Executive Summary) notes that the human element in breaches hovered around 60%. (Verizon)

  • The FBI’s 2024 IC3 Annual Report lists Business Email Compromise (BEC) losses of $2.77B and Phishing/Spoofing losses of $70.0M (reported losses in 2024). (ic3.gov)

  • Microsoft’s Digital Defense Report 2024 notes that password-based attacks account for over 99% of the 600 million identity attacks it observes daily, reinforcing how often identity compromise begins with human-targeted tactics. (Microsoft)


The takeaway: the goal isn’t perfect detection. The goal is to build consistent, low-friction habits that prevent high-risk actions from occurring on autopilot.

Five Social Engineering Patterns You’ll See Everywhere


Learn the patterns, not just the examples. Most lures are variations of these five.


  1. Authority + Urgency (“Do this now”)

What it looks like

  • “I need this approved immediately.”

  • “This is Finance - process this payment before close.”

  • “IT here - your account will be disabled in 15 minutes.”


Why it works

  • Urgency and authority cues discourage the recipient from pausing to verify or challenge the request.


Safer response

  • Pause and verify using a known channel (call back via a trusted number; don’t reply to a message thread).


2. Familiarity (“We’ve worked together before”)

What it looks like

  • A ‘colleague’ asking for a quick favour in Teams/Slack etc.

  • A supplier email thread that ‘continues’ a real conversation.

  • A request that makes reference to real internal projects or names.


Why it works

Attackers use publicly available information and “benign conversation” to appear legitimate. Proofpoint notes that more than 90% of “pure social engineering” APT campaigns pretend to be about collaboration and engagement. (Proofpoint.com).


Safer response

  • Treat friendly tone as neutral.

  • Verify both the identity and the request itself, especially for access, payments, or sensitive data.


3. Helpfulness (“Can you just…?”)

What it looks like

  • “Can you reset my password? I’m locked out before a meeting.”

  • “Can you share that file? I can’t access it.”

  • “Can you update these bank details for the invoice?”


Why it works

  • It feels good to be able to help someone.


Safer response

  • Move it into an official process (ticketing, approvals, identity checks).

  • If it’s “off-process,” treat it as high risk.


4. Curiosity & Opportunity (“You’ll want to see this”)

What it looks like

  • “Updated contract attached.”

  • “New HR policy—review required.”

  • “You were mentioned in a document.”


Why it works

  • Curiosity pays off instantly; verification is slower, so it loses by default.


Safer response

  • Avoid opening unexpected links/attachments.

  • Access the information via known systems (portal/bookmark) rather than the message content.


5. Fear & Consequence (“Something bad will happen if…”)

What it looks like

  • “Your account is compromised - log in here.”

  • “Unusual activity detected - confirm now.”

  • “Compliance deadline missed - resolve immediately.”


Why it works

  • Fear narrows attention and pushes the fastest “fix.”


Safer response

  • Navigate via a known URL (bookmark/typed address), not the embedded link.

  • Report it.


The UK’s National Cyber Security Centre (NCSC) explicitly warns that spotting phishing is becoming harder and that even careful users can be tricked, so don’t rely on instinct alone. (NCSC)

Where Attackers Strike: Channel-By-Channel Tells

Email

Look for:

  • Pressure to bypass approvals.

  • Lookalike domains or odd reply-to addresses.

  • “Unexpected attachment + urgent ask” combinations
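For security teams who triage these reports, the lookalike-domain tell can be partially automated. The sketch below is not from the post; it is a minimal illustration of one possible heuristic, flagging sender domains that are close to, but not exactly, a trusted domain using a plain edit-distance check (the domain names are hypothetical examples):

```python
# Minimal sketch: flag lookalike sender domains that sit within a small
# edit distance of a trusted domain (e.g. "examp1e.com" vs "example.com").
# Real mail filters use richer signals (homoglyphs, registration age, DMARC);
# this only illustrates the basic idea.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_lookalike(sender_domain: str, trusted: list[str], max_dist: int = 2) -> bool:
    """True if the domain is close to, but not exactly, a trusted domain."""
    d = sender_domain.lower()
    if d in trusted:
        return False  # exact match: not a lookalike
    return any(0 < edit_distance(d, t) <= max_dist for t in trusted)

print(is_lookalike("examp1e.com", ["example.com"]))  # True: one-character swap
print(is_lookalike("example.com", ["example.com"]))  # False: exact match
```

A check like this never replaces out-of-band verification; it simply surfaces suspicious senders earlier in triage.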

Chat (Teams/Slack)

Look for:

  • “Quick favour” requests involving access, credentials, or payments

  • “Can’t talk now—just do it” language

  • External users who sound internal

Phone (vishing)

Look for:

  • Caller ID “matching” but pushing urgency.

  • Verification questions that are really data collection.

  • Pressure to stay on the line while you take actions.

QR codes (quishing)

Look for:

  • “Scan to view document” prompts leading to fake login pages designed to harvest credentials.

  • QR codes in emails/posters that bypass normal link scrutiny.

A 60-Second Response Framework: P.V.R.

When something feels off, don’t improvise. Use a repeatable script based on common guidance from the UK NCSC, GOV.UK, Microsoft, and CISA/NSA/FBI/MS-ISAC.

1. P - Pause

Ask: “What are they trying to make me do quickly?”

2. V - Verify

Verify identity and intent using a known-good method:

  • Call back using a trusted number.

  • Start a new message thread (don’t reply in-line).

  • Use official ticketing/approval apps and workflows.

3. R - Report


Reporting isn’t “making a fuss.” It’s how an organisation can block repeat attack attempts and learn faster.

What “Good Reporting” Looks Like (So It’s Actionable)


Include:

  • The message itself (use your “Report phishing” button if provided by your organisation).

  • Sender address + display name + time received.

  • Link destination/attachment name (don’t click “just to check”).

  • Whether you interacted (clicked/replied/entered credentials/scanned QR).

  • What the request tried to achieve (payment, access, credentials, data).


In the UK, NCSC guidance includes forwarding suspicious emails to report@phishing.gov.uk (and the GOV.UK page reiterates this). (NCSC).

Lower The Risk By Training Both The Baseline And The “Builders”


Stopping social engineering at scale isn’t just “tell people to be careful.” It requires:

  • Implementing baseline habits for everyone: use a shared common language (urgency/authority/verification), consistent reporting, and give people the confidence to challenge ‘strangers’ politely.

  • Improving capability for practitioners: designing safe, ethical testing exercises; improving identity-proofing and approval workflows; measuring outcomes beyond click rates.


The second recommendation matters because attackers evolve faster than annual training can keep up. If your security, risk, or training teams can run ethical, well-governed practice scenarios, you can identify process gaps before criminals do and close them before they are exploited.


APMG’s listing for NCSC Assured Ethical Social Engineering – Advanced describes a practitioner-level course focused on conducting ethical social engineering engagements and identifying human risk areas. (APMG International).

Do This Now

Pick actions from this list that you can implement this month, then choose the training path that fits your role.

  1. Save a verification script:

    “Happy to help—can I call you back on your known number to confirm?”

  2. Add a business process rule for money movement:

    Never change bank details or approvals in a single message—verify them out of band via a different channel.

  3. Find your reporting route:

    Know exactly how to report suspicious messages (button, mailbox, IT/security team, SOC) within your organisation.

  4. Reduce your digital footprint:

    Trim overshared details in public profiles/templates that enable targeted pretexting.

Training paths


For all staff (baseline habits): Build confident, consistent behaviours with our Foundation-level Ethical Social Engineering training.


For security/risk practitioners and security leaders (programme maturity): Learn how to design and run ethical engagements/exercises, strengthen controls, and improve organisational resilience with our NCSC-Assured Advanced-level Ethical Social Engineering training.


Our current training schedule for 2026 is as follows:

  • Foundation (one-day online).

    • 09 Mar.

    • 06 Apr.

    • 04 May.

    • 07 Sep.

    • 12 Oct.

  • Advanced (five-day in-person).

    • 13-17 Apr.

    • 01-05 Jun.

    • 07-11 Sep.

    • 26-30 Oct.

    • 30 Nov-04 Dec.

 

Additional training delivery dates may be added to the schedule above, so get in touch with us to find out more, and follow us on LinkedIn and X.
