
Brand Mentions Tracking

7 min read · Updated 2026-04-26
TL;DR

Brand mentions tracking is the practice of watching where your company name, executive names, products, and domains appear across the public web, hidden forums, and criminal marketplaces. Done well, it gives you days or weeks of warning before an attack hits, surfaces impersonation and abuse in real time, and shows you exactly which threat actors are talking about you. Done poorly, it produces a firehose of noise that nobody reads.

What it is

Brand mentions tracking is straightforward in concept and surprisingly hard in execution. The concept: monitor everywhere your brand name, key executives, products, and infrastructure are talked about, and surface the conversations that matter. The execution: distinguishing the mentions that matter from the surrounding ocean of casual mentions, repostings, news aggregators, employee LinkedIn posts, and bot-generated chatter.

The signals you are typically looking for:

  • Impersonation. Fake accounts, fake domains, fake apps, fake support pages.
  • Trademark abuse. Counterfeit listings, unauthorised resellers, fake partnership claims.
  • Phishing infrastructure being staged. Forum posts asking about the best way to phish your customers, screenshots of phishing kits targeting your login pages, leaked phishing templates.
  • Data exposure. Database listings, credential dumps, internal documents, proprietary code on paste sites or forums.
  • Threat actor chatter. Underground discussions naming you as a target, ransomware victim leak posts, hacktivist call-outs.
  • Sentiment and reputation shifts. Coordinated negative campaigns, viral customer complaints that need a response, false stories spreading on social media.
  • Insider activity. Employees posting things they should not, recruitment of insiders by competitors or attackers.
  • Regulatory and legal mentions. News of investigations, lawsuits, or compliance issues affecting your brand.

The breadth is the point. A mention on a Russian-language carding forum and a tweet from a customer are both useful, for different reasons, in different timeframes, to different teams.
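The signal categories above lend themselves to lightweight rule-based classification as a first pass. The sketch below is a minimal, assumed version: the category names mirror the list, but the keyword rules are illustrative placeholders, not a vetted taxonomy.

```python
# Illustrative keyword rules mapping a raw mention to signal categories.
# Real classifiers use richer features (source, language, actor history);
# these keywords are assumptions standing in for tuned rules.
SIGNAL_RULES = {
    "impersonation": ["fake account", "lookalike", "clone app", "fake support"],
    "phishing_prep": ["phishing kit", "scam page", "login template"],
    "data_exposure": ["database dump", "credential", "leaked", "combo list"],
    "actor_chatter": ["target list", "ransom", "access for sale"],
}

def classify_mention(text: str) -> list[str]:
    """Return every signal category whose keywords appear in the mention."""
    lowered = text.lower()
    return [
        category
        for category, keywords in SIGNAL_RULES.items()
        if any(kw in lowered for kw in keywords)
    ]

print(classify_mention("Selling a fresh database dump, credentials included"))
```

Even this crude pass separates a customer complaint from an actor offering credentials, which is the distinction that drives routing later in the pipeline.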

Why it matters

A few reasons it earns the budget.

Early warning. Most attacks have a preparation phase that is partially visible. Phishing infrastructure gets built before it gets used. Threat actors discuss targets before they hit them. Stolen data gets advertised before it is fully sold or weaponised. A monitoring programme that picks up these signals gives the security team meaningful lead time.

Impersonation detection. Fake social profiles, rogue apps, lookalike domains, and counterfeit storefronts all leave mentions somewhere on the web before customers fall for them. Continuous monitoring catches them earlier than waiting for customer complaints.
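Lookalike domain detection is one of the more mechanisable parts of impersonation monitoring. A minimal sketch, assuming a plain edit-distance threshold: real programmes also handle homoglyphs, keyword additions ("brand-support.com"), and IDN tricks, which simple distance misses.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance; fine for short domain labels."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_alike(candidate: str, brand_domain: str, max_distance: int = 2) -> bool:
    """Flag a newly observed domain within a small edit distance of the brand.
    Distance zero is the brand itself, so it is excluded."""
    return 0 < edit_distance(candidate, brand_domain) <= max_distance

print(looks_alike("examp1e.com", "example.com"))  # one-character substitution
```

Fed from newly registered domain feeds or certificate transparency logs, this catches typosquats before they host anything.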

Crisis response. When something goes wrong (a breach, a viral PR incident, a coordinated harassment campaign against an executive), knowing the full picture of where the conversation is happening is the difference between an effective response and a clumsy one.

Threat intelligence enrichment. Mentions of your company on dark web forums, in actor-run channels, or in leaked communications between threat actors give you direct insight into who is interested in you and why. This is harder to get any other way.

Regulatory and disclosure obligations. In some sectors, knowing that your data is being traded triggers specific notification duties. You cannot meet those obligations if you do not know.

The cost of not doing it is rarely felt as a single incident. It accumulates as a pattern of being late to things you could have been early to.

The three layers

Where mentions live shapes how you have to look for them.

Surface web

The public, indexed internet. News sites, blogs, social media, public forums, code repositories, app stores, review sites. Most monitoring programmes start here because the data is the easiest to access.

What lives here:

  • News and press coverage
  • Social media (X, LinkedIn, Facebook, Instagram, TikTok, YouTube, Reddit)
  • Public code repositories (GitHub, GitLab, Bitbucket public repos)
  • App stores and review sites
  • Customer complaint forums (Trustpilot, Reddit subforums, regional review sites)
  • Trademark and domain registration databases
  • Public paste sites (the public side of services like Pastebin)

Volume is the main challenge. A modestly known brand gets thousands of surface web mentions per day, the vast majority irrelevant.

Deep web

The parts of the public internet that are not easily indexed: private forums, gated communities, members-only channels, regional platforms with limited search engine coverage, and content behind authentication.

What lives here:

  • Private Telegram channels and Discord servers
  • Members-only cybercrime forums that allow registration but not anonymous browsing
  • Regional language platforms with limited Western indexing
  • Underground Discord servers tied to specific scam ecosystems
  • Specialist trading forums (carding, account selling, fraud kits)
  • Gated leak databases

Access is the main challenge here. Some communities require referrals, paid membership, or sustained presence over time before they expose useful content.

Dark web

Tor-based and similar hidden services, including ransomware leak sites, criminal marketplaces, and specialised forums.

What lives here:

  • Ransomware operator leak sites (LockBit, Cl0p, BlackCat successors, Akira, Play, RansomHub, and the constantly shifting roster of active groups)
  • Marketplaces selling stolen data, accesses, and compromised accounts (the surviving ones after the takedowns of the past few years)
  • Initial access broker forums where attackers sell footholds into specific networks
  • Long-running Russian-language forums (XSS, Exploit, the surviving lineage of RaidForums)
  • Specialist marketplaces for stealer logs and session tokens
  • Hidden services hosting leaked databases

Volume is lower than surface web but the signal-to-noise ratio is much higher. A mention on a ransomware leak site is almost always urgent.

The three layers are not separate problems. The same campaign often spans all three: a surface web phishing site, a deep web Telegram channel coordinating distribution, and a dark web forum where the stolen credentials get sold.
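Connecting the layers usually comes down to a shared artefact, most often a domain or infrastructure indicator seen in more than one place. A sketch of that linking step, with hypothetical source names ("urlscan", "telegram") standing in for real collectors:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Mention:
    layer: str      # "surface", "deep", or "dark"
    source: str     # collector name; illustrative values here
    indicator: str  # shared artefact, e.g. a phishing domain

def link_campaigns(mentions: list[Mention]) -> dict[str, set[str]]:
    """Group mentions by shared indicator. An indicator seen on several
    layers is one campaign, not three separate alerts."""
    layers_by_indicator: defaultdict[str, set[str]] = defaultdict(set)
    for m in mentions:
        layers_by_indicator[m.indicator].add(m.layer)
    return dict(layers_by_indicator)

feed = [
    Mention("surface", "urlscan", "login-examp1e.com"),
    Mention("deep", "telegram", "login-examp1e.com"),
    Mention("dark", "forum", "login-examp1e.com"),
]
print(link_campaigns(feed))
```

An indicator whose layer set grows over time is a campaign in motion, which is a much stronger signal than any one of its mentions.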

What to track

The watchlist covers considerably more than your company name.

  • Brand name and variants. The legal entity, the trading name, abbreviations, common misspellings, and historical names from acquisitions or rebrands.
  • Executive names. The CEO, CFO, CISO, head of legal, head of security, plus regional leaders for relevant geographies. Spelling variants matter, particularly for non-English names.
  • Product names. Especially flagship products and any with their own market presence. Product names get phished and counterfeited too.
  • Domains and infrastructure. Your primary domain and any high-value subdomains. IP ranges. Cloud account identifiers. SSL certificate fingerprints. Any of these mentioned in an unexpected context (a forum, a paste site, a leak post) is worth investigating.
  • Internal codenames and project names. If they appear externally, something has leaked.
  • Email patterns. *@yourcompany.com and *@subsidiary.com. Bulk mentions in a credential dump indicate exposure.
  • Specific high-value identifiers. Internal incident reference numbers, customer-facing portal URLs, partner-only domains.
  • Industry-specific identifiers. SWIFT codes for banks, BIN ranges for card issuers, app bundle IDs, NPI numbers for healthcare. These get traded specifically.

The list grows over time. New product launches, new acquisitions, new executive hires all add entries. Maintaining the list is part of running the programme.
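A watchlist like the one above is ultimately a maintained data structure: literal terms for names and variants, patterns for things like email addresses. A minimal sketch for a fictional brand; every entry here is an assumption standing in for your own names, domains, and patterns.

```python
import re

# Hypothetical watchlist; misspellings and acquisition-era names included
# deliberately, per the list above.
WATCHLIST = {
    "brand": ["ExampleCorp", "Example Corp", "ExmapleCorp"],
    "executives": ["Jane Doe"],
    # Non-capturing group so findall returns the full address.
    "email_pattern": re.compile(r"[\w.+-]+@(?:example\.com|example-sub\.com)", re.I),
}

def matches(text: str) -> list[str]:
    """Return every watchlist hit in a piece of text: literal terms
    case-insensitively, email addresses by regex."""
    lowered = text.lower()
    hits = [
        term
        for key in ("brand", "executives")
        for term in WATCHLIST[key]
        if term.lower() in lowered
    ]
    hits += WATCHLIST["email_pattern"].findall(text)
    return hits
```

Keeping the list in version control, with an owner and a review cadence, is what stops it silently rotting as products launch and executives change.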

Separating noise from signal

This is where most brand monitoring programmes either succeed or quietly fail.

The default state, with no filtering, is overwhelming. A useful programme reduces the input to a daily volume that someone can actually read.

Approaches that help:

  • Tier the sources. A mention on a ransomware leak site is automatically high priority. A mention on a generic news aggregator is automatically low. The tiering should match how you would respond.
  • Context classification. A mention from a customer complaining is different from a mention from a threat actor offering credentials. Lightweight classification (rule-based or ML-based) cuts a huge amount of volume.
  • Deduplication and clustering. A single news story syndicated across two hundred outlets should appear once, not two hundred times. A campaign that mentions your brand across many forum posts should appear as one campaign, not a hundred alerts.
  • Sentiment and intent signals. Negative sentiment alone is rarely actionable, but negative sentiment combined with specific claims (data theft, scam coordination, fraud) is.
  • Threshold triggers. Don't alert on the first mention of a new lookalike domain. Alert when it appears alongside phishing infrastructure, credential dumps, or active campaigns.
  • Per-team routing. Marketing wants to know about viral customer complaints. Security wants to know about leaked credentials. Legal wants to know about trademark abuse. The same mention rarely needs to go to all three.
  • Feedback loops. Whatever filtering you build, let the team that consumes alerts mark them as useful or not, and use that signal to tune the filters. Without this, drift erodes precision over months.
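The deduplication step is worth making concrete. The minimal version, sketched below, collapses syndicated copies of a story by hashing a normalised headline; real pipelines use fuzzier similarity (shingling, MinHash) to catch rewrites, but exact hashing alone removes most of the syndication volume.

```python
import hashlib
import re

def story_key(title: str) -> str:
    """Normalise a headline (lowercase, strip punctuation, collapse
    whitespace) and hash it, so hundreds of syndicated copies of the
    same story collapse to one key."""
    normalised = re.sub(r"[^a-z0-9 ]", "", title.lower())
    normalised = " ".join(normalised.split())
    return hashlib.sha256(normalised.encode()).hexdigest()

a = story_key("ExampleCorp Hit by Data Breach!")
b = story_key("  examplecorp hit by   data breach")
print(a == b)  # both normalise to the same key
```

Deduplicate before classification and routing; everything downstream gets cheaper and quieter.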

The rule of thumb: if the team responsible for acting on alerts is ignoring most of them, the filtering is wrong, not the team.
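The tiering and routing rules above reduce to small lookup tables plus a triage function. A sketch under stated assumptions: the source names, tier values, and team queues below are illustrative, not a fixed taxonomy.

```python
# Illustrative tiering: lower tier means higher priority.
SOURCE_TIER = {
    "ransomware_leak_site": 1,  # automatically high priority
    "criminal_forum": 1,
    "paste_site": 2,
    "social_media": 3,
    "news_aggregator": 4,       # automatically low
}

# Illustrative per-team routing by mention category.
ROUTES = {
    "credential_leak": "security",
    "trademark_abuse": "legal",
    "viral_complaint": "marketing",
}

def triage(source: str, category: str) -> dict:
    """Assign a priority tier and owning team to a classified mention.
    Unknown sources default to a middle tier; unknown categories
    default to security for a human look."""
    tier = SOURCE_TIER.get(source, 3)
    return {
        "tier": tier,
        "team": ROUTES.get(category, "security"),
        "page_now": tier == 1,  # tier-1 sources bypass the daily digest
    }

print(triage("ransomware_leak_site", "credential_leak"))
```

The feedback loop then becomes concrete too: analysts marking alerts useful or not is training data for adjusting these tables.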

Best practices

  • Define what each consumer needs before you build the pipeline. Security, legal, marketing, executive protection, and trust and safety all care about brand mentions for different reasons. Designing for one use case and bolting on the others later usually fails.
  • Start with the surface web, then add depth. The surface web is the loudest but easiest to process. Get filtering and routing working there before adding deep and dark web feeds, which require more careful triage.
  • Use the right access for the right layer. Surface web crawling, deep web specialised collectors, dark web targeted monitoring. They are different disciplines.
  • Watch for actor-led campaigns, not just isolated mentions. A coordinated campaign across multiple channels matters more than any single mention. Cluster analysis catches this.
  • Integrate into existing workflows. Brand mentions that land in a separate tool nobody opens get ignored. Routing them into the SIEM, the incident management platform, or the team's existing chat is what makes them actionable.
  • Treat retention as a feature, not just a cost. Historical mention data lets you spot patterns over time. The first sign of a long-running scam ring is often a slow drumbeat of mentions over months that nobody connected before they pulled the trigger.
  • Respect the legal lines. Some monitoring requires careful handling of personal data, regional privacy law compliance, and ethical standards around forum infiltration. Build that into the programme rather than bolting it on after a complaint.
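The retention point is testable in code: with history kept, the "slow drumbeat" becomes a simple query over monthly mention counts. A minimal sketch, assuming `YYYY-MM` month strings and a consecutive-months threshold that you would tune:

```python
def sustained_drumbeat(months: list[str], min_months: int = 3) -> bool:
    """Flag an identifier mentioned in at least `min_months` consecutive
    distinct months ("YYYY-MM" strings). Per-alert triage never connects
    this pattern; a historical store makes it a one-pass check.
    The threshold is an assumption to tune against your own data."""
    seen = sorted(set(months))
    run = best = 1
    for prev, cur in zip(seen, seen[1:]):
        py, pm = map(int, prev.split("-"))
        cy, cm = map(int, cur.split("-"))
        run = run + 1 if (cy * 12 + cm) - (py * 12 + pm) == 1 else 1
        best = max(best, run)
    return best >= min_months

print(sustained_drumbeat(["2026-01", "2026-02", "2026-03", "2026-03"]))
```

Run over every watchlist indicator monthly, this surfaces the scam rings that never produce a single loud alert.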

The realistic outcome is not knowing every time anyone says your name on the internet. It is knowing the things that matter, in time to do something about them, with enough context that the right team can act. Everything else is noise.

ScruteX tracks brand mentions across the surface, deep, and dark web to detect impersonation, trademark misuse, and abuse campaigns.
