
| author | date | gitea_url | xmr |
| --- | --- | --- | --- |
| Mulligan Security | 2025-05-16 | http://git.nowherejezfoltodf4jiyl6r56jnzintap5vyjlia7fkirfsnfizflqd.onion/nihilist/blog-contributions/issues/312 | 86NCojqYmjwim4NGZzaoLS2ozbLkMaQTnd3VVa9MdW1jVpQbseigSfiCqYGrM1c5rmZ173mrp8RmvPsvspG8jGr99yK3PSs |

to be explained:

why do you need a clear threat model (to not lose your mind over stuff that won't likely happen while overlooking simple mistakes)
why it's very unlikely that hardware 0-day will get you but it's very likely you'll do some dumb thing and deanonymize yourself (wondering about 0-days is overconfidence in most cases)
how bad people got caught in the past (what opsec mistakes they made, the stupider the better), give like 3-5 examples
    the guy who uploaded tar of his entire home directory is my personal fav (Julius Kivimaki)
    OSDoD mixing personal and business stuff online
    Pharoah googling why his servers are down (because FBI was imaging them lol)
    ...
threat scenarios (explain each), some examples:
    physical breach (leaving your laptop unattended at a restaurant or sth)
    social engineering or phishing
    reusing the same passwords and using one already breached somewhere
    ...

OPSEC: the name of the game

When running any kind of clandestine operation, if you want to remain anonymous, you have to follow OPSEC (operational security) rules and procedures.

More often than not, as we will see here, when an operation (or individual operator) is compromised, it is through OPSEC mistakes.

Why OPSEC matters

From the adversary's point of view (let's call them Leo), repression requires the following broad steps:

  • Initial detection: someone is doing something we don't like
  • Identification: who those someones are
  • Neutralization: make sure they stop doing whatever they set out to do

Initial detection

Depending on your organization and activities, this initial detection phase can come as soon as you get started (if you are staging protests, detection is inevitable).

What good OPSEC looks like

If your activities themselves must remain clandestine, OPSEC rules and procedures can help reduce your profile and make it less likely that your activity will be identified for what it is.

A simple example:

  • sabotage during WW2 (source)
    • choose acts for which many people could have been responsible, and it's even better if they can be credibly blamed on an accident (such as an insecurely fastened hydro-turbine cover leading to flooding of the facility)

What bad OPSEC looks like

The quicker you are identified, the quicker your other lines of defense must come into play. If you are a novice at clandestine operations, you probably still have things to learn in order to stay safe; if your activities are quickly identified, that leaves you even less time to actually get better at survival.

How it plays out

  • drug smuggling
    • OPSEC Mistakes
      • bungling the weight and balance of a smuggling ship so much that its course became erratic and attracted attention
    • Outcome
      • Seizure of the ship and its $32M worth of cargo, arrest of the crew members

Identification

After initial detection, your adversary will start collecting data to identify you, working from the traces you left during operations.

What good OPSEC looks like

Standardized operating procedures (SOPs) for your organization, providing a framework for:

  • general operations
    • what communication channels to use
    • use of encryption, codewords, passphrases
    • Channel structure
      • full mesh = more danger if any one participant is compromised
      • clandestine cell structure = more resilient but also makes communication more costly
    • Communication plan for each member ([PACE](https://en.wikipedia.org/wiki/PACE_(communication_methodology)) model); see the sketch after this list
      • if one communication channel is cut or compromised, there are fallback solutions that have already been investigated and whose risk levels have been deemed acceptable
  • Specific action SOPs (e.g. a protest)
    • initial assembly point
    • time, date
    • means of transportation (ingress and egress)
    • ...
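
To make the PACE idea concrete, here is a minimal sketch (in Python, with purely illustrative channel names and a hypothetical health check) of how a pre-agreed fallback order might be walked through when the primary channel is compromised:

```python
# A minimal sketch of a PACE plan: Primary, Alternate, Contingency and
# Emergency channels are tried in the pre-agreed order, and the first one
# still considered usable is selected. Channel names are illustrative.
from typing import Callable, Optional

# Pre-agreed fallback order, primary first, emergency last.
PACE_PLAN = [
    "primary_xmpp",           # Primary
    "alternate_email",        # Alternate
    "contingency_dead_drop",  # Contingency
    "emergency_in_person",    # Emergency
]

def pick_channel(plan: list[str], is_usable: Callable[[str], bool]) -> Optional[str]:
    """Return the first channel in the plan that is still usable, or None."""
    for channel in plan:
        if is_usable(channel):
            return channel
    return None

if __name__ == "__main__":
    # Pretend the primary channel is compromised and the alternate is down.
    unavailable = {"primary_xmpp", "alternate_email"}
    print(pick_channel(PACE_PLAN, lambda c: c not in unavailable))
    # -> contingency_dead_drop
```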

What bad OPSEC looks like

In 2012, Ochoa, a member of the hacktivist group CabinCr3w (an offshoot of Anonymous), conducted unauthorized intrusions into U.S. law enforcement websites. He defaced these sites and published personal information of police officers, including phone numbers and home addresses, as part of an operation dubbed "Operation Pig Roast."

Critical Mistake: Ochoa posted a photograph on one of the defaced websites showing a woman holding a sign with a message mocking law enforcement.

The photo's metadata contained GPS coordinates, which led authorities to identify and locate Ochoa.
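
As an illustration of how much a single file can betray, here is a minimal sketch (assuming the Pillow library is installed; file names are placeholders) that checks a photo for embedded GPS coordinates and re-encodes the pixels so no EXIF tags survive publication:

```python
# A minimal sketch: detect a GPSInfo EXIF block and strip all metadata by
# copying only the pixels into a fresh image. File names are placeholders.
from PIL import Image

GPS_TAG = 34853  # standard EXIF tag id for the GPSInfo block

def has_gps(path: str) -> bool:
    """Return True if the image carries GPS coordinates in its EXIF data."""
    return GPS_TAG in Image.open(path).getexif()

def strip_metadata(src: str, dst: str) -> None:
    """Copy only the pixels into a fresh image so no metadata is carried over."""
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)

if __name__ == "__main__":
    if has_gps("defacement_photo.jpg"):
        print("warning: photo contains GPS coordinates")
    strip_metadata("defacement_photo.jpg", "defacement_photo_clean.jpg")
```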

How it plays out

  • The FBI arrested Ochoa on March 20, 2012, in Galveston, Texas.
  • He was charged with unauthorized access of a computer and, in June 2012, pleaded guilty to the charges. Ochoa was sentenced to 27 months in federal prison and ordered to pay restitution.

Neutralization

That's when it's time to start running. If your adversary has gathered enough data to actively start neutralizing your operation you need to be prepared for it. Such preparation has two required components:

  • Detection: the more advance warning you have that the adversary is moving against you, the better
  • Avoidance: neutralization actions can't be directly thwarted (unless you are a nation state and then this discussion becomes one about military tactics), so you will want to minimize the damage

Detection

Your general operations rules should have built-in detection capacities: either a way for operators to give advance warning or for the organization to detect when one has been turned or captured.

  • An easy-to-use counterintelligence tool is the barium meal test, or canary trap: by detecting leaks you can exploit them in anti-surveillance operations or use them as a warning system (a minimal sketch follows below)
  • Another one is a simple canary (example: warrant canary), where the cessation of an innocuous action is used to send a message
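
Below is a minimal sketch of the barium meal / canary trap idea, assuming a notice is distributed to several members: each copy carries a unique, innocuous variation, and a registry lets you map a leaked copy back to its recipient. Member names and the variation scheme are purely illustrative:

```python
# A minimal sketch of a barium meal / canary trap: every recipient gets a
# uniquely varied copy of the same notice, and a leaked copy can be matched
# back to whoever received it. Names and variations are illustrative.
import hashlib
from typing import Optional

BASE_NOTICE = "The next meeting is moved to {time} at the usual place."
# One innocuous variation per member: a slightly different meeting time.
VARIANTS = {"alice": "19:30", "bob": "19:45", "carol": "20:00"}

def build_copies(base: str, variants: dict[str, str]) -> dict[str, str]:
    """Return one uniquely marked copy of the notice per recipient."""
    return {member: base.format(time=marker) for member, marker in variants.items()}

def identify_leaker(leaked_text: str, copies: dict[str, str]) -> Optional[str]:
    """Match a leaked copy against the registry of distributed copies."""
    leaked_fingerprint = hashlib.sha256(leaked_text.encode()).hexdigest()
    for member, copy in copies.items():
        if hashlib.sha256(copy.encode()).hexdigest() == leaked_fingerprint:
            return member
    return None

if __name__ == "__main__":
    copies = build_copies(BASE_NOTICE, VARIANTS)
    leaked = copies["bob"]  # pretend this copy surfaced on an adversary channel
    print(identify_leaker(leaked, copies))  # -> bob
```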

What good OPSEC looks like

Let's talk about Operation Delego, in which a major CSAM-sharing and production group was infiltrated in a joint operation conducted by 19 countries. This group counted more than 600 members and had strict operational security rules:

  • Periodic platform change (new hidden service)
  • With each platform change, all users would change pseudonyms and receive new, randomly generated ones
  • Required use of GnuPG for encrypting communications
  • Never share PII
  • Strict metadata scrubbing policy for all shared media
  • Only share media over the trusted website's channels

The neutralization operation

After infiltrating the group, Leo managed to trick several users into directly sharing media and personal information over other, unsanctioned channels, without encryption.