Honorable Hacking: How to Red Team Without Losing Your Soul

Krishna Agarwal

6 min read

Just because you're paid to think like a criminal doesn't mean you should abandon every moral principle you've ever held. The role of a red team operator or leader comes with a badge that says "authorized to break things," not "authorized to be a sociopath with a security clearance." Yet here we are, apparently needing to remind people that permission to break things doesn't mean you get to break everything.

So let's talk about what it really means to operate without losing your soul in a field where your job description literally includes "simulate being the enemy."


Getting a signed contract and a scope of work doesn't transform you into some digital villain free to unleash chaos. Yet you'd be surprised how many red teamers interpret "authorized penetration testing" as "permission to channel their inner chaos agent." The reality is far more nuanced, and frankly, far more interesting.

Think of it like getting a driver's license: it gives you the legal right to drive, but that doesn't mean you can do donuts in a school parking lot or blast down a busy street at 200+ km/h just because your license says you're allowed to operate a vehicle. Having access is one thing; using it responsibly is another.

The confusion seems to stem from a fundamental misunderstanding of what authorization actually means. When a client signs off on a red team engagement, or even a simple pentest, they're not giving you permission to wreak havoc on their infrastructure. They're hiring you to find weaknesses in their defenses and systems, not to exploit every weakness you find to its absolute limit.

Authorization has boundaries. Those boundaries aren't just legal — they're ethical, professional, and frankly, they're what separate you from the actual bad guys you're supposed to be simulating.


When Right and Wrong Get Messy

Let's start with the uncomfortable truth: while some ethical boundaries in red teaming are crystal clear, many situations fall into gray areas, and anyone pretending to have simple answers about them is either naive or lying. Unlike legal requirements, which are helpfully written down for us to reference, ethical boundaries shift with culture, context, and individual values. What makes this particularly challenging is that red teamers operate in a space where traditional ethical frameworks get twisted into pretzels.

Some boundaries are non-negotiable: never endanger anyone's life or safety. If your testing reveals you can disable critical safety systems (e.g., fire suppression, emergency communications) in a manufacturing facility, you document the vulnerability without triggering it. This isn't ethically complex; it's a clear professional boundary that no legitimate justification can cross. No scope document, no client request, and no professional duty ever justifies putting lives at risk.

But consider this grayer scenario: you've been authorized to phish employees and harvest their credentials. During your reconnaissance, you discover that impersonating IT support and calling employees directly would be incredibly effective: people readily give passwords over the phone when they believe they're talking to their IT department. Is it acceptable to exploit this human trust, even though it's within your authorized scope? This is where moral philosophers would feast.

Some operators take the "we're authorized, so it's fair game" approach — essentially arguing that organizational consent makes any action within scope ethically acceptable. They'd say the company authorized deception, so exploiting employee trust through social engineering is perfectly acceptable. Others lean into the utilitarian angle: "by demonstrating this vulnerability completely, we're preventing real adversaries from causing far worse damage." Then there's the professional duty argument: "my job is to test security thoroughly — if I don't fully exploit this vulnerability, I'm not doing my job properly."

Each viewpoint has value, and each has blind spots. The authorization argument ignores the fact that individual employees never consented to deception. The utilitarian angle can rationalize exploiting human trust by focusing on preventing hypothetical future attacks. And the professional-duty argument risks prioritizing thoroughness over human dignity, treating psychological manipulation as an acceptable cost of complete testing.

The reality is that ethical red teaming requires you to think beyond simple rule-following. It means considering immediate physical safety, respecting individual human dignity even when testing organizational defenses, and maintaining your moral compass even when proving a point might require crossing dangerous or manipulative lines. If you find yourself justifying potentially harmful or exploitative actions with "but it's in scope," you might want to step back and reconsider whether you're still operating ethically.


Scope

There's one thing that stands between you and federal prison: scope. If authorization is your permission slip, scope is the detailed instruction manual that tells you exactly what you're allowed to break and how you're allowed to break it. Without proper scope, you're just a criminal with good intentions.

Scope isn't just a legal formality; it's the foundation everything else is built on. Every decision you make during an engagement should be filtered through the scope definition. Can you target that system? Check the scope. Is that attack vector permitted? Check the scope. Are you allowed to pivot to that network segment? Check the scope. You get the idea.
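To make that habit concrete, here's a minimal sketch of what "check the scope" can look like when baked into tooling. All names and ranges here are hypothetical; in a real engagement the authorized and excluded networks come straight from the signed scope-of-work document.

```python
import ipaddress

# Hypothetical scope definition -- in practice, these ranges come from
# the signed scope-of-work document, never from the operator's memory.
SCOPE = {
    "networks": ["10.20.0.0/16", "192.168.50.0/24"],  # authorized ranges
    "excluded": ["10.20.99.0/24"],                    # carved out (e.g., safety systems)
}

def in_scope(target_ip: str) -> bool:
    """True only if target_ip is inside an authorized range
    and not inside an explicitly excluded one."""
    ip = ipaddress.ip_address(target_ip)
    allowed = any(ip in ipaddress.ip_network(net) for net in SCOPE["networks"])
    excluded = any(ip in ipaddress.ip_network(net) for net in SCOPE["excluded"])
    return allowed and not excluded

# Gate every action, no exceptions:
print(in_scope("10.20.1.5"))    # inside an authorized range
print(in_scope("10.20.99.8"))   # authorized range, but explicitly excluded
print(in_scope("8.8.8.8"))      # never authorized
```

The design point is that exclusions win over inclusions: a target must be affirmatively authorized and not carved out before any tool touches it.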

Scope isn't something handed down from on high that must be accepted without question. You have not just the right but the responsibility to refuse engagements with poorly defined scope. If the boundaries aren't clear, don't proceed with the engagement.

The scoping process itself isn't straightforward. Expect multiple conversations, revisions, and clarifications before everyone agrees on the final scope. The time spent getting scope right prevents misunderstandings that could derail the entire engagement.

This isn't about being difficult or perfectionist. It's about protecting yourself, your team, and your career. Running an engagement without a clearly written scope is professional malpractice, and the consequences can extend far beyond a single engagement.

Just Because You Can Doesn't Mean You Should

Even if something is legal and passes your personal ethics test, that doesn't automatically make it responsible. Responsibility is the final filter that asks: just because you can, should you?

Responsibility forces you to consider questions that legality and ethics might not address. Is this action safe? Does it demonstrate that you actually care about the client's wellbeing? Are you showing respect for their data, their people, and their business operations?

The challenge is that as a red teamer, you're constantly switching between two roles. One moment you're the adversary, thinking like an attacker. The next moment, you need to drop that role entirely and think like a consultant whose job is to reduce risk, not create it.

When you're in adversary mode, you might identify a vulnerability that could take down critical business systems. Responsibility kicks in when you decide how far to push that exploit. Do you demonstrate the vulnerability and document the impact, or do you actually crash the system to "prove the point"? The adversary might choose disruption; the responsible consultant chooses restraint.

This is where we struggle. The adrenaline rush of successful exploitation can make it easy to stay in adversary mode longer than necessary. But responsibility means knowing when to switch roles, when to exercise restraint, and when to prioritize the client's safety over your own satisfaction with a clever exploit.

Your Actions Affect Everyone

Poor OPSEC (operational security) isn't just about you potentially getting caught; it's about burning bridges for every professional who comes after you. When you leave digital breadcrumbs leading back to your real identity, compromise client data through sloppy handling, or worse, brag about your exploits on social media (yes, this actually happens), you're not just risking your own career.

Every time someone in this field acts recklessly, it makes companies more hesitant to hire red teams or even penetration testers, creates more restrictive contracts, and feeds the narrative that security professionals can't be trusted with sensitive access. Your poor OPSEC becomes everyone else's problem when the industry tightens up because of your mistakes.

Good OPSEC isn't just about technical controls; it's about understanding that your actions reflect on the entire profession. In practice, that means keeping your mouth shut about client details, properly sanitizing your reports, and not posting screenshots of your latest "epic hack" on LinkedIn or Instagram without redacting them properly.

The Ego Trap

Being good at hacking can feed your ego in ways that aren't always healthy. When you can bypass security controls and demonstrate vulnerabilities that others missed, it's easy to start believing your own hype. This is where many professionals lose their way.

The ego trap manifests in different ways: showing off unnecessarily complex exploits when simple ones would suffice (though occasionally a complex exploit is the simplest way to demonstrate a complex vulnerability), or treating engagements like personal validation exercises rather than professional services. The moment you start measuring success by how impressive your exploits are rather than how much you've helped improve the client's security posture, you've lost the plot.

We need to remember that our skills are tools for solving problems, not toys for impressing people.

Don't Let the Role Change You

There's real psychological pressure in constantly thinking like an adversary. When your job requires you to identify weaknesses, exploit trust, and bypass security controls, it can gradually shift your worldview in ways that aren't healthy.

Some operators report feeling increasingly cynical about security in general, becoming distrustful in their personal relationships, or developing an unhealthy fascination with the criminal mindset they're supposed to be simulating. Others struggle with the cognitive dissonance of being paid to break rules while maintaining personal integrity.

These psychological risks are real, and they're amplified by the ego boost that comes with success. When you can bypass technical controls and demonstrate vulnerabilities that others missed, it's intoxicating. The rush of proving that security is an illusion can become addictive in ways that compromise your judgment.

We need to maintain clear boundaries between our professional personas and our personal identities. Remember that our ultimate goal is to help make organizations more secure, not to prove how insecure they are.

The moment we start deriving personal satisfaction from exploiting human trust or technical vulnerabilities, we've crossed a line that's difficult to come back from.

Don't Be Part of the Problem

In the end, red teaming is about making organizations more secure, not about feeding your ego or living out hacker fantasies. When you lose sight of that mission, you become part of the problem rather than the solution. The industry needs professionals who can be trusted with significant access and responsibility, not digital cowboys looking for their next adrenaline fix.

Your authorization comes with boundaries, your access comes with responsibilities, and your skills come with obligations to use them ethically. The moment you start treating those as suggestions rather than requirements, you've stopped being a professional and started being a liability.

The security industry is built on trust — trust that good guys act responsibly, protect client interests, and maintain professional standards even when no one is watching.
