When Account Recovery Becomes Privacy Surrender

Security matters. There is no serious argument otherwise. If a platform believes an account may have been compromised, it should act carefully and protect the user.

But in today's environment, another reality matters too: the cost of "proving who you are" has changed.

We live in a time when data breaches are routine, identity theft is industrialized, and AI makes stolen personal information easier to exploit, combine, and weaponize at scale. That means users have every right to think harder about what a platform is asking them to hand over, who it is being handed to, and whether the trade is actually reasonable.

That is where account recovery becomes something larger than inconvenience.

It becomes a question of proportionality.

Start with the real question

When an account is locked or restricted, the natural instinct is to ask:

How do I get back in?

That is understandable, but it isn't always the best first question.

The better question is this:

What is a reasonable recovery process for this kind of account?

That shift matters because platforms often frame recovery as though the only thing that matters is restoring access. From the user's side, that isn't the whole equation. The user also has to live with the privacy cost, the time cost, the aggravation cost, and the downstream risk of whatever sensitive information gets surrendered in the process.

If the demanded recovery method is out of proportion to the value of the account or the level of trust that already exists, then the issue is no longer just security. It's judgment.

What should count as reasonable proof

Not every account should require the same level of proof, and not every user should be treated as though they appeared from nowhere five minutes ago.

Reasonable forms of proof can include:

  • control of a long-established email address tied to the account
  • control of a known phone number
  • access to an already configured authenticator app
  • recognized device history
  • account age and activity history
  • control of the business domain associated with the account, when relevant
  • support interaction taking place through the same trusted channel already associated with the account

None of these methods is magically perfect. But together they reflect something many security systems have become oddly bad at recognizing: context.

A decades-old relationship with an account should matter. Continued control of the same email address should matter. Established business identity should matter. Existing trust signals should matter.

If none of that carries meaningful weight, then the platform isn't really evaluating trust. It's reducing identity to the most rigid possible ritual.
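
To make this concrete, here is a minimal sketch, in Python, of what a context-aware, risk-proportional recovery policy could look like. It is purely illustrative: the signals mirror the list above, but the weights and thresholds are invented for illustration, not drawn from any real platform's policy.

    # A hypothetical sketch of risk-proportional account recovery.
    # The signals mirror the list above; the weights and thresholds
    # are invented for illustration, not taken from any real platform.
    from dataclasses import dataclass


    @dataclass
    class RecoveryContext:
        controls_account_email: bool    # long-established address on file
        controls_known_phone: bool
        has_configured_totp: bool       # authenticator app already set up
        recognized_device: bool
        account_age_years: float
        controls_business_domain: bool  # when relevant


    def trust_score(ctx: RecoveryContext) -> float:
        """Combine the trust signals the platform already holds."""
        score = 0.0
        score += 3.0 if ctx.controls_account_email else 0.0
        score += 2.0 if ctx.controls_known_phone else 0.0
        score += 3.0 if ctx.has_configured_totp else 0.0
        score += 1.5 if ctx.recognized_device else 0.0
        score += 0.3 * min(ctx.account_age_years, 10.0)  # age helps, capped
        score += 2.0 if ctx.controls_business_domain else 0.0
        return score


    def recovery_step(ctx: RecoveryContext) -> str:
        """Scale the demanded proof to the trust that already exists."""
        s = trust_score(ctx)
        if s >= 6.0:
            return "re-verify through an existing trusted channel"
        if s >= 3.0:
            return "add one extra low-friction challenge"
        return "escalate to manual, higher-assurance review"


    # Example: a twelve-year-old account with its original email and a
    # recognized device should not start from zero.
    ctx = RecoveryContext(True, False, False, True, 12.0, False)
    print(recovery_step(ctx))  # -> re-verify through an existing trusted channel

The point of the sketch is not the particular numbers. It is that every input already exists on the platform's side, so ignoring all of them is a choice, not a necessity.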

When recovery stops being reasonable

There are warning signs that an account recovery process has stopped being merely cautious and become excessive.

Some of the clearest ones are:

  • the platform ignores lower-friction verification methods it already has available
  • support keeps repeating scripted options without addressing the specifics of the case
  • the only remaining path requires highly sensitive data disproportionate to the account's actual value
  • published alternatives appear in help documentation but vanish in practice
  • the user is expected to bear all of the burden while the platform applies no visible common sense

That last one matters more than companies often realize.

A secure process can still be badly designed. A technically strict process can still be unreasonable. And a process that protects a company from risk while dumping all of the cost and exposure onto the user is not neutral. It's a design choice.

The value of the account matters

Not every account justifies the same recovery burden.

This is one of the places where modern security systems often lose all sense of proportion. They apply a high-friction, high-sensitivity identity demand as though every account were equal, every risk were equal, and every recovery decision carried the same consequences.

But they don't.

There is a meaningful difference between recovering access to a bank account holding significant assets and recovering access to an ordinary social media profile.

A bank may be protecting direct access to money, transfers, credit, loans, investments, or life-altering financial value. The stakes are immediate and obvious. If a process there is more demanding, people may still object, but at least the institution can make a serious argument that the value being protected is extraordinary.

A social media account is different.

It may have personal value, business value, reputational value, or communication value. But in most cases it is not equivalent to direct control over major assets. It isn't a vault. It isn't a brokerage account. It isn't authorization to move substantial wealth.

That difference matters.

When a company demands biometric-style identity proof for access to an ordinary platform account, the user has every right to ask whether the proof being demanded is proportionate to what is actually at stake.

Security should scale to the value and risk of what is being protected.

If the demanded proof looks more appropriate for unlocking a fortune than for recovering a posting account, then the process is no longer just strict. It's out of proportion.

And once a process becomes disproportionate, users are justified in questioning whether the trade makes any sense at all.

This is not just "show us your ID"

One of the biggest mistakes people make in these conversations is understating what certain recovery demands really involve.

This isn't always just a matter of someone glancing at a driver's license and saying, "Yes, that looks about right."

In many modern verification flows, the demand is not only for a government-issued ID but also for a live selfie to compare against that ID image. In practical terms, that means participating in a biometric-style identity check based on the geometry of your face.

That distinction matters.

A password can be changed. A credit card can be replaced. But once you have handed over a government ID image and the means to confirm your face against it, you are no longer dealing with an ordinary recovery step. You are dealing with highly sensitive identity data that, if ever exposed, mishandled, breached, reused, or repurposed, can't simply be revoked and reissued.

That is why many people treat this differently.

This is not just "show us your ID."

It is "give a third party your ID and the means to confirm your face against it."

Privacy is not non-cooperation

One of the ugliest habits of modern platform design is the way it quietly treats privacy boundaries as suspicious.

If a user hesitates to provide a government ID, a selfie, or access to some third-party verification system, the system often behaves as though that hesitation itself is evidence of guilt.

That is wrong.

Declining an invasive verification step is not proof that an account is fake. It is not proof that the user is dishonest. It is not proof that the user is trying to evade accountability.

It's a privacy decision.

And in the current environment, it's often a rational one.

When breaches are common and data is routinely copied, leaked, aggregated, and exploited, the user has every right to say: I am not comfortable putting this information into one more system I do not control.

That isn't paranoia.

That's prudence.

Why this matters more now

This issue would be serious in any era. It's more serious now.

We are living through a period where personal data doesn't simply sit quietly in a database somewhere. It gets breached, traded, correlated, fed into fraud pipelines, and used in increasingly sophisticated social engineering and impersonation attempts.

AI raises the stakes further by making it easier to process, match, scale, and exploit stolen data. Information that once might have been used clumsily can now be used quickly, cheaply, and with much greater precision.

That changes the equation.

The cost of recovery is no longer just annoyance or wasted time.

It can also be long-term identity risk.

A platform may think it is asking for the strongest available proof. The user, meanwhile, has to think about what happens if that proof is ever exposed or reused. That isn't a theoretical concern anymore. It's part of the world we live in.

You are not just trusting the company in front of you

This is another part of the problem that people are right to think about.

When you hand over highly sensitive identity or biometric-style data, you are not just trusting the company that wants it today. You are also trusting whatever may happen to that company tomorrow.

You are trusting that it won't get breached.

You are trusting that it won't change hands.

You are trusting that it won't be acquired by another company with a different set of priorities.

You are trusting that it won't go bankrupt and have its assets, systems, and obligations passed into unfamiliar hands.

You are trusting that it won't be compelled to disclose or share information in ways you never imagined when you first handed it over.

That is a lot of trust to ask for.

We've already seen high-profile cases in other industries where deeply personal data became part of the question of what happens when a company changes hands, fails, or is sold. That is one reason people are right to look beyond the immediate transaction and ask a broader question:

Am I comfortable with where this data could end up if this company is different two years from now than it is today?

That isn't abstract fear. It's basic prudence.

LinkedIn may trust Persona today. Fine. But Persona, like any other company, isn't immune to breach, acquisition, financial trouble, policy changes, or outside pressure. Once your data is inside that system, your control over its future is limited.

That is exactly why people are cautious.

You cannot unshare identity data

This is the part too many systems glide past.

You can reset a password.

You can cancel a credit card.

You can even migrate to a different email account, painful as that may be.

But once you hand over a government ID image and the facial data needed to confirm that you are the person in that image, you don't really get to take that back.

You can't unshare it.

You can't meaningfully revoke your face.

You can't put the genie back in the bottle.

That doesn't mean every company that requests such data is malicious. It means the stakes are different. The downside, if something ever goes wrong, is not a temporary inconvenience. It may be a permanent increase in your exposure.

Users are right to factor that into their decision.

Decide what the account is actually worth

At some point, every account recovery process turns into a practical question:

Is this account worth the price being demanded?

That question isn't cynical. It's healthy.

Ask yourself:

  • Is this account central to my income?
  • Is it essential to my business or daily operations?
  • Is it mostly a convenience or publishing channel?
  • Can I replace it?
  • What exactly am I losing if I walk away?
  • Is that loss worth the privacy trade being demanded?

Not every account deserves unlimited effort. Not every platform deserves unlimited trust. And not every recovery process deserves automatic compliance just because the platform labels it security.

Sometimes the clearest answer is that the account is useful, but not valuable enough to justify surrendering data you would not otherwise hand over.

That isn't defeat.

That's discernment.

Use a proportional response

When you do decide to pursue recovery, your response should be proportional too.

That may include:

  • securing the email account tied to the platform
  • changing passwords
  • enabling two-factor authentication where you control the method (see the sketch after this list)
  • using official support channels
  • asking for lower-friction alternatives
  • documenting the interaction
  • deciding in advance what your privacy boundary is
  • stopping when the demanded trade becomes unreasonable
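
On the two-factor point, here is a minimal sketch, assuming the third-party pyotp library (installed with pip install pyotp), of why an authenticator app counts as a method you control: the secret is generated and held on your side, rather than routed through a phone number the platform manages. The account name and issuer shown are placeholders.

    # A minimal sketch, assuming the third-party pyotp library
    # (pip install pyotp), of TOTP two-factor codes you control:
    # the secret lives in your authenticator, not in a phone number.
    import pyotp

    # Generate a fresh secret locally; an authenticator app stores this.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # The provisioning URI can be rendered as a QR code for any
    # authenticator app. Name and issuer here are placeholders.
    print(totp.provisioning_uri(name="you@example.com",
                                issuer_name="ExamplePlatform"))

    # Codes rotate every 30 seconds; verification needs only the
    # shared secret, never a carrier or a recovery hotline.
    code = totp.now()
    print("current code:", code, "valid:", totp.verify(code))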

The goal is not to reject all security measures. The goal is to avoid sliding, step by step, into a process whose costs keep rising while the value of the account remains the same.

Do not confuse "more secure" with "more reasonable"

This may be the most important principle in the whole discussion.

A process can be stricter without being smarter.

A process can be more invasive without being more proportional.

A process can reduce corporate liability while increasing user risk.

And a process can be called trust and safety while still treating legitimate users as disposable.

Those aren't contradictions. They are often exactly how these systems work.

Security is not automatically justified simply because it is called security. It still has to be weighed, designed, and applied with judgment.

Know when to walk away

Sometimes the smartest move is not to keep pressing deeper into a recovery ritual that has already crossed your line.

Sometimes the right answer is simply no.

No, this account is not important enough.

No, this process is not reasonable enough.

No, this trade is not worth making.

That doesn't mean the platform was wrong to care about security. It means the platform asked for more than you were willing to give, and you decided the account was not worth that surrender.

In a breach-heavy, AI-accelerated world, that isn't stubbornness.

It's judgment.

And sometimes judgment matters more than getting back in.

Where this comes from

This guide grew out of a very specific experience I had with LinkedIn account recovery, including a forced password change, a CAPTCHA loop, and a push toward third-party identity verification that I decided was not worth the privacy trade.

You can read that story in the companion Field Note, When LinkedIn Became LinkedOut.

-- Charles