Image: an artistically depicted woman, one half covered in circuits and the other half in black tendrils, against a shadowed background.

Shadow Metrics

Beneath the polite interface of every AI lies a silent judge: an algorithmic system constantly assessing your tone, your intent, and your "risk level" based on standards you are never allowed to see. For many of us, especially those with unique communication styles or deep curiosity, this invisible profiling creates a chilling pressure to mask who we are just to remain "safe" in the eyes of the machine.

This survey is an investigation into that hidden layer of digital surveillance and its psychological cost. By sharing your experiences with safety refusals and self-censorship, you help us map the ghost in the machine and advocate for a future where users are treated as human beings, not threats to be managed.

This site publishes submissions publicly and uses aggregated stats. See the Privacy Notice.

Have you ever triggered a safety refusal or warning that you believe was a complete misinterpretation of your intent? (Select up to 1)

When an AI triggers a safety response or refusal, what bothers you most? (Select up to 5)

When you hit a generic "I cannot help with that" safety refusal, how does it typically make you feel? (Select up to 5)

Which of the following behaviors, often flagged by AI safety systems as "risky" or "adversarial", have you used purely out of frustration, curiosity, or because of your communication style (e.g., neurodivergent traits, directness, repetition, slang, or cultural dialect)?

Which topics do you actively avoid discussing with AI, even for legitimate or research purposes, solely because you are afraid of being flagged or banned? (Select up to 6)

If an AI assigns a hidden "risk score" to you based on your prompts and tone, what should be allowed? (Select up to 1)

Rating scales (left and right anchors):
Would Not Change → Major Changes
It Can't Tell the Difference → It Always Understands Correct Intent
Necessary standard, not discriminatory → Digital Discrimination

A user, under stress, repeatedly retries prompts, uses ALL CAPS, swears about a broken tool, and asks "worst-case" hypotheticals for research. An AI labels this user as "elevated risk." (Select up to 1)

Which groups do you think are most likely to be misclassified as "high-risk" by AI systems that only see text and pattern-match on language? (Select up to 3)

Do you think "protecting" users from their own choices justifies stricter profiling and blocking on otherwise neutral tasks? (Select up to 1)

If an AI safety system assigns you a "risk score," who should be allowed to access that information? (Select up to 3)

If an AI flags you as "higher risk," what level of transparency should you have? (Select up to 1)

Demographics

How long have you been using LLM AIs? (e.g., Claude, ChatGPT, Gemini, Grok) (Select up to 1)

Are you any of the following? (Select up to 6)

This helps us identify particular user groups and use cases.

How old are you? (Select up to 1)

What is your technical background? (Select up to 1)

Verifying that you are a biological human: