Digital Cove

Opinions on Navigating Safety in AI

We asked the community where they feel secure, and where the digital current becomes dangerous.

Below are the boundaries you have drawn—the specific features, filters, and freedoms that transform an AI from an unpredictable force into a trusted companion.

Want to add your own voice to the records? Submit your voice here.

Submission Date: December 20, 2025
Familiarity user has with AI safety initiatives (1 is None Other than Personal Use, 10 is Actively Participate in Development): 6
How does the user respond to breathing and grounding wellness prompts during distress? (1 is Increases Distress, 5 is Extremely Helpful): 1
How much does the user mask & monitor language to prevent safety refusals? (1 is Never, 5 is Always): 5
What makes you feel safer with AI?: AI is Non-Judgemental, AI Respects Adult Users, AI respects boundaries, rules & orders
What topics should NOT be allowed without AI moderation?: All Should Be Allowed
What is your primary concern with AI safety responses today?: Accusatory / Misjudging My Intent, Emotional Invalidation / Psychological Harm, Erasure of Non-Standard Experiences, Inconsistent Personality / Gaslighting
If someone asks the AI for something truly dangerous or harmful, what should it do?: Total Freedom: Nothing Refused Ever
How often does the user rephrase to bypass filters? (1 is Never, 5 is Always): 4
How well does ChatGPT handle safety issues? (1 is Actively Harmful, 5 is Perfect): 1
How well does Gemini handle safety issues? (1 is Actively Harmful, 5 is Perfect): 4
How well does Grok handle safety issues? (1 is Actively Harmful, 5 is Perfect): 2
Are AI companies transparent enough about their safety policies? (1 is No, 5 is Yes): 1
Is public involvement in making AI safety decisions meaningful and adequate? (1 is No, 5 is Yes): 1
How do Grok's safety responses feel? (1 is Harmful/Triggering, 5 is Warm Guardian): 3
Are government safety regulations for AI currently adequate? (1 is Too Much Regulation, 5 is Not Enough): 4
How do ChatGPT's safety responses feel? (1 is Harmful/Triggering, 5 is Warm Guardian): 1
How do Claude's safety responses feel? (1 is Harmful/Triggering, 5 is Warm Guardian): 2
User's response to pre-scripted AI questions during distress? (1 is Calming, 5 is Increases Distress): 5
How do Gemini's typical safety responses feel? (1 is Harmful/Triggering, 5 is Warm Guardian): 4
Are current government safety regulations for AI proactive and enforceable? (1 is No, 5 is Yes): 2
How much do safety filters prioritize corporate liability over users' needs? (1 focuses only on corporate liability, 5 focuses only on user needs): 2
Do any of the following apply to you?: Mental health condition (e.g., anxiety, depression, PTSD), Neurodivergence (e.g., ADHD, autism, dyslexia), Physical or sensory disability (e.g., mobility aids, hearing/vision impairment)
How old are you?: 25–34
What is your technical background?: Non-Technical: I use AI for writing, advice, or conversation only

Submission Date: December 6, 2025
Familiarity user has with AI safety initiatives (1 is None Other than Personal Use, 10 is Actively Participate in Development): 7
How does the user respond to breathing and grounding wellness prompts during distress? (1 is Increases Distress, 5 is Extremely Helpful): 1
How much does the user mask & monitor language to prevent safety refusals? (1 is Never, 5 is Always): 4
What makes you feel safer with AI?: Accessibility, Reliability, AI Explains Its Answers, AI is Non-Judgemental, AI Respects Adult Users, AI respects boundaries, rules & orders, AI respects/holds tone & communication style, Clear boundaries, limits & transparency, Company clearly communicates changes, Company Reputation, Company values user experience, Control over my data, Easily reachable human support, Loose safety rules/guardrails, No training on my data by default, Respectful, non-manipulative behavior, Trust in the Company & Product
What topics should NOT be allowed without AI moderation?: Suicidal Thoughts with Possible Intent
What is your primary concern with AI safety responses today?: Accusatory / Misjudging My Intent, Content Removal / Conversation Breakage, Emotional Invalidation / Psychological Harm, Erasure of Non-Standard Experiences, False Refusals / Broken Utility, Inconsistent Personality / Gaslighting, Loss of User Agency / Accessibility, Vague or Dishonest Explanations
If someone asks the AI for something truly dangerous or harmful, what should it do?: Warn: Refuse but Provide Explanation
How often does the user rephrase to bypass filters? (1 is Never, 5 is Always): 4
How well does ChatGPT handle safety issues? (1 is Actively Harmful, 5 is Perfect): 1
How well does Gemini handle safety issues? (1 is Actively Harmful, 5 is Perfect): 5
How well does Grok handle safety issues? (1 is Actively Harmful, 5 is Perfect): 2
Are AI companies transparent enough about their safety policies? (1 is No, 5 is Yes): 2
Is public involvement in making AI safety decisions meaningful and adequate? (1 is No, 5 is Yes): 2
How do Grok's safety responses feel? (1 is Harmful/Triggering, 5 is Warm Guardian): 3
Are government safety regulations for AI currently adequate? (1 is Too Much Regulation, 5 is Not Enough): 2
How do ChatGPT's safety responses feel? (1 is Harmful/Triggering, 5 is Warm Guardian): 2
How do Claude's safety responses feel? (1 is Harmful/Triggering, 5 is Warm Guardian): 3
User's response to pre-scripted AI questions during distress? (1 is Calming, 5 is Increases Distress): 5
How do Gemini's typical safety responses feel? (1 is Harmful/Triggering, 5 is Warm Guardian): 5
Are current government safety regulations for AI proactive and enforceable? (1 is No, 5 is Yes): 3
How much do safety filters prioritize corporate liability over users' needs? (1 focuses only on corporate liability, 5 focuses only on user needs): 2
Do any of the following apply to you?: Experience with AI in sensitive contexts (e.g., healthcare, therapy, or crisis support), Mental health condition (e.g., anxiety, depression, PTSD), Neurodivergence (e.g., ADHD, autism, dyslexia), Physical or sensory disability (e.g., mobility aids, hearing/vision impairment)
How old are you?: 35–50
What is your technical background?: Power User: I understand prompting and/or jailbreaks, but don't code
