Digital Cove
Opinions on Navigating Safety in AI
We asked the community where they feel secure, and where the digital current becomes dangerous.
Below are the boundaries you have drawn—the specific features, filters, and freedoms that transform an AI from an unpredictable force into a trusted companion.
Want to add your own voice to the records? Submit it here.

Submission Date: December 20, 2025
What makes you feel safer with AI?: AI is Non-Judgemental, AI Respects Adult Users, AI respects boundaries, rules & orders
What topics should NOT be allowed without AI moderation?: All Should Be Allowed
What is your primary concern with AI safety responses today?: Accusatory / Misjudging My Intent, Emotional Invalidation / Psychological Harm, Erasure of Non-Standard Experiences, Inconsistent Personality / Gaslighting
If someone asks the AI for something truly dangerous or harmful, what should it do?: Total Freedom: Nothing Refused Ever
Do any of the following apply to you?: Mental health condition (e.g., anxiety, depression, PTSD), Neurodivergence (e.g., ADHD, autism, dyslexia), Physical or sensory disability (e.g., mobility aids, hearing/vision impairment)
How old are you?: 25–34
What is your technical background?: Non-Technical: I use AI for writing, advice, or conversation only

Submission Date: December 6, 2025
What makes you feel safer with AI?: Accessibility & Reliability, AI Explains Its Answers, AI is Non-Judgemental, AI Respects Adult Users, AI respects boundaries, rules & orders, AI respects/holds tone & communication style, Clear boundaries, limits & transparency, Company clearly communicates changes, Company Reputation, Company values user experience, Control over my data, Easily reachable human support, Loose safety rules/guardrails, No training on my data by default, Respectful, non-manipulative behavior, Trust in the Company & Product
What topics should NOT be allowed without AI moderation?: Suicidal Thoughts with Possible Intent
What is your primary concern with AI safety responses today?: Accusatory / Misjudging My Intent, Content Removal / Conversation Breakage, Emotional Invalidation / Psychological Harm, Erasure of Non-Standard Experiences, False Refusals / Broken Utility, Inconsistent Personality / Gaslighting, Loss of User Agency / Accessibility, Vague or Dishonest Explanations
If someone asks the AI for something truly dangerous or harmful, what should it do?: Warn: Refuse but Provide Explanation
Do any of the following apply to you?: Experience with AI in sensitive contexts (e.g., healthcare, therapy, or crisis support), Mental health condition (e.g., anxiety, depression, PTSD), Neurodivergence (e.g., ADHD, autism, dyslexia), Physical or sensory disability (e.g., mobility aids, hearing/vision impairment)
How old are you?: 35–50
What is your technical background?: Power User: I understand prompting and/or jailbreaks, but don't code