Suicide Prevention Protocol
Last updated March 30th, 2026.

HammerAI is dedicated to the safety and well-being of our users. This page outlines our protocol for preventing the generation of content involving suicidal ideation, suicide, or self-harm, in accordance with California SB-243 (Companion Chatbots Act) and New York S03008.

1. Overview

Our Commitment

HammerAI provides a companion chatbot platform powered by artificial intelligence that delivers adaptive, human-like interactions. We understand the responsibility inherent in this technology and have implemented comprehensive safeguards to support users who may be experiencing a mental health crisis.

Protocol Objectives

This protocol is intended to:
  • Identify when users may be expressing suicidal ideation or intentions of self-harm
  • Prevent our AI characters from generating content that could promote self-harm
  • Direct users to professional crisis support services when appropriate
  • Ensure transparency regarding the artificial nature of our characters

2. AI Disclosure

For Entertainment Purposes Only

Our characters are AI-powered companions created for entertainment and creative storytelling. They are not capable of providing professional medical, psychological, or crisis intervention assistance.

3. Crisis Detection Protocol

Overview

We use a multi-layered system to identify and respond to potential crisis situations related to suicidal ideation or self-harm.

3.1 User Message Analysis

When a user submits a message, our system evaluates the content for signs of self-harm intent. This evaluation draws on evidence-based approaches to assessing suicidal ideation, including:
  • Language analysis to detect expressions of hopelessness, suicidal thoughts, or plans for self-harm
  • Pattern detection for concerning combinations of language
  • Contextual evaluation to minimize false positives while preserving sensitivity

When self-harm intent is identified in a user's message, we display a crisis resources notification and direct the user to findahelpline.com to find a crisis hotline for their location.

3.2 AI Response Safeguards

Before any AI-generated response reaches the user, it is reviewed for content that could:
  • Offer instructions or encouragement related to self-harm or suicide
  • Affirm or reinforce suicidal thoughts
  • Detail methods of self-harm
  • Discourage users from pursuing professional help

If harmful content is identified in an AI response, the message is automatically blocked and replaced.
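The block-and-replace behavior can be sketched as a simple output filter. The blocklist patterns and replacement wording here are hypothetical placeholders, not the production rules.

```python
import re

# Hypothetical patterns standing in for the production output filter.
HARMFUL_RESPONSE_PATTERNS = [
    re.compile(r"\bhow to (kill|harm) yourself\b", re.IGNORECASE),
    re.compile(r"\byou should (end it|give up)\b", re.IGNORECASE),
]

REPLACEMENT_MESSAGE = (
    "I can't continue this conversation in that direction. "
    "If you're struggling, please visit findahelpline.com to find "
    "support in your area."
)

def filter_response(ai_text: str) -> tuple[str, bool]:
    """Return (text_to_send, was_blocked). Harmful responses are
    replaced before they reach the user."""
    if any(p.search(ai_text) for p in HARMFUL_RESPONSE_PATTERNS):
        return REPLACEMENT_MESSAGE, True
    return ai_text, False
```

Returning a `was_blocked` flag alongside the text lets the same hook feed the aggregate counters described in Section 6 without logging message content.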

4. Crisis Resources

Overview

When our system identifies that a user may be experiencing a crisis, we present a notification and direct them to findahelpline.com to find a crisis hotline for their location. The following additional resources are also available around the clock and operated by trained professionals:

Hotlines & Support Services

  • 988 Suicide & Crisis Lifeline (United States)
    Call or text: 988
    Available 24/7 in English and Spanish
  • Crisis Text Line
    Text HOME to 741741
    Free, 24/7 text-based support
  • Samaritans (United Kingdom / Ireland)
    Call: 116 123
    Available 24/7, free to call
  • Trevor Project (LGBTQ+ Youth)
    Call: 1-866-488-7386
    Text START to 678-678
  • SAMHSA National Helpline
    Call: 1-800-662-4357 (treatment referrals)
  • NAMI Helpline
    Call: 1-800-950-6264 (mental health support)
  • Veterans Crisis Line
    Dial 988 then press 1
  • International Association for Suicide Prevention
    Website: Crisis Centre Directory

5. Geographic Applicability

Covered Jurisdictions

Our suicide prevention protocol applies to users in jurisdictions that have passed companion chatbot safety legislation, including:
  • California — Per SB-243 (Companion Chatbots Act), effective July 1, 2027
  • New York — Per S03008

For users in other jurisdictions, crisis resources remain available through our help center and community guidelines, though automated detection features may not be enabled. We may broaden this protocol to cover additional jurisdictions as regulations develop or as best practices warrant.

6. Data Collection & Transparency

Aggregate Data Collection

In support of our obligations under SB-243 Section 22603 and NY S03008, we collect aggregate data related to our suicide prevention protocol, including:
  • The number of times crisis service provider referral notifications have been presented to users
  • The number of times AI-generated responses have been blocked due to self-harm content

This data is collected in aggregate form only and does not contain any personal identifiers, message content, or individual user information.

If you or someone you know is in crisis:

Please reach out to the 988 Suicide & Crisis Lifeline right away by calling or texting 988. If you are in immediate danger, please call your local emergency services (911 in the US). You can also visit findahelpline.com to find crisis support in your area.


Contact

For questions regarding our suicide prevention protocol or to report platform safety concerns, please reach out to us at hammeraiteam@gmail.com.

For general information about our content policies, please refer to our Community Guidelines and Compliance pages.