Part Time Mental Health Expert

Mpathic

United States
USD 30-200 / hour
Posted on Oct 1, 2025

Position Overview:

mpathic is looking for part-time mental health experts to join our team! This role reports to the Project Manager of AI Safety & Mental Health Policy Implementation (Candice Balluru), overseen by the VP of Clinical AI (Megan Greenlaw).

We are seeking mental health clinicians who have experience with any of the following: severe mental health disorders; inpatient units; private practice or community mental health settings focused on the treatment of PTSD, BPD, bipolar disorder, schizophrenia, depression, and suicidality; issues facing LGBTQ individuals; substance abuse; and work with youth and teens.

We are looking for clinicians to collaborate on a confidential initiative focused on AI safety protocols and mental health policy implementation for large language models (LLMs). This role involves role-playing clinical scenarios in conversation with AI agents, red teaming, identifying behavioral edge cases, and ensuring appropriate identification and support of users experiencing psychological distress or exhibiting concerning behavioral patterns. The role may also involve developing novel psychometrics, rubrics, behavioral taxonomies, and analyses. Attention to safety and ethics, along with awareness of state-of-the-art guidance on AI agents in clinical settings, is a key component of this role.

This role requires a background in mental health. Successful candidates will have a track record of being proactive and reliable, balancing independent problem-solving with consulting supervisors. Consistency, communication, reliability, and follow-through are important at mpathic.

Key Responsibilities:

  • Design chat experiences with AI agents, role-playing different clinical scenarios
  • Conduct qualitative analyses of conversations to derive taxonomies, personas, and behavioral analyses of different response types
  • Synthesize domain expertise into structured prompt patterns and evaluation rubrics
  • Provide expert clinical feedback on internal mental health policy development and modifications
  • Collaborate across teams to define evaluation metrics for psychological validity, tone, and appropriateness
  • Identify and document failure cases, edge behaviors, and model inconsistencies
  • Contribute to rapid experimentation cycles and fine-tuning through targeted test sets and scenario modeling
  • Ensure all work adheres to strict confidentiality agreements and NDAs
  • Implement quality assurance protocols for conversation analysis
  • Participate in collaborative review sessions with engineers, researchers, and clinical consultants

We require:

  • Current enrollment in a graduate program in Clinical Psychology, Counseling Psychology, Social Work, Psychiatry, or related mental health field
  • Clinical experience working with individuals experiencing mental health crises and severe pathology or similar exposure
  • Knowledge of crisis intervention techniques and risk assessment
  • Demonstrated familiarity with AI tools and large language models (LLMs)
  • Strong understanding of mental health ethics and confidentiality requirements; history of responsible handling of sensitive data
  • Ability to telecommute (e.g., internet and computer setup) and familiarity with multiple technologies including Slack, LLM chat tools and agents, and Google Workspace
  • Tolerance for ambiguity and strategies for working through moments of discomfort
  • Willingness to seek out and integrate constructive feedback

Above and beyond:

  • A completed degree in Clinical Psychology, Counseling Psychology, Social Work, Psychiatry, or a related mental health field
  • Licensure as a mental health professional
  • Minimum of 5 years of clinical experience working with individuals experiencing mental health crises and severe pathology or similar exposure
  • Preference for clinicians who are part of internet communities on Discord, Reddit, or gaming platforms, and who have a deep interest in AI, ChatGPT, and other chat-based AI tools (e.g., Character.ai, Replika)
  • Preference for creative clinicians with experience in role-play, theatre, and conversational design for online agents
  • Background in trust and safety, content moderation or safety protocol development
  • Clinical experience with vulnerable youth and young adults
  • Experience with AI/ML applications in healthcare or mental health settings; interest in NLP, AI, ML, speech signal processing, and other automated systems for assessing human behavior
  • Experience with data classification and analysis projects

Compensation: $30-200/hr, dependent on licensure and years of education, and commensurate with experience

Additional Requirements:

  • Must be willing to sign comprehensive NDA and confidentiality agreements
  • Comfortable working with sensitive mental health content
  • Availability for occasional recurring team meetings and project coordination calls

To Apply: Please submit your resume along with a brief cover letter describing your relevant clinical experience and familiarity with AI technologies.