This project studies how AI systems can responsibly model and communicate mental-health risk over time. Using DynoSys, a graph-temporal AI framework for longitudinal behavioral-health data, the project will examine how to generate uncertainty-aware, non-deterministic, prevention-oriented risk trajectories from multimodal data.
The focus is not only on prediction accuracy but also on safe communication: how to avoid stigmatizing labels, distinguish prediction from causality, communicate uncertainty, and design mental-health AI outputs that support research and prevention without encouraging misuse.
DynoSys integrates longitudinal information across genetics, environment, brain development, neurocognition, family context, peer context, sleep, screen use, and behavior. The broader goal is to shift mental-health AI from static “risk score” prediction toward dynamic, interpretable, ethically responsible modeling. The initial use case is adolescent mental health and substance-use prevention, where risk often emerges gradually through interactions among biological, environmental, social, and behavioral factors.
I have already developed preliminary DynoSys pipelines for integrating polygenic risk scores, environmental and family exposures, neurocognitive features, brain-derived representations, and behavioral outcomes into time-indexed datasets. The current framework supports both continuous behavioral trajectories and time-to-event substance-use initiation outcomes. The next step is to build a responsible reporting layer that makes model outputs safer, clearer, and more useful for prevention research.
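As a rough illustration of the time-indexed structure (with hypothetical column names, not the actual DynoSys schema), static domains such as polygenic scores are broadcast across visits, while time-varying domains are keyed on subject and visit:

```python
# Minimal sketch (hypothetical column names, toy values): assembling a
# time-indexed, subject-by-visit table from separate domain sources.
# Real inputs, joins, and QC steps would be considerably more involved.
import pandas as pd

genetics = pd.DataFrame({"subject_id": [1, 2], "prs_ext": [0.8, -0.3]})
sleep = pd.DataFrame({
    "subject_id": [1, 1, 2, 2],
    "visit_month": [0, 12, 0, 12],
    "sleep_hours": [7.5, 6.9, 8.1, 8.0],
})
outcomes = pd.DataFrame({
    "subject_id": [1, 2],
    "time_to_initiation_months": [18.0, 24.0],  # censored if no event
    "event_observed": [1, 0],
})

# Time-varying domains are keyed on (subject_id, visit_month); static
# domains (e.g., polygenic scores) are broadcast across all visits.
panel = sleep.merge(genetics, on="subject_id").merge(outcomes, on="subject_id")
print(panel)
```

The same long-format layout accommodates both outcome types: continuous trajectories attach to each visit row, while time-to-event outcomes attach once per subject with an event indicator.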
What are this project's goals? How will you achieve them?
The main goal is to develop and evaluate a responsible AI reporting framework for dynamic mental-health risk modeling.
I will achieve this through five specific goals:
Build uncertainty-aware risk trajectory outputs
I will develop tools that estimate how mental-health and substance-use risk changes over time, while clearly communicating uncertainty. The outputs will avoid deterministic language such as “this child will develop X” and instead use probabilistic, research-oriented language.
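As a minimal sketch of what uncertainty-aware output could look like (simulated data and a simple bootstrap stand-in, not the DynoSys models themselves):

```python
# Minimal sketch, not the DynoSys implementation: bootstrap a simple
# risk model to obtain an interval around a predicted risk, then phrase
# the output probabilistically rather than deterministically.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                          # simulated features
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # simulated outcome

preds = []
for _ in range(200):                                   # bootstrap resamples
    idx = rng.integers(0, len(X), len(X))
    model = LogisticRegression().fit(X[idx], y[idx])
    preds.append(model.predict_proba(X[:1])[0, 1])

lo, mid, hi = np.percentile(preds, [2.5, 50, 97.5])
# Research-oriented, non-deterministic phrasing:
print(f"Estimated risk: {mid:.0%} (95% interval {lo:.0%}-{hi:.0%}); "
      "a model-based association, not a statement about an individual's future.")
```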
Distinguish prediction from causality
A major risk in mental-health AI is that users may interpret predictive associations as causal intervention targets. I will create reporting templates that clearly separate predictive signals from causal claims. “What-if” estimates will be presented as hypothesis-generating unless supported by stronger causal evidence.
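One possible template structure, sketched below with illustrative field names rather than the actual DynoSys schema, keeps predictive findings and "what-if" estimates in separate, explicitly labeled sections:

```python
# Illustrative sketch of a report template structure; field names and
# wording are placeholders, not the final DynoSys reporting schema.
report_section = {
    "predictive_findings": {
        "label": "Predictive association (NOT causal)",
        "items": [
            {"factor": "shorter sleep duration",
             "direction": "higher estimated risk",
             "evidence": "longitudinal association in this cohort"},
        ],
    },
    "what_if_estimates": {
        "label": "Hypothesis-generating only",
        "caveat": ("These counterfactual-style estimates assume the model's "
                   "structure is correct and are not supported by causal "
                   "evidence such as randomized or quasi-experimental designs."),
        "items": [],
    },
}
```

Keeping the two sections structurally distinct, rather than relying on wording alone, makes it harder for downstream users to quietly treat predictive signals as intervention targets.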
Reduce stigmatizing or harmful communication
I will design output language that avoids labeling youth as “dangerous,” “defective,” or permanently high-risk. The system will emphasize developmental change, uncertainty, and potentially modifiable factors such as sleep, screen use, parental monitoring, peer context, and family environment.
Develop a responsible reporting prototype
I will build prototype reports that include risk trajectories, domain-level explanations, uncertainty summaries, limitations, and responsible-use warnings. These reports will be tested using simulated or de-identified example structures so that no sensitive individual-level data are exposed.
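A minimal sketch of such a report renderer (simulated inputs only; section names are illustrative) shows how limitations and responsible-use warnings can be made structurally impossible to omit:

```python
# Minimal sketch, assuming simulated inputs only: a renderer in which the
# limitations and responsible-use sections are always emitted, regardless
# of what the caller passes in.
def render_report(trajectory, explanations, uncertainty_summary):
    """Render a prototype report; warnings are baked in, not optional."""
    parts = [
        "## Risk trajectory (simulated example)",
        str(trajectory),
        "## Domain-level explanations",
        str(explanations),
        "## Uncertainty summary",
        str(uncertainty_summary),
        "## Limitations",
        "- Associations only; no causal claims are made.",
        "- Estimates carry substantial uncertainty at the individual level.",
        "## Responsible use",
        "- For research and prevention science, not diagnosis or punitive decisions.",
    ]
    return "\n".join(parts)

print(render_report([0.12, 0.15, 0.14], {"sleep": 0.4},
                    "wide intervals at later visits"))
```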
Produce an open technical report or manuscript
I will write a public-facing technical report or manuscript describing the responsible AI framework, design principles, limitations, and practical guidance for mental-health risk communication.
The project will be implemented in stages: first by formalizing safe communication principles, then by building reporting templates, then by integrating these templates into DynoSys model outputs, and finally by preparing documentation, examples, and a manuscript.
Funding will support the development of a responsible AI layer for DynoSys and help convert the current research prototype into a safer, more usable research tool.
The funds would be used for:
Software development time
Modularizing the DynoSys reporting layer, implementing uncertainty-aware visualizations, generating model output templates, and building reproducible example workflows.
Compute resources
Running model validation, cross-validation, uncertainty estimation, sensitivity analyses, and graph-temporal experiments.
Responsible AI evaluation
Developing and testing reporting language that avoids deterministic, stigmatizing, or misleading interpretations. This includes comparing different ways to communicate uncertainty, modifiable factors, and prediction-versus-causality boundaries.
Visualization and report design
Creating clear risk trajectory plots, domain-level explanation summaries, uncertainty displays, and responsible-use notes for mental-health research outputs.
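As a small illustration of the kind of display this covers (simulated numbers, with matplotlib as an assumed plotting backend):

```python
# Minimal sketch (simulated numbers): a risk-trajectory plot with an
# explicit uncertainty band and a research-use-only framing in the title.
import numpy as np
import matplotlib.pyplot as plt

months = np.array([0, 6, 12, 18, 24])
risk = np.array([0.10, 0.12, 0.15, 0.14, 0.16])   # point estimates
lo, hi = risk - 0.05, risk + 0.05                 # illustrative 95% interval

fig, ax = plt.subplots()
ax.plot(months, risk, marker="o", label="Estimated risk")
ax.fill_between(months, lo, hi, alpha=0.3, label="95% uncertainty band")
ax.set_xlabel("Months of follow-up")
ax.set_ylabel("Estimated probability of initiation")
ax.set_title("Simulated example; research use only, not a diagnosis")
ax.legend()
plt.show()
```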
Documentation and dissemination
Preparing tutorials, example reports, public documentation, and a manuscript or technical report that can be shared with researchers in mental health, addiction, developmental science, and biomedical AI.
With minimum funding, I would focus on producing a small but usable responsible reporting prototype with documentation and example outputs. With full funding, I would also expand the uncertainty evaluation, improve visualization tools, run more model comparisons, and prepare a stronger open-source release and manuscript.
I currently lead this project as a postdoctoral researcher working at the intersection of computational neuroscience, genetics, machine learning, and mental-health/addiction research.
My research focuses on longitudinal multimodal modeling of complex behavioral-health outcomes. I have experience with polygenic risk scores, GWAS-related workflows, survival modeling, machine learning, graph-temporal models, neuroimaging-derived features, and large-scale cohort data.
Relevant prior work includes:
Building longitudinal multimodal pipelines for behavioral-health risk modeling.
Integrating genetic, environmental, neurocognitive, brain-derived, and behavioral data.
Modeling both continuous mental-health trajectories and time-to-event substance-use initiation outcomes.
Developing early graph-temporal models for dynamic risk-state estimation.
Creating reproducible computational workflows for biomedical research.
Working on interpretation and reporting of model outputs for prevention-oriented research.
This project builds directly on my existing DynoSys framework. I may later add collaborators or advisors in responsible AI, mental-health communication, clinical psychology, or software engineering, but the initial prototype and documentation can be developed independently.
The most likely failure modes are technical, interpretive, and adoption-related.
The model outputs may be too uncertain or unstable
Mental-health and substance-use risks are complex, and the available data may not support highly precise individual-level prediction. If this happens, the project will still be useful by showing where uncertainty is high and by developing safer standards for communicating limitations.
Graph-temporal models may not outperform simpler baselines
It is possible that simpler penalized models or survival models perform as well as more complex graph-temporal models. To mitigate this, I will benchmark against simpler baselines and avoid presenting complexity as inherently better. If simpler models are more reliable, the reporting layer can still be applied to them.
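A minimal sketch of this benchmarking discipline, using stand-in models on simulated data (a gradient-boosted classifier as a placeholder for the more complex learner; the real comparison would involve the DynoSys graph-temporal models):

```python
# Minimal sketch: cross-validated comparison of a penalized baseline
# against a more complex stand-in model. The point is the discipline of
# reporting both, not the specific models used here.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = (X[:, 0] - X[:, 1] + rng.normal(size=300) > 0).astype(int)

for name, model in [
    ("penalized logistic (baseline)", LogisticRegression(C=1.0)),
    ("complex model (placeholder)", GradientBoostingClassifier()),
]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC {auc.mean():.3f} +/- {auc.std():.3f}")
```

If the baseline matches or beats the complex model, that result is reported as-is, and the responsible reporting layer is attached to whichever model is most reliable.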
“What-if” outputs may be misinterpreted as causal claims
This is one of the central risks. The project will mitigate it by explicitly labeling these estimates as hypothesis-generating, separating prediction from causality, and including clear limitations in every report.
Risk communication may still be misused
Even careful mental-health AI outputs can be misused if interpreted deterministically or punitively. The project will reduce this risk by avoiding stigmatizing labels, emphasizing modifiable factors, reporting uncertainty, and limiting the intended use to research and prevention science rather than individual diagnosis or punitive decision-making.
Limited adoption by other researchers
If the tool is hard to use, others may not adopt it. To reduce this risk, I will prioritize clean documentation, simulated examples, modular code, and practical reporting templates.
If the project does not achieve its most ambitious goals, a valuable lower-bound outcome would still be a documented framework for safer mental-health AI reporting, including example templates, uncertainty communication principles, and lessons learned from applying them to longitudinal behavioral-health modeling.
I have not raised external project funding for DynoSys in the last 12 months.
The project has so far been developed through my own research time and existing academic research infrastructure. I am applying for funding to support the transition from a research prototype into a more reusable, responsible, and publicly documented AI framework for mental-health risk modeling and communication.