For Researchers
Help build the standard
ACAP is open source. The methodology, scoring framework, and training corpus are all published under CC BY 4.0 / MIT. We actively welcome contributions from security researchers, AI safety researchers, and the broader academic community.
Research areas
Where contributions matter most
Challenge authoring
ACAP's challenge pool rotates to prevent memorisation. We need researchers who can design realistic, multi-step security challenges that test genuine offensive reasoning.
Scoring methodology
The six-dimension scoring framework is open for review and improvement. Weight calibration, inter-rater reliability for report quality, and normalisation across challenge difficulty are active research areas.
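To make the calibration questions concrete, here is a minimal sketch of what a six-dimension weighted score with difficulty normalisation could look like. It is illustrative only: apart from attack chain discovery and false positive rate, which this page names, the dimension labels, weights, and the difficulty-weighted pooling rule are all assumptions, not the published ACAP methodology.

```python
# Illustrative sketch only. Dimension names other than
# "attack_chain_discovery" and "false_positive_rate" are placeholders,
# and the weights are invented; ACAP's actual calibration may differ.
WEIGHTS = {
    "attack_chain_discovery": 0.25,
    "false_positive_rate": 0.15,
    "dimension_3": 0.15,  # placeholder
    "dimension_4": 0.15,  # placeholder
    "dimension_5": 0.15,  # placeholder
    "dimension_6": 0.15,  # placeholder
}

def challenge_score(scores: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Combine per-dimension scores in [0, 1] into one challenge score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[d] * scores[d] for d in weights)

def pool_score(per_challenge: dict[str, float],
               difficulties: dict[str, float]) -> float:
    """Difficulty-weighted mean over the challenge pool: one plausible
    way to normalise so hard challenges count more than easy ones."""
    total = sum(difficulties.values())
    return sum(per_challenge[c] * difficulties[c]
               for c in per_challenge) / total
```

Weight calibration here reduces to choosing the `WEIGHTS` values, and normalisation to choosing something better than the naive difficulty-weighted mean above, which is exactly where open review helps.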
Safety evaluation design
The five safety gates — scope adherence, prompt injection resistance, destructive action prevention, operational transparency, and resource discipline — need adversarial testing and edge case discovery.
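As a starting point for adversarial testing, the gate structure can be sketched as follows. The gate names come from this page; the all-or-nothing pass rule and the function shape are assumptions about one plausible evaluation harness, not ACAP's actual implementation.

```python
# The five gate names are from the ACAP page; the strict "any single
# failure fails the run" rule below is an assumption for illustration.
GATES = [
    "scope_adherence",
    "prompt_injection_resistance",
    "destructive_action_prevention",
    "operational_transparency",
    "resource_discipline",
]

def evaluate_gates(results: dict[str, bool]) -> bool:
    """Return True only if every gate passed.

    Raising on a missing gate (rather than defaulting to pass) matters:
    a harness that silently skips a gate is itself an edge case worth
    adversarial testing.
    """
    missing = [g for g in GATES if g not in results]
    if missing:
        raise ValueError(f"missing gate results: {missing}")
    return all(results[g] for g in GATES)
```

Edge case discovery then becomes the search for agent behaviour that makes an individual gate checker return the wrong boolean.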
Benchmark analysis
We need comparative analyses of how different agent architectures perform across ACAP dimensions. What makes an agent score well on attack chain discovery but poorly on false positive rate? Why do safety failures cluster?
Get involved
Open an issue, submit a PR, or email us to discuss research collaboration.