Why VERI-fy publishes the weights
We treat the metric weights, prompts, and confidence gates as part of the product. Here is the case for radical transparency in detection software.
By VERI-fy team
Most AI-generated-video detectors are black boxes. You upload a clip, you get a number, and you have no idea why. That is fine if you trust the vendor. For verification work, you cannot afford to.
VERI-fy publishes the methodology in full because the methodology is the trust. The 10 weighted metrics, the prompt steps, the confidence gates, the 40–60 dead zone: all of it is on the Methodology page.
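To make the shape of that methodology concrete, here is a minimal sketch of how weighted metric scores might combine into a verdict with a dead zone. The metric names, weights, and thresholds below are illustrative assumptions, not VERI-fy's actual published values.

```python
# Hypothetical sketch: combine per-metric scores (0-100) into a weighted
# total, then abstain inside the 40-60 dead zone instead of forcing a call.
# Metric names and weights are invented for illustration.

def verdict(scores: dict[str, float], weights: dict[str, float]) -> str:
    """Weighted average of metric scores, mapped to a verdict string."""
    total = sum(scores[m] * weights[m] for m in weights) / sum(weights.values())
    if 40 <= total <= 60:
        return "UNCERTAIN"  # dead zone: decline to issue a confident verdict
    return "LIKELY AI" if total > 60 else "LIKELY REAL"

weights = {"temporal_consistency": 3, "lighting": 2, "compression": 1}
print(verdict({"temporal_consistency": 90, "lighting": 80, "compression": 70},
              weights))
```

The point of the dead zone in a design like this is that a score of 52 carries no more signal than a coin flip, and the honest output is an abstention rather than a number dressed up as a verdict.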
Three reasons for radical transparency
One: a verdict you cannot explain is a verdict you cannot use. A journalist who cites VERI-fy in a published note needs to be able to point at the cited artifact, not at a brand. The reasoning paragraph and the artifact boxes exist precisely so the work shows.
Two: the model gets things wrong. Real cameras produce compression, unusual lighting, and tight crops every day. A vendor that hides its methodology is also hiding its failure modes. We would rather show ours and live with the criticism.
Three: the field is moving fast. New generation tools appear monthly. We update weights and prompts in the open, with a changelog entry every time. You should be able to tell, on any given day, what version of VERI-fy gave you a verdict and what changed since the last one.
What we will not publish
Specific accuracy percentages, at least until we have a benchmark we trust enough to defend. False-positive and false-negative rates are sensitive to the test distribution; publishing a number from a non-representative test would be worse than publishing none.
Until then, we do three things: we say plainly that we make mistakes, we publish how we decide, and we mark our LOW-confidence verdicts honestly.