Trust Package Detail
AI usage and model handling summary
Where AI is used in the workflow, how outputs are validated, and where human review still applies.
This summary is intended for organizations reviewing model usage, AI boundaries, and operational handling expectations.
Why this matters
Partners choosing Blacklight are not just buying automation. They are trusting a workflow that should be clear about where AI is used, where it is bounded, and how those outputs are handled responsibly.
Current summary
AI is used inside the resume-generation workflow rather than as an open-ended chat product layered across the whole site.
Model processing is scoped to the request workflow, and Blacklight stores the service outputs it needs in its own systems rather than relying on an OpenAI-hosted conversation history as the business record.
In practice, partners can review AI use as one bounded part of the service path: generation happens for the request, outputs are retained in Blacklight systems, and the platform record does not depend on a chat history living elsewhere.
How to use it in review
Start with this summary to align your reviewers on scope before sending a longer questionnaire.
Use the partner portal for active trial, billing, and support-configuration actions if your organization already has partner access.
Use the trust contact flow when your institution needs deeper procurement, privacy, or questionnaire follow-up than the public trust center provides.