How do I generate an Agent Auto Complete Evaluation?
You can generate an evaluation for a specific interaction in one of two ways: manually, or automatically through an Agent Scoring Rule. This article covers the automated approach.
Generating Auto-Complete Evaluations Using AI Scoring Rules Management
To automate the generation of evaluations at scale, configure an Agent Scoring Rule using the AI Scoring Rules Management API.
Step 1: Create an Agent Scoring Rule
Use the following API:
POST /api/v2/quality/programs/{programId}/agentscoringrules
Example Request:
POST /api/v2/quality/programs/bd27fab3-6e94-4a93-831e-6f92e664fc61/agentscoringrules HTTP/1.1
Host: api.inindca.com
Authorization: Bearer *******************
Content-Type: application/json
Example JSON body:
{
  "programId": "bd27fab3-6e94-4a93-831e-6f92e664fc61",
  "samplingType": "Percentage",
  "submissionType": "Automated",
  "evaluationFormContextId": "14818b50-88c0-4cc5-8284-4ed0b76e3193",
  "enabled": true,
  "published": true,
  "samplingPercentage": 97
}
Field Explanations
- programId – ID of the Speech & Text Analytics (STA) program.
- evaluationFormContextId – The contextId of the automated evaluation form to use.
- samplingPercentage – Percentage of interactions that should automatically generate evaluations.
- enabled – Must be true for the scoring rule to be active.
- published – Must be true for the rule to take effect.
- submissionType – Set to "Automated" to ensure evaluations are auto-generated.
Once the rule is active, evaluations will automatically be created for interactions that meet the rule’s criteria.
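As an illustration, the request above can be assembled and sent from Python. This is a minimal sketch, not an official SDK example: the host is the example region host shown earlier, the helper function name is our own, and the IDs and token are placeholders you must replace with your own values.

```python
# Hypothetical helper for building the Create Agent Scoring Rule request.
# Host, IDs, and token below are placeholders, not real credentials.
API_HOST = "https://api.inindca.com"  # example region host from this article

def build_scoring_rule_request(program_id, form_context_id,
                               sampling_percentage, token):
    """Return the URL, headers, and JSON body for the POST call."""
    url = f"{API_HOST}/api/v2/quality/programs/{program_id}/agentscoringrules"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = {
        "programId": program_id,
        "samplingType": "Percentage",
        "submissionType": "Automated",   # required for auto-generated evaluations
        "evaluationFormContextId": form_context_id,
        "enabled": True,                 # rule must be active
        "published": True,               # rule must be published to take effect
        "samplingPercentage": sampling_percentage,
    }
    return url, headers, body

# To actually send the request (requires the third-party `requests` package):
#   import requests
#   url, headers, body = build_scoring_rule_request(
#       "bd27fab3-6e94-4a93-831e-6f92e664fc61",
#       "14818b50-88c0-4cc5-8284-4ed0b76e3193",
#       97, "<your-token>")
#   resp = requests.post(url, headers=headers, json=body)
#   resp.raise_for_status()
```

A 2xx response indicates the rule was created; once it is enabled and published, evaluations are generated automatically for matching interactions.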
Additional Resources
- For instructions on creating manual evaluations, see Create a new evaluation.
- For guidance on designing, testing, and tuning auto-complete evaluations, see AI Scoring Best Practices.