
FAQs: AI Scoring

When Does AI Scoring Generate a Charge?

A charge for AI Scoring is incurred whenever a quality evaluation form includes one or more AI-Scoring-enabled questions and that form is used to evaluate an interaction. Charges apply regardless of whether the AI-Scoring-enabled questions are ultimately answered.

The only exception is when the evaluation encounters an AI Scoring–related error at the evaluation level. In those cases, no charge is generated.
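
As a rough illustration of this rule (not an actual billing implementation), the following Python sketch models when a charge is incurred; the data shapes are hypothetical and only stand in for the concepts described above:

def ai_scoring_charge_applies(questions, evaluation_level_ai_error):
    """Model of the charging rule described above (hypothetical data shapes).

    questions: list of dicts describing the form's questions, each with an
        'ai_scoring_enabled' flag.
    evaluation_level_ai_error: True if the evaluation hit an AI Scoring-related
        error at the evaluation level.
    """
    has_ai_questions = any(q.get("ai_scoring_enabled") for q in questions)
    if not has_ai_questions:
        return False  # form has no AI-Scoring-enabled questions: no charge
    if evaluation_level_ai_error:
        return False  # the only exception: evaluation-level AI Scoring error
    return True       # charged, even if the AI-scored questions go unanswered

# Example: a form with one AI-scored question, evaluated without errors
print(ai_scoring_charge_applies([{"ai_scoring_enabled": True}], False))  # True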

Will Reports Include Auto-Complete Evaluation Data?

Q: Do current reports include data from auto-complete evaluations?

A: Not yet. Currently, reports do not include data generated from auto-complete evaluations. Support for this data is planned for both existing Quality Management reports and the new question-level reports, with availability targeted for mid-Q2 2026.

Q: What does this mean for supervisors and analysts?

A: Until reporting support is released, auto-complete evaluation data will not appear in dashboards or exported reports. Once the update becomes available, you’ll be able to review and analyze auto-complete evaluations alongside manually completed evaluations, providing a more complete picture of overall quality performance.

Q: Will any action be required to access this data once it becomes available?

A: No. Once the reporting update is released, auto-complete evaluation data will be included automatically in all applicable reports—no configuration changes or additional setup required.

Can I use Quality Policies to create Agent Auto-Complete Evaluations?

No, Quality Policies do not support Agent Auto-Complete evaluations.

For more information about generating an Agent Auto-Complete evaluation, see How do I generate an Agent Auto-Complete Evaluation? below.

How do I generate an Agent Auto-Complete Evaluation?

You can generate an evaluation for a specific interaction in one of two ways:

Generating Auto-Complete Evaluations Using AI Scoring Rules Management

To automate the generation of evaluations at scale, configure an Agent Scoring Rule using the AI Scoring Rules Management API.

Step 1: Create an Agent Scoring Rule

Use the following API:

POST /api/v2/quality/programs/{programId}/agentscoringrules

Example Request:

POST /api/v2/quality/programs/bd27fab3-6e94-4a93-831e-6f92e664fc61/agentscoringrules HTTP/1.1
Host: api.inindca.com
Authorization: Bearer *******************
Content-Type: application/json

Example JSON body:

{
  "programId": "bd27fab3-6e94-4a93-831e-6f92e664fc61",
  "samplingType": "Percentage",
  "submissionType": "Automated",
  "evaluationFormContextId": "14818b50-88c0-4cc5-8284-4ed0b76e3193",
  "enabled": true,
  "published": true,
  "samplingPercentage": 97
}

Field Explanations

  • programId – ID of the Speech & Text Analytics (STA) program.
  • evaluationFormContextId – The contextId of the automated evaluation form to use.
  • samplingType – The sampling method; set to "Percentage" here so that samplingPercentage controls how many interactions are sampled.
  • samplingPercentage – Percentage of interactions that should automatically generate evaluations.
  • enabled – Must be true for the scoring rule to be active.
  • published – Must be true for the rule to take effect.
  • submissionType – Set to "Automated" to ensure evaluations are auto-generated.

Once the rule is active, evaluations will automatically be created for interactions that meet the rule’s criteria.
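
If you prefer to script this call, the following is a minimal Python sketch using the requests library. The program ID and evaluation form context ID are copied from the example above; the API host and bearer token are placeholders that you must replace with your region's host and a valid OAuth token:

import requests

# Placeholders: substitute your region's API host and a valid OAuth bearer token.
API_HOST = "https://api.inindca.com"
ACCESS_TOKEN = "<your-oauth-bearer-token>"
PROGRAM_ID = "bd27fab3-6e94-4a93-831e-6f92e664fc61"

rule = {
    "programId": PROGRAM_ID,
    "samplingType": "Percentage",
    "submissionType": "Automated",
    "evaluationFormContextId": "14818b50-88c0-4cc5-8284-4ed0b76e3193",
    "enabled": True,
    "published": True,
    "samplingPercentage": 97,
}

# Create the Agent Scoring Rule for the program.
response = requests.post(
    f"{API_HOST}/api/v2/quality/programs/{PROGRAM_ID}/agentscoringrules",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json=rule,
    timeout=30,
)
response.raise_for_status()
print(response.json())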


Which Genesys Cloud regions support AI scoring, and how are they mapped to AWS Bedrock regions?

The following list shows how each Genesys Cloud region maps to the AWS Bedrock region used for AI scoring (Genesys region → Bedrock region).

  • us-east-1 → us-east-1
  • me-central-1 → eu-west-1, eu-west-2, eu-central-1
  • us-west-2 → us-west-2
  • ap-southeast-2 → ap-southeast-2
  • ap-northeast-2 → ap-northeast-2
  • ap-northeast-1 and ap-northeast-3 → ap-northeast-1
  • eu-west-2 → eu-west-2
  • sa-east-1 → sa-east-1
  • ca-central-1 → ca-central-1
  • ap-south-1 → ap-south-1
  • FedRAMP (us-east-2) → us-east-1 and us-west-2*
  • eu-central-1 → eu-central-1

*Done via AWS using cross-region inference.
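
If you need this mapping programmatically (for example, for data-residency reporting), here is a minimal Python sketch that restates the list above as a lookup dictionary; the dictionary and helper function are illustrative only, not part of any Genesys Cloud API:

# Restates the mapping listed above; grouped entries are expanded per region.
REGION_TO_BEDROCK = {
    "us-east-1": ["us-east-1"],
    "me-central-1": ["eu-west-1", "eu-west-2", "eu-central-1"],
    "us-west-2": ["us-west-2"],
    "ap-southeast-2": ["ap-southeast-2"],
    "ap-northeast-2": ["ap-northeast-2"],
    "ap-northeast-1": ["ap-northeast-1"],
    "ap-northeast-3": ["ap-northeast-1"],
    "eu-west-2": ["eu-west-2"],
    "sa-east-1": ["sa-east-1"],
    "ca-central-1": ["ca-central-1"],
    "ap-south-1": ["ap-south-1"],
    "us-east-2": ["us-east-1", "us-west-2"],  # FedRAMP, via cross-region inference
    "eu-central-1": ["eu-central-1"],
}

def bedrock_regions_for(genesys_region):
    """Return the Bedrock region(s) used for AI scoring in a Genesys Cloud region."""
    return REGION_TO_BEDROCK.get(genesys_region, [])

print(bedrock_regions_for("ap-northeast-3"))  # ['ap-northeast-1']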

Is there a best practices guide for using AI Scoring?

Yes. To learn how to use AI Scoring effectively and get the most accurate results, see Optimizing Virtual Supervisor forms for AI Scoring.

How should I confirm that the agent closed the conversation properly?

Include a question about summarizing outcomes or confirming satisfaction before ending the interaction.


Example: “Did the agent confirm customer satisfaction or summarize next steps before closing the conversation?”
AI marks Yes when the agent checks resolution or restates next steps clearly. This confirms that the customer’s issue was addressed before the call or chat ended.

How can I design a question to handle dead air or silence?

Ask whether the agent acknowledged or explained any pause longer than a set threshold.


Example: “Did the agent avoid unnecessary dead air or long silences without explaining the reason?”
AI marks Yes when the agent explains pauses (for example, “I’ll place you on a brief hold while I check this”). Unexplained silence longer than 15 seconds is marked No.

How do I handle compliance or disclosure questions in AI scoring?

Compliance questions should reference required statements that appear in the transcript.


Example: “Did the agent comply with mandatory disclosure or compliance statements (for example, terms, disclaimers, or legal requirements)?”
AI marks Yes when mandatory phrases—such as legal disclaimers or security verifications—are found. Keep help text specific to your industry’s compliance standards.

How should escalation be evaluated by AI?

Write questions that specify the elements of a complete escalation explanation.


Example: “Did the agent explain the escalation process clearly, including who to contact, what information is needed, and expected response times?”
AI marks Yes when all three elements are present. Include examples in the help text to illustrate acceptable responses.