Learn how to properly test your inbound and outbound AVA projects using mock calls, safe test numbers, and structured evaluation techniques before going live.
Before going live with any agent or campaign, it’s critical to test your setup to ensure everything is functioning as expected. AVA allows for safe, internal testing of both inbound and outbound experiences via dedicated mock call tools. Below are the step-by-step instructions for each type of project.
Inbound Project Testing
For a detailed guide on how to create an inbound project, refer to the Inbound Project Setup section.
1. Select or Create an Inbound Agent
2. Use the 'Save and Test Agent' Option
3. Attach a Phone Number and Run the Mock Call
Outbound Project Testing
For a detailed guide on how to create an outbound project, refer to the Outbound Project Setup section.
1. Select or Create a Project
2. Configure Project and Campaign Settings
3. Test the Campaign
4. Select a Phone Number and Confirm Test Run
Testing your AI agent is more than just clicking “Run” — it’s about simulating realistic, high-stakes scenarios and using the outcomes to optimize your agent’s behavior. The goal isn’t to see if the agent “works,” but to evaluate how it performs under pressure, confusion, or resistance. Below is a breakdown of best practices, common pitfalls, and actionable examples.
It’s tempting to “help” the agent by prompting it with instructions during your test, but this defeats the purpose. AVA doesn’t remember what you say in the call. It relies solely on the fields you’ve configured.
What Not to Do:
“Hey Ava, tell me the three plans we offer.” — This won’t work unless you’ve configured the relevant field.
What to Do Instead:
Configure your Prompting field to say: “If the client asks about pricing, explain our three-tier plan options,” and enter those options in the Key Information field.
You want to find the edge cases — what happens if the client misunderstands? Is rude? Refuses the offer? The agent’s behavior in these scenarios is more telling than in ideal ones.
What Not to Do:
Only ask basic, softball questions you already know the agent can answer.
What to Do Instead:
Try: “This sounds confusing, can you explain it again?” or “I don’t trust this offer, who are you with again?”
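Other stress tests worth trying (our suggestions, not an official checklist): interrupt the agent mid-sentence, ask an unrelated question, or flatly say "Take me off your list" to see how it recovers.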
Bulk data like pricing, service packages, or product options should be added in Key Information fields — clearly separated by lines or bullet points, not crammed into long paragraphs. Avoid inserting prompts or filler text in these fields.
What Not to Do:
“If the client asks about pricing, let them know we offer this, this, and this” (in an Information field).
What to Do Instead:
In Key Information, just add the options as a clean, line-separated list. For example (the plan names and prices below are placeholders, not AVA defaults):
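- Basic Plan: $29/month, core features
- Pro Plan: $59/month, adds priority support
- Premium Plan: $99/month, full feature access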
Never hardcode names like "Hi John" or "Speak to Sarah" in scripting fields. Use custom values such as {{client_firstname}} or {{representative_name}} instead. This ensures adaptability and prevents having to manually update multiple fields.
What Not to Do:
“This is Sarah from FitLife.” (hardcoded)
What to Do Instead:
“This is {{representative_name}} from {{company_name}}.”
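({{representative_name}} is the same custom value used earlier in this guide; {{company_name}} is shown as an illustrative placeholder, so substitute whichever custom value names your project defines. On a live call, AVA fills these in, so the line plays as, for example, "This is Sarah from FitLife." without either name being hardcoded.)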
💡 Final Note:
Treat every test like a live scenario. Review transcripts, refine one field at a time, and test again. Consistency, structure, and proper field usage are what separate good agents from great ones.
Why is it important to simulate real client behavior during tests?
Simulating real client behavior ensures the agent is being tested under realistic conditions. You should avoid guiding the agent during the call — instead, observe how it reacts to natural responses, confusion, objections, or redirection. This helps identify gaps in your information and prompting fields.
Does the agent remember previous test call data?
No. AVA does not retain memory between calls. Each test is isolated. Use test call transcripts and logs to manually analyze outcomes and refine the agent’s configuration accordingly.
Can I use a real phone number during testing?
You can, but it’s strongly recommended to use a designated test number to avoid accidentally contacting actual clients or stakeholders. This ensures your tests remain internal and risk-free.
What should I do if the agent gives an incorrect or incomplete response?
Review your input fields — especially Key Information, Prompting, and Objection Handling sections. If the data or instructions are vague, unstructured, or overloaded, the AI may generate inaccurate responses. Clean up the input and retest.
Is it okay to mix prompts and scripts in the same field?
In limited cases, yes. For example, objection handling can combine a guiding prompt with a sample script. However, most fields are optimized for one type of input — sticking to the field’s intended use will deliver more consistent results.
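For instance, an Objection Handling entry might pair a guiding prompt with a sample line (the wording below is illustrative, not a built-in template): "If the client says the price is too high, acknowledge the concern and offer the entry-level plan. Sample response: 'I completely understand. That is exactly why we offer a lower-cost starter option.'"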
How do I monitor and review my test calls?
After a test run, AVA will generate a call transcript and log URL. These allow you to review the AI’s responses and make informed updates to your agent configuration. Be sure to check for unnatural pauses, missed data, or off-brand messaging.
When should I stop testing and launch the agent?
Launch when the agent reliably delivers accurate information, handles objections and redirection gracefully, and stays on-brand across repeated test calls, with no unnatural pauses or missed data showing up in the transcripts.
Can I reuse a single test number for both inbound and outbound testing?
Yes. A dedicated test number can be reused across scenarios. Just make sure the number is disconnected from any real client profiles or live campaigns.
For additional questions or guidance, try our Virtual Support Agent, available 24/7 at thinkrr.ai/support.
If you still need assistance, visit our support site at help.thinkrr.ai and submit a Ticket or contact our team directly at hello@thinkrr.ai.