
The Skill Tester

The Skill Tester lets you simulate conversations with your skill to test the dialog flow, intent resolution, entity matching, and Q&A responses. You can also use it to see how conversations would render in different channels. The Skill Tester lets you test the various functions of your skill both in an ad hoc manner and through test cases, which you create by recording conversations. You can create an entire suite of test cases for the skill. When developers extend the skill, they can run these test cases to verify that the skill's core functionality is preserved.

To start the tester:

  1. Open the skill that you want to test.

  2. At the bottom of the left navigation for the skill, click the icon for the Skill Tester.

  3. To preview how the skill will render on a given channel, select a channel type from the Channel dropdown. The Tester simulates how the skill behaves within the limitations of the selected channel. By default, the Tester simulates the Webhook channel, which renders the UI per the Oracle Web SDK.
    Note

    The Tester does not simulate all of the features for a selected channel. For example, the Microsoft Teams channel simulation does not render adaptive cards.
  4. In the text field at the bottom of the tester, enter some test text. The tester's Channel Limitations list describes the features that are not supported by the selected channel.

Typically, you'd use the Skill Tester after you've created intents and Q&A and defined a dialog flow. It's where you actually chat with your skill or digital assistant to see how it functions as a whole, not where you build Q&A or intents.

As you are creating, testing, and refining intents, you may prefer to use the Try It Out! tester in the Intents and Q&A pages.

The Try It Out! feature helps you improve your training utterances iteratively.

Tip:

You should test each skill in your target channels early in the development cycle to make sure that your components render as intended.

Track Conversations

In the Conversation tab, the Tester tracks the current response in terms of the current state in the dialog flow. Depending on where you are in the dialog flow, the window shows you the postback actions, or any context and system variables that have been set by a previous postback action. It also shows you any URL, call, or global actions.
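For example, a postback action shown in this tab might carry a payload along these lines (the action, state, and variable names here are hypothetical):
    {
        "action": "order",
        "state": "confirmOrder",
        "variables": {
            "pizzaSize": "Large"
        }
    }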


In the Intent/Q&A tab, you can see the resolved intent that triggered the current path in the conversation.


When the user input gets resolved to Q&A, the Routing window shows you the ranking for the returned answers.


Finally, the JSON window shows you the complete details for the conversation, including the entities that match the user input and values returned from the backend. You can search this JSON object or download it.
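For example, the entity-match portion of this JSON might look something like the following sketch (the field names here are illustrative, not the exact schema):
    {
        "entityMatches": {
            "PizzaSize": ["Large"],
            "CrustType": ["gluten free"]
        }
    }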


Test Cases

You can create a test case for each use case by recording conversations in the Skill Tester. These test cases are part of the skill's metadata and therefore persist across versions. When you extend a skill – in particular, a skill from the Skill Store – you can run these test cases to ensure that your modifications have not broken any of the skill's basic functions. In addition to preserving core functions, you can create test cases for new scenarios and use cases, or disable any inherited test cases that fail because of the changes that were introduced by the extension.

Enable the Test Suite

Before you can create test cases, you need to enable the Test Suite. To enable this optional feature:
  • Open the side menu, select Settings, then Feature Management.
  • From the Current profile menu, select a profile where the Test Suite is enabled (such as Enable All).
After you enable the Test Suite:
  • The Run Tests button (located at the top left) becomes available.
  • The Skill Tester has the Bot Tester, Test Cases, and Test Run Results tabs as well as the Save as Test Case option.

Manage Test Cases

The Test Cases page lists both the test cases that you've created and the test cases that were inherited from a skill that you've extended, cloned, or imported from another instance. Using this page, you can add and run test cases. You can also delete the test cases that you've created, or exclude test cases from a test run by disabling them.

In addition to displaying the basic information for a selected test case, the Conversation field displays the JSON definition of the test case itself. While you can update this definition, for example, to fix a test run by substituting placeholders for variables, we do not recommend making extensive changes to it.
[
    {
        "source": "user",
        "type": "text",
        "payload": {
            "message": "I would like a large veggie pizza on gluten-free crust delivered to my home at 8pm"
        }
    },
    {
        "source": "bot",
        "type": "text",
        "payload": {
            "message": "OK, let's get that order sorted."
        }
    },
    {
        "source": "bot",
        "type": "text",
        "payload": {
            "message": "OK, so we are getting you a large Veggie pizza at ${TIME}. This will be on our gluten free crust. We are delivering to Buckingham Palace, The Mall, Westminster, London SW1A 1AA."
        }
    }
]
Add Test Cases

Whether you're creating a skill from scratch, or extending a skill, you can create a test case for each use case. For example, you can create a test case for each payload type. You can build an entire suite of test cases for a skill by simply recording conversations or by creating JSON files that define message objects.

Create a Test Case from a Conversation
Recording conversations is quicker and less error-prone than defining a JSON file. To create a test case from a conversation:
  1. Click the Skill Tester in the left navbar.
  2. Click Bot Tester.
  3. Enter the utterances that are specific to the behavior or output that you want to test.
  4. Click Save as Test Case.
  5. Complete the Save Conversation as Test Case dialog:
    • Enter a name and display name that describe the test.
    • As an optional step, provide details in the Description field that help developers understand how the test validates the expected behavior by describing a scenario or a use case from a design document.
  6. Click Save Conversation.

Create a Test Case from a JSON Object
To create a test case from an array of message objects:
  1. Click + Test Case in the Test Cases page.
  2. Enter a name and display name that describe the function that's tested.
  3. As an optional step, provide details in the Description field that help developers understand how the test validates the expected behavior.
  4. Add the message objects within the array ([]). Here is a template for the different payload types (a complete, valid instance follows after these steps):
        {
            source: "user",             // the text-only message format is kept simple yet extensible
            type: "text",
            payload: {
                message: "order pizza"
            }
        },
        {
            source: "bot",
            type: "text",
            payload: {
                message: "how old are you?",
                actions: [...],         // action types: postback, url, call, share. Bot messages can have actions
                globalActions: [...]    // and globalActions, which the user clicks to send specific JSON back to the bot
            }
        },
        {
            source: "user",
            type: "postback",
            payload: {                  // the payload object represents the postback JSON sent from the user to the bot when the button is clicked
                variables: {
                    accountType: "credit card"
                },
                action: "credit card",
                state: "askBalancesAccountType"
            }
        },
        {
            source: "bot",
            type: "cards",
            payload: {
                message: "label",
                layout: "horizontal|vertical",
                cards: ["Thick","Thin","Stuffed","Pan"],    // in test files, cards can be strings that are matched with button labels,
                cards: [{                                   // or JSON objects that are matched field by field (use one form or the other)
                    title: "...",
                    description: "...",
                    imageUrl: "...",
                    url: "...",
                    actions: [...]      // actions can be specific to a card or global
                }],
                actions: [...],
                globalActions: [...]
            }
        },
        {
            source: "bot|user",
            type: "attachment",         // an attachment message can be either a bot message or a user message
            payload: {
                attachmentType: "image|video|audio|file",
                url: "https://images.app.goo.gl/FADBknkmvsmfVzax9",
                title: "Title for Attachment"
            }
        },
        {
            source: "bot",
            type: "location",
            payload: {
                message: "optional label here",
                latitude: 52.2968189,
                longitude: 4.8638949
            }
        },
        {
            source: "user",
            type: "raw",
            payload: {
                ...                     // free-form, application-specific JSON for custom use cases; exact JSON matching applies
            }
        }
        ...                             // multiple bot messages per user message are possible
    
  5. Switch on the Enabled toggle.
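For reference, here is how the template might be filled in as a minimal, valid test case definition (the utterance, prompt, and card labels are hypothetical):
    [
        {
            "source": "user",
            "type": "text",
            "payload": {
                "message": "order pizza"
            }
        },
        {
            "source": "bot",
            "type": "cards",
            "payload": {
                "message": "Which crust would you like?",
                "layout": "horizontal",
                "cards": ["Thick", "Thin", "Stuffed", "Pan"]
            }
        }
    ]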
Run Test Cases
You can run one or all of the test cases listed in the Test Cases page. When you expect that an inherited test case will fail because of changes that were deliberately made to the skill, you can exclude it from the test run by disabling it. You can also temporarily disable a test case because of ongoing development.
Note

You can't delete an inherited test case; you can only disable it.
After the test run completes, click the Test Run Results tab to find out which of the test cases passed or failed.

View Test Run Results

The Test Run Results page lists the recently executed test runs and the results of each run. You can review the results for a particular test run by selecting it from this list, which, by default, begins with the most recent run.
Note

The test run results for each skill are maintained for 14 days. They are deleted after this time.

You can filter the results by clicking the Passed, Failed, or In Progress tiles. The page provides a summary for each test case that's included in the run. Test cases pass or fail according to a comparison of the expected output, which is recorded in the test case definition, against the actual output from the test run. If the two match, the test case passes. If they don't, the test case fails. By expanding the summary, you can identify the cause of the failure using the JSON Pointer that locates the error and the comparison of the actual and expected values.

Review Failed Test Cases

The summary's JSON Pointer locates the message object in the test case definition where the failure occurred. Along with pinpointing the error, the summary also presents the comparison of the actual value from the test run to the expected value set by the test case.

In the following example, the JSON Pointer indicates that the problem lies with the URL value that's expected in the payload of the message at index 8 in the test case conversation (/8/payload/url). At this point in the conversation, the expected URL value (photo1.png) does not match the actual URL (photo2.png) returned by the test run:
JSON Pointer: /8/payload/url
Expected Value: https://www.example.com/photo1.png
Actual Value: https://www.example.com/photo2.png
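For reference, the message at that index in the test case definition would look something like this sketch (an attachment payload is assumed here; the fields other than url are illustrative):
    {
        "source": "bot",
        "type": "attachment",
        "payload": {
            "attachmentType": "image",
            "url": "https://www.example.com/photo1.png"
        }
    }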
Fix Failed Test Cases by Applying an Actual Value

Some changes, however small, can cause many of the test cases to fail within the same run. This is often the case with changes to text strings such as prompts. For example, changing a text prompt from "How big of a pizza do you want?" to "What pizza size?" will cause any test case that includes this prompt to fail, even though the skill's functionality remains unaffected. While you could accommodate this change by re-recording the test case entirely, you can instead quickly update the test case definition with the revised prompt by clicking Apply Actual. Because the test case is now in step with the new skill definition, the test case will pass (or at least not fail because of the changed wording).

Note

While you can apply string values, such as prompts and URLs, you can't use the Apply Actual function to fix a test case when a change to an entity's values or its behavior (disabling the Out of Order Extraction function, for example) causes the values provided by the test case to become invalid. The test case will fail because the skill will continually prompt for a value that it will never receive, thus causing its responses to become out of step with the sequence defined by the test case.
Fix Test Cases by Adding Variable Value Placeholders

The skill's responses can include dynamic information that can cause test cases to fail when the actual and expected outputs are compared. You can exclude dynamic information from the comparison by substituting a placeholder, formatted as ${MY_VARIABLE_NAME}, in the JSON definition.

For example, a temporal value, such as one returned by the Apache FreeMarker date operation ${.now?string.full}, will cause test cases to continually fail because of the mismatch between the time when the test case was recorded and the time when the test case was run.

To enable these test cases to pass, replace the clashing time value in the JSON definition with a placeholder. For example, replace Monday, December 9, 2019 5:27:27 PM UTC in the following payload with ${ORDER_TIME}.
    {
        "source": "bot",
        "type": "text",
        "payload": {
            "message": "You placed your order on Monday, December 9, 2019 5:27:27 PM UTC for a Small Meat Lovers pizza. Your pizza is on the way."
        }
    }
After the substitution:
    {
        "source": "bot",
        "type": "text",
        "payload": {
            "message": "You placed your order on ${ORDER_TIME} for a Small Meat Lovers pizza. Your pizza is on the way."
        }
    }
The variable placeholders that you create are listed in the Variables field. For newly recorded test cases, the Variables field also notes the SYSTEM_BOT_ID placeholder, which is substituted for the system.botId values that change when the skill is imported from another instance or cloned.
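For example, in a newly recorded test case, a postback message that carries the bot identifier might look like this (the placement of botId within the payload is illustrative):
    {
        "source": "user",
        "type": "postback",
        "payload": {
            "action": "confirm",
            "botId": "${SYSTEM_BOT_ID}",
            "state": "confirmOrder"
        }
    }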