Wednesday, October 31, 2018

IBM Watson Assistant: Chatbot tool now supports testing client actions

Test your chatbot
Are you (already) using the Watson Conversation Tool I wrote? There is good news: I added support for testing client-side actions. As you might know, IBM Watson Assistant features both client-side and server-side actions to enhance responses with data from external services. In a blog post from this summer, I wrote that I had added support for testing server-side actions, which are implemented using IBM Cloud Functions. Client actions are, well, executed on the application side. Read on to learn how to test them.

Client Actions

The Watson Assistant documentation has this to say about use cases for client or server actions:

You can use an external service to validate information that you collected from the user, or perform calculations or string manipulations on the input which are too complex to be handled by using supported SpEL expressions and methods. Or you can interact with an external web service to get information, such as an air traffic service to check on a flight's expected arrival time or a weather service to get a forecast. You can even interact with an external application, such as a restaurant reservation site, to complete a simple transaction on the user's behalf.

In some of my chatbots, the actions perform searches in Db2 databases and validate data. Testing server actions is easy because only the credentials for executing the embedded IBM Cloud Functions are needed. The action itself runs on the server and hence can be used by any client app. Client actions, however, are executed by the client application itself. How can I easily test them during development?

Test Client Actions

Using the Python-based Watson Conversation Tool, client actions can be tested in two ways:
  1. Manually modify the session context: The tool saves the entire session context to a file session_context.json after receiving the response from Watson Assistant. It reads that file before sending the next message request. By using an editor, you could modify the session context and add, change or delete values. The new request would pick up those values and send them to Watson Assistant.
    The process works well in general, but it does not strictly follow the protocol for client actions.
  2. Provide code for / to simulate the client action: As a new feature, the tool now accepts a parameter "-actionmodule name". That module or single Python file is then dynamically loaded and a function named handleClientActions is called. That function either provides the regular logic for the client action or can be code that simulates the action by assigning test values.
    I have successfully used it to test some client actions with a Watson Assistant workspace. This way, you could test Watson Assistant independently of the actual client application.
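To illustrate the first approach, here is a small sketch of how the saved session context could be modified programmatically instead of with an editor. The top-level structure of session_context.json and the key names (order_total, action_result) are assumptions for this example; inspect the file your own tool run produces first.

```python
import json
import os

path = "session_context.json"

# Load the context the tool saved after the last Watson Assistant
# response; start with an empty one if the file does not exist yet.
session = json.load(open(path)) if os.path.exists(path) else {}

# Add or change values just like a client action would
# (hypothetical keys for demonstration only).
session["order_total"] = 42.5
session["action_result"] = "lookup_ok"

# Write the file back; the tool reads it before the next message request.
with open(path, "w") as f:
    json.dump(session, f, indent=2)
```

The next message sent through the tool would then pick up those values and pass them to Watson Assistant as part of the context.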
Here is a sample invocation showing how I use the Watson Conversation Tool to test the dialog flow. The handling of the client action is in the module handleClientAction, and I am only interested in the regular output ("-outputonly").

python wctool.py -dialog -id e5e9781f-xxxx-xxxx-9xx9-d9xxx405a0e4 -actionmodule handleClientAction -outputonly
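For the second approach, the action module could look roughly like the sketch below. The function name handleClientActions matches what the tool loads, but the exact signature, the shape of the passed-in context, and the action name lookupFlight are assumptions for illustration; check the tool's source for the real contract.

```python
# handleClientAction.py -- sketch of a module loaded via "-actionmodule handleClientAction".
# Instead of calling a real external service, it simulates the client
# action by assigning canned test values to the context.

def handleClientActions(context):
    """Simulate client actions by filling the context with test values."""
    actions = context.get("actions", []) if isinstance(context, dict) else []
    for action in actions:
        # Hypothetical action name; replace with the ones your dialog defines.
        if action.get("name") == "lookupFlight":
            context["flight_status"] = "on time"
    return context
```

With a module like this, the dialog flow in Watson Assistant can be tested end to end without running the actual client application.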

For feature requests, defects or suggestions, open an issue or a pull request against the repository. If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik) or LinkedIn.