...

This is the dialog graph we will be creating:

[Image: the dialog graph for this example]

Steps

  1. Start NPCEditor by first starting the Virtual Human Toolkit Launcher, clicking on '<< Advanced', and then clicking on 'Launch' in the NPCEditor row (under the 'Agents' section).
  2. Select the menu 'File->New' to start creating a new virtual agent. Save it as 'cake.plist'.
  3. Create a new person called 'cake vendor' under the People tab. This 'person' will handle the initial greeting exchange. Each person created defines what NPCEditor calls a domain, and each domain is associated with a different classifier. A domain can work like a state in a dialog manager; we will see later how to define which utterances belong to a particular domain and how changing domains (i.e. states) works (see also the sketch after this list). Make sure you set both 'first name' and 'last name'. If your domain/person is just one word, you can set the last name or the first name to a space, or you can split the word into two parts, one for the first name and the other for the last name.
  4. [Image: Creation of the initial domain used in this example.]
  5. Now define the connection NPCEditor uses to communicate with the rest of the Virtual Human Toolkit architecture. Select the 'Accounts' tab, then click on 'Add' and select 'SmartBody'. Test the connection by clicking on 'Connect': the button will change to 'Disconnect' and the row corresponding to NPCEditor in the Launcher will turn green. This connection should be defined once and associated with only one of the defined persons: the one that defines the initial domain.
  6. [Image: File__CreateConnection.png - Creation of the connection.]
  7. Next, select 'scriptable' as the type of dialog manager in the 'Conversations' tab. A new 'Dialog Manager' tab will appear next to 'Conversations', containing an initial script for a dialog manager. The script is written in Groovy and can be edited to suit your needs.
  8. [Image: File__SelectScriptable.png - Selection of the dialog manager type.]
  9. Set the parent property of 'cake vendor' to 'Anybody' (default value). This step defines an inheritance hierarchy among the various domains. In this case the 'cake vendor' domain inherits the utterances defined for the 'Anybody' domain.
  10. Create all the other persons defined in the dialog graph above: 'cavities', 'xylitol', 'diabetes', 'cake type', 'sponge cake flavor', 'cheese cake flavor'. Set the parent property of each of them to the 'Anybody' domain.
  11. Create a 'type' and a 'speaker' category in the 'Setting' tab. The first ('type') is used by the default dialog manager script to handle off-topic utterances from the user. The second ('speaker') needs to be set for proper communication with the rest of the modules in the Virtual Human Toolkit.
  12. [Image: File__CreateTypeCat.png - Creation of the type category.]
  13. [Image: File__CreateSpeakerCat.png - Creation of the speaker category.]
  14. Make sure the 'toss' category is set to be included in the answers and used by the classifier.
  15. [Image: File__SetTossCat.png - Add toss category to the utterance table.]
  16. The previous steps complete the setup; from now on we will add the utterances we want the agent to understand and define the appropriate answers. To define the reply the agent gives when it doesn't understand what the user said, add a new answer and set its type to 'opaque'. Then define an appropriate text (e.g. "I didn't understand what you said, try to rephrase it"). For all answers you want the agent to be able to speak and animate, set the speaker to 'brad'.
  17. Next add the various utterances. The first defines the agent's answer to the user's statement 'Hi'. Here leave the type of the answer unset (because it is not an answer associated with non-understanding). Set the speaker to 'brad' as usual and the domain to 'cake vendor'. On the user pane (the left half) set the text to 'Hi'. Finally, click on both newly added utterances (they both become blue) and then set the link strength to 6 to define that the one on the right should be the answer to the utterance on the left. Do the same to add the replies for 'thank you' and 'bye'.
  18. [Image: File__Utt1.png - Adding the "I don't understand" utterance.]
  19. Now we will see how to make a change of state when the user says a particular utterance. We will change from 'cake vendor' to 'cavities' when the user says that s/he wants to get a cake from the agent. To do so, in addition to the setup done for the other utterances in the previous step, just set the toss property to the 'cavities' domain.
  20. [Image: File__Utt2.png - Adding an additional utterance.]
  21. Follow this procedure and complete all the steps represented in the dialog graph given at the beginning. You can also see the complete file at /wiki/spaces/VHTK/pages/14582324.
  22. Before running the example, remember to train the classifiers. Select all the rows in the table in the 'Classifiers' tab, check the box 'Test on training data' (because the training data is too small to be split), and then click on 'Start Training'. You can change the optimization criteria. Note that in this example the Anybody classifier will fail because it contains no links to learn between user utterances and system answers.
  23. [Image: File__Train.png - Training of all classifiers.]
  24. If, during testing of the example, you see that a classifier returns multiple answers instead of just the correct one, you can adjust the threshold of that classifier by selecting the classifier in question and double-clicking the displayed threshold, as seen in the following screen capture.
  25. [Image: File__Threshold.png - Adjust threshold for a particular classifier.]
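
The 'domain as state' idea from step 3 and the tossing described in step 19 can be pictured with a small sketch. This is only an illustrative analogy written in Groovy (the language of the dialog manager script), not actual NPCEditor code; the class, method, and values below are made up for the example:

    // Illustrative analogy only, not NPCEditor code: each 'person' defines a
    // domain, and tossing moves the conversation from one domain to another,
    // much like a state machine changing state.
    class DialogState {
        String currentDomain = 'cake vendor'

        // Tossing switches the active domain; subsequent user utterances are
        // then classified against the utterance/answer pairs of the new domain.
        void toss(String targetDomain) {
            println "Tossing from ${currentDomain} to ${targetDomain}"
            currentDomain = targetDomain
        }
    }

    def state = new DialogState()
    state.toss('cavities')   // e.g. triggered when the user asks to buy a cake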

Making The New Example The Default One

...

To modify the default behavior and move the tossing from the end of the utterance to the beginning of the utterance, move the two selected lines from the method <tt>npcBehavior_done</tt> to the method <tt>npcBehavior_begin</tt> as displayed in the following image:

[Image: File__TossBefore.png - Changes to make to move the tossing from the end of the character's utterance to the beginning of it.]
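
As a rough illustration of what this edit amounts to (the two method names appear in the default script as described above, but the parameter and the bodies below are placeholders, not the actual code; the two lines to move are the ones highlighted in the screenshot):

    // Hypothetical sketch only. By default the toss-related lines sit inside
    // npcBehavior_done, so the domain switch happens after the character has
    // finished speaking:
    def npcBehavior_done(message) {
        // ... existing handling of the end-of-utterance event ...
        // <the two highlighted toss lines are here by default>
    }

    // After the change, the same two lines live in npcBehavior_begin, so the
    // domain switch takes effect as soon as the character starts the utterance:
    def npcBehavior_begin(message) {
        // <move the two highlighted toss lines here>
        // ... existing handling of the begin-of-utterance event ...
    }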

Note On Debugging

To debug the dialog manager script, one can add logging instructions. To do that, add this import: <tt>import com.leuski.af.Application;</tt> and use the following expression to log something: <tt>Application.getLogger().warning(a.toString());</tt>, where <tt>a</tt> is the object we want to print in the log.
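
For example, here is a minimal sketch of how the two pieces fit together in the Dialog Manager script (the callback name is taken from the section above; its parameter and the logged object are just placeholders for whatever you want to inspect):

    // At the top of the Dialog Manager script:
    import com.leuski.af.Application;

    // Inside any method of the script, e.g. the begin-of-utterance callback
    // (the parameter here is a placeholder):
    def npcBehavior_begin(message) {
        // Log the incoming object so it shows up in the log.
        Application.getLogger().warning(message.toString());
    }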

...