This is the second part of an article we published recently on combining Oracle Intelligent Advisor and Oracle Digital Assistant. The first part, which you may have read, focused on the basics of bringing the Intelligent Advisor Chat Service and Digital Assistant together. This part looks at the differences in approach when designing Intelligent Advisor interviews for Oracle Digital Assistant.
At the end of the last article, our interview was testable as a chat experience within the Skill Tester, but it was clearly not at all adapted to a conversational style. Let’s look at some of the issues and how, where possible, they can be mitigated.
Until you actually see it in action, it is hard to imagine just how much text is displayed in your average Intelligent Advisor interview. In Oracle Digital Assistant, you will have to trim these labels down, remove as many as you can, or perhaps replace them with a simple link. In any case, it’s an easy starting point. The second step is to decide whether this conversational format needs screen titles; these can be removed with a setting in the YAML dialog flow when setting up Intelligent Advisor interviews for Oracle Digital Assistant.
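As a rough sketch, the dialog flow state that calls the interview might look something like the following. The component name `System.IntelligentAdvisor` is the built-in Intelligent Advisor component in Oracle Digital Assistant, but the property names shown here — in particular the one suppressing screen titles — are assumptions from memory; check the component reference for your Digital Assistant version, and the service and deployment names are purely hypothetical:

```yaml
# Sketch of a dialog flow state invoking an Intelligent Advisor interview.
# Property names below are assumptions — verify against the
# System.IntelligentAdvisor component reference for your ODA version.
  metroInterview:
    component: "System.IntelligentAdvisor"
    properties:
      intelligentAdvisorService: "myIAService"    # hypothetical service name
      deployment: "MetroJourneyPlanner"           # hypothetical deployment name
      showScreenTitle: false                      # suppress screen titles (assumed property name)
    transitions:
      next: "endOfInterview"
```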
Next up are two fairly obvious points, but they deserve a mention. In my standard interview format, the result is shown in an entity container, with read-only entity attributes:
In Oracle Digital Assistant, the entity container will not render its content in that situation, and the read-only attributes need to be redone as labels. Simply put, read-only attributes are not shown. With those modifications, the final interview screen now looks like this in the Oracle Policy Modeling Interview tab.
As you can see above, and as the online documentation describes, read-only components are not rendered; labels, however, are. That means redoing the results as a series of labels. The output is considerably better: the content leverages substitution and displays in a simpler format. Given that some Digital Assistant channels are very limited in terms of output capability, this is a good thing, even if it means reworking our interview:
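For illustration, rather than a container of read-only attribute controls, the results screen can use a label that carries the values through Policy Modeling’s percent-sign substitution syntax. The attribute names below are hypothetical, not taken from the demonstration project:

```text
Your fastest journey from %the_start_station% to %the_end_station%
takes %the_journey_time% minutes and involves %the_number_of_changes% change(s).
```

Because a label is plain text, this renders cleanly even in the most limited channels.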
Whilst it is possible to open links to other pages, and an external page could potentially provide the same functionality, recreating this in the Digital Assistant is too onerous for a demonstration, so an alternative needs to be found. The original looks a little like this, with clickable icons.
The quick solution is to transform the image into a format that can be delivered quickly, and the Form PDF turned out to be the easiest since it did not require any special handling:
As you can see in the screenshot above, the link renders identically to the standard Form delivery, and the PDF file can be created using all the usual tools at your disposal. In the demonstration, the journey is listed step by step. The PDF is a good choice since the link renders in any chat channel, even completely text-only ones such as SMS.
During this demonstration, we did attempt to use explanations in the interview to show the station stops, but noticed that the silent and hidden parameters did not appear to be respected. Since our output represents thousands of tests to find the fastest route, the explanation was far too big to display and an error occurred:
As you walk through the modifications and begin to approach the conversation in a different way to a traditional interview, several things become clear. In Oracle Digital Assistant, most of the rich user experience that we are used to in Oracle Intelligent Advisor no longer applies. And that is a perfectly logical and good thing, since the form factor and style of interaction is very different.
For example :
- Sliders, text buttons, image toggles and switches all essentially render the same way: as buttons with a label to describe them.
- Read-only attribute controls are not displayed.
- And the final issue that we will address in this part of the series is the most pressing one for our demonstration: drop-down lists are shown as a set of clickable links.
That’s of course fine if you have two or three choices. But in our case, the list shows every single station on the Paris Metro. And on a mobile device, this would require a good deal of scrolling.
So how can we improve the user experience in our very long lists? The most obvious approach is to break the list down into several chunks and to create a hierarchical list. Let’s break it down into some sections and ask for the first letter of the station:
So now the user first chooses a subset of stations, breaking the list up. The result is more fluid, and the user experience is definitely enhanced.
This is implemented using a standard Oracle Intelligent Advisor filtered hierarchy, and the filter is applied on the attribute:
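The filter behind such a hierarchy is just a boolean attribute on the station entity, concluded by an ordinary Policy Modeling rule and then applied to the list control. The attribute names below are illustrative only, not the ones used in the demonstration project:

```text
the station is a matching station if
    the first letter of the station's name = the chosen first letter
```

The drop-down control is then configured to show only instances where "the station is a matching station" is true, so the user sees just the stations beginning with the letter they picked.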
Our final issue for today is that in some text-only channels, HTML markup in labels is displayed as raw tags, like <b>A Bold Phrase</b>. Use the Skill Tester and switch to the relevant channel to test your output, and make sure you verify obvious things like an image displayed at the start of the interview. It’s probably not the kind of experience you want to send a customer:
There are lots more ways for us to improve the interview experience, and lots more details that you can dig into in the documentation. But it’s a good start. In the next chapter we will look at ways to improve the experience at the start of the session, to see how to use seed data.
PS: if you are interested in the algorithm behind this example, and many more interesting examples besides, we enjoyed reading this book (affiliate link):
The third part of this series deals with seeding data into Oracle Intelligent Advisor interviews using Oracle Digital Assistant.