If you have not already read the recent blog article by Davin Fifield, I recommend you do so – it’s all about how OpenAI (or another provider) and Oracle Intelligent Advisor can work together. (For those wondering, Davin Fifield is responsible for intelligent automation capabilities within Oracle’s CX Service suite, and has over 20 years’ experience leading the delivery of innovative business software products for the public sector, financial services, legal and other industries.)
Recently I’ve been discussing on LinkedIn how my expectations for Oracle Intelligent Advisor and “Artificial Intelligence” – in inverted commas because I’m using the phrase in the mainstream, catch-all sense – have centered on the concept of assistance: assisted rule creation, for example, whereby a model understands how a piece of legislation is structured and helps the rule author build the rules in Oracle Intelligent Advisor.
Davin’s article opens up another front in this discussion by looking at the problem of hallucination: asking an AI engine a question and getting an answer that looks convincing but is actually completely incorrect. This can happen in a lot of situations; essentially, when the model has no exact match for what you’re asking, it produces something fairly coherent but wrong.
OpenAI and others have worked to create a plugin concept, whereby custom tools (this is where Oracle Intelligent Advisor comes in) can provide a reliable service that the LLM calls to answer a particular intent correctly. The demonstration put together does just that: if you use OpenAI to ask whether you need to file your tax return, it actually calls out to an Oracle Intelligent Advisor Decision Service and uses the definition of the Decision Service to come up with some clever answers and responses.
For example, in certain situations only two inputs are needed to get an answer, but in other cases further information is required. Thanks to an OpenAI-compliant definition of the Decision Service, the conversation can go back and forth as the model works to get an answer from the Decision Service. It’s a whole new avenue of assistance.
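To make that concrete, here is a minimal sketch of what an OpenAI-compliant tool definition for such a Decision Service might look like. All of the names and fields here (`check_tax_return_required`, `annualIncome`, `isSelfEmployed`) are illustrative assumptions, not the actual schema from the demo:

```javascript
// Hypothetical sketch of an OpenAI "function" tool schema describing a
// Decision Service that answers whether a tax return must be filed.
// The name and parameters are assumptions for illustration only.
const decisionServiceTool = {
  type: "function",
  function: {
    name: "check_tax_return_required",
    description:
      "Determines whether a person needs to file a tax return, " +
      "backed by an Oracle Intelligent Advisor Decision Service.",
    parameters: {
      type: "object",
      properties: {
        annualIncome: {
          type: "number",
          description: "Gross annual income in the local currency",
        },
        isSelfEmployed: {
          type: "boolean",
          description: "Whether the person is self-employed",
        },
      },
      // In some cases these two inputs are enough; when the Decision
      // Service needs more data, the model asks follow-up questions.
      required: ["annualIncome", "isSelfEmployed"],
    },
  },
};

module.exports = { decisionServiceTool };
```

Because the schema describes each input, the model knows what to ask the user for before calling the service.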
The files that make this work, aside from the Decision Service that provides the back end, are tools.js, which contains the OpenAI-compliant definition, and llm.js, which uses the OpenAI completions endpoint to provide the chat experience.
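The general shape of the llm.js pattern can be sketched as a loop: send the conversation plus the tool definitions to the chat completions endpoint, and whenever the model requests a tool call, invoke the Decision Service and feed the result back until a final answer emerges. This is an assumed sketch, not the demo’s actual code; the model name and the `askDecisionService` callback are placeholders:

```javascript
// Hypothetical sketch of an OpenAI tool-calling loop. The model name is
// an assumption; askDecisionService is a placeholder for whatever
// function calls your Oracle Intelligent Advisor Assessment API.
async function chat(messages, tools, askDecisionService) {
  while (true) {
    const response = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({ model: "gpt-4o-mini", messages, tools }),
    });
    const choice = (await response.json()).choices[0];
    messages.push(choice.message);

    // No tool calls means the model has produced its final answer.
    if (!choice.message.tool_calls) return choice.message.content;

    // Otherwise, call the Decision Service for each requested tool call
    // and append the result so the conversation can continue.
    for (const call of choice.message.tool_calls) {
      const result = await askDecisionService(JSON.parse(call.function.arguments));
      messages.push({
        role: "tool",
        tool_call_id: call.id,
        content: JSON.stringify(result),
      });
    }
  }
}

module.exports = { chat };
```

The loop is what allows the back-and-forth described above: the model keeps asking for inputs until the Decision Service can return an answer.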
The shopping list you will need for the demo is as follows:
- An OpenAI Account and an OpenAI API Key
- At least $10 of credit on the account
- An Oracle Intelligent Advisor Hub for the Decision Service to be deployed to
- An OIA API Client with permissions on the Workspace where you deployed the Decision Service
- Access Permissions on the hub for the Node.js HTTP server (http://localhost:8080) to call the Assessment API so the provided scripts can call your Decision Service
- The various files edited to include your Hub URL and API Client details
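Those last details might be gathered into something like the following configuration sketch. Every value and variable name here is a placeholder; check the demo files themselves for the actual names they expect:

```javascript
// Hypothetical configuration sketch: placeholder names and values only.
const config = {
  hubUrl: "https://your-hub.example.com",          // your OIA Hub URL
  deployment: "YourDecisionService",               // deployed Decision Service name
  clientId: process.env.OIA_CLIENT_ID,             // OIA API Client credentials
  clientSecret: process.env.OIA_CLIENT_SECRET,
  openAiApiKey: process.env.OPENAI_API_KEY,        // your OpenAI API Key
};

module.exports = config;
```

Keeping the credentials in environment variables avoids committing them alongside the demo scripts.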
Here is the demonstration in action with my little conversation with OpenAI and Oracle Intelligent Advisor Decision Services! It’s a great example of exposing Oracle Intelligent Advisor services to OpenAI and how OpenAI can absorb the definition of the service.
We’re lucky to be working with the team at IPR with friend of intelligent-advisor.com Juris Terauds on some exciting things in this area. Look out for a second post on this topic.
For more information:
Check out the examples on GitHub
Check out the OpenAI API