Working with the Oracle Intelligent Advisor Batch Assess Service means that, as with any high-throughput decision automation engine, you are interested (not to say obsessed) in the performance of the batch. Since you are probably doing some heavy lifting, with seriously large numbers of cases to process, you may even be wondering whether you have enough time to do everything you need to do; for example, will your overnight batch run actually finish before the next morning? And since you most likely have other downstream processes waiting for the Batch Assess to finish, you start to worry.
Thankfully the Batch Assess is not just incredibly fast, it is also very good at giving feedback on the processing time of each case in your load. For example, each case response comes back with its processing time embedded:
"cases": [
{
"@id": 627370145,
"@time": 0.022,
In addition, the Batch Assess gives you statistics at the end of the run: the cases per second and the total processing time:
"summary": {
"casesRead": 1057,
"casesProcessed": 906,
"casesIgnored": 151,
"processorDurationSec": 4.56,
"processorCasesPerSec": 231.85,
"processorQueuedSec": 0
}
During testing, it is important not just to execute the test batch once and assume that the performance you get then is the performance the Batch Assess will give you all through your millions of cases. You need to run a set of cases, in iterations, with representative loads per case: for example, if a case is made up of a family of people with financial resources as a child entity, make sure you have cases with decent volumes of financial data as well as outliers with truly small or large amounts, as in the sample data file below. You want to test the performance in realistic conditions, so if you are running on-premises, ensure you test on an environment that is sized like your production.
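Postman can feed one such case per iteration from a data file. As a purely illustrative sketch (the field names here are invented for the example, not taken from a real policy model), a JSON data file mixing typical cases with small and large outliers might look like this:

[
  { "caseLabel": "typical-family", "familySize": 4, "financialResourceCount": 12 },
  { "caseLabel": "outlier-small", "familySize": 1, "financialResourceCount": 0 },
  { "caseLabel": "outlier-large", "familySize": 9, "financialResourceCount": 250 }
]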
One of the simplest ways to automate these tests is to use Postman. The Postman interface lets you create collections of calls, set a number of iterations, feed in data files, and run scripts after each case, so you have everything you need to automate the testing. You can also drive the same collection from the command line, as sketched below.
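If you prefer scripted runs over clicking through the interface, the same collection can be driven from Node using Newman, Postman's command-line companion. A minimal sketch, assuming your exported collection and data file are named batch-assess.postman_collection.json and cases.json (hypothetical file names):

// Run the collection once per entry in the data file
const newman = require('newman');

newman.run({
    collection: require('./batch-assess.postman_collection.json'),
    iterationData: './cases.json', // iteration count defaults to the number of entries
    reporters: 'cli'
}, function (err) {
    if (err) { throw err; }
    console.log('Batch Assess test run complete.');
});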

We’ve talked at length about this before, but here is a handy script to calculate the average response time and the average number of cases per second from the Batch Assess output:
// Standard deviation of an array of values, given their average
function standardDeviation(values, avg) {
    var squareDiffs = values.map(value => Math.pow(value - avg, 2));
    return Math.sqrt(average(squareDiffs));
}

// Arithmetic mean of an array of values
function average(data) {
    return data.reduce((sum, value) => sum + value) / data.length;
}

if (responseCode.code === 200 || responseCode.code === 201) {
    var jsonData = JSON.parse(responseBody);

    // Accumulate the total processing time of each run in a global array
    var response_array = globals['processing_times'] ? JSON.parse(globals['processing_times']) : [];
    response_array.push(jsonData.summary.processorDurationSec);
    postman.setGlobalVariable("processing_times", JSON.stringify(response_array));

    var response_average = average(response_array);
    postman.setGlobalVariable('processing_average', response_average);
    postman.setGlobalVariable('processing_std', standardDeviation(response_array, response_average));

    // Do the same for the cases-per-second metric
    var cases_array = globals['casespersecond'] ? JSON.parse(globals['casespersecond']) : [];
    cases_array.push(jsonData.summary.processorCasesPerSec);
    postman.setGlobalVariable("casespersecond", JSON.stringify(cases_array));

    var casespersecond_average = average(cases_array);
    postman.setGlobalVariable('casespersecond_average', casespersecond_average);
    postman.setGlobalVariable('casespersecond_std', standardDeviation(cases_array, casespersecond_average));
}
This gives you two growing arrays of values and, after each response, recalculates the average and the standard deviation for your two metrics, response time and cases per second.

Kudos to Romcaname for the original script.
Running the Postman Collection Runner for N iterations will get you a decent set of values to work from and port into Power BI or just plain old Excel!
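To get the collected numbers out of Postman, one option is to print them as CSV to the Postman console at the end of the run and copy the rows into a .csv file. A minimal sketch, assuming the global variables populated by the script above:

// Read back the arrays accumulated across iterations
var times = JSON.parse(globals['processing_times'] || '[]');
var rates = JSON.parse(globals['casespersecond'] || '[]');

// One CSV row per iteration: iteration number, duration (s), cases/s
console.log('iteration,processorDurationSec,processorCasesPerSec');
times.forEach(function (t, i) {
    console.log((i + 1) + ',' + t + ',' + rates[i]);
});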