The answer to this question will depend in part on the randomness of the call arrivals. One would naturally expect greater accuracy from a forecast based on weekly actuals of 10,000, 10,003, 9,998, 10,005 and 10,002 than from one based on weekly actuals of 10,000, 5,000, 19,500, 12,000…
This immediately suggests a clue: track the actual data and look at its standard deviation. The smaller the standard deviation, the better the forecast accuracy you should expect; a larger standard deviation points to more unpredictable data.
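As a minimal sketch of this check, the two series of weekly actuals above can be compared using the coefficient of variation (standard deviation divided by the mean), which puts series of different scales on a comparable footing:

```python
from statistics import mean, pstdev

# Weekly actuals from the two example series in the text
stable = [10000, 10003, 9998, 10005, 10002]
volatile = [10000, 5000, 19500, 12000]

# Coefficient of variation: standard deviation relative to the mean
for name, series in [("stable", stable), ("volatile", volatile)]:
    cv = pstdev(series) / mean(series)
    print(f"{name}: cv = {cv:.2%}")
```

The stable series has a coefficient of variation of a few hundredths of a percent, while the volatile series is around 45%; the second will clearly support a much less accurate forecast.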
Given that you are dealing with a system with an element of randomness in it, it is appropriate to express forecast accuracy targets as confidence intervals, using a standard-deviation-type calculation to set the intervals. For example, you might set a target of actuals falling within 7% of the forecast on 85% of days, a target that allows for some randomness but limits exposure.
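Checking such a target against actuals is straightforward. The sketch below uses entirely hypothetical daily forecast and actual volumes, and the 7%/85% thresholds from the example above:

```python
# Hypothetical daily forecast and actual volumes -- illustrative only
forecast = [1000, 1200, 1100, 950, 1050, 1000, 900]
actuals = [1040, 1150, 1210, 960, 1000, 1080, 905]

tolerance = 0.07   # actuals must fall within 7% of forecast...
required = 0.85    # ...on at least 85% of days

# Flag each day as inside or outside the tolerance band
within = [abs(a - f) / f <= tolerance for f, a in zip(forecast, actuals)]
hit_rate = sum(within) / len(within)

print(f"hit rate: {hit_rate:.0%}, target met: {hit_rate >= required}")
```

In this made-up week, two days miss the 7% band, so the hit rate of roughly 71% falls short of the 85% target.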
If your forecast has been built up (layered) using separate propensities for different customer lifecycle points, you might be able to combine the standard deviations of the component parts to produce an appropriate confidence interval that is tighter than simply taking the standard deviation of the total combined volume.
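A minimal sketch of that combination, assuming the component volumes vary independently of one another (so their variances, not their standard deviations, add) and using made-up figures for three lifecycle segments:

```python
import math

# Hypothetical weekly standard deviations for three customer
# lifecycle segments -- illustrative figures only
component_sds = [120.0, 80.0, 50.0]

# Naively adding standard deviations overstates the uncertainty
naive_sd = sum(component_sds)

# Under independence, variances add, so the combined standard
# deviation is the root of the sum of squares
combined_sd = math.sqrt(sum(sd ** 2 for sd in component_sds))

print(f"naive: {naive_sd:.1f}, combined: {combined_sd:.1f}")
```

Here the combined figure of about 152.6 is well below the naive sum of 250, which is why layering the forecast can justify a tighter confidence interval. If the segments are correlated, the independence assumption fails and the interval should be widened accordingly.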