I think you overlook one of the primary reasons for "human in the loop" machine intelligence. Specifically, while having users trust the machine's results is important, having the CFO trust them is critical. The CFO is worried about risk: what liability does the company face if the machine is wrong? That risk may be low if the software vendor accepts the liability, but that's rare: many software companies are too small to take on (or insure) that liability, especially for a highly configurable - and highly training-dependent - machine learning system. Having a human make the final call clearly delineates who holds the liability... and leaves the CFO no worse off.
Also, much has been made of the "black box" of machine learning... some of it, at least, amounting to FUD from traditional rules-based AI vendors whose systems are easier to analyze. But for many applications, the question is not "By what mathematics did the machine come to that conclusion?" Rather, they need to know "What inputs drove the machine to that conclusion?" That is, many applications simply need to know which inputs on the other side of the black box matter. The first question is a holdover from rule-based AI, where you have to open the box to fix rules that are malfunctioning. The second comes from users; in an ML environment, you simply train the system further if it's not getting the answers you want.
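To make that second question concrete, here's a minimal sketch of one common way to answer it: permutation importance, which shuffles each input in turn and measures how much the model's accuracy drops. The dataset, model, and parameters below are illustrative assumptions, not any particular vendor's method.

```python
# Illustrative sketch: answering "what inputs drove that conclusion?"
# without opening the mathematical black box. Assumes scikit-learn;
# the dataset and model choice here are arbitrary stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy:
# a large drop means that input mattered, regardless of the math inside.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

The point is that this kind of analysis treats the model purely as a function from inputs to outputs, which is exactly the view most users need.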
At MultiModel Research - a machine learning company - the above strongly shapes what we do. We believe these systems will eventually achieve better accuracy than humans, although that will be difficult to demonstrate. Once that happens, the CFO can relax. Until then, we'll insist on a human in the loop!