It is cheaper to do ML wrong than it is to do it right
The common thread I see across all 13 is that avoiding bad ML results will require a greater investment in time, people, or follow-up to ensure that the outcomes of applying ML remain positive. Too many of my clients have a habit of valuing low cost over high quality.
One of the critical steps in QA for any new system or feature is the comparison of actual results to expected results. But if the corporate objective of ML is to find the unexpected insight, how will incorrect unexpected ML outcomes be sifted out from the set of all possible unexpected outcomes?
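To make the tension concrete, here is a minimal sketch of the traditional expected-vs-actual QA check. The function name, cases, and tolerance are all hypothetical illustrations, not from any real system; the point is that this style of check presupposes a table of known-good answers, which by definition does not exist for genuinely unexpected ML insights.

```python
def qa_check(expected, actual, tolerance=0.05):
    """Flag every case whose actual result is missing or drifts
    from the expected (previously validated) result."""
    flagged = []
    for case_id, exp in expected.items():
        act = actual.get(case_id)
        if act is None or abs(act - exp) > tolerance:
            flagged.append(case_id)
    return flagged

# Hypothetical regression suite: expected scores were validated by humans.
expected_scores = {"case_a": 0.91, "case_b": 0.40, "case_c": 0.75}
actual_scores   = {"case_a": 0.90, "case_b": 0.62, "case_c": 0.75}

print(qa_check(expected_scores, actual_scores))  # flags only case_b
```

The check works precisely because someone already knows what "correct" looks like for each case; an ML system prized for surprising outputs gives QA no such baseline to compare against.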