For AI developers, TensorFlow.js’ support for Node.js enables a “write once, run anywhere” approach to building ML, deep learning, natural language processing, and other models. Developers can author a model once in JavaScript and run it both in the front-end browser and on any back end that implements Node.js. In addition to executing AI inside any standard front-end browser, TensorFlow.js now includes Node.js bindings for Mac, Linux, and Windows back ends. Training and inferencing of TensorFlow.js models may execute entirely in the browser, entirely on the back end, or in a hybrid of the two, with data distributed across these locations.
However, recent TensorFlow.js benchmarks point to performance issues that are common to such toolkits:
Though client-side training can accelerate some AI development scenarios, it may not greatly reduce elapsed training time in many of the most demanding ones. Accelerating a particular AI DevOps workflow may require centralization, decentralization, or some hybrid approach to preparation, modeling, training, and so on. For example, most client-side training depends on the availability of pretrained — and centrally produced — models as the foundation for in-the-field adaptive tweaks.
Even as they emerge, edge-AI benchmarking suites may not be able to keep pace with the growing assortment of neural-net workloads being deployed to every type of front end (browser, mobile, IoT, embedded) and back-end Web application. In addition, rapid innovation in browser-based AI inferencing algorithms, such as real-time human-pose estimation, may make it difficult to identify stable use cases around which such performance benchmarks might be defined.
The Web is AI’s new frontier. However, the end-to-end cloud application ecosystem must mature rapidly to support enterprise deployment of browser-based ML applications in production environments.
James Kobielus is an independent tech industry analyst, consultant, and author. He lives in Alexandria, Virginia.