Stop Using AI and ML Interchangeably
The definitions of artificial intelligence and machine learning have evolved, so it’s important for everyone to understand the distinctions.
There is a lot of hype around the use of artificial intelligence and machine learning in security products. Everyone wants to say they're using them, but there's a real lack of clarity about how these terms are actually used. The problem is only made worse when the two terms are constantly conflated. Using precise and accurate language to describe a security product not only conveys to potential buyers that you understand your own technology but also enables them to ask the right questions when evaluating it.
AI is a broad field that aims to bring human intelligence to machines, while ML is a subset of AI that focuses on learning from data without explicit programming. Use of ML does qualify as use of AI, but use of AI does not imply use of ML.
Because the threshold for bringing human intelligence to machines is vague, an AI system might employ advanced statistical methods and traditional algorithms, or it might consist solely of a set of rules or heuristics. This vagueness is the fundamental problem with simply describing your product as "using AI." You haven't told buyers anything about what the product is actually doing, why it qualifies as AI, or how they should evaluate it. If you're a prospective buyer and you've only heard that the product uses AI, you should be asking a lot of questions: What are the components of the AI system? Why do they warrant classification as AI? How are they established, tested, and updated? If these kinds of details can't be provided or seem thin and vague, be wary of snake oil and signatures repackaged as "AI."
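To make the distinction concrete, here is a minimal, hypothetical sketch in Python. The feature names, thresholds, and model choice are all invented for illustration: the first detector is pure heuristics that a vendor could still market as "AI," while the second actually learns its parameters from labeled data, which is what would make it ML.

```python
# Hypothetical sketch: two "AI" detectors for flagging suspicious logins.
# All feature names and thresholds are invented for illustration.
from sklearn.linear_model import LogisticRegression

# 1) A purely rule-based detector. A vendor could market this as "AI,"
#    but it never learns from data; it only applies fixed heuristics.
def rule_based_detector(event):
    return (
        event["failed_logins_last_hour"] > 10
        or event["source_country"] not in event["usual_countries"]
    )

# 2) An ML detector: its parameters are learned from labeled historical
#    events rather than written by hand, which is what makes it ML.
def train_ml_detector(feature_rows, labels):
    # feature_rows: numeric features per event; labels: 1 = malicious, 0 = benign
    model = LogisticRegression(max_iter=1000)
    model.fit(feature_rows, labels)
    return model  # model.predict_proba scores new, unseen events
```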
If your AI system is AI only because it is using ML, stop diluting its description by calling it an AI system and call it an ML system instead. To see this point from another angle, imagine that you sell squares, but instead of telling people they’re squares, you describe them as quadrilaterals. It is technically accurate, but with a quadrilateral, a buyer only knows that they’re getting something with four sides. If you were to tell them it’s a square, then they know it has four sides, the four sides are all the same length, and the angles between them are all 90 degrees.
You deny buyers this same critical context if you interchange AI and ML when describing a product. There are specific questions to ask about ML systems: How is the data populated, labeled (if at all), and updated? What types of models are being used and how are they trained? What output do they produce, and how can that be tailored to specific performance goals and risk tolerances? But without knowing that they are evaluating an ML system, buyers may be asking generic questions about why a system qualifies as AI instead of getting to the heart of how it functions, which can prevent them from fully understanding the product and ultimately lead to a missed opportunity.
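As a hypothetical illustration of what those questions map to in practice, the sketch below (all names, numbers, and the model choice are assumptions for the example) shows labeled data going in, a model being trained, and the alert threshold being tuned to a stated false-positive budget, i.e., a buyer's risk tolerance.

```python
# Hypothetical sketch of what those ML-specific questions correspond to in
# practice. All names, numbers, and the model choice are invented here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def build_detector(X, y, max_false_positive_rate=0.01):
    # "How is the data populated and labeled?" -> X holds numeric features per
    # event, y holds labels (0 = benign, 1 = malicious).
    X, y = np.asarray(X), np.asarray(y)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    # "What type of model is used and how is it trained?"
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # "What output does it produce, and how is it tailored to risk tolerance?"
    # The model emits a score per event; the alert threshold is set so benign
    # validation events are flagged no more often than the allowed budget.
    benign_scores = model.predict_proba(X_val[y_val == 0])[:, 1]
    threshold = float(np.quantile(benign_scores, 1.0 - max_false_positive_rate))
    return model, threshold
```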
Using the right language is a critical step toward navigating the buzzword hype around AI and ML. If AI is the right term for a product description, then use it, but be prepared to justify why it is warranted and accurate. If a product description is better served by ML, then ditch AI and be precise. Product descriptions should cue buyers on what they need to ask to understand whether a purchase is the right fit for them, and they should enable sellers to clearly articulate the product's strengths and use of technology. We will all start cutting through the hype effectively when we use AI and ML accurately and precisely.