Artificial intelligence (AI) may be the key to unlocking humanity’s problem-solving capabilities, but engaging with AI is not necessarily intuitive. Business leaders may find it difficult to understand how specific AI advances can be applied to their organizations and how to begin integrating AI technology at scale.
Using AI may not always be the right -- or even necessary -- approach. Before engaging with AI, chief information officers and business leaders should map out the most pressing problems, prioritize them, and then determine which technology would best solve those challenges. Don’t overlook simple solutions or force the use of AI within your organization.
Once the problem is identified and it is determined that AI is indeed the right solution, start work in the most technologically advanced areas of your business because AI models need a rich data history (and ongoing data collection) to make beneficial recommendations. It can be helpful to prioritize the functions, verticals, assembly lines and assets within that problem domain that are further along in data readiness. As an added benefit, these areas are often the most critical to a business.
Starting to work with AI is no different than with any other technology: Understand the problem you are trying to solve, understand the capabilities of the technology, and reconcile the two.
We distilled the steps we share in this article from our application of AI across a variety of real-world projects. To make the steps tangible, we will take a deep dive into one: Google data center energy efficiency.
DeepMind wanted to reduce the energy consumed by Google data centers while maintaining a temperature that was safe for operations. The team started this project with one of Google’s newest, most optimized data centers. This provided a data-rich environment with the most up-to-date sensors and equipment, which helped baseline performance (and later measure impact). Applying AI is not easy, but your rate of success will likely be higher if you start in an advanced environment with clean(er) data.
DeepMind then followed the six steps we have outlined below. The result was an AI system that achieved a 30% reduction in energy consumption while continuing to operate the data center in a safe, effective manner. The benefit was clear: Google reduced its data center energy consumption, environmental impact, and bottom-line cost while improving the efficiency of the system.
Six steps to start with AI
The project was complex, but the process was simple. Let’s break it down:
Step 1: Identify the goal. Before you begin, define what you want to achieve and in which part of your organization. In AI, we call this the objective function. Keep in mind, there may be multiple goals that you need to balance.
In our example, the objective was to minimize the energy required to cool Google data centers while keeping the servers at a safe operating temperature. This objective was the most important for us, but we could have also tried to minimize the cost of cooling, reduce water usage, and so on. Prioritizing here is also key. Set your most important goal as the objective function, and then ensure the model also takes your secondary goals into account when it makes decisions.
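To make the idea of an objective function concrete, here is a minimal sketch. The function names, numbers, and the 0.01 weight are illustrative assumptions, not values from the actual DeepMind project: it simply shows a primary goal (energy) with a secondary goal (water usage) folded in as a weighted penalty, so the optimizer balances both.

```python
# Hypothetical objective function: minimize energy, lightly penalize
# water usage. All names and weights are illustrative.
def objective(energy_kwh: float, water_liters: float,
              water_weight: float = 0.01) -> float:
    """Lower is better: primary goal plus a weighted secondary goal."""
    return energy_kwh + water_weight * water_liters

# The optimizer prefers the cooling plan with the lower combined score.
plan_a = objective(energy_kwh=1200.0, water_liters=5000.0)   # 1250.0
plan_b = objective(energy_kwh=1150.0, water_liters=20000.0)  # 1350.0
best = min([("plan_a", plan_a), ("plan_b", plan_b)], key=lambda p: p[1])
print(best[0])  # plan_a
```

Note that plan_b uses less raw energy, yet plan_a wins once the secondary goal is weighed in; choosing that weight is exactly the prioritization decision described above.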
Step 2: Define the set of possible decisions. Once you have determined your objective, outline what levers you have at your disposal -- which parts of a system you can (and want) to improve using AI. This is the action space.
For Google’s data center, we relied heavily on the data center operators -- the domain experts -- to understand which parts of the facility DeepMind could adjust, as some variables could not be directly set and were indirectly controlled by other parts of the system. Google facility managers told us that the biggest HVAC system energy consumers were the chillers, so we started there. These kinds of deep partnerships are vital for the successful application of AI and allow domain experts to use AI as a tool to enhance their impact.
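One minimal way to sketch an action space, assuming hypothetical setpoint names and ranges (none of which come from the actual project), is a mapping from each adjustable lever to its allowed range, kept separate from variables that can only be observed or influenced indirectly:

```python
# Hypothetical action space: directly adjustable setpoints and their
# allowed ranges. Names and values are illustrative.
ACTION_SPACE = {
    "chiller_setpoint_c": (16.0, 22.0),  # directly adjustable
    "pump_speed_pct": (40.0, 100.0),     # directly adjustable
}
# Measured but not directly settable -- controlled only indirectly.
OBSERVED_ONLY = {"server_inlet_temp_c"}

def is_valid_action(action: dict) -> bool:
    """True only if every lever is a known setpoint within its range."""
    return all(
        name in ACTION_SPACE
        and ACTION_SPACE[name][0] <= value <= ACTION_SPACE[name][1]
        for name, value in action.items()
    )

print(is_valid_action({"chiller_setpoint_c": 18.0}))   # True
print(is_valid_action({"server_inlet_temp_c": 25.0}))  # False
```

Encoding the domain experts' knowledge this way makes the boundary between "can set" and "can only observe" explicit before any model is trained.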
Step 3: Keep the system safe. A critical step of any AI system setup is to understand the operating boundaries necessary to ensure the safety of the system. You should define these constraints at both the individual component and the overall system level.
In the data center example, data center operators outlined the operating range of individual pieces of equipment and how those components interact at a system level. We then modeled the AI safety constraints from what was permissible to maintain safe operation of the data center. Constraining the system is important to avoid damaging the components, but too many safety measures may limit innovation. An advantage of AI is that it can explore options within the boundaries you place on the system, but the stricter the guardrails, the less it can explore. Balance, without missing core constraints, is key.
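A simple guardrail pattern, sketched here with hypothetical limits and setpoint names, is to clamp every proposed action into its component-level safe range before it ever reaches the equipment. The AI is free to explore, but nothing outside the boundaries can be executed:

```python
# Hypothetical safety layer: clip each proposed setpoint into its
# component-level safe operating range. Limits are illustrative.
def clamp_to_safe(action: dict, limits: dict) -> dict:
    """Return a copy of the action with every value clipped into range."""
    return {
        name: max(limits[name][0], min(limits[name][1], value))
        for name, value in action.items()
    }

LIMITS = {"chiller_setpoint_c": (16.0, 22.0)}
proposed = {"chiller_setpoint_c": 14.5}  # the model explores below the floor
safe = clamp_to_safe(proposed, LIMITS)
print(safe)  # {'chiller_setpoint_c': 16.0}
```

A production system would also need system-level checks (how components interact), which a per-component clamp like this cannot capture on its own.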
Step 4: Audit the data. AI depends on data to make decisions, so you will need the data necessary to measure the actions and objective you have chosen. At this stage, you can also address ongoing data questions, such as the cadence at which you need to capture system-level data, the latency of the data, maintenance and change logs, and so on.
For the Google data center project, we ensured we had all the data required to measure energy consumption, the AI decisions, the impact of those decisions on the physical system, and the external elements that influenced the system (in this case, weather, occupancy, etc.). We diligently checked the sensors collecting this data to ensure they were correctly labeled, calibrated, and that latency was not an issue.
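An audit like this can be partially automated. The sketch below, with made-up signal and sensor names, checks that every signal the model needs has a mapped, calibrated data source and reports what is missing:

```python
# Hypothetical data audit: every required signal must have a mapped,
# calibrated source. Signal and sensor names are illustrative.
REQUIRED = {"energy_kwh", "outside_temp_c", "chiller_setpoint_c"}

def audit(available: dict) -> list:
    """Return required signals that are missing or not yet calibrated."""
    return sorted(
        s for s in REQUIRED
        if s not in available or not available[s].get("calibrated", False)
    )

sources = {
    "energy_kwh": {"sensor": "meter_7", "calibrated": True},
    "outside_temp_c": {"sensor": "ws_2", "calibrated": False},
}
print(audit(sources))  # ['chiller_setpoint_c', 'outside_temp_c']
```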
Step 5: Clean the data. AI may depend on data, but success depends on good data. Identify and fix bad data. Merge data sources. Ensure data is representative of the problem you are trying to solve and that there is a diverse set of actions represented in the data history. A rich history and data variance are important for AI to make optimal decisions.
Data cleaning was a vital step in the data center project. It also tends to require strong domain expertise and so is another area where partnership between AI experts and domain experts is vital. We had to perform calibrations or fix broken sensors and other components that measure the data we collected. We also found missing data, backfilled it where possible, and unified data repositories and data inputs. As we tuned the models, this made it easier for us to incorporate additional data types into the system and improve the AI recommendations.
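One of the cleaning operations mentioned above, backfilling gaps, can be illustrated with a small stdlib-only sketch. This is a stand-in for what a real pipeline (e.g. in pandas) would do; the readings are invented:

```python
# Hypothetical gap-filling for a sensor time series: replace missing
# readings (None) with the most recent valid reading (forward fill).
def forward_fill(readings: list) -> list:
    cleaned, last = [], None
    for r in readings:
        if r is not None:
            last = r
        cleaned.append(last)
    return cleaned

raw = [21.0, None, None, 22.5, None]  # gaps from a flaky sensor
print(forward_fill(raw))  # [21.0, 21.0, 21.0, 22.5, 22.5]
```

Forward fill is only appropriate when a reading can be assumed to hold until the next observation; whether that assumption is valid is precisely where domain expertise enters the cleaning step.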
Step 6: Perform ongoing data maintenance. As the success of your AI project depends on good data, it is important to set up routine, periodic checks to ensure ongoing data cleanliness.
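A routine check of this kind might, for instance, flag sensors whose latest reading has gone stale, so that broken or disconnected equipment is caught before it pollutes the training data. The sketch below uses invented sensor names and thresholds and would run on a schedule (e.g. via cron):

```python
# Hypothetical periodic data-quality check: flag sensors whose most
# recent reading is older than max_age_s. Names are illustrative.
def stale_sensors(last_seen: dict, max_age_s: float, now: float) -> list:
    """Return names of sensors with no reading in the last max_age_s seconds."""
    return sorted(name for name, t in last_seen.items() if now - t > max_age_s)

now = 1_000_000.0
last_seen = {
    "chiller_1_temp": now - 30.0,    # reported 30 s ago -- healthy
    "pump_3_flow": now - 7200.0,     # silent for two hours -- stale
}
print(stale_sensors(last_seen, max_age_s=300.0, now=now))  # ['pump_3_flow']
```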
These last three steps can be grouped into “data quality assurance.” They are the most time-consuming part of the process and where most organizations will need to focus before embarking on an AI project.
After following these six steps, DeepMind research engineers were ready to build models and begin engagement with AI across the Google data center. Choosing the right optimization metrics for the objective was very important. We also ensured that our AI system got feedback on the decisions it made so it could learn and improve over time.
The truth is there is no silver bullet
No single AI system will solve all your objectives. Most engagements require customization, so your journey will likely require different AI systems for different applications. The best way to find what works for your use case is to start building models, iterate, and expand as you progress.
AI is not always the right answer, but it can be a powerful tool for improving current systems, building new processes, and solving complex problems. Whether your organization is just starting its journey or has some experience applying AI, our aim is for these six steps to help simplify the process. Preparation for large AI engagements can take months and sometimes years, so it is important to start early. The key is knowing why preparation matters, how to prepare, and that you can start today.
Sims Witherspoon is a program manager for DeepMind who specializes in socially beneficial applications of AI.
Mandeep Waraich is a product lead for Industrial AI within Google.
Massimo Mascaro is a technical director in the Office of the CTO for Google Cloud, where he specializes in Applied AI.