Advances in artificial intelligence (AI) are expected to change the way work is done across numerous organizations and job roles. This is especially true for generative AI tools, which are capable of synthesizing new content based on patterns learned from existing data.
This update explores the rise of large language models (LLMs) and generative AI. It examines whether data center managers should be as dismissive of this technology as many appear to be and considers whether generative AI will find a role in data center operations.
This is the first in a series of reports on AI and its impact on the data center sector. Future reports will cover the use of AI in data center management tools, the power and cooling demands of AI systems, training and inference processing, and how the technology will affect core and edge data centers.
Data center owners and operators are starting to develop an understanding of the potential benefits of AI in data center management. So long as the underlying models are robust, transparent and trusted, AI is proving beneficial and is increasingly being used in areas such as predictive maintenance, anomaly detection, physical security, and the filtering and prioritization of alerts.
At the same time, a deluge of marketing messages and media coverage is creating confusion around the exact capabilities of AI-based products and services. The Uptime Institute Global Data Center Survey 2023 reveals that managers are significantly less likely to trust AI for their operational decision-making than they were a year ago. This fall in trust coincides with the sudden emergence of generative AI. Conversations with data center managers show that there is widespread caution in the industry.
Machine learning, deep learning and generative AI
Machine learning (ML) is an umbrella term for software techniques that involve training mathematical models on large data sets. The process enables ML models to analyze new data and make predictions or solve tasks without being explicitly programmed to do so, in a process called inferencing.
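To make the training-then-inference pattern concrete, below is a minimal sketch using scikit-learn. The feature names, values and "at risk" labels are purely illustrative and not drawn from any real facility data.

```python
# Minimal illustration of the train-then-infer pattern described above.
# The toy data and labels are illustrative, not from any real facility.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [inlet temperature (deg C), fan speed (% of max)]
X_train = np.array([[22.0, 40], [24.5, 45], [27.0, 70],
                    [29.5, 85], [31.0, 95], [23.0, 42]])
y_train = np.array([0, 0, 1, 1, 1, 0])  # 0 = normal, 1 = at risk of overheating

model = LogisticRegression().fit(X_train, y_train)  # training phase

# Inference phase: the fitted model scores data it has never seen,
# without any "overheating" rule being explicitly programmed.
X_new = np.array([[28.5, 80]])
print(model.predict(X_new), model.predict_proba(X_new))
```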
Deep learning — an advanced approach to ML inspired by the workings of the human brain — makes use of deep neural networks (DNNs) to identify patterns and trends in seemingly unrelated or uncorrelated data.
Generative AI is not a specific technology but a type of application that relies on the latest advances in DNN research. Much of the recent progress in generative AI is down to the transformer architecture — a method of building DNNs first described by Google researchers in 2017, later applied to Google's search engine and used to create tools such as ChatGPT, which generates text, and DALL-E, which generates images.
Transformers use attention mechanisms to learn the relationships between data points — such as words and phrases — largely without human oversight. The architecture improves output accuracy while reducing the training time required to create generative AI models. This technique kick-started a revolution in applied AI that became one of the key trends of 2023.
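For readers who want a sense of what "attention" means in practice, the sketch below shows simplified scaled dot-product attention, the core operation of the transformer. Real models use learned projection weights, multiple attention heads and far larger dimensions; the matrices and sizes here are illustrative only.

```python
# Simplified sketch of scaled dot-product attention, the core transformer
# operation. Values are random stand-ins for learned embeddings.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each query is compared with every key; the resulting weights describe
    # how strongly each position attends to every other position.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8            # e.g. four tokens, eight-dimensional embeddings
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))

output, weights = attention(Q, K, V)
print(weights.round(2))            # each row sums to 1: how much one token attends to the others
```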
LLMs based on transformers, such as ChatGPT, are trained on hundreds of gigabytes of text and can generate essays, scholarly or journalistic articles and even poetry. However, they face an issue that differentiates them from other types of AI and prevents them from being embraced in mission-critical settings: they cannot guarantee the accuracy of their output.
The good, the bad and the impossible
With the growing awareness of the benefits of AI comes greater knowledge of its limitations. One of the key limitations of LLMs is their tendency to “hallucinate” and present false information in a convincing manner. These models do not look up facts; they are pattern-matching engines that predict the most probable next item in a sequence.
This has led to some much-publicized news stories about early adopters of tools like ChatGPT landing themselves in trouble because they relied on generative AI output that contained factual errors. Such stories have likely contributed to the erosion of trust in AI as a tool for data center management — even though these types of issues are specific to generative AI.
It is a subject of debate among researchers and academics whether hallucinations can be eliminated entirely from the output of generative AI, but their prevalence can certainly be reduced.
One way to achieve this, called grounding, involves the automated cross-checking of LLM output against web search results or reliable data sources. Another way to minimize the chances of hallucinations is process supervision, in which models are rewarded for each correct step in their reasoning rather than only for reaching the right conclusion.
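The sketch below illustrates the grounding idea in its simplest form: factual claims extracted from a model's draft output are cross-checked against a trusted reference before the output is accepted. The asset records, claim format and check are hypothetical stand-ins for a real retrieval or search step.

```python
# Hypothetical sketch of "grounding": claims in a model's draft output are
# verified against a trusted data source before the draft is accepted.
TRUSTED_ASSET_DB = {
    "UPS-A": {"capacity_kva": 500, "last_service": "2023-05-14"},
    "CRAC-3": {"capacity_kva": 120, "last_service": "2023-08-02"},
}

def check_claim(asset: str, field: str, claimed_value) -> bool:
    """Return True only if the claimed value matches the trusted record."""
    record = TRUSTED_ASSET_DB.get(asset)
    return record is not None and record.get(field) == claimed_value

# Suppose the generative model asserted these facts in its draft output:
model_claims = [
    ("UPS-A", "capacity_kva", 500),    # matches the trusted record
    ("CRAC-3", "capacity_kva", 150),   # hallucinated value
]

for asset, field, value in model_claims:
    status = "verified" if check_claim(asset, field, value) else "flagged for review"
    print(f"{asset}.{field} = {value}: {status}")
```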
Finally, there is the creation of domain-specific LLMs. These can be either built from scratch using data sourced from specific organizations or industry verticals, or they can be created through the fine-tuning of generic or “foundational” models to perform well-defined, industry-specific tasks. Domain-specific LLMs are much better at understanding jargon and are less likely to hallucinate when used in a professional setting because they have not been designed to cater to a wide variety of use cases.
The propensity to provide false information with confidence likely disqualifies generative AI tools from ever taking part in operational decision-making in the data center — this is better handled by other types of AI or traditional data analytics. However, there are other aspects of data center management that could be enhanced by generative AI, albeit with human supervision.
Generative AI and data center management
First, generative AI can be extremely powerful as an outlining tool for creating first-pass documents, models, designs and even calculations. For this reason, it will likely find a place in those parts of the industry that are concerned with the creation and planning of data centers and operations. However, the limitations of generative AI mean that its accuracy can never be assumed or guaranteed, and that human expertise and oversight will still be required.
Generative AI also has the potential to be valuable in certain management and operations activities within the data center as a productivity and administrative support tool.
The Uptime Institute Data Center Resiliency Survey 2023 reveals that 39% of data center operators have experienced a serious outage caused by human error, and that in 50% of those cases the error was a failure to follow the correct procedures. To mitigate these issues, generative AI could be used to support the learning and development of staff with different levels of experience and knowledge.
Generative AI could also be used to create and update methods of procedure (MOPs), standard operating procedures (SOPs) and emergency operating procedures (EOPs), documents that can often get overlooked due to time and management pressures. Other examples of potential applications of generative AI include the creation of the following (a brief drafting sketch appears after the list):
- Technical user guides and operating manuals that are pertinent to the specific infrastructure within the facility.
- Step-by-step maintenance procedures.
- Standard information guides for new employee and/or customer onboarding.
- Recruitment materials.
- Risk awareness information and updates.
- Q&A resources that can be updated as required.
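As an illustration of how such drafting might look in practice, the sketch below uses the OpenAI Python SDK (v1.x) to request a first-pass SOP. The model name, prompt wording and equipment details are assumptions, and any draft produced this way would still need review by qualified data center staff before use, as discussed below.

```python
# Illustrative sketch of drafting a first-pass procedure document with an LLM.
# The model name and prompt are placeholders; the output is a draft only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Draft a standard operating procedure (SOP) for the monthly inspection of a "
    "500 kVA UPS, including safety precautions, required tools, step-by-step "
    "checks and acceptance criteria. Use concise numbered steps."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You draft documents for review by facility engineers."},
        {"role": "user", "content": prompt},
    ],
)

draft_sop = response.choices[0].message.content
print(draft_sop)  # first-pass draft only; an experienced professional must validate it
```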
In our conversations with Uptime Institute members and other data center operators, some said they used generative AI for purposes like these — for example, when summarizing notes made at industry events and team meetings. These members agreed that LLMs will eventually be capable of creating documents like MOPs, SOPs and EOPs.
It is crucial to note that in these scenarios, AI-based tools would be used to draft documents that would then be checked by experienced data center professionals before being used in operations. The question is whether the efficiency gains from using generative AI outweigh the risks that make this human validation necessary.
The Uptime Intelligence View
There is considerable confusion around AI. The first point to understand is that ML encompasses several techniques, and many of its analytics methods and tools can be very useful in a data center setting. But it is generative AI that is causing most of the stir. Given the number of hurdles and uncertainties, data center managers are right to be skeptical about how far they can trust generative AI to provide actionable intelligence in the data center.
That said, it does have very real potential in management and operations, supporting managers and teams in their training, knowledge and documentation of operating procedures. AI — in many forms — is also likely to find its way into numerous software tools that play an important supporting role in data center design, development and operations. Managers should track the technology and understand where and how it can be safely used in their facilities.
John O’Brien, Senior Research Analyst, Uptime Institute jobrien@uptimeinstitute.com
Max Smolaks, Research Analyst, Uptime Institute msmolaks@uptimeinstitute.com