Enterprises rapidly embracing artificial intelligence to support mainstream business activities need to re-examine their data center infrastructures. AI workloads place different demands on compute and networking resources than the traditional applications these organizations have been running.
Certainly, some enterprises are using cloud services to run their AI efforts. But for many, this is not an option. They must keep operations like AI and machine learning model training on-premises to ensure data privacy and protect intellectual property.
The changes required to do so are significant, spanning everything from processors and core networking elements to power consumption and beyond.
To be sure, enterprises have gone through similar transitions before, upgrading compute infrastructure with faster processors, higher-performance storage, and higher-speed interconnect technology. However, two things make the current situation different.
First, in the past, the companies that upgraded infrastructure to run more sophisticated applications were leading-edge organizations. So, most businesses were not impacted and did not have to change their infrastructure. That is not the case with AI. Companies of all sizes are rushing to use AI to improve operations, enhance customer experiences, increase revenues, and more.
Second, many AI applications depend on the collection and analysis of vast amounts of data from internal and external sources. In many cases, companies have no infrastructure in place to move such large volumes of data into and within their compute facilities.
With these points in mind, this Network Computing report looks at some of the issues AI introduces and the technologies that are being eyed to help with them.