AI in Business: The Practical Reality
When CIOs talk with me about artificial intelligence, they find the topic fascinating but often struggle to understand why they should care. Those who do not habitually invest in bleeding-edge technologies still tend to see AI as a technology of the hazy future, one they can worry about later.
That's a mistake. Every substantial business should be thinking about how AI will change its business and should at least be starting to experiment with the technology.
Stop thinking of AI as science fiction. You have only to look at the rapid advance of self-driving car technology to see how quickly the impossible is becoming possible. Thanks to advances in cloud computing, GPU chips originally invented for gaming and graphics, and dozens of other innovations, machine learning and pattern recognition technologies are redefining virtually every industry and every discipline. AI technologies are emerging to meet the demands of mobile computing and the internet of things.
If you have not begun to formulate a focused AI strategy and to think about ways to use the technology to improve your business practices and disrupt your industry, I think you should start polishing your resume.
That said, most of us can’t afford to launch a huge R&D effort that would put us in a league with Google or Apple or IBM when it comes to pioneering these technologies. I just believe you ought to be thinking ahead. You ought to be tinkering with AI-powered cloud services you can access via APIs and taking advantage of other low-hanging fruit. Take at least preliminary steps now and consider getting more aggressive soon.
The subset of AI software having the biggest impact today is machine learning: software that gets better with experience, refining its own rules as it is exposed to more data. This is what allows self-driving cars to learn not only the official rules of the road, but also how things really work on city streets as they log more miles.
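The "gets better with experience" idea can be seen in a few lines of code. The sketch below, using the open-source scikit-learn library and its bundled handwritten-digit images (an illustration only, nothing to do with self-driving cars), trains the same algorithm on progressively larger samples of data and watches its accuracy on held-out examples climb:

```python
# Illustrate "software gets better with experience": the same learning
# algorithm, given more labeled examples, yields a more accurate model.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=500, random_state=0)

for n in (50, 200, 1000):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])       # train on the first n examples
    print(n, round(model.score(X_test, y_test), 3))
```

No rule about digit shapes is ever written by a programmer; the model derives its own rules from the examples, and more examples mean better rules.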
Google and Facebook use machine learning for things like recognizing faces in images, which they can do because they have access to so many images, combined with so much spontaneous user feedback on whether each instance of face recognition is accurate or useful. Machine learning is also driving rapid improvement in the speed and accuracy of translation software, fueled by analysis of previously translated language plus, again, lots of user feedback on whether a translation is accurate and helpful.
These advances are building on each other. For example, self-driving cars need image recognition for street signs and stoplights, along with algorithms to help them make sense of inputs from other sensors. Eventually, they will need to understand spoken destinations, directions, and other instructions in multiple languages.
Many important innovations driven by AI are right at the tipping point, about to become more common in our lives. One example is AI embedded in consumer devices. True machine learning requires big farms of cloud servers. Apple's Siri and Amazon's Alexa get their smarts from the cloud, even though we talk to the device in our hand or in our kitchen. But we're starting to see more smarts built into the devices themselves: not the whole networked application, but the distilled intelligence for face or voice recognition, for example. This is important because it allows for instant responses, without the delay of waiting for an answer from the cloud.
Hype and reality
While AI is certainly a much-hyped (if not overhyped) technology in terms of what it can and can't do today, the long-term promise of AI is not overblown at all, and the immediate impact of machine learning is very real. Machine learning pioneer Andrew Ng has a good write-up on what AI can and can't do today. The immediate applications tend to fall into the category of software for classification or prediction based on a large number of inputs. "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future," he says. Examples include recognizing a face in a crowd, words on a page, or the meaning of a sentence spoken aloud.
When Ng was leading the Google Brain Deep Learning project a few years ago, one of its accomplishments was software capable of deriving the concept of “cat” from a large number of YouTube videos, without being given explicit instructions about what to look for. Feed machine learning software a large number of sentences translated from English to French, and it will derive the rules for translating new sentences from English to French with surprising accuracy. Feed it GPS coordinates, maps, and traffic data, and it will suggest shortcuts and detours around traffic jams that no human navigator would have recommended.
Come to think of it, the potential to surprise us is one of the things that makes machine learning interesting. While Ng's rule of thumb about "typical person" snap judgments helps explain the image and language recognition use cases, a computer system is not a typical person: it can consume far more data than we can at one time and generate conclusions and predictions far more rapidly. Apply machine learning to your company's sales forecast, and there is a good chance it will tell you something you don't know.
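Even the simplest version of that forecasting idea fits in a few lines. The sketch below uses invented monthly sales figures (the numbers are hypothetical, purely for illustration) and an off-the-shelf regression model from scikit-learn to project the next month; a real system would fold in many more inputs such as seasonality, promotions, and market data:

```python
# Toy sales-forecast sketch: fit a trend line to past monthly sales
# (invented numbers) and extrapolate one month ahead.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical unit sales for the past 12 months
sales = np.array([120, 132, 128, 145, 150, 148, 160, 171, 168, 182, 190, 196])
months = np.arange(len(sales)).reshape(-1, 1)  # month index as the sole feature

model = LinearRegression().fit(months, sales)
next_month = model.predict([[len(sales)]])[0]
print(f"Projected sales for month 13: {next_month:.0f}")
```

The point is not this trivial trend line but what happens when the same approach is fed hundreds of inputs at once: the model can surface relationships no human analyst would have time to check.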
How can your organization begin to derive real value from AI? I’ll have more to say about that next time.