No enterprise wants to be a dinosaur when it comes to innovation, and today, AI is on the front lines. With an estimated 80 percent of enterprises already using AI in some form, the shift to AI looks as sweeping as the transition from typewriters to PCs.
Despite the hype, enterprises sense the challenge: in a recent study, 91 percent of companies foresee significant barriers to AI adoption, including a lack of IT infrastructure and a shortage of AI experts to guide the transition.
Nevertheless, few organizations truly understand what lies ahead of them, and what it really takes to transition out of the AI Jurassic era. Let’s look more closely at the underlying realities of AI adoption that your internal AI group or consultant will never tell you about.
The Use Case: Turning a Traditional Enterprise Into an AI-Enabled Organization
To paint a picture, let’s consider a hypothetical company, Global Heavy Industry Corporation (GHIC), whose goal is to reduce costs and improve quality in its production facilities via a corporate-wide deployment of AI.
The company makes industrial machinery: skilled workers assemble complex machines from parts, and a series of control checkpoints maintains production quality. At this point, the process is entirely manual.
With the recent rise in AI awareness, coupled with competitive pressure from lower-cost producers, GHIC has set an aggressive roadmap for bringing vision-based AI into its factories by leveraging its existing security-camera infrastructure.
The first step? Collecting pertinent data for its models.
Myth No. 1: All the Data I Need for My AI Is Freely Available
The first hurdle GHIC faces is gathering and developing data for its visual AI. Data is AI’s DNA: neural networks and deep learning architectures depend on deriving a function that maps input data to output data.
The effectiveness of this mapping function hinges on both the quality and quantity of the data provided. In general, larger training sets have been shown to let the network learn more effective features, resulting in better performance. In essence, large quantities of high-quality data lead to better AI.
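To make that concrete, here is a minimal sketch of what “deriving a mapping function” means in practice, written in PyTorch with made-up shapes and a stand-in dataset (GHIC is hypothetical, after all): a small network whose parameters are nudged until its outputs match the labels.

```python
# A neural network is a parameterized function f(x) fit to map inputs
# (e.g., camera-image features) to outputs (e.g., "defect" vs. "ok").
# Feature size, class count, and data here are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(        # f: 512-dim feature vector -> 2 class scores
    nn.Linear(512, 128),
    nn.ReLU(),
    nn.Linear(128, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for a labeled training set: 64 samples with 0/1 labels.
inputs = torch.randn(64, 512)
labels = torch.randint(0, 2, (64,))

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # how far f(x) is from the labels
    loss.backward()                        # gradients w.r.t. f's parameters
    optimizer.step()                       # nudge f toward a better mapping
```

Everything the network learns comes from those input/label pairs, which is exactly why the quantity and quality of labeled data dominate the outcome.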
But how do companies go about producing and preparing this data? Amassing and labeling (or annotating) data is commonly the most time-consuming and expensive step in data preparation. Labeling teaches a system to recognize the categories or objects of interest in the data, and defines the outcome the algorithm should predict once deployed.
In many cases, internal annotation is the only option for enterprises, due to privacy or quality concerns: the data may not be allowed to leave the facility, or it may require extremely accurate tagging by a domain expert.
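For illustration, a single annotation record for one camera frame might look like the sketch below. The schema, file paths, and label names are hypothetical, not a standard format, but they show what an internal expert actually has to produce, frame after frame.

```python
# One hypothetical annotation record for a factory camera frame.
import json

annotation = {
    "image": "line3/cam07/frame_001042.jpg",  # illustrative file path
    "annotator": "qa_inspector_12",           # in-house expert doing the tagging
    "objects": [
        # Bounding boxes as [x1, y1, x2, y2] in pixels, with a quality label.
        {"label": "missing_bolt", "bbox": [412, 188, 460, 231]},
        {"label": "scratch",      "bbox": [90, 300, 158, 342]},
    ],
}

# Appending records like this to a JSONL file builds up the input/output
# pairs that a training loop like the one above consumes.
with open("annotations.jsonl", "a") as f:
    f.write(json.dumps(annotation) + "\n")
```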
Myth No. 2: I Can Easily Hire AI Experts to Build an Internal AI Solution
Once the data is prepared and ready, the next task is to stand up the initial implementation of the AI system. This is exactly where the next set of challenges lies for GHIC. While there is a plethora of AI tools for developers, AI expertise is nearly impossible to find. By some estimates, there are only around 300,000 AI experts worldwide (22,000 of them Ph.D.-qualified).
Clearly, the demand for AI talent far outweighs the supply. Since accelerating AI training is unfeasible (it still takes four years to earn a Ph.D.), the only viable option is to lower the bar to entry by introducing software frameworks that sidestep the need for in-depth knowledge of the field. Otherwise, organizations risk waiting forever to find adequate AI talent.
Myth No. 3: I Have a PoC, Building a Final AI Solution Is Just ‘a Bit More Work’
If GHIC gets to the point of finding the internal or external AI resources to implement a Proof of Concept (PoC), it may assume it is only steps away from deploying a final solution.
The truth is, AI adoption requires a multi-step approach. For many organizations, the first step is a PoC. After many years of working in AI, I have seen countless PoCs fail to make it into production. To prevent wasted time and money, organizations need to set a timeline and determine, in advance, the criteria that will decide whether the tech should go into production. A simple benchmark such as “if the PoC delivers X at Y functionality, then we will launch it here and here” goes a long way toward helping enterprises define an actual deployment scenario.
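One way to make that benchmark real is to write the go/no-go gate down as code before the PoC starts, so nobody can move the goalposts later. The metrics, thresholds, and PoC window below are purely illustrative:

```python
# A sketch of the "if the PoC delivers X at Y, we launch" gate as explicit,
# agreed-in-advance criteria. All numbers here are hypothetical.
from datetime import date, timedelta

CRITERIA = {
    "defect_recall":    0.95,  # must catch at least 95% of real defects
    "false_alarm_rate": 0.05,  # at most 5% of good parts flagged
}
DEADLINE = date.today() + timedelta(days=90)  # e.g., a 90-day PoC window

def go_no_go(measured: dict, today: date) -> bool:
    """Return True only if every criterion is met before the deadline."""
    if today > DEADLINE:
        return False
    return (measured["defect_recall"] >= CRITERIA["defect_recall"]
            and measured["false_alarm_rate"] <= CRITERIA["false_alarm_rate"])

# Example: a PoC catching 96% of defects with 4% false alarms, on time, passes.
print(go_no_go({"defect_recall": 0.96, "false_alarm_rate": 0.04},
               date.today()))  # True -> proceed to production
```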
Myth No. 4: When I Get Good Performance From My AI, I Don’t Need to Touch It Anymore
Let’s assume GHIC gets past all the obstacles above and successfully implements AI. Over the long haul, GHIC will be challenged by newly emerging use cases and changing conditions, and by the need to adapt its AI promptly and inexpensively.
Successful organizations look beyond today and ask how their AI solution can scale in the long run. As AI systems grow more complex, tools for data storage and management, retraining cost and time control, and overall AI lifecycle management are required to ensure an AI project doesn’t become a mess, or worse, ineffective.
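As a rough sketch of what one piece of that lifecycle tooling does, imagine a monitoring hook that compares live predictions against periodic human spot checks and triggers retraining when accuracy drifts below an agreed floor. The floor and the retraining hook are assumptions for illustration:

```python
# Watch production accuracy against human spot checks; retrain on drift.
ACCURACY_FLOOR = 0.90  # hypothetical minimum acceptable accuracy

def monitor(predictions, ground_truth, retrain_fn):
    """Compare model output with spot-checked labels; retrain if it drifts."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    accuracy = correct / len(ground_truth)
    if accuracy < ACCURACY_FLOOR:
        # Conditions changed (new parts, lighting, use cases): adapt the model.
        retrain_fn()
    return accuracy

# Example: 17 of 20 spot checks correct -> 0.85 < 0.90 -> retraining fires.
monitor([1] * 17 + [0] * 3, [1] * 20, retrain_fn=lambda: print("retraining..."))
```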
Beyond AI Myths: AI Is Not a One-Off, It Is Here to Stay
GHIC has learned the hard way that AI is not a simple, one-off project. On the contrary, it can become a long, costly endeavor.
To effectively implement AI, enterprises will need to build internal teams mixing engineering, R&D, and product, teams that work closely together in building, testing, and delivering the application, and that will oversee maintaining and iterating on it in the future.
And new tools are enabling far more organizations to adopt AI. By taking back control of their AI strategy, enterprise teams will be able to swiftly build AI solutions, then deploy and evolve them over the AI lifecycle.