Artificial intelligence (AI) is increasingly seen as a must-have technology that enables businesses to become agile and innovate at scale. IDC predicts that global spending on AI systems will more than double, from US $50 billion in 2020 to US $110 billion in 2024.
But Gartner research estimates that 50 percent of AI implementations struggle to get past the proof-of-concept stage and reach implementation at scale. The reasons range from overhyped expectations and lack of vision to inadequate data infrastructure and a shortage of skilled resources.
Another important factor is the team that’s working on the AI programs. While AI teams may have the requisite tools and technologies, many lack other key capabilities – like mining for the right use cases and optimizing decision-making – that are essential for success.
Successful AI teams that work at enterprise scale share the following traits:
1. They frame the problem well
Teams need to sift through the complexities of a situation and frame the core problem accurately before they can arrive at the right solution. This means playing the role of translator, bridging the gap between the technology and the business case. It involves diving deep into data to make unexpected connections and find insights that shine a brighter light on the problem.
Along with understanding data and algorithms, successful teams also exhibit empathy for customers and other users, which helps in solving problems holistically. They are creative and curious; they look at the world from an exploratory perspective and are unafraid to challenge the status quo. These traits enable them to constantly consider how their work impacts the business that they are innovating for.
2. They think enterprise-scale right from the start
In most instances, AI pilot programs show promising results but then fail to scale. In Accenture surveys, 84 percent of C-suite executives acknowledge that scaling AI is important for future growth, yet a whopping 76 percent admit that they are struggling to do so.
The only way to realize the full potential of AI is by scaling it across the enterprise. Unfortunately, some AI teams think only in terms of executing a workable prototype to establish proof of concept, or at best transforming a single department or function.
Teams that think enterprise-scale at the design stage can successfully go from pilot to enterprise-scale production. They often build and work on MLOps platforms to standardize the ML lifecycle and create a factory line for data preparation, cataloguing, model management, AI assurance, and more.
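To make the "factory line" idea concrete, here is a minimal sketch of a standardized ML lifecycle: named stages (data preparation, training, validation) run in order, and each run produces a lineage trail and a catalogue entry. This is an illustration only, not any vendor's actual platform; all class, stage, and registry names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ModelRecord:
    """Catalogue entry for a trained model (hypothetical schema)."""
    name: str
    version: int
    metrics: dict
    lineage: list = field(default_factory=list)

class MLPipeline:
    """A factory line of named stages: prepare -> train -> validate."""

    def __init__(self):
        self.stages = []  # list of (stage_name, callable) pairs

    def add_stage(self, name: str, fn: Callable[[Any], Any]) -> "MLPipeline":
        self.stages.append((name, fn))
        return self

    def run(self, payload: Any):
        lineage = []
        for name, fn in self.stages:
            payload = fn(payload)
            lineage.append(name)  # audit trail for AI assurance
        return payload, lineage

# Example: "train" a trivial mean model on cleaned-up numbers.
pipeline = (
    MLPipeline()
    .add_stage("prepare", lambda xs: [x for x in xs if x is not None])
    .add_stage("train", lambda xs: {"model": sum(xs) / len(xs)})
    .add_stage("validate", lambda m: {**m, "metric_ok": True})
)
result, lineage = pipeline.run([1.0, None, 3.0])

# Register the run so it is discoverable and reproducible later.
registry = {("mean-model", 1): ModelRecord("mean-model", 1, result, lineage)}
```

The point of the sketch is the shape, not the math: every model follows the same stage sequence, and the lineage plus registry entry is what lets teams reproduce, compare, and govern models at enterprise scale.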
3. They democratize AI and are diverse
AI technologies demand huge compute and storage capacities, which often only large, sophisticated organizations can afford. Because resources are limited, access to AI is restricted to a privileged few in most companies. This compromises performance because fewer minds mean fewer ideas, fewer identified problems, and fewer innovations. In fact, the more diverse the team, the better it is at uncovering problems and making data connections.
At Infosys, we have addressed this by leveraging an AI cloud as a strategic platform for scaling computing resources and sharing knowledge to make AI accessible to all. We’ve also added diverse roles and skills within the AI team – not just technical like data scientists, data engineers, and machine learning experts, but also those with business domain, product management, user interface design, and software engineering skills.
With the AI cloud, we can now build larger and more capable pools of AI expertise: scaling the computing resources lets us include more of our workforce in our AI programs and build more mission-critical AI applications. Ultimately, democratizing AI leads to better project outcomes.
4. They keenly appreciate the ethics of AI
Finding use cases, building AI systems at enterprise scale, and democratizing adoption is but half the battle. Managing the ethical dimensions of AI implementations is serious business that involves input from regulators and policymakers too. The AI team must understand what it takes to work within the framework of regulatory compliance. They need to implement strong, auditable risk management practices throughout AI development, validation, and monitoring to build unbiased, interpretable, accountable, and reproducible AI systems that deliver business outcomes that are fair and transparent.
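One small, hedged illustration of what "auditable" can mean in practice: recording every prediction with its inputs, output, model version, and a tamper-evident digest, so that individual decisions can be reviewed and reproduced later. This is a sketch, not a compliance framework; the function and field names are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

audit_log = []  # in production this would be durable, append-only storage

def audited_predict(model_fn, model_version, features):
    """Run a prediction and record an auditable trace of the decision."""
    output = model_fn(features)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "output": output,
    }
    # The digest covers the whole record; any later tampering changes it.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(record)
    return output

# Hypothetical scoring model: approve when the score exceeds a threshold.
decision = audited_predict(lambda f: f["score"] > 0.5, "v1.2", {"score": 0.7})
```

A real monitoring setup would layer bias checks and drift detection on top of such records, but the core requirement is the same: no decision without a reviewable trace.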
Truly, AI is as much about people as it is about programming.