The AI technology market is only expected to grow over the coming years: according to IDC, investment in AI is expected to grow at a CAGR of 20.5% over the next five years. Enterprises that cut corners or underinvest in their infrastructure risk their ability to compete in an increasingly digital world. Building an agile, cost-effective environment that delivers on business strategies requires that enterprise IT leadership understand how greatly AI infrastructure and architecture impact performance. “AI capabilities are probably going to be 10%-15% of the entire infrastructure, but the amount the business relies on that infrastructure, the dependence on it, will be much higher,” says Ashish Nadkarni, IDC group vice president and general manager of infrastructure systems, platforms, and technologies. “If that 15% doesn’t behave in the way that is expected, the business will suffer.”
This post outlines a strategy and thought process for determining what AI infrastructure is required to sustainably fuel the enterprise’s growth. Within this strategy, we highlight three basic pillars for deploying AI effectively while mitigating operational challenges: People, Process/Policy, and Technology. In this post, the third of a three-part series, we discuss the Technology pillar. For a holistic understanding of AI strategy, you can also view the previous posts on the People and Process pillars.
Below, you’ll find a framework for building your technology backbone for AI.
Align Business Goals
Evaluate how well AI solutions align with your overall business goals and objectives. Technology should enhance efficiency, productivity, and innovation within the organization. First, gather your business stakeholders from across the enterprise so that every voice is heard on how AI will be used. Ask your group these questions:
- What business problems are you trying to solve?
- Will the solution be internal-facing or customer-facing?
- What type of data will be needed to underpin this AI technology?
- What is the timeframe to have a minimum viable product available?
Diagnose Your Needs
Context matters. Retail needs will differ from manufacturing needs. Factory automation will differ from edge computing. Due diligence and care are needed to find the right infrastructure that delivers optimal performance for the workloads in scope. This is where the Innovation Tiger Team (discussed in our previous post on Process) can be leveraged. The team is responsible for turning business ideas and solutions into a technical reality. As part of its charter, the team will assess (among other things):
- What is the source of data within the organization used to power the AI solution?
- Does the organization need to rethink database structures so that clean, readily accessible data can be used (should the organization consider a data lakehouse)?
- What are the data security concerns?
- Are there legal or ethical concerns around the proposed solution?
- Should the business invest in off-the-shelf technology and build from within, or use a partner to develop the solution?
Assess Return on Investment (ROI)
When assessing ROI, consider four key metrics.
Cost savings:
AI can reduce labor costs and improve operational efficiencies. To measure cost savings, compare the expenses before and after AI implementation, considering factors such as reduced workforce requirements, decreased error rates, and optimized resource allocation.
Revenue increase:
To measure revenue increase, compare sales, customer acquisition, and customer lifetime value before and after AI implementation.
Efficiency gains:
To measure efficiency gains, assess (before AI implementation) and track (after AI implementation) metrics such as process cycle time, task completion time, and resource utilization rates.
Customer satisfaction:
AI plays a significant role in enhancing customer satisfaction. AI-powered chatbots, virtual assistants, and recommendation systems can provide personalized and timely support, improving customer interactions and overall satisfaction.
Measure customer satisfaction through surveys, feedback analysis, and continuous monitoring of customer sentiment across various touchpoints.
Keep in mind that the specific ROI metrics used may vary depending on the industry, business goals, and AI use cases. Regular monitoring and analysis of these metrics will help you optimize your AI strategies, deliver tangible benefits, and maximize the return on your investments.
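Pulling the metrics above together, the basic ROI arithmetic can be sketched in a few lines. Every figure below is an illustrative assumption, not a benchmark from this post or any study:

```python
def roi(total_gains: float, total_investment: float) -> float:
    """Classic ROI: net gain relative to the investment."""
    return (total_gains - total_investment) / total_investment

# Hypothetical annual figures for an AI deployment (assumed):
cost_savings = 250_000      # reduced labor costs and fewer errors
revenue_increase = 400_000  # incremental sales attributed to AI
ai_investment = 500_000     # infrastructure, licenses, training

print(f"ROI: {roi(cost_savings + revenue_increase, ai_investment):.0%}")
```

In practice, each input would itself be a before-versus-after comparison of the metrics described above, tracked over the same period as the investment.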
Identify Additional Metrics for Future-Proof AI Tech Stacks
Scalability
The ability of AI solutions to scale with the growth of the business is a significant factor to consider. To further underscore the need to build a scalable technology stack for AI use, survey data from Omdia reveals approximately 20%-25% of enterprises are scaling AI projects across multiple business units, up from only 7% in 2020. Scalability is determined by a variety of factors, briefly listed below.
Data Quality and Quantity
Data quality and quantity are not independent concerns; they are interconnected factors that can either make or break AI system performance. When looking at data for AI, a balance between quality and quantity must be struck. Remember these key themes:
- Quality data enhances the value of data quantity.
- Use cases matter. The right balance of data quality and quantity depends on the specific use case. AI for autonomous vehicles demands very high data quality, while more routine AI tasks may tolerate smaller data sets.
- Finding the balance is an iterative process. Start with smaller, high-quality data sets to develop the models, then gradually expand the data set while monitoring for decreases in quality or the introduction of bias.
- Data quantity can validate quality. Larger data sets provide a broader spectrum for testing and for validating the accuracy of AI models.
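The start-small-then-expand loop described above can be sketched simply. The completeness-based quality metric and the 0.9 threshold here are illustrative assumptions, not recommendations from the post:

```python
def quality_score(batch):
    """Toy quality metric: fraction of records with no missing fields."""
    complete = sum(1 for rec in batch if all(v is not None for v in rec.values()))
    return complete / len(batch)

def grow_dataset(batches, min_quality=0.9):
    """Start small and expand batch by batch, pausing when quality slips."""
    dataset = []
    for batch in batches:
        if quality_score(batch) < min_quality:
            break  # new data would degrade overall quality; stop and review
        dataset.extend(batch)
    return dataset

clean = [{"x": 1, "y": 2}] * 10                              # fully complete records
noisy = [{"x": 1, "y": None}] * 5 + [{"x": 1, "y": 2}] * 5   # half incomplete
print(len(grow_dataset([clean, clean, noisy])))              # only clean batches kept
```

A production version would use richer quality signals (label accuracy, distribution drift, bias checks), but the iterative structure is the same.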
Architecture Design
Selecting the right architecture means designing the structure, components, and interactions of the system, such as its layers, nodes, and connections. Architecture determines efficiency, flexibility, and compatibility with other data, tasks, and environments.
Infrastructure Optimization and Integration
Consider AI solutions that can seamlessly integrate with your current technology stack without causing disruptions, such as over-complicating your technology stack or compromising AI’s capabilities. This may mean considering technologies being offered by existing vendors.
Algorithm Selection and Improvement
Select algorithms that achieve the desired outcomes and that can employ different learning techniques to continuously improve and scale.
User Experience
Design and optimize the system with the user's needs, preferences, and expectations in mind. Be sure to collect and analyze user feedback to improve the system's usability, accessibility, and personalization.
One more piece of advice, stressed by many practitioners: be precise in your expectations for this project, product, or business goal, and make sure your scalability plans support it. Whether you're introducing an AI solution to a specific market or geography before expanding to other regions, or rolling out an internal company beta, having a clear plan is vital. It allows you to anticipate your data, algorithm, model, and infrastructure needs upfront.
Data Security and Privacy
Given the potential sensitivity of data, businesses are paying close attention to security and privacy features of AI solutions. Compliance with data protection regulations and implementation of robust security measures are paramount.
Ease of Use and User Adoption
User-friendliness and ease of adoption of AI tools are essential. Solutions that require minimal training and have intuitive interfaces are often preferred. One approach is to develop a minimum viable product or beta solution before general rollout. Collecting user feedback before release allows even greater flexibility in anticipating the needs listed above.
Feedback collection and analysis do not stop after beta testing. The strongest solutions continuously fine-tune the application through a commitment to user feedback, which in turn supports better data and algorithms.
Ethical Considerations
With the increasing awareness of ethical AI practices, businesses are evaluating whether AI technologies adhere to ethical guidelines and do not perpetuate biases or discrimination. Businesses should ask the technology company what it does to mitigate bias in its solution. Since humans build these systems, some bias inevitably creeps into the algorithms; that does not mean businesses shouldn’t work to reduce it. Approaches to eliminating as much bias as possible include:
Interview your potential technology partner. As we have seen with Google’s Gemini, the introduction of bias into AI is a very real possibility. There is no blueprint for removing bias from AI completely; managing bias is the more realistic goal.
Ask your potential partner how they manage bias. Do they have policies in place to ensure diversity in their technology, fostering diversity in the information incorporated into their language models? Do they use tools to check for and mitigate biases? Several tools (such as IBM’s AI Fairness 360 toolkit) can check AI models for potential biases and help with the bias remediation process.
Learn what algorithms are used and how the technology is trained. The data that goes into the models determines how smart and efficient the AI system will be. Keep in mind, though, that more data does not necessarily mean smarter AI. In fact, if you are feeding your model too many samples and data sets, you could inadvertently add more bias. The key to ensuring accuracy in the AI system is selecting training data based on quality rather than quantity.
Maintain quality assurance. Monitor the AI in real time to identify a problem or bias as early as possible. AI excellence is an ongoing project you should be prepared to support long-term!
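As a concrete illustration of what fairness tooling checks for, here is a minimal from-scratch sketch of one common metric, the disparate impact ratio (the favorable-outcome rate for an unprivileged group divided by the rate for the privileged group). The sample outcomes and the 0.8 "four-fifths rule" threshold are illustrative assumptions, not part of any specific vendor's toolkit:

```python
def disparate_impact(outcomes_unprivileged, outcomes_privileged):
    """Ratio of favorable-outcome rates (1 = favorable outcome, 0 = not).

    Values far below 1.0 suggest the model disadvantages the
    unprivileged group; the common "four-fifths rule" flags < 0.8.
    """
    rate_u = sum(outcomes_unprivileged) / len(outcomes_unprivileged)
    rate_p = sum(outcomes_privileged) / len(outcomes_privileged)
    return rate_u / rate_p

# Hypothetical loan-approval outcomes for two demographic groups:
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% approved

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact: {ratio:.2f}")  # well below 0.8, so investigate
```

Running a check like this continuously in production, rather than once at launch, is what the quality-assurance point above amounts to in practice.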
Vendor Reputation and Support
Should you choose to engage a third-party provider to design and create your AI solution, thoroughly evaluate your potential partner. This can be done with a vendor scorecard, using the same criteria for each vendor. Example evaluation criteria include:
Expertise and skillset. Look to understand the exact skillsets and fields of expertise that can be offered – such as what tools and programming languages the vendor uses and how strong the provider’s delivery capability is.
Reputation. Interview current and former clients and ask what went well, what did not, and whether the partner delivered what they promised.
Data security. Understand how the potential partner secures data and if they meet your data security policies.
Tech stack. What is the potential partner’s tech stack? Is it scalable?
Case studies. Have the potential partner provide case studies of solutions they have implemented. What business problem was being solved? Is it similar to yours?
Deployment and support. What does the potential partner’s deployment process and post-deployment support look like?
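A vendor scorecard built on the criteria above can be as simple as a weighted average, applied identically to every candidate. The weights and the 1-5 scores below are hypothetical; tune them to your own priorities:

```python
# Illustrative weights for the evaluation criteria listed above (assumed).
CRITERIA_WEIGHTS = {
    "expertise": 0.25,
    "reputation": 0.20,
    "data_security": 0.20,
    "tech_stack": 0.15,
    "case_studies": 0.10,
    "deployment_support": 0.10,
}

def score_vendor(scores: dict) -> float:
    """Weighted average of 1-5 scores; same criteria for every vendor."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Hypothetical scores for two candidate vendors:
vendor_a = {"expertise": 4, "reputation": 5, "data_security": 3,
            "tech_stack": 4, "case_studies": 4, "deployment_support": 3}
vendor_b = {"expertise": 5, "reputation": 3, "data_security": 5,
            "tech_stack": 3, "case_studies": 3, "deployment_support": 4}

for name, scores in [("Vendor A", vendor_a), ("Vendor B", vendor_b)]:
    print(f"{name}: {score_vendor(scores):.2f}")
```

Keeping the weights fixed across vendors is the point of the scorecard: it forces an apples-to-apples comparison rather than a gut-feel ranking.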