By Luca Collina
Surges of high expectations (hype) followed by disillusionment are a familiar pattern in technological advancement. Classic cases include the dot-com bubble and, more recently, electric vehicles. With artificial intelligence capable of taking us into another deeply transformative phase, it is prudent to reflect on the past to guide the future. This article investigates whether the lessons of those past hypes have been learned, and whether they call for extra considerations in today's AI landscape. It starts from a RAG (Red-Amber-Green) risk evaluation and then details practical tactics and strategies for both large corporations and SMEs (small and medium enterprises) to develop a sustainable AI strategy.
Elements Related to AI for Each Hype Scenario, from the Dot-com Bubble and EVs
From the table above, I will share the strategies and tactics that large companies/corporations and small and medium enterprises (SMEs) should implement and execute to attenuate the impacts and risks emerging from each element and attain sustainability for their business.
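To make the framework concrete, here is a minimal, purely illustrative Python sketch that encodes the three elements rated in this article together with their RAG statuses. The review-cadence values are hypothetical assumptions added for illustration only; they are not recommendations from the sources cited here.

```python
# Illustrative sketch: the RAG ratings below come from the sections of this
# article; the review cadence per rating is a hypothetical example of how an
# organization might turn a rating into an action rhythm.

from dataclasses import dataclass


@dataclass
class RiskElement:
    name: str
    rating: str  # "RED", "AMBER", "AMBER/GREEN" or "GREEN"


ELEMENTS = [
    RiskElement("Hype and investment", "AMBER"),
    RiskElement("Technological maturity", "AMBER/GREEN"),
    RiskElement("Market dynamics, ethics and public perception", "RED"),
]

# Hypothetical review cadence (in months) per rating; adjust to your own risk appetite.
REVIEW_CYCLE_MONTHS = {"RED": 1, "AMBER": 3, "AMBER/GREEN": 6, "GREEN": 12}

for element in ELEMENTS:
    months = REVIEW_CYCLE_MONTHS[element.rating]
    print(f"{element.name}: rated {element.rating}, review every {months} month(s)")
```

In practice, the list of elements and the cadence attached to each rating would come from the organization's own risk register and risk appetite rather than from a fixed table like this one.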
Hype and Investment: High valuations and speculative investments lead to market bubbles – AMBER
Consider the very recent investment shifts among core producers. Elon Musk redirected microprocessors away from electric vehicles to support the development of AI systems. More recently, Microsoft's investment in OpenAI shows that company's movement toward AI and cloud services, such as adding AI capabilities to the Azure cloud and releasing small language models (SLMs) suited to SMEs. Alphabet, the parent company of Google, is investing considerable amounts in AI both for driverless cars and for its broader AI research.
All of this investment aims to create tools and services that can be delivered to businesses.
There is another phenomenon related to investment in AI startups: the hype-driven investment landscape has raised concerns. Many AI startups may be overvalued, which puts the sustainability of these investments in question. The high costs of training advanced AI models, which can reach tens of millions of dollars, compound these challenges and have shifted attention toward companies that can offer more efficient AI solutions or supply the infrastructure needed for AI development. As for M&A, the out-of-the-hype question is whether, and how, these integrations can work for businesses, without forgetting that such activities also affect the training plans needed to upskill or reskill the workforce, including managers.
What strategies can companies put into action to reduce the amber and avoid the red risks?
Large businesses and corporations: conduct a granular due diligence process before investing in AI technologies, covering technology maturity, the integration status of start-ups, possible ROI, and strategic fit. Create an internal AI review board with technical and financial expertise to vet potential investments. Balanced investment: diversify AI investments to balance high-risk speculative ventures with stable, mature technologies; the AI budget should reserve a defined percentage for experimental projects and put the rest into established solutions.
SMEs – Selective investments/expenditures: focus only on AI solutions that offer immediate application.
Technological Maturity: AI development should avoid untested assumptions and ensure technological maturity before deployment – AMBER/GREEN
Large companies/corporations: the right strategy is to implement pilot programs that test AI solutions on a smaller scale before full deployment, within a specific department or function, to evaluate performance and gather feedback.
Large companies and SMEs: vendor validation means working with reputable AI vendors who can provide proof of concept and case studies from major industries. Guidance under different regulatory regimes (US, UK, EU) offers plenty of suggestions on how to manage the procurement phase. The basic one is to request detailed demos and specific customer references for the features you want to test. If these are not yet available, remember that you are not there to contribute your own resources and time to make the solution viable for the sake of innovation.
Agile, incremental implementation followed by scale-up suits organizations of both sizes.
Focus Point 1 – Non-critical or critical areas?
Non-Critical Areas: Most experts argue that the first place to apply AI in a business is in non-critical areas, meaning areas not considered high risk (customer service, marketing analysis, and administrative tasks), because if anything goes wrong it will not impact core processes or reputation (Sage, 2023). This enables companies to adopt and test AI features and learn how best to continue the adoption into critical areas. Low risks, high-hanging fruit, slower process.
Critical Areas: AI is expected to show value quickly by supporting critical business processes. This strategy targets only those areas where substantive improvements in efficiency and accuracy can be made at a cost saving while delivering value. On the other hand, it carries high risk because, in essential areas, mistakes count. The key is to ensure from the beginning that AI systems are robust, reliable, and compliant with all rules (PwC, 2024). High risks, low-hanging fruit, faster process.
Focus Point 2 – Efficacy or popularity…? Which one drives choices?
I have rated this focus point AMBER/GREEN only because there are other methodologies (and trends) to consider, each with different effectiveness and popularity criteria (often the result of hype). They are also subject to companies' and leaders' "risk appetite".
Market Dynamics, Ethical Practices, and Public Perception (Stakeholders) – RED
Large companies/corporations and SMEs: develop and implement AI ethics policies that conform to industry regulations and standards. Governance committees create intermediate functions that can monitor adherence to ethical standards for data and algorithms, as already suggested; in addition to governance, accountability rules and audits involving different internal and external roles help guarantee ethical practice and thus sustainability. For SMEs, compliance checks and risk assessments can be performed with the support of local bodies, as happened with privacy laws.
Regular reviews and updates ensure that risks are identified and mitigated across the organization, including vendor compliance. Still, companies are behind schedule in taking up governance and accountability matters (Farnham, 2023).
An example of governance and accountability is the one we proposed (Collina, Sayyadi, and Provitera, 2024) as part of the "Data Quality for Decision-Making Processes Funnel."
Negative effects of hype on public perception: transparent communication with customers and stakeholders about how AI is being used within the business is an effective way to handle fear and uncertainty, and to avoid the "disillusionment" that comes from a mismatch between expectations and reality.
Conclusions
Going forward, the AI market is likely to enter a phase of diminishing growth as development costs mount and the regulatory eye settles on this new opportunity (Gartner, 2024; Future Processing, 2024). Safeguards regarding data privacy, transparency in algorithms, and regard for ethics may render compliance quite costly and operationally burdensome. The high cost of training and maintaining large AI models could also become a barrier to entry for SMEs and start-ups on the way to sustainable innovation and market penetration (Campos Zabala, 2023). Handling this level of complexity requires specific strategies, at the level of large corporations and of SMEs alike, to counteract the risks of hype, manage technological maturity, and handle market dynamics and public perception. The most prominent organizations can use their resources for thorough due diligence, piloting at scale, and active involvement with academic institutions. SMEs must make selective investments, be rigorous in the vendor validation process, and deploy AI incrementally. Finally, ethical guidelines, adherence to regulations, and transparency should underpin a sustainable and responsible practice of AI.
Only when businesses learn from the past and adapt to current challenges will they be able to harness the full potential of AI while mitigating its inherent risks.
About the Author
Luca Collina is a transformational and AI business consultant at TRANSFORAGE TCA LTD. York St John University awarded him the Business Postgraduate Programme Prize, and CMCE (Centre for Management Consulting Excellence, UK) awarded his paper its Technology and Consulting Research Prize. He is an author and external collaborator of CMCE.
References:
- Sage, 2023. Generative AI in 7 Easy Steps: A Practical Business Guide. Sage Advice US. Available at: https://www.sage.com/en-us/blog/generative-ai-in-7-easy-steps-a-practical-business-guide/ [Accessed 25 June 2024].
- PwC, 2024. 2024 AI Business Predictions. Available at: https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html [Accessed 25 June 2024].
- Farnham, K., 2023. Top corporate governance trends for 2024 & beyond. Diligent. Available at: https://www.diligent.com/resources/blog/corporate-governance-trends [Accessed 25 June 2024].
- Collina, L., Sayyadi, M. and Provitera, M., 2024. The New Data Management Model: Effective Data Management for AI Systems. California Management Review, March.
- Gartner, 2024. 3 Bold and Actionable Predictions for the Future of GenAI. Available at: https://www.gartner.com/en/articles/3-bold-and-actionable-predictions-for-the-future-of-genai [Accessed 25 June 2024].
- Future Processing, 2024. AI Pricing: Is AI Expensive? Available at: https://www.future-processing.com/blog/ai-pricing-is-ai-expensive/ [Accessed 25 June 2024].
- Campos Zabala, F.J., 2023. Responsible AI: Understanding the Ethical and Regulatory Implications of AI. In: Grow Your Business with AI. Apress, Berkeley, CA.