
By Luca Collina & Ben Warnes

AI has yet to earn its spurs as a reliable tool in the business locker. So what guidance is available to executives in determining the appropriate level of trust to place in AI systems? To date, regulatory frameworks vary in their approach across jurisdictions.

As artificial intelligence becomes more embedded across sectors, governments are responding with new regulations intended to manage risks while enabling innovation. However, complex and fragmented regulatory approaches could undermine trust in AI among key decision-makers. This analysis compares emerging AI governance laws and regulations in the EU, US, and UK, specifically examining their potential impact on trust for the executives, managers, and workers adopting AI systems.

The EU’s AI Act categorises risks and sets rules to protect rights while enabling innovation. The US has an AI Bill of Rights and an executive order on safe AI, but no comprehensive legislation yet. The UK takes a pro-innovation approach, with guidelines for responsible AI use overseen by existing regulators.

Building Trust in AI

EU: The EU AI Act promotes accountability and transparency in AI. This can help executives trust AI more, since audits verify processes. Managers have duties to monitor systems to ensure progress and compliance. Restrictions on problematic AI protect workers while allowing innovation, although some uses could still undermine rights.

US: Over 90 per cent of AI executives say that AI improves their confidence in decision-making, but others lag behind. Academic research shows that ethics shape trust in AI: companies would use AI more if guidelines existed for fairness, explainability, and privacy. However, common values across industries do not yet exist.

UK: The UK’s rules aim to make companies comfortable using AI that is transparent about how it works and demonstrably fair. However, a tangle of regulations across industries creates confusion that may deter executives from adopting AI. There are worries about the economic impact, too. These pros and cons can either build or erode executives’ trust in AI.

A deeper analysis of the different laws and regulations across these countries and continents needs to focus on how each helps or hinders executives’ trust in AI, and hence its adoption.

Figure 1 AI Governance

Regulations can build leaders’ trust in AI, but balancing risks and progress has trade-offs.

The EU prioritises responsibility at the cost of some innovation. The US enables unfettered development yet breeds uncertainty. The UK sits uneasily between the two, with oversight that is both complex and sparse. Striking the equilibrium that sustains trust requires measured governance for principled AI expansion, not drastic swings between overbearing restriction and minimal accountability, both of which deter adoption. As priorities diverge locally, executives must weigh their own context amid competing aims. Steps that uphold ethical standards, welfare, and technological advancement stand the best chance of motivating cross-regional public investment and leadership buy-in.

The regulations, the trust-related elements they address, and the further actions expected of executives are mapped below.

Figure 2 AI Governance

Why are further actions expected from EU managers?

EU executives must take additional actions to build trust in AI beyond regulations due to two considerations:

The EU AI Act establishes accountability and restrictions to manage risks. However, achieving genuine adoption and confidence from executives requires further cultural leadership and commitment to ethical AI.

While regulations provide an oversight framework, progress depends on executives driving change through active capability building, risk management, and internal governance. Going beyond the rules to instil ethical AI across operations builds authentic trust and accelerates adoption.

Figure 3 AI Governance

A recent piece of positive news relating to the Biden administration’s AI agenda is the launch of initial pilot applications under the National Artificial Intelligence Research Resource (NAIRR), which aims to make AI tools and resources more available, secure, compatible, and accessible for everyone to learn from.1 This can be seen as additional support for management.

Why are further actions expected from US managers?

Top executives already trust AI to improve decisions. The problem lies with employees’ trust: most staff lack confidence in the technology’s fairness and transparency. Without shared ethical guardrails across sectors, uncertainty persists.

Managers must translate high-level AI principles into understandable workplace policies and training. Openly addressing concerns about bias and job loss rather than ignoring them expands trust in AI. Cross-industry collaboration to align core values, cementing transparency and accountability, can give employees confidence that AI will be applied ethically.

Figure 4 AI Governance

The UK’s rules aim to make companies confident in using AI by promoting transparency, accountability, and other trust-building principles. However, regulatory complexity across sectors could reduce this confidence, and there are concerns about economic impacts. On the other hand, the Generative AI Framework for HM Government,2 even though directed at the public sector, provides an additional point of reference to support businesses on adoption and implementation, from make-or-buy decisions to ethics, data, and privacy.

Why are further actions expected from UK managers?

The UK wants people to be comfortable with artificial intelligence by being open about how it works and ensuring that it is fair. But a mass of complicated, industry-specific rules is confusing, and this could make executives wary of adopting AI. While the goals are sound, too much red tape, and the bulk of the rewards flowing to early adopters, may slow things down.

Regulations should not be condemned, nor used to justify further delays in adopting AI on the grounds that they undermine executive trust; many regulations are, in any case, part of a strategic approach to AI adoption.

Thus, managers need to press for simpler regulations in their field. Assessing how the technology will affect workers, and being honest about the findings, counters fears. Taking the lead in spreading AI’s cost-saving benefits more evenly brings everyone along. Removing obstacles in this way helps win wholehearted buy-in across British businesses.

We have discussed some of the new rules that different governments are making for the responsible use of artificial intelligence. These rules help company leaders feel confident about putting the technology to use. However, while regulations set intentions, putting principles into practice presents challenges. Having explored high-level policy impacts, we now turn to additional considerations for responsible AI adoption.

Beyond regional differences, what are the common executive actions to include in an AI systems strategy?

Company leaders can act directly, beyond the law, to make staff and stakeholders trust AI more. Key steps include policies for ethics, training on AI, evaluating job impacts, and engaging with officials about the rules. When leaders visibly care about people through these concrete moves, it shows inside and outside the company that AI helps more than it harms.

  • Set organisational policies and culture focused on ethical AI – Leaders who spell out guidelines for fair AI in a company rulebook demonstrate that they want self-imposed standards, not just quick profits.
  • Provide extensive training and internal guidance resources on AI – Leaders who fund AI-risk training across the whole company show they recognise that, without such learning, AI could harm people by accident.
  • Proactively evaluate workforce impacts and address employee concerns – Leaders who study how AI changes jobs, with care for staff livelihoods, reassure employees that those at the top do not coldly trade their welfare for profits.
  • Collaborate with regulators and other stakeholders on sensible ground rules – Leaders who meet regulators respectfully to shape practical new AI laws build public trust more effectively than those who ignore concerns.

Visible effort on the steps above shows that leaders walk the talk. Commitments shown to be genuine build trust. Staff who know that leaders invest to avoid AI harm value it greatly. Evaluating changes to work with empathy earns loyalty, even while profits still matter. And fair rules forged together dispel suspicion. These concrete moves towards responsibility encourage the public and employees to cheer executives on in using AI to improve lives.

Regulations alone do not ensure executive trust; organisational culture and policies matter.

EU policymakers could further compel internal audits on existing models while requiring accountability for automated decisions. US leaders might demand rapid course-correction abilities if systems display unfair performance deviations post-deployment. UK governance could authorise external ethical inspectors to halt dubious projects based on mounting evidence.

Collaborate across sectors to align on AI best practices guided by shared values. Where specific regulations emerge, view compliance as a floor rather than a ceiling for responsible innovation.

When leaders show care for AI’s impact on people, rather than for profits alone, staff and society reward that care by trusting leaders’ judgement on AI going forward. Responsible foundations, laid with care, open the door to confident progress: leaders gain trust once trust is earned, which in turn boosts their own confidence in AI systems.

“Earn trust, earn trust, earn trust. Then you can worry about the rest.” 3

Responsible AI relies on earning trust through ethical practices and inclusive governance. Organisational adoption hinges on leadership approaches within companies as well.

How executives steer emerging technologies proves critical for stakeholder confidence. The specific leadership styles implemented around AI strategy internally shape acceptance across levels.

Examining common leadership models aids constructive analysis of systemic trust factors.

Leadership Essentials for Successful AI Adoption

Every workplace’s AI journey will differ. What are the key areas for leaders?

  • Share a compelling vision.
  • Enable ongoing learning.
  • Role-model values-based behaviour.
  • Empower accountable decisions.

Aligning AI to Vision

“Greatness is not a function of circumstance. Greatness is largely a matter of conscious choice.” 4

Keep aims bold but diligently embed ethics. Prioritise infrastructure and controls over aggressive AI deployment. Track quantifiable indicators – business returns, algorithmic bias, data quality, and stakeholder sentiment. Rapid innovation should align with unchanging ethics such as fairness and transparency. Major achievements often come through small, steady efforts, such as adding community representatives to boards and iterative data improvements.

Foster Learning Culture and Psychological Safety

“Team learning happens when team members suspend assumptions and enter into genuine thinking together.” 5

Frame AI as an ongoing learning journey with all staff playing roles that uphold ethics and minimise harm. Welcome diverse voices in decisions, supportively respond to challenges, and enable collaborative troubleshooting without blame. Model openness about incomplete AI knowledge. Task cross-functional groups to identify risk controls; support experimenting with audits and oversight. Provide resources for employees to share literacy skills and stay updated.

Leading Ethically

“A leader knows the way, goes the way, and shows the way.” 6

Admit limitations in AI literacy and seek multidisciplinary input to enable debate. Proactively surface potential biases, risks, or inequities in AI systems. Assess sociotechnical challenges and redress harmful impacts when necessary, slowing deployment where needed. Prioritise ethics and people over trends or quick wins. Build connections to gather community perspectives on appropriate AI uses – accountability spans boundaries.

Enable Staff for Governance

“The best executive is the one who has sense enough to pick good men to do what he wants done, and self-restraint to keep from meddling with them while they do it.” 7

Emphasise staff governance roles. Share challenges openly, not only policies. Help individuals become AI-fluent and involve stakeholders in co-creating policies, thereby distributing responsibility. Give teams the authority to halt questionable deployments via accessible feedback systems that enable agile responses. Sustained, transparent communication and inclusive decisions distribute capability across an empowered workforce to uphold ethics.

Conclusions

Leadership commitment to transparency, ethics, and staff empowerment builds trust for AI adoption.

As new rules emerge to oversee AI, leaders must take steps that allow people to trust the technology. Laws alone don’t build trust; company cultures matter, too. When leaders show they care about being responsible with AI, staff and stakeholders gain confidence.

Rules for AI differ by country, but some worries and hopes are the same everywhere – like wanting AI to be fair, looking out for workers, and still seeing progress. Leaders across borders can work together to make balanced guidelines.

Inside companies, leaders need to:

  • Share an inspiring vision for using AI.
  • Encourage ongoing learning about AI.
  • Set a good example of values with AI.
  • Give staff the power to question AI systems.

Figure 5 From Regulations to AI Trust

There is still a lot to figure out on the best ways to balance new AI and accountability. Moving forward calls for balanced new rules, not drastic swings back and forth. As laws take shape, leaders must weigh trade-offs but stick to ethics. With care and wisdom, AI can be trusted. 

About the Authors

Luca Collina’s background is as a management consultant in supply chain and manufacturing in Italy. Since 2012, he has managed transformational projects in the UK, including at international level (Tunisia, China, Malaysia, Russia). He now helps companies understand how generative AI technology impacts business, use the technology wisely, and avoid problems. He has an MBA in Consulting, has received academic awards for his research, and is a published author. Thinkers360 named him one of its Top Voices, globally and in EMEA, in 2023. Luca continuously updates his knowledge through experience and research in order to pass it on. He recently developed interactive courses on “AI & Business” and “Human Centric with AI”.

After earning a postgraduate degree in marketing in 1995, Ben Warnes worked in corporate communications before co-founding an NYC design-and-build firm in 1999. Returning to the UK, he worked as a communications consultant for various blue-chip companies. Moving into construction, he created his own high-end design-and-build construction company, attaining MCIOB status and managing over £150 million in construction projects. Ben now runs a property investment fund. In 2020, Ben founded LMA Coaching to pursue his passion, drawing on 25+ years of experience and insight. He specialises in helping leaders foster purpose-driven cultures centred on psychological safety and intrinsic motivation. Ben holds an MBA in Leadership Management. His research on remote work guides organisations in flexible working models.

References

  1. “Joe Biden’s big AI science project gets pledges from Microsoft, Nvidia, and others”, The Verge, 24 January 2024; https://www.theverge.com/2024/1/24/24049467/national-science-foundation-ai-research-biden-eo
  2. Generative AI Framework for HM Government; https://assets.publishing.service.gov.uk/media/65a806bf94c997000daeb98e/6.8558_CO_Generative_AI_Framework_Report_v7_WEB.pdf
  3. Seth Godin; https://seths.blog/2014/02/the-most-important-question/
  4. Jim Collins, Good to Great
  5. Peter Senge, The Fifth Discipline: The Art and Practice of the Learning Organization
  6. John C. Maxwell
  7. Theodore Roosevelt