Why the Ethical Use of AI Matters for Your Career

Technology, AI Integration and Business

By Jack McGuire, David De Cremer, Leander De Schutter, and Yorck Hesselbarth

In the contemporary digital era, innovations such as artificial intelligence (AI) are profoundly transforming the business landscape (De Cremer, 2020). The buzz surrounding ChatGPT, coupled with recent assertions about the sentience of Google’s LaMDA, a large language model, underscores the prominence of chatbot technology in these advancements (Adamopoulou & Moussiades, 2020; Ryu & Lee, 2018; Tiku, 2022). Customer-oriented chatbots, an emergent application of this technology, offer unparalleled efficiency and cost-effectiveness, operating ceaselessly and responding to client enquiries in real time (Salesforce Research, 2019). Yet, amidst these advantages lies an ethical conundrum. Customers cherish genuine human interaction and can quickly become disillusioned when they realise they are communicating with a bot, not a person (Ciechanowski, Przegalinska, Magnuski & Gloor, 2019). Balancing this desire for authenticity with the allure of operational efficiency poses a challenge, making it tempting for businesses to deceive customers by blurring the line between human and machine. 

Specifically, organisations are now confronted with a reality in which chatbots demonstrate remarkably human-like qualities (Collins & Ghahramani, 2021; Leviathan & Matias, 2018). This reality makes the choice to cut costs by adopting human-like chatbots a rational one. Yet the choice is not so straightforward. Customers prefer the real thing (i.e., interactions with a human) over the artificial one, so making the rational choice requires organisations to deceive their customers by not disclosing that chatbots are being used. 

But what are the risks when firms use chatbots without disclosure? What happens to the reputation of organisations engaging in these deceptive acts when customers find out what is really going on? And, even more importantly, what happens to the employees working for those organisations? When the deception is discovered, organisations are likely to suffer reputational damage, but will it also tarnish the careers of their employees? Several high-profile tech companies have already faced backlash over the unethical use of emerging technologies. 

Consider the fallout from the Theranos fraud and misconduct scandal. While the company suffered legal and reputational damage, its employees faced a backlash too: several reported difficulties in moving to new jobs, with potential employers associating them with the scandal (Lapowsky, 2021). Because companies carry responsibility for their employees, accountability demands that they understand any potential effects on their employees’ careers before succumbing to the allure of deploying chatbots under a veil of deception. To test whether employees indeed suffer in their career prospects when the organisation they work for engages in deceptive chatbot practices, we conducted several experimental and field studies (McGuire, De Cremer, De Schutter, Hesselbarth, Mai & Van Hiel, 2023). 

The Ripple Effect on Careers 

First of all, our research unsurprisingly finds that organisations employing undisclosed chatbots are perceived as less ethical by customers once found out. Obviously, if you work for an organisation that is seen as unethical in its use of emerging technologies, this will affect your work identity. How, then, does it affect the judgements and subsequent actions of those employees? The Uber scandal involving the suppression of sexual harassment allegations offers some useful insights here. Employees at Uber, even those uninvolved, found that the company’s ethical breaches overshadowed their individual reputations, motivating many of them to resign (Kosoff, 2017). 


To validate this idea, we ran a series of experimental studies in which employees of a simulated company were asked to facilitate deceptive chatbot use. Putting employees in this situation made them more likely to perceive their organisation as cultivating a culture of making unethical requests of its workforce. Because of these perceptions, in turn, those employees reported stronger intentions to quit their jobs. 

So, organisations that deceive their customers by pretending to have humans handle customer enquiries are judged to be unethical by both customers and the employees working for those organisations. As a result, customers show no loyalty to those organisations, and employees want to leave them. But where can those employees go? Are they tainted in the job market? With today’s rapid transmission of information online, a company’s unethical practices can become widely known and thereby shape employees’ professional trajectories. 

To study this phenomenon, we conducted two more studies in which we assessed how those employees are seen by recruiters. Our results showed that employees who had worked for an organisation known to use chatbots deceptively were perceived by recruiters as less trustworthy, were less likely to be offered a job, and were offered a lower salary when they did receive an offer. The deceptive use of chatbots therefore has widespread repercussions: it harms not only the company but also the people who work there. 

The Responsibility of Tech Professionals: A Call to Action

The case is clear. Tech professionals must champion ethical AI use. The broader societal implications of our creations cannot be ignored. Advocating for transparency and ethical guidelines protects both the company’s reputation and your own professional standing. The findings from our research offer two actionable takeaways: 

  1. The role of leaders. Leaders must recognise the lasting harm of deceptive practices. Ethical technology use can bolster company reputation, morale, and customer trust.
  2. The role of employees. Employees should be proactive: voice concerns about unethical technology use, and leave companies that rely on deceptive practices before those deceptions are revealed. These concerns can be raised anonymously, in private with your manager, or publicly in team meetings and town hall sessions; all are useful routes worth considering. 

In conclusion, as AI’s role in business grows, its ethical use is critical. It’s not merely about company profits; it’s about the careers and reputations of those who make up the organisation. Prioritising ethical AI practices isn’t just a business imperative; it’s a career necessity. 

About the Authors

Jack McGuire is a Postdoctoral Research Associate at the D’Amore-McKim School of Business at Northeastern University (Boston). He received his PhD in Management & Organization from the National University of Singapore Business School and his MSc from University College London. Prior to this, he was an experimental lab manager and research assistant at Cambridge Judge Business School, University of Cambridge. Jack’s research examines the psychological consequences of artificial intelligence and its increasing application in the workplace. This work has been published in the Journal of Business Ethics, Computers in Human Behavior, the International Journal of Human–Computer Interaction, and Harvard Business Review, among others. 

David De Cremer is currently the Dunton Family Dean of the D’Amore-McKim School of Business and professor of management and technology at Northeastern University (Boston), and an honorary fellow at Cambridge Judge Business School and St Edmund’s College, University of Cambridge. Before moving to Boston, he was a Provost’s Chair and professor in management at the National University of Singapore and the KPMG endowed professor in management studies at the University of Cambridge. He is the founder and director of the Centre on AI Technology for Humankind (AiTH) in Singapore, which Times Higher Education hailed as an example of interdisciplinary approaches to AI challenges in society. He is one of the most prolific behavioural scientists of his generation and a global thought leader recognised by Thinkers50. He is also a best-selling author; his books include “Leadership by Algorithm: Who Leads and Who Follows in the AI Era?” and, most recently, “The AI-Savvy Leader: 9 Ways to Take Back Control and Make AI Work”, which will be published by Harvard Business Review Press in 2024. 

Leander De Schutter is an assistant professor at Vrije Universiteit Amsterdam, the Netherlands. He is interested in leadership and decision-making in the workplace. 

Yorck Hesselbarth is building foundation models with European values at Nyonic AI, contributing to digital sovereignty on the continent. Previously, he conducted research in the field of human-computer interaction and led several cutting-edge AI projects for the German Armed Forces. 

References 

  • Adamopoulou, E. & Moussiades, L. (2020, June). “An overview of chatbot technology”. In IFIP International Conference on Artificial Intelligence Applications and Innovations (pp. 373-383). Springer, Cham. 
  • Bogost, I. (2022). “Google’s ‘Sentient’ Chatbot Is Our Self-Deceiving Future”. The Atlantic. Retrieved from: https://www.theatlantic.com/technology/archive/2022/06/google-engineer-sentient-ai-chatbot/661273/ 
  • Collins, E. & Ghahramani, Z. (2021, May 18). “LaMDA: our breakthrough conversation technology”. Google Blog. Retrieved from: https://blog.google/technology/ai/lamda/ 
  • De Cremer, D. (2020). Leadership by Algorithm: Who leads and who follows in the AI era. Harriman House. 
  • Kosoff, M. (2017, March 20). “Uber’s President Resigns as Employees Head for the Exits”. Vanity Fair. Retrieved from: https://www.vanityfair.com/news/2017/03/ubers-president-resigns-as-employees-head-for-the-exits 
  • Lapowsky, I. (2021, August 31). “What became of Theranos employees?”. Protocol. Retrieved from: https://www.protocol.com/newsletters/sourcecode/theranos-on-trial 
  • Leviathan, Y. & Matias, Y. (2018, May 8). “Google Duplex: an AI system for accomplishing real-world tasks over the phone”. Retrieved from: https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html 
  • McGuire, J., De Cremer, D., De Schutter, L., Hesselbarth, Y., Mai, K. E. & Van Hiel, A. (2023). “The reputational and ethical consequences of deceptive chatbot use”. Scientific Reports, 13, 16246. 
  • Ryu, H. S. & Lee, J. N. (2018). “Understanding the role of technology in service innovation: Comparison of three theoretical perspectives”. Information & Management, 55(3), 294-307. 
  • Tiku, N. (2022). “The Google engineer who thinks the company’s AI has come to life”. The Washington Post. Retrieved from: https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/ 
The views expressed in this article are those of the authors and do not necessarily reflect the views or policies of The World Financial Review.