In these uncertain times, the demand for psychological services is at an unprecedented level. While the emergence of artificial intelligence offers great promise in addressing the public’s needs, we highlight the increasing disruption to the nature and delivery of psychological practice, which positions the profession at a critical juncture.
The last three years have placed enormous pressures on communities: climate-related disasters, the COVID-19 pandemic and its resultant social and economic turmoil, and even the threat of war escalating to unthinkable exchanges of weapons of mass destruction. With this assault on our lives has come a renewed appreciation of psychology as a discipline. Psychologists across their many areas of practice are applying their skills to diverse, impactful problems, from alleviating the stresses and strains that now beset ordinary lives to developing selection and training systems that enable the upscaling of national defence forces. Consequently, the demand for such services, and for the skilled practitioners and researchers who provide them, is at an unprecedented level. Fortunately, emerging technologies are primed to help address any shortfall.
Despite initial projections that the impact of automation-enabling technologies on the psychology profession would be rather modest (economists Frey and Osborne (2013) famously estimated that psychologists face a probability of automation of just .0043), technological disruption in the profession has already been substantial. For instance, the automation of aspects of psychological assessment has allowed the profession to take enormous strides in the efficient appraisal of clients in numerous contexts (e.g., therapeutic, organisational). Many psychological assessment methods, including tests, surveys and even interviews, are now largely delivered online and unproctored, with data collection, scoring, interpretation and report-writing all automated. More recently, machine learning (ML) techniques have extended our capacity to draw inferences about clients from more ‘organic’ sources of data that obviate the need for a human assessor (Groves, 2011). For instance, information drawn from social media may provide insights into clients’ depression (De Choudhury et al., 2013), while the text mining of narrative comments can predict aspects of workers’ job performance (e.g., performance ratings, promotions) beyond the capacity of traditional rating measures (Speer, 2018). Concerns that ML may introduce bias that prevents appropriate modelling of behaviour are being addressed, and new models are emerging (D’Mello et al., 2022; Jacobucci & Grimm, 2020).
Even the delivery of psychological interventions is not off-limits, with ‘bots’ administering therapies such as Cognitive Behavioural Therapy (CBT) to alleviate clients’ depression and anxiety (Vaidyam et al., 2019). Other forms of therapist-guided internet interventions are also showing promise (Titov et al., 2015). Taken together, these advancements bring into focus the apparently ‘shrinking’ role of the human psychologist in the delivery of psychological services, and of the researchers who develop and test psychological theories, theories that may yet prove redundant in the age of Artificial Intelligence (AI) and ML (see recent discussions of a ‘post-theory’ science; Spinney, 2022).
The effects of automated technologies that can replicate, and even exceed, the cognitive capacities of human workers are now abundantly clear. Such machines circumvent the well-established biases that plague human judgement, biases too often ignored by practitioners and the professions at large. They also provide an antidote to the variability in competency observed among experienced practitioners (e.g., Vollmer et al., 2013). The development of deep learning algorithms and the emergence of predictive analytic systems (Rahwan et al., 2019; Sadler & Regan, 2019; Sejnowski, 2018), with access to big data and the mass of information available in the psychological and related literature, point to a more comprehensive and arguably safer application of the evidence base. Ultimately, such advances may signal the restructuring of the psychology employment market, which has already been greatly affected by the onset of the COVID-19 pandemic.
Many of those who argue against the adoption of technologically dependent methodologies in the delivery of psychological assessments and interventions point to the likelihood that clients will prefer human, face-to-face interactions. The massive adoption of telehealth and wellness apps in the post-COVID world, however, has drastically reduced face-to-face interactions and provides further evidence that clients will readily engage with technology-mediated psychological services. Recent data from Australia (Oracle, 2020) reveal a preference among workers for technology-mediated counselling, with 68% reporting that they would rather talk to a robot than to their manager about stress and anxiety at work. This preference extends beyond mental health to other areas commonly serviced by psychologists, such as career progression, with 82% of respondents believing that robots can support their careers better than humans can (Oracle, 2021).
As clients become increasingly accustomed to these forms of service delivery, and as the technology continues to improve rapidly, client approval is likely to increase further. At the very least, the improved efficiencies will increase the number of clients that can be managed by a single human psychologist, but we cannot rule out the mass displacement of practitioners in the future. Although such ‘Doomsday’ projections may be premature for now, fields such as psychology already show the hallmarks of mass disruption to the nature and delivery of services. Such technology has the power to greatly alleviate the pressures placed on both members of the community and those of a highly valued profession, offering a way to keep step with the demands of an increasingly volatile and uncertain world.
Whilst practitioners’ employment is assured for now, the profession is rapidly approaching a judgement day of sorts: a time when we must establish which parts of a psychologist’s role must remain exclusively human. For instance, human empathy is invariably cited as both a critical capacity of psychologists and one presumed to fall outside the capabilities of existing forms of ‘narrow’ AI. However, the mass migration of psychologists and their clients to highly systematised and automated forms of assessment and intervention (e.g., CBT; Leichsenring & Steinert, 2017) means that the expected value of empathy among future psychologists is likely to have been significantly overstated.
Identifying which components of psychological practice are genuinely human-centred, and how large those components are, will shape the identity of the profession in the coming years. The education and training of psychologists need to be examined and restructured to take account of the technological changes that are occurring and to recognise changing preferences among the general community. We have previously pointed to the possibility of advanced technologies such as AI having a significant impact upon the profession of psychology (e.g., Innes et al., 2022; Innes & Morrison, 2021a, 2021b, 2021d; Innes & Morrison, 2017), but there has been little response from the profession to date.
We acknowledge that there are legitimate concerns about conceptual and methodological issues in the development of AI. For instance, we contributed to the debate about the validity of research into human-robot interaction (Innes & Morrison, 2021c). But we also realise the need to respond to the positive developments in AI that will undoubtedly transform the nature of psychological work. Indeed, we believe that there exists an extraordinary opportunity to both elevate and futureproof psychology as a human profession. In opening our eyes to the rise of the machines, we can best shape the trajectory of our expertise in a way that maximises our potential to help our clients.
About the Authors
Ben W Morrison is a Senior Lecturer in the School of Psychological Sciences at Macquarie University, Australia. He is an organisational psychologist with interests relating to expertise development and the impacts of emerging technologies, especially artificial intelligence. He has a particular interest in applied research, and as such regularly undertakes research projects within real-world work settings or simulations of such environments.
J Michael Innes is an Adjunct Research Professor at the EU Jean Monnet Centre of Excellence, University of South Australia. He has taught at the Universities of Edinburgh, Michigan, and Adelaide. He is a social psychologist, with interests in the psychology of social influence, group dynamics, psychological methodology and the social consequences of technological development, especially artificial intelligence.
References
- D’Mello, S. K., Tay, L., & Southwell, R. (2022). Psychological measurement in the information age: Machine-learned computational models. Current Directions in Psychological Science, 31(1), 76–87. https://doi.org/10.1177/09637214211056906
- De Choudhury, M., Gamon, M., Counts, S., & Horvitz, E. (2013). Predicting depression via social media. Proceedings of the International AAAI Conference on Web and Social Media, 7(1), 128–137. https://ojs.aaai.org/index.php/ICWSM/article/view/14432
- Frey, C. B., & Osborne, M. A. (2013). The future of employment: How susceptible are jobs to computerisation? https://www.oxfordmartin.ox.ac.uk/downloads/academic/future-of-employment.pdf
- Groves, R. M. (2011). Three eras of survey research. Public Opinion Quarterly, 75(5), 861–871. https://doi.org/10.1093/poq/nfr057
- Innes, J. M., & Morrison, B. W. (2017). Projecting the future impact of advanced technologies: Will a robot take my job? InPsych, 39(2), 34–35. https://www.psychology.org.au/inpsych/2017/april/innes
- Innes, J. M., & Morrison, B. W. (2021a). Australian psychology in a post-pandemic world: The future of education, regulation and technology. InPsych, 42(6). https://www.psychology.org.au/for-members/publications/inpsych/2020/Dec-Jan-Issue-6/By-Professor-John-Michael-Innes-FAPS1-and-Dr-Ben-W
- Innes, J. M., & Morrison, B. W. (2021b). Artificial intelligence and psychology. In A. Elliott (Ed.), The Routledge social science handbook of AI (pp. 30–57). Routledge.
- Innes, J. M., & Morrison, B. W. (2021c). Experimental studies of human–robot interaction: Threats to valid interpretation from methodological constraints associated with experimental manipulations. International Journal of Social Robotics, 13(4), 765–773. https://doi.org/10.1007/s12369-020-00671-8
- Innes, J. M., & Morrison, B. W. (2021d). Machines can do most of a psychologist’s job. The industry must prepare for disruption. The Conversation. https://theconversation.com/machines-can-do-most-of-a-psychologists-job-the-industry-must-prepare-for-disruption-154064
- Innes, C., Innes, J. M., & Morrison, B. W. (2022). Social representation of the profession of psychology and the application of artificial intelligence: European Union regulatory authority and the application of psychology as a paradigm for the future. Australian and New Zealand Journal of European Studies, 14(1), 2-17. https://openjournals.library.usyd.edu.au/index.php/ANZJES/article/view/15852
- Jacobucci, R., & Grimm, K. J. (2020). Machine Learning and Psychological Research: The Unexplored Effect of Measurement. Perspectives on Psychological Science, 15(3), 809–816. https://doi.org/10.1177/1745691620902467
- Leichsenring, F., & Steinert, C. (2017). Is Cognitive Behavioral Therapy the Gold Standard for Psychotherapy?: The Need for Plurality in Treatment and Research. JAMA, 318(14), 1323–1324. https://doi.org/10.1001/jama.2017.13737
- Oracle. (2020). AI@Work Study 2020. https://www.oracle.com/a/ocom/docs/oracle-hcm-ai-at-work.pdf
- Oracle. (2021). Back in the Driver’s Seat: Employees Use Tech to Regain Control. https://www.oracle.com/au/human-capital-management/ai-at-work/
- Rahwan, I., Cebrian, M., Obradovich, N., et al. (2019). Machine behaviour. Nature, 568, 477–486. https://doi.org/10.1038/s41586-019-1138-y
- Sadler, M., & Regan, N. (2019). Game changer: AlphaZero’s groundbreaking chess strategies and the promise of AI. New in Chess.
- Sejnowski, T. J. (2018). The deep learning revolution: Artificial intelligence meets human intelligence. MIT Press.
- Speer, A. B. (2018). Quantifying with words: An investigation of the validity of narrative-derived performance scores. Personnel Psychology, 71, 299–333. https://doi.org/10.1111/peps.12263
- Spinney, L. (2022, January 9). Are we witnessing the dawn of post-theory science? The Guardian. https://www.theguardian.com/technology/2022/jan/09/are-we-witnessing-the-dawn-of-post-theory-science
- Titov, N., Dear, B. F., Ali, S., Zou, J. B., Lorian, C. N., Johnston, L., Terides, M. D., Kayrouz, R., Klein, B., Gandy, M., & Fogliati, V. J. (2015). Clinical and cost-effectiveness of therapist-guided internet-delivered cognitive behavior therapy for older adults with symptoms of depression: A randomized controlled trial. Behavior Therapy, 46(2), 193–205. https://doi.org/10.1016/j.beth.2014.09.008
- Vaidyam, A. N., Wisniewski, H., Halamka, J. D., Kashavan, M. S., & Torous, J. B. (2019). Chatbots and conversational agents in mental health: A review of the psychiatric landscape. Canadian Journal of Psychiatry, 64(7), 456–464. https://doi.org/10.1177/0706743719828977
- Vollmer, S., Spada, H., Caspar, F., & Burri, S. (2013). Expertise in clinical psychology. The effects of university training and practical experience on expertise in clinical psychology. Frontiers in Psychology, 4, 141. https://doi.org/10.3389/fpsyg.2013.00141