
The impact of AI on employees in the healthcare and life sciences sector appears to be underestimated by many in the industry, despite its undeniable disruptive potential. Organisations must do more to embrace AI and ensure its proper integration, linking adoption and use to clear business outcomes. In this article, we take a deep dive into key insights from Hunton Executive’s Future of Work in Healthcare & Life Sciences report.
A recent report from Hunton Executive, The Future of Work in Healthcare & Life Sciences, revealed an interesting contradiction when it comes to perceptions around AI’s impact in the workplace.
On the one hand, just 9.2% of respondents said they were concerned about AI’s potential to replace people. Somewhat paradoxically, however, when asked about the biggest impacts on the workforce in the next decade, they placed AI and automation firmly at the top of the list.
According to Amy Pillay, Executive Director of Health, MedTech and Life Sciences at Accenture, this suggests that many people lack a deep understanding of AI’s full potential as a disruptor, and that they may also not understand the urgency around the need to enhance their skills in working alongside this new technology.
“While AI’s potential to automate tasks and augment human capabilities is widely recognised, many people are not equating this with a risk of job displacement for several reasons,” she says. One reason is that organisations often frame AI as a tool to augment rather than replace human capabilities; an approach that may be calculated to ease anxieties around AI’s threat potential. While this is understandable, it’s vital that organisations remain aware of the very real dangers AI could pose to the workforce if not effectively harnessed.
Amy believes AI will be a springboard or enabler in fields like data science and ethics. Beyond automating routine tasks such as administration, data entry, and certain manufacturing roles, it also has the potential to extend its footprint well into areas traditionally thought of as non-automatable. This creates an urgent need to protect human capital by prioritising reskilling and adaptation within a model that accounts for that wider reach. It is no longer enough to shift human activity into areas wrongly assumed to be beyond AI’s reach; instead, workforce adaptation should centre on the ability to collaborate with and use AI tools.
While many respondents arguably underrated AI as a near-term threat, when asked about impacts on the workplace over the next decade they placed AI and automation firmly at the top of the list. This aligns with data from Accenture showing that generative AI has the potential to transform language-based tasks, which make up around 40% of the industry’s total workload. Of that total, 17% of tasks can be fully automated, while a further 23% can be augmented to improve human productivity. Technologies like Large Language Models can automate language-heavy tasks that currently consume 10% to 30% of clinical workforce time[i].
If we break down these numbers, it’s easy to see just how high the risk of job displacement actually is, especially given the speed at which significantly improved generations of AI models are being launched. Without a solid understanding of AI’s actual capabilities, especially in areas traditionally thought of as ‘safe’ from replacement, companies and individuals run the risk of being blindsided.
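To make the break-down concrete, here is a minimal back-of-the-envelope sketch of the Accenture figures cited above. The 30% productivity lift from augmentation is an illustrative assumption of ours, not a figure from the report.

```python
# Figures cited from the Accenture data: language-based tasks make up
# ~40% of total workload, of which 17 points are fully automatable
# and 23 points are augmentable.
language_tasks = 0.40      # share of total workload that is language-based
fully_automatable = 0.17   # share of TOTAL workload (subset of language tasks)
augmentable = 0.23         # share of TOTAL workload (subset of language tasks)

# Sanity check: the two sub-shares account for all language-based work.
assert abs(fully_automatable + augmentable - language_tasks) < 1e-9

# Illustrative assumption (not from the report): augmentation lifts
# productivity on augmentable tasks by 30%.
productivity_lift = 0.30
capacity_freed = fully_automatable + augmentable * productivity_lift
print(f"Capacity equivalent freed: {capacity_freed:.0%} of total workload")
```

Even under this conservative assumption, nearly a quarter of total workload capacity is freed up, which illustrates why the displacement risk is larger than the 9.2% concern figure suggests.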
A good example of an entirely unanticipated development in AI is its impact on psychotherapy. As generative models became better at processing complex human emotions, there was an explosion in the number of individuals using ChatGPT and other AI products for psychotherapy or counselling. A study published by the National Institutes of Health[ii] found that AI chatbots have a significant potential role to play in future mental health services, representing an innovative solution to supply and demand problems.
While this is arguably a positive outcome, it’s worth examining the phenomenon more closely. In a field that seemed inherently dependent on humans, factors beyond anyone’s control pushed major health institutions to formally classify the ways AI might substitute for human therapists. If AI can fundamentally disrupt something so apparently dependent on human-to-human interaction, we would all be well advised to recalibrate our risk calculations accordingly.
The surprising reach and impact of AI isn’t confined to isolated cases either. Alex Lee, Accenture Australia’s Data and AI lead for Life Sciences, says there is a major gap in the understanding of AI’s ubiquity. Whether it’s checking the weather, touching up photos, or filtering email spam, AI is now a key enabler in so many of our daily activities. The fact that it’s embedded in trusted tools, however, can mask the extent of the technology’s spread, and muddy our perceptions of its potential to disrupt or perhaps entirely transform whole areas of future work.
Despite AI’s clear potential to transform the world of work, several factors are slowing its adoption and integration. While this may moderate the pace of change, it also creates a significant risk that companies that have delayed adoption will be caught flat-footed, surrounded and outclassed by AI-enabled competitors.
Alex from Accenture identifies several common challenges to AI adoption in healthcare and life sciences that companies should address:
Alex points out that the real challenge of AI, however, lies in translating abstract concepts into practical, measurable value to drive adoption. This is especially difficult in decentralised organisations, where teams may adopt AI tools independently, which atomises risk and value, making them potentially more difficult to identify and manage. Effective AI governance requires clear internal guidelines and ongoing monitoring to ensure both off-the-shelf and custom models remain accurate and fit for purpose. As AI technologies evolve rapidly, organisations must build the internal capability to manage and measure AI’s risks and rewards.
Clear guidelines can also create the mental space for teams to experiment. Many individuals use AI tools like ChatGPT personally while remaining hesitant to apply them professionally, often due to concerns around data protection or a lack of understanding of how to do so safely.
At the same time, Alex says there’s a clear gap in knowledge and expectation. He recalls one instance where a client wanted to use AI to accelerate their patent application process. Upon further consultation, it became apparent that what they had envisioned was the automated documentation of early-stage concepts in their engineers’ brains. “I had to explain that we don’t have established brain-to-text interfaces, at least not yet,” he says. “The world of AI has evolved so quickly that what seemed like science fiction three years ago is now real, yet some people still overestimate its current abilities.”
To scale AI effectively across large organisations, Alex says it’s essential to start from the basis of measuring return on investment. The key is to ensure that every instance of adoption is aimed at real business problems. This means starting with a clear problem statement and building a model in direct response. When AI is tied to business value, it’s easier to gain internal support and drive adoption through clear demonstrations of value.
Additionally, governance considerations must be built in from the very start, including ensuring data quality and questioning whether legacy data that feeds AI systems reflects the organisation’s current values. This is key, as governance after the fact is much more difficult to build, and may fail to counteract risks that arise during adoption.
With all this in mind, here is a list of simple but powerful actions that can help to ensure successful integration of AI tools, designed not only to minimise friction, but also to support strong ethical decision-making, especially around employee welfare.
AI represents both a threat and a significant opportunity, and organisations must ensure they gain a clear and accurate understanding of its current and future potential before formulating an AI strategy. An effective AI strategy will be one that accounts for human capital, technological infrastructure, adoption programs structured around clear business needs, and a ‘governance first’ approach.
[i] https://www.accenture.com/content/dam/accenture/final/accenture-com/document-3/AI-Amplified-Scaling-Productivity-Final.pdf
[ii] https://pmc.ncbi.nlm.nih.gov/articles/PMC11560757/
