AI technology is revolutionising recruitment and workforce management practices, increasing efficiency and accuracy by freeing up HR, TA and recruitment team members to work on more strategic aspects of their roles.
The ‘dark side’ of the growth of AI for these same departments is the emergence of deepfake applications, candidates and workers. A 2025 Gartner survey of 3,000 job candidates found that 6% of respondents admitted to participating in interview fraud. Gartner also predicted, perhaps over-dramatically, that by 2028 one in four candidate profiles globally could be fake.
At the time of writing, this isn’t yet a widespread threat in the labour market. But as AI technology becomes more prevalent, there is significant potential for deepfake-related risks to increase.
This is an especially worrying progression for regulated industries such as Insurance and Financial Services, where the consequences of a deepfake infiltration could be severe and far-reaching.
People usually think of deepfakes as AI-generated synthetic media that convincingly mimic real people, usually for entertainment, social media, or in fictional stories and films.
However, deepfakes can also be used by candidates and workers to misrepresent their identity, experience, or credentials during interviews and employment processes. Sometimes they don’t even involve the face-altering technology associated with this area, and simply fabricate different experiences, qualifications, locations, and backgrounds.
There are multiple reasons for people to use these fake profiles for work:
Overemployment or ‘Daylighting’. This is when an individual works multiple remote jobs at the same time without disclosing this to any of their employers. It has become more common as remote working practices make it easier to juggle overlapping roles undetected. However, the use of LinkedIn and other professional social networking sites means that workers can quickly get caught out.
To prevent this overlap, workers are creating a fake identity and profile for each application or role. This requires the elaborate creation of entire personas, but with the potential outcome of earning multiple full-time salaries at once, they may well see it as worth the effort.
For employers, though, this means poor value for money, essentially paying a full-time rate for a part-time worker, and contravening many employers’ terms of employment.
Bypassing background checks and regulatory safeguards. This could be people who lack the qualifications or experience a job requires trying to ‘blag’ their way in – a high-tech, more elaborate version of lying on your CV.
This shouldn’t be taken lightly, especially in regulated industries where having people without the right qualifications carrying out sensitive roles leaves organisations open to situations as serious as litigation or regulatory fines.
Industrial espionage and criminal intent. The days of smuggling out flash drives in empty coffee flasks are dead. Deepfakes can be used by individuals to infiltrate workplaces for industrial espionage, through online interviews, digital onboarding, and remote working. Once (virtually) inside the organisation, these individuals can gain access to confidential information, proprietary technologies, or sensitive client data, concealing their true identity with minimal chance of getting caught in person. They're even paid for their trouble.
This sophisticated form of deception makes it significantly harder for companies to detect and prevent insider threats, especially in sectors where remote hiring, flexible working, and digital communication are the norm.
Organisations in regulated industries like Insurance and Finance are subject to stringent regulatory requirements concerning data privacy, fraud prevention, and customer protection. A deepfake infiltrating such an organisation is potentially catastrophic, whatever the motive.
For example, if an Insurance company were found to be employing a deepfake, the consequences could be severe and far-reaching.
Regulatory authorities would likely impose significant penalties or fines for failing to comply with industry standards related to identity verification and fraud prevention.
The company's reputation could suffer damage, leading to a loss of customer trust and potential withdrawal of business from key clients.
The organisation might face legal action from affected parties and could be subject to increased scrutiny in future audits, making ongoing compliance more costly and complex.
All organisations should remain vigilant about the growing threat of deepfakes, as these technologies can undermine the integrity of recruitment, onboarding, and day-to-day operations. However, for regulated industries the stakes are significantly higher, emphasising the need for heightened awareness and robust safeguards as well as ongoing monitoring of new approaches and technology as they develop.
To counteract these risks, organisations need to adopt advanced verification methods alongside traditional human oversight. For example, live video interviews may include prompts requiring spontaneous actions such as touching one’s nose on camera, to detect the use of video filters or deepfake overlays.
Face-to-face interactions, even if very infrequent or informal, remain a valuable tool for identity verification. For regulated sectors, additional steps such as biometric authentication, cross-checking official documents in-person, and leveraging AI tools to detect deepfake artifacts are becoming best practices.
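One way to operationalise this layered approach is to treat each check as an independent signal and flag candidates whose combined risk exceeds a threshold. The sketch below is purely illustrative: the signal names, weights, and threshold are hypothetical placeholders an organisation would calibrate to its own regulatory requirements, not a real vendor API.

```python
# Illustrative sketch: combining independent identity-verification signals
# into a simple risk score for a remote-hiring workflow.
# All names and weights here are hypothetical.

from dataclasses import dataclass


@dataclass
class VerificationSignals:
    passed_live_action_prompt: bool     # e.g. candidate touched nose on camera
    documents_checked_in_person: bool   # official documents cross-checked face to face
    biometric_match: bool               # biometric authentication passed
    deepfake_scan_clear: bool           # AI media-analysis tool found no artefacts


def identity_risk_score(signals: VerificationSignals) -> float:
    """Return a score from 0.0 (low risk) to 1.0 (high risk).

    Each failed check adds its weight to the score. The weights below
    are placeholders and would need calibration in practice.
    """
    weights = {
        "passed_live_action_prompt": 0.3,
        "documents_checked_in_person": 0.25,
        "biometric_match": 0.25,
        "deepfake_scan_clear": 0.2,
    }
    score = sum(
        weight for name, weight in weights.items()
        if not getattr(signals, name)
    )
    return round(score, 2)


# A candidate who fails the live-action prompt but passes everything else:
candidate = VerificationSignals(
    passed_live_action_prompt=False,
    documents_checked_in_person=True,
    biometric_match=True,
    deepfake_scan_clear=True,
)
print(identity_risk_score(candidate))  # 0.3
```

The point of the sketch is the design, not the numbers: no single check is decisive on its own, so a failure in one layer (say, a suspicious video feed) still triggers human review even when the other layers pass.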
While technology can assist in identifying manipulated media, human intuition is set to be the most important tool in the fight against deepfakes in the workplace. It is irreplaceable in recognising inconsistencies, subtle cues, or behaviours that automated systems might miss.
The best weapon against AI-driven threats isn’t just more technology; it’s working with a team that values keeping the human factor alive and tailoring strategies that fit the unique needs of regulated industries. That combination of human insight and industry-specific awareness is what truly strengthens an organisation’s defences.
"In a world where digital threats constantly evolve, partnering with a 'high touch' organisation such as Guidant Global ensures you receive the personal attention and tailored solutions that technology alone can't provide - because when it comes to trust and security, human connection makes all the difference." - Laura Browne
In industries where the stakes are high, such as Insurance, combining technological solutions with experienced human judgment is essential to prevent deepfake-enabled fraud and ensure that only qualified, authentic individuals are entrusted with sensitive responsibilities.
In an era shaped by relentless innovation and digital deception, only those organisations that adapt swiftly and decisively will remain secure. The future of regulated industry workforce management depends on unwavering vigilance, ethical leadership, and the ability to outpace the threats posed by deepfake technology.
If your organisation is ready to strengthen its workforce management, contact Guidant Global. We provide tailored, high-touch solutions that keep your team safe and your operations compliant.