Are you hiring a robot or a human?

This article is part of our Opinions section.

The relentless march of AI innovation brings new challenges every week. Take OpenAI’s recent unveiling of GPT-4o, a multimodal model that can reason across text, audio and vision in real time, powering strikingly lifelike voice and video interactions. This leap towards hyper-realistic AI raises critical questions, especially for recruitment.

One major concern is verifying candidate authenticity. How can we ensure we’re dealing with real people and not AI imposters? What recruitment policies should govern the use of AI-generated applications? And what safeguards can we put in place to uphold the process’s integrity?

Today’s generative AI tools can already craft resumes, cover letters, and even complete applications. They also assist candidates with research, interview prep, and best practices in areas like body language and salary negotiations. Many recruiters have likely come across applications that seem suspiciously AI-generated – flowery language, odd synonyms, vagueness and occasional inaccuracies might be red flags.

As AI tools become more sophisticated, spotting AI-created content will become trickier. Candidates might fabricate entire digital histories, including deepfakes showcasing work experience they never had. Imagine a real-time deepfake interview in which someone impersonates the candidate! This isn’t some futuristic nightmare: a Hong Kong company recently lost $25 million after a fraudster used a deepfake video call to convince a finance employee that they were speaking to their CFO.

Recommended reading: Donny Chong, Product & Marketing Director at Nexusguard: “To get a handle on deepfakes, we need to hit them from all angles”

Generative AI in recruitment: boon or bane?

The spectrum of AI support for candidates ranges from innocuous research assistance to more contentious uses like generating CVs or application answers. While AI can level the playing field for candidates lacking writing skills, it may misrepresent a candidate’s real abilities in roles where those skills matter. Generative AI also enables mass applications, raising questions about a candidate’s genuine interest in a position.

During interviews, using AI to tailor perfect responses might seem like efficient preparation. Candidates already modify their answers based on perceived expectations and research common interview questions. However, there’s a difference between memorising AI-generated responses and articulating personal insights. AI-prepared answers may mask an applicant’s true skills and confidence, making it harder to differentiate between candidates.

To address these challenges, companies should develop rules governing their recruitment processes now. One approach is a blanket ban on AI-generated CVs, cover letters and applications unless writing ability is irrelevant to the role. Tools that detect AI content can be used to scan incoming applications.

Reviewing the interview process is also crucial. More businesses may use AI to generate interview questions, which candidates could in turn predict using AI. Effective interviews should use tailored, evolving questions that create detailed conversations, ones that are harder to fake with AI prep. This approach helps to accurately assess a candidate’s potential and fit.

For online tasks and assessments, candidates might use deepfake videos or other AI tools to cheat. One solution is to conduct all assessments in person, though this isn’t practical for businesses with a global workforce. Instead, enhancing security procedures to include robust authentication and monitoring is essential. The same applies to phone and video interviews to prevent AI fakes.

The ethical challenge of generative AI in recruitment

Beyond the technical challenges, the ethical implications of generative AI in recruitment are significant. As AI becomes more sophisticated, it could inadvertently perpetuate biases present in the training data. For instance, if an AI system is trained on data that reflects gender or racial biases, it might favour certain candidates over others, exacerbating inequality in the hiring process.

To mitigate this risk, organisations should prioritise transparency and fairness in their AI tools. This includes regular audits of AI systems to ensure they do not discriminate against any group. Additionally, companies should maintain human oversight over AI-driven decisions, ensuring that technology supports rather than replaces human judgment.

One of the main attractions of AI in recruitment is its promise of lightning-fast efficiency. Automated systems can wade through mountains of applications, pulling out the most qualified candidates and streamlining the entire hiring process. However, this efficiency must not come at the expense of genuine human connection. A candidate’s ability to showcase their true skills and experiences shouldn’t be overshadowed by their prowess in using AI tools.

Recommended reading: This Portland startup is using AI to root out bias from recruitment

To strike this crucial balance, organisations need to embrace a multifaceted approach to assessment.

Imagine AI-powered resume screenings acting as the first filter, identifying candidates with the right qualifications and keywords. From there, in-person interviews can delve deeper, allowing for human interaction and gauging a candidate’s soft skills, personality fit, and communication style. Practical skill tests can further validate a candidate’s capabilities, while reference checks provide valuable insights into past performance and work ethic.

By weaving together technology and traditional evaluation methods, recruiters can create a rich tapestry of information, ensuring they make well-rounded, informed hiring decisions.

As the landscape of AI shifts and evolves, we might see entirely new roles emerge within the recruitment space. These specialists would focus solely on managing and overseeing AI systems, ensuring they function ethically and effectively throughout the hiring process. Additionally, the skillsets required for many existing jobs may transform. Digital literacy, the ability to work seamlessly alongside AI tools, and the critical thinking necessary to interpret AI-generated data will become increasingly valuable.

Generative AI: a double-edged sword

Generative AI undoubtedly offers a treasure trove of benefits for the recruitment industry. It can drive efficiency, expedite processes and potentially level the playing field for candidates from diverse backgrounds. By narrowing the field to a smaller pool of pre-qualified applicants, AI frees recruiters to dedicate more time to in-depth interviews and candidate nurturing, ultimately leading to better hiring decisions. Additionally, AI can unearth hidden gems – talented individuals who might lack a traditional resume or struggle to get past the initial screening hurdles.

However, it also presents substantial challenges regarding verifying candidate authenticity and upholding ethical recruitment practices. To mitigate these challenges, organisations must develop clear policies outlining acceptable uses of AI tools by both candidates and recruiters. They should also invest in robust security measures to detect and prevent AI-generated applications or deepfakes. Transparency is key – candidates should be informed about the role of AI in the hiring process, and recruiters should be able to explain their reasoning behind any decisions.

Ultimately, AI in recruitment presents a double-edged sword. While it offers undeniable benefits in terms of efficiency and inclusivity, it also raises concerns about authenticity and fairness. By embracing a balanced approach that leverages the strengths of both AI and human evaluation, while remaining vigilant about ethical considerations, organisations can navigate the complexities of AI-driven recruitment. This approach will ensure that they attract and retain the best talent, fostering a diverse and successful workforce for the future.

Iffi Wahla

Iffi Wahla is the CEO and Co-Founder of global hiring platform Edge. He has contributed to TechFinitive under our Opinions section.