Article 02 - Are companies really making hiring more fair by using AI?


The use of Artificial Intelligence in recruitment is becoming more common. Organizations increasingly use AI-driven tools to screen CVs, rank candidates and even conduct initial interviews. This shift reflects the broader move toward data-driven HRM, where decisions are expected to be faster, more consistent, and aligned with business outcomes (CIPD, 2023).

A common assumption drives this trend: "If human decisions are biased, then using algorithms should make the process more objective and fair."

However, this assumption overlooks an important issue. AI systems are trained on historical data. If that data reflects past inequalities, the system can replicate and even amplify those patterns (European Commission, 2021). In other words, Artificial Intelligence does not remove bias on its own; it inherits whatever bias the training data contains.

A well-known example is Amazon, which developed an AI recruitment tool that was later found to be biased against female candidates. Because the system was trained on past hiring data, it learned patterns that favored male applicants and penalized resumes linked to women. The project was eventually abandoned, showing that AI can reinforce existing bias rather than eliminate it.


In practice, AI recruitment tools rely on patterns such as keywords, past hiring decisions and performance indicators to identify “ideal candidates.” While this improves efficiency, I believe it also narrows the definition of talent. Candidates who do not fit the historical profile, even if they have potential, may be filtered out before they are even considered. Research suggests that while AI can reduce manual workload and improve speed, it may also limit diversity if not carefully managed (Deloitte, 2024).
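To make this concrete, here is a deliberately naive sketch of keyword-based screening (a hypothetical illustration, not any vendor's actual tool; the profile keywords and threshold are invented). Because the "ideal candidate" profile is built from past hires, anyone outside the historical pattern is filtered out before a human ever sees them:

```python
# Hypothetical sketch: a naive CV screener that scores candidates
# against keywords drawn from PAST hires. If historical hires were
# homogeneous, the "ideal candidate" profile simply reproduces that pattern.

# Keywords extracted from previously successful employees (assumed data)
historical_profile = {"finance", "excel", "risk", "compliance", "mba"}

def screen(cv_keywords: set, threshold: int = 3) -> bool:
    """Pass a candidate only if their CV overlaps enough with past hires."""
    return len(cv_keywords & historical_profile) >= threshold

# A conventional candidate matching the historical pattern passes...
print(screen({"finance", "excel", "compliance", "mba"}))  # True

# ...while an unconventional but promising candidate is filtered out,
# even though nothing in this function measures actual potential.
print(screen({"python", "statistics", "community-banking"}))  # False
```

The point of the sketch is that nowhere does the code evaluate talent; it only measures similarity to the past, which is exactly how efficiency can come at the cost of diversity.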

This issue becomes particularly relevant in the banking sector, where recruitment is often structured, compliance-driven and risk-sensitive. AI tools can be used to screen large volumes of CVs for roles such as customer service officers, analysts and relationship managers. From an operational perspective, this is highly efficient. However, if the system is trained on profiles of previously successful employees, it may prioritize similar backgrounds, educational paths or experiences. This can unintentionally reduce diversity and limit the entry of new perspectives into the organization.

When you look across different sectors, it’s clear that AI doesn’t affect recruitment in the same way everywhere. In the technology sector, companies tend to use AI more flexibly, combining it with human judgment to identify creative or unconventional talent. On gig-based platforms, AI is often used to make rapid hiring decisions with minimal human involvement.


Even though Artificial Intelligence has clear advantages, I think it also poses risks.

One problem is a lack of transparency. Many screening models work as "black boxes," so candidates cannot see how decisions are made or why they were rejected.

Another issue is accountability. If a flawed algorithm rejects qualified candidates, it is unclear who is responsible: the software vendor, the HR team, or the organization as a whole.

There is also a risk of over-reliance, where Human Resources professionals accept whatever the algorithm says instead of exercising their own judgment.

In my view, AI should not replace human judgment in recruitment. Instead, it should act as a support tool that enhances efficiency while still allowing for critical evaluation. Human Resources professionals need to question the decisions made by algorithms, not just accept them, because recruitment is all about finding potential, understanding context and making the right decisions about people.

Are we still selecting the best people or just the most predictable ones?

📚 References

Chartered Institute of Personnel and Development (2023) People analytics: driving business performance with data. London: CIPD.

Deloitte (2024) Global human capital trends 2024. Deloitte Insights.

European Commission (2021) Ethics guidelines for trustworthy artificial intelligence. Brussels: European Commission.

World Economic Forum (2023) The future of jobs report 2023. Geneva: World Economic Forum.

Comments

  1. Excellent point on data-driven HRM. While efficiency is key, this post is a great reminder that 'faster' doesn't always mean 'fairer' if we don't audit the training data.

    Reply: Thank you for that sharp insight. I completely agree 👍 Efficiency alone is not enough if fairness is compromised. Auditing training data is critical, because without it, faster decisions can still carry hidden biases. That is where HR needs to stay actively involved and not just depend on the system.
  2. This article raises a very important and relevant issue in modern HR. I like how you challenged the assumption that AI automatically creates fairness. The Amazon example clearly shows how bias can still exist within systems. Overall, this is a well-balanced discussion that highlights both the benefits and risks of AI in recruitment.

  3. I love your closing question: "Are we selecting the best people or just the most predictable ones?" It really highlights the risk of losing "wildcard" talent that could actually drive innovation. Do you think we’ll ever reach a point where AI is "smart" enough to value potential over patterns, or will it always be limited by the data we give it?

  4. This article clearly demonstrates both the benefits and risks of using AI in recruitment.
    The discussion on efficiency versus diversity is particularly valuable for organizations adopting data-driven HR practices.
    I also liked how you compared different sectors, showing that AI must be applied differently depending on context.
    A strong and practical perspective on managing AI responsibly in HRM.
