Everything about HR and Explainable Artificial Intelligence (XAI)

Artificial intelligence (AI) is a rapidly growing field that influences many aspects of our personal and professional lives. However, as AI becomes more complex and sophisticated, it can be difficult to understand how AI systems make decisions.

To ensure transparency and accountability, Explainable AI (XAI) techniques are essential. 

In this article, we will explore what XAI is, why it matters, and how we at HROS use it to predict the success of a startup based on personality data.

What is Explainable AI?

In short: Explainable AI refers to the ability to understand and interpret how an AI system makes decisions.

Now the long version: XAI is a set of techniques and methods used to make AI systems more transparent and interpretable. These techniques are designed to reveal how a model arrived at its decisions, enabling humans to understand the reasoning behind its predictions.

XAI is especially important for AI systems that make decisions with serious consequences, such as predicting startup success based on personality traits. Without XAI, it can be difficult to trust the decisions made by these systems and to ensure that they are fair and unbiased.

I don’t know about you, but I think we all sleep a bit better at night knowing that techniques like XAI are in place when it comes to AI. Does anyone else remember I, Robot?

Why does XAI matter for HR? 

Explainable AI (XAI) is particularly important when predicting startup success from personality data: it helps us identify the traits and behaviours most likely to drive success, and to make informed decisions based on that knowledge.

Personality data can include a wide range of information such as psychological assessments, behavioural traits, and other personal characteristics. By analysing this data, AI models can help predict which candidates are most likely to succeed in the organisation. 

However, the use of psychological profiles in AI models can raise legal and ethical concerns related to privacy, bias, and discrimination. 

XAI can help address these concerns by providing transparent and interpretable explanations for how AI models make decisions. This has several benefits, including increased trust in AI models, improved decision-making, and reduced risks.
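To make "transparent and interpretable" concrete, here is a toy sketch of the simplest kind of interpretable model: a logistic regression whose coefficients can be read directly as per-trait contributions. The trait names and data below are invented purely for illustration; they are not HROS's actual features or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical trait scores (scaled 0-1) and a binary success label
traits = ["openness", "conscientiousness", "extraversion"]
X = np.array([[0.8, 0.9, 0.4],
              [0.3, 0.2, 0.7],
              [0.9, 0.7, 0.6],
              [0.2, 0.4, 0.3]])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Each coefficient is a transparent, per-trait explanation:
# a positive value pushes the prediction towards "success"
for name, coef in zip(traits, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

A deep neural network on the same data would likely fit it just as well, but would offer no comparably direct reading of *why* a candidate was scored the way they were, which is exactly the gap XAI techniques try to close.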

Overall, XAI is critical for us to make informed decisions and ensure that we are using personality data in a responsible and ethical way.

How we use XAI at HROS

At HROS, we recognise the sensitive nature of personality data and its use in prediction models. Our ambition to build explainable models is supported by the Software Competence Center Hagenberg through their new XAI research approach: Dr. Sobieczky designed the mathematical concept of Before and After prediction Parameter Comparison (BAPC), which aims to explain AI correction models.

How it works – summarised in 4 simple steps:

  1. Start with a conventional, interpretable algorithm, called the base model.
  2. Add a limited amount of AI – just enough to enhance prediction accuracy.
  3. Use the AI model's predictions to generate a corrected training set, and fit the base model on it again.
  4. Compare the base model's parameters from steps 1 and 3: the difference explains the applied AI.

Sounds easy, right? Well, not exactly! 

Challenges of XAI models

Achieving XAI in AI models can be challenging. One of the main challenges is the trade-off between model accuracy and interpretability. Models that are more accurate may be less interpretable, and vice versa. 

Another challenge is the limitations of available data. In some cases, the data used to train the model may not be sufficient to fully understand how the model makes decisions.

Finally, XAI is an emerging field and there is still much to be learned about how to achieve XAI in different types of models.

This means that there may be a lack of standardisation and best practices for achieving XAI, which can make it difficult to implement XAI in practice.


In conclusion, Explainable AI (XAI) makes AI systems more transparent and interpretable. 

For us at HROS, and for other HR businesses, it is particularly important for personality-based startup prediction models because it allows us to ensure that these models are transparent, interpretable, and fair.

AI is here to stay, so XAI will become increasingly important in ensuring that these systems are used in a responsible and ethical manner.

We have learnt that building an explainable AI model is anything but easy, but we are committed to continuing to invest in this research field.

Get in touch