HR Strategy


BUILDING TRUST IN AI’S ROLE IN HR PROCESSES

AUTOMATED RECRUITING SYSTEMS, CHATBOTS FOR EMPLOYEE CONVERSATIONS, AND MOOD-MONITORING SOFTWARE CAN ALL DRAMATICALLY IMPROVE HR’S EFFICIENCY. HOWEVER, COMPANIES MUST BE CAREFUL TO BUILD TRUST IN AI SYSTEMS SO THAT EMPLOYEES FEEL COMFORTABLE WITH THE NEW DIGITAL BRANCH OF THE HR DEPARTMENT.

As HR teams work to achieve greater efficiency, artificial intelligence (AI) systems have been an invaluable partner. They can quickly process thousands of incoming resumes, sift and sort hundreds of metrics to predict everything from productivity to turnover, and even provide employees with personalized, immediate responses to routine questions. However, if employees have low trust in the systems or view them with suspicion, the full benefits of AI systems will never be realized.

It is therefore imperative that HR teams work to build organizational trust in their HR systems. This doesn’t happen overnight, but once trust in the systems has been established, HR teams can do more to utilize the full capabilities of AI systems for the benefit of the department, employees outside it, and the organization as a whole. What follows examines four key ways HR teams can build that trust.

INVEST IN ROBUST, SECURE SYSTEMS
The first step in building trust in AI systems and AI-assisted innovations is to make sure the systems are high quality. Does that seem like an unusually basic first step? It might be, and yet many organizations are to this day dealing with distrust and suspicion stemming from a hasty rush into AI systems that proved to be faulty, data-leaking disasters.

According to data compiled by CMS Wire and HR Executive magazine online, workers can view the data held by AI systems as less secure than conversations with managers. As a result, HR teams need to look for systems that encrypt data, support two-factor authentication, and include other trust-building safeguards, so that workers can feel confident that their personal profiles, performance records, and benefits details are protected and secure at all times.

TRAIN THE SYSTEMS SPECIFICALLY FOR THE ORGANIZATION – AND THE EMPLOYEES SPECIFICALLY ON THE SYSTEM

Another factor in building worker trust in AI systems is training. With AI systems, the training element works both ways: the AI program needs to be trained on an appropriate data set, and workers must be trained to interact appropriately with AI tools.

According to Oracle’s FutureWork survey, some 64% of HR teams are using some form of AI system, compared to just 50% of the general workforce. As a result, HR departments often have a greater comfort level with AI systems than other groups. Giving employees more chances to interact with AI systems and to use AI output in their daily work can help bridge this gap. Studies have shown that as employees gain familiarity with an organization’s AI tools, their view of the tools becomes more positive. Further, AI tools need to be trained more specifically to their home organization. Extra hours spent inputting data and giving examples to the tool can help reduce predictive errors, bias, and improper assessments. Though it is an added expense, training the AI system on the unique aspects of the organization boosts accuracy, safety, and trust in its outputs.

BE TRANSPARENT WHEN THE BOT MESSES UP – AND FIX IT
Of course, even when AI systems are extensively trained and employees are using them daily, “the bot” can make dangerous errors and miss key metrics or correlations.

A commonly cited example comes from the world of recruiting. Amazon found that its AI system was improperly factoring in gender when calculating candidates’ odds of future success in engineering roles, eliminating female candidates from interview pools. This ran completely contrary to the equitable hiring objectives the firm had in bringing AI into its process.

Rather than hiding the problem, Amazon took a path that, while initially embarrassing, contributed to long-term trust building within the HR department and the broader organization. The team behind the project owned the error and discontinued the use of the AI assistant in candidate screenings. As a result, employees and outside observers gained confidence that the AI systems had appropriate human oversight and that improperly performing AI tools would be quickly eliminated.

This transparency about the human oversight behind the AI system and the protocol for fixing errors can become an organizational advantage. When employees know that “the bots” can’t get too far out of line without being corrected – and that someone is paying attention so that bots ARE corrected – studies show that AI tools are more warmly embraced and trusted, even if they are still in “beta mode”.

ENCOURAGE EMPLOYEES TO LEAN INTO THE FUTURE
A final step is to encourage employees to be more aware of the role AI will play in their lives in the future. While some employees are fearful that AI systems might steal jobs, the reality is that thanks to demographic cliffs and international shortages of skilled talent, human workers are in little danger from AI tools. In fact, the AI tools may be the only way human employees will be able to meet the challenges of the future.

Presenting this employee-centered, human-led picture of the technological future can ease employees’ fears about working with AI and about allowing AI to expand into their departments. When workers feel safe in their jobs and free to optimistically explore how AI could help them succeed, they are more positive and open to experimenting with new tools. In this way, the AI revolution that has moved from engineering teams to the HR department can spread safely to other parts of the organization while maintaining trust and goodwill at all times.