Considering the Risks of AI Chatbots in HR
The recent explosion of artificial intelligence (AI) chatbots, like ChatGPT, has many employers searching for ways to leverage this technology’s ability to create efficiencies, enhance workflows, streamline operations and improve customer experience. In fact, many organizations are already utilizing AI chatbots to aid and augment HR functions. These tools can help HR professionals screen job applications, generate job descriptions, improve onboarding efficiency and enhance employee engagement. However, this relatively new technology presents certain risks for HR professionals that should be seriously considered and addressed to protect their organizations from costly errors and potential legal risks. This article explores the potential risks of using AI chatbots for HR-related functions.
What Is an AI Chatbot?
AI chatbots are computer programs that use AI and natural language processing to understand and respond to user inputs in a conversational manner, allowing them to imitate human dialogue and decision-making. After being trained on large data sets, these chatbots can create new human-like outputs, such as text, audio, video and images, in response to user inputs. This can help users easily find important information by interacting with the chatbot.
How HR Can Leverage AI Chatbots
AI chatbots can enable organizations to operate more efficiently and economically by streamlining many tasks traditionally performed by employees. These tools also allow users to locate information quickly without requiring human interaction. Many HR professionals are using AI chatbots to create job descriptions, conduct performance reviews and answer employee questions regarding workplace policies or benefits information. This allows HR professionals to save time and resources that would normally be dedicated to these functions, permitting them to focus on other high-value priorities.
Risks of AI Chatbots
Despite the potential benefits and efficiencies of incorporating AI chatbots into HR workflows, there are many risks and limitations HR professionals should consider. Therefore, it’s vital that HR professionals continue to be involved and oversee any employment-related decisions, even when using this technology. The following are examples of limitations and vulnerabilities HR professionals should consider when implementing or using AI chatbots:
Revealing Employee Health Information
Utilizing AI chatbots can create concerns related to inadvertent disclosures of employee personal health information. For example, an employee may voluntarily disclose details of their health condition when interacting with an AI chatbot to obtain benefits-related information. In this situation, employers typically have an obligation to protect and keep employee health information confidential. However, AI chatbots may not be aware of these obligations and could improperly share this information with third parties.
Infringing on Copyright Protections
While many of the companies behind the most popular AI chatbots claim they don’t use copyrighted materials when training their chatbots, there’s generally no way to verify whether a chatbot has used copyrighted materials when providing answers to user inputs. As a result, HR professionals’ use of these tools could expose employers to claims of copyright infringement and plagiarism.
Violating Labor and Employment Laws
If organizations are not careful, using AI chatbots for employment-related decisions could violate labor and employment laws. These tools rely on the information and data sets used to train them. However, this information can be based on biased historical data that favors or disadvantages applicants or employees based on protected characteristics, such as race, color, religion, sex and national origin. Therefore, HR professionals must ensure that using AI chatbots to make employment decisions doesn’t lead to biased and discriminatory outcomes.
Organizations may use AI chatbots as the initial point of contact for employees with HR-related needs. However, when employees report issues or concerns related to discrimination, harassment and retaliation, this technology may not be equipped to provide timely responses that meet legal requirements. For example, AI chatbots may fail to recognize slang, emojis or images that create employment-related issues, such as a hostile work environment, and fail to correct or report these behaviors. Additionally, AI chatbots may ask candidates or employees about their health conditions, which can potentially violate the applicant's or employee's rights under the Americans with Disabilities Act (ADA). Further, this technology might not recognize information provided by an employee as a request for accommodation under the ADA or as notice of a request for leave under the Family and Medical Leave Act, exposing the organization to costly fines and potential lawsuits.
Maintaining Data Privacy
When employees interact with an AI chatbot, the tool may collect personal information, such as the user’s IP address, browser type and settings, and data on the user’s interaction with the chatbot. The tool might then share this information with third parties, creating data privacy issues. Creating privacy policies and disclosures can help employers comply with applicable data protection laws.
Disclosing Confidential Information
Confidentiality is a major concern when employees use AI chatbots for work-related purposes. If employees disclose confidential information or trade secrets when interacting with these tools, that information may then be shared with third parties. Employers risk breaching contractual obligations and potentially losing trade secret protections, as chatbots may not keep this information private and may even use this information for their own purposes, such as producing outputs. Therefore, HR professionals must ensure that employees' use of AI chatbots does not disclose confidential or otherwise protected information.
Considering AI Chatbots’ Limitations
While current AI chatbots can mimic human behavior, they’re still limited. For example, they may not be able to understand human emotions, and, as a result, they could provide insensitive responses to user inputs. This can be problematic when employees report instances of discrimination, harassment or retaliation. Therefore, it’s important that HR professionals maintain a human presence even when utilizing AI chatbots.
Additionally, since a chatbot's capabilities are limited by the information used to train it, the responses and information it provides users may be low quality, outdated or contain errors. As a result, HR professionals cannot be certain that the information this technology provides is accurate. In some cases, AI-generated errors can be costly, subjecting organizations to government audits, fines and penalties. Further, AI chatbots require extensive training and fine-tuning to perform at the levels of reliability and effectiveness HR professionals need. However, it's unclear whether AI chatbots can accurately assess the information they provide to users; thus, employers need to be cautious about using this technology for important or consequential matters. HR professionals will likely need to review information and content created by AI to evaluate its accuracy before it's used.
Employer Takeaway
Given these risks, it's best to proceed with caution when implementing AI chatbots for HR functions. However, understanding how this technology works can help HR professionals anticipate and address potential issues before they become problems. HR professionals must continuously monitor these tools and remember not to neglect the human role in HR.
Contact us today for more workplace resources.
This HR Insights is not intended to be exhaustive nor should any discussion or opinions be construed as professional advice. © 2023 Zywave, Inc. All rights reserved.