18 September 2024
Artificial Intelligence (AI) chatbots have gained prominence for their versatility and efficiency. To fully realise their benefits, employees need to be equipped to use these tools effectively while being aware of the potential risks.
AI has become a cornerstone of modern business, revolutionising how companies interact with customers, manage processes, and make decisions. Chatbots are one of the many AI tools available, and they are widely used in customer service, human resources (HR), supply chain management, and as digital assistants in the workplace.
AI chatbots have transformed several sectors by automating routine tasks and boosting efficiency. For example, in customer service, they provide quick responses to inquiries, resolve issues, and anticipate needs, allowing human agents to focus on complex matters. In HR, chatbots streamline processes from recruitment to offboarding, freeing HR teams for more strategic work. In supply chain management, they track shipments, manage inventory, and predict demand through real-time data analysis. As digital assistants, they support employees with tasks like scheduling, managing emails, and conducting preliminary research, enhancing productivity and job satisfaction.
Despite these advantages, integrating AI chatbots into business operations carries significant risks, including data privacy concerns, potential inaccuracies, and over-reliance on automation.
A primary concern with AI chatbots is data privacy and security. For one, AI systems can become significant targets for cybercriminals. A recent report from the Dutch Data Protection Authority highlighted that personal data breaches can occur when employees share personal data with chatbots, creating opportunities for unauthorised access and misuse.
Another significant risk is the possibility of providing inaccurate information. AI chatbots are only as good as the data and algorithms that power them. If not properly trained or updated, they can deliver incorrect or misleading information, potentially harming customer trust, leading to legal liabilities, or resulting in poor business decisions.
To harness the power of AI chatbots while minimising associated risks, businesses can take proactive steps. Employee training is crucial. Staff should be educated on how to use AI chatbots correctly, understand their limitations, and know which types of information should not be shared with these tools. Regular training sessions will help ensure employees stay up to date with best practices and any changes to the chatbot's capabilities or protocols.
Updating the company’s risk register to include AI-related risks is another important step. By systematically identifying, assessing, and monitoring these risks, companies can develop targeted strategies to address potential issues before they escalate.
Additionally, companies should complete or update their Data Protection Impact Assessment (DPIA) to cover the use of AI chatbots. This ensures that data privacy concerns are addressed and that appropriate safeguards are in place to protect sensitive information. The DPIA should also evaluate the chatbot’s data handling practices, security measures, and compliance with relevant regulations.
Effective AI governance is essential for responsible AI usage. The new EU AI Act establishes a framework that categorises AI systems by risk, distinguishing high-risk systems from general-purpose AI. Businesses should implement and enforce relevant AI policies while ensuring that AI applications align with appropriate use cases.
From a data protection perspective, the interaction of AI with GDPR requires careful consideration. It is important that companies select the appropriate legal basis for processing personal data through AI chatbots. For customer-facing businesses, this often involves obtaining consent from customers before processing their data. Ensuring that consent is properly collected and documented is essential to meet regulatory requirements and maintain customer trust.
In HR, introducing clear policies on AI usage is vital. These policies should be reflected in employment contracts and regularly updated. Businesses should prevent "Shadow AI"—the unauthorised use of AI technologies by employees—which parallels the "Shadow IT" issue in data protection. This requires vigilance and clear communication to ensure AI is used appropriately within the company.
AI chatbots offer transformative benefits across various business functions, but their successful implementation requires a strategic approach. Employees should be well-trained to maximise the value of these tools while mitigating risks related to unauthorised use of information, data breaches, misinformation, and over-reliance on automation. By prioritising robust AI governance, data protection practices, and clear HR policies, companies can fully harness AI chatbots to enhance productivity, improve efficiency, and maintain stakeholder trust.
If you have any queries related to AI chatbots, please do not hesitate to contact our team below. We would be delighted to hear from you.
Head of Data Protection & Privacy, KPMG Law LLP
Director & Head of Technology & Digital Law, KPMG Law LLP
Head of Employment and Immigration Law
Director, EU AI Hub