The European Union's AI Act has introduced a comprehensive framework for companies that develop or deploy artificial intelligence, building on the GDPR compliance processes that many corporate legal teams have already established. Keith Enright, co-chair of Gibson Dunn's AI practice group and former Google chief privacy officer, notes that the competencies companies built under GDPR will help them demonstrate compliance with the new law.
Data governance is a key area of focus under the AI Act: companies will need to adapt their existing processes to meet the new requirements, which may mean implementing more robust measures to protect workers' personal data and to prevent bias in algorithmic decision-making. Beyond data governance, the Act requires risk assessments and places particular emphasis on fundamental rights.
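As one illustration of what a bias-prevention measure could look like in practice, the sketch below computes selection rates by demographic group from a hypothetical log of algorithmic screening decisions and flags large disparities for review. The data, group labels, and review threshold are assumptions for the example, not requirements drawn from the Act.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Share of positive outcomes per demographic group.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    True if the algorithm recommended the worker or applicant.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (closer to 1.0 is better)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log of algorithmic screening decisions: (group, recommended?)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]

rates = selection_rates(log)
print(rates)                          # e.g. {'A': 0.67, 'B': 0.33}
print(disparate_impact_ratio(rates))  # a ratio well below 1.0 would prompt review
```

A check like this would be one input into a broader data-governance process, not a substitute for the legal analysis the Act requires.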
The AI Act's risk-assessment provisions will require companies to evaluate carefully how their AI systems affect workers, including algorithms used to monitor and control them. That may mean conducting regular audits and implementing measures to mitigate negative effects on employee rights. As companies navigate the new regulations, they must balance the benefits of AI with the need to protect worker autonomy and job security.
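A compliance team might record such assessments in a structured form so they can be repeated and audited over time. The sketch below shows one hypothetical way to do that; the field names, example system, and escalation rule are illustrative assumptions, not language from the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskAssessment:
    """Hypothetical internal record for a periodic audit of a workplace AI system."""
    system_name: str
    purpose: str                      # e.g. shift scheduling, productivity monitoring
    assessed_on: date
    affected_workers: int
    fundamental_rights_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    requires_escalation: bool = False

# Example entry for an assumed shift-scheduling system.
assessment = AIRiskAssessment(
    system_name="shift-scheduler",
    purpose="automated shift allocation",
    assessed_on=date.today(),
    affected_workers=1200,
    fundamental_rights_risks=["possible indirect discrimination in shift assignment"],
    mitigations=["quarterly selection-rate audit", "human review of contested schedules"],
)

# Simple internal rule: identified risks with no documented mitigation get escalated.
if assessment.fundamental_rights_risks and not assessment.mitigations:
    assessment.requires_escalation = True
```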