Artificial Intelligence is the technological frontier that today most clearly embodies the tension between risk and opportunity for the private sector. In the era of digital transformation, companies face a double-edged challenge: adopting Artificial Intelligence to remain competitive while simultaneously managing its potential impacts on human rights by integrating human rights due diligence into the development and deployment of these technologies.
The applications of AI are diverse, ranging from information search and analysis (e.g., Gemini, ChatGPT, Copilot) to natural language processing for task automation, and even the creation of new audio, video, or text content through Generative AI (GenAI), a subcategory of Artificial Intelligence.
The Scenario: Businesses and AI
This technology is also increasingly used in the workplace: at the European level, according to Eurostat data, 20% of businesses used AI-based technologies in 2025, an increase of 6.5 percentage points compared to 2024. Adoption rises with company size: 17% of small businesses, 30% of medium-sized businesses, and 55% of large businesses. The information and communication sector makes the most use of these technologies.
In Italy, according to 2025 Istat data on Businesses and ICT, the use of AI in businesses has doubled, from 8.2% in 2024 to 16.4%, and involves large companies to a greater extent than SMEs, where adoption is nevertheless growing.
The business areas where artificial intelligence is most frequently adopted are:
- marketing and sales (33.1%)
- administrative process organization (25.7%)
- research and development or innovation (20.0%)
In these areas, AI is primarily used to extract information from text, produce written content, images, video, and audio, handle dictation and transcription, and analyze data. Among the main obstacles highlighted by companies that have not yet adopted this technology are a lack of skills, a lack of regulatory clarity, concerns about privacy and data protection, high costs, and a series of ethical considerations.

Risks and Opportunities for Businesses
At the corporate level, Artificial Intelligence offers significant opportunities for innovation, efficiency, and competitive growth. In particular, the integration of advanced data analysis and automation systems allows companies to:
- improve decision-making processes
- optimize operations
- develop new, competitive products and services
On the other hand, the growing adoption of these technologies also poses concrete risks to human rights and corporate governance. Among the main risks are:
- algorithmic bias
- discrimination in automated decision-making processes
- expansion of digital surveillance practices
- intensive use of personal data
- impacts on employment, linked to algorithmic workforce management, automation, and working conditions along the technological value chain
The Regulatory Framework
In this context, the international and European regulatory framework is evolving rapidly in an effort to bridge the gap between technological innovation and the protection of fundamental rights. A key step forward is the entry into force of the EU AI Act in 2024, which introduces a risk-based approach and establishes stringent requirements for transparency, data quality, human oversight, and risk management, with specific obligations for AI systems deemed high-risk.
From August 2026, most of the provisions will become applicable, operationalizing the EU AI governance framework. The new rules also complement other well-established European regulatory instruments, such as the GDPR and the CSDDD, helping to define a framework of increasing responsibility for businesses.

The Key: Integrated Due Diligence
In light of these dynamics, it is crucial for companies to integrate robust human rights due diligence into their decision-making and operational processes, in line with the United Nations Guiding Principles on Business and Human Rights (UNGPs). This involves not only regulatory compliance but also the adoption of risk-based strategies that identify and assess human rights impacts throughout the entire lifecycle of these technologies: from development to deployment to use by customers and end users. Effective due diligence processes require stakeholder engagement, continuous monitoring, grievance mechanisms, and remediation measures, enabling companies to prevent and mitigate risks, strengthen stakeholder trust, and promote truly responsible digital innovation.
For more insights:
Digital & Emerging Technologies & Human Rights
Image source: Pixabay, UNGC