Objectives and scope of the initiative or experience
DDL 2316 aims to ensure consistency with the EU regulation, preserving a precautionary approach in sensitive sectors and adapting the principles of transparency, human oversight, and accountability to the national context. In parallel, AgID has developed, in implementation of the PA Three-Year Plan 2024-2026, the Guidelines for the Adoption of AI in Public Administration, which define operational methods for the procurement, development, and governance of AI solutions in public administration, placing particular emphasis on regulatory compliance, risk assessment, and accountability mechanisms.
Technical methodology and operational approach
The DDL focuses on:
- National governance: a two-year strategy coordinated by the Interministerial Committee and the Department for Digital Transformation, with AgID and ACN as the responsible authorities overseeing technical and cybersecurity aspects;
- Sectoral system: differentiated application of AI in the healthcare, employment, justice, and public administration sectors, with mandatory human supervision;
- Penalties: specific criminal offenses are introduced for deepfakes and algorithmic manipulation, with aggravating circumstances for unlawful use.
The guidelines focus on:
- Risk assessment and compliance: formal pre-adoption analyses are required, in line with the AI Act and the GDPR;
- Data quality standards: ISO criteria for accuracy, completeness, consistency, and security are specified, with a centralized architecture to ensure integrity;
- Internal public administration governance: defined roles, algorithmic transparency, decision traceability, continuous monitoring and auditing;
- Operational support: checklists, assessment models, and iterative monitoring to avoid "bureaucracy without true innovation".
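As an illustration only (not part of the AgID guidelines), data-quality criteria of the kind listed above, such as completeness and consistency, could be operationalized as automated pre-adoption checks. The field names, thresholds, and record structure below are hypothetical assumptions for the sketch, not AgID specifications.

```python
# Hypothetical sketch: automated data-quality checks along the lines of the
# ISO-style criteria (completeness, consistency) named in the guidelines.
# Field names and example data are illustrative, not AgID specifications.

def completeness(records, required_fields):
    """Fraction of records in which every required field is present and non-empty."""
    if not records:
        return 0.0
    ok = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    return ok / len(records)

def consistency(records, field, allowed_values):
    """Fraction of records whose value for `field` falls in the allowed set."""
    if not records:
        return 0.0
    ok = sum(1 for r in records if r.get(field) in allowed_values)
    return ok / len(records)

# Toy example: a small register of case records.
records = [
    {"id": "001", "status": "open", "office": "Rome"},
    {"id": "002", "status": "closed", "office": "Milan"},
    {"id": "003", "status": "pending", "office": ""},  # incomplete record
]

print(completeness(records, ["id", "status", "office"]))   # 2 of 3 records complete
print(consistency(records, "status", {"open", "closed"}))  # "pending" not allowed
```

In practice, scores like these could feed the pre-adoption risk assessment, with minimum thresholds defined per dataset; that calibration step is exactly where the guidelines' checklists and assessment models would apply.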
Key challenges
- Precautionary approach vs. risk: the law favors a precautionary approach, with cross-cutting prohibitions and obligations such as limitations in the justice and healthcare sectors, while the AI Act adopts graduated, risk-based regulation, posing the challenge of ensuring regulatory harmonization without excessive rigidity.
- Bureaucracy and operational burden: the PA guidelines, while fundamental for governance, can place excessive regulatory burdens on smaller PAs, and the operational solutions they offer remain insufficiently granular.
- Capacity and data quality: projects highlight significant gaps in technical skills, data quality, and consistent application of ISO standards, making a structured training and support program imperative.
- Bias and automation bias: without adequate human verification and control systems, there is a risk of excessive trust in AI systems (“automation bias”), potentially resulting in discrimination and limitations on citizens’ rights.
- Regulatory coordination: it is essential to ensure that national regulations and guidelines do not deviate from the European framework (AI Act, GDPR), creating misalignments and overlapping risks.
Implications
The new rules require a cultural and organizational leap. If well supported, public administrations will be able to achieve greater efficiency and quality of services; otherwise, they risk slowdowns due to regulatory complexity. Transparency and human oversight strengthen the protection of citizens' fundamental rights, but actual enforcement will depend on the effective implementation of the regulations on the ground.
Conclusion
The Italian AI regulation process, with the draft law and AgID guidelines, represents an ambitious attempt to rigorously harmonize innovation, protection of rights, and security. While strengthening governance, transparency, and accountability, risks of bureaucratic inefficiencies and implementation weaknesses remain. It will be crucial to calibrate operational tools and training, ensuring that a flexible, modular, and contextual regulatory framework supports the growth of truly ethical AI solutions that benefit the community.
Contact point for GOVERNANCE project: Antonio Caforio, CINI
