Research and development - Seminars
The regulation of artificial intelligence (AI) has become essential given the significant ethical, legal, and fundamental-rights challenges posed by its growing adoption across both the public and private sectors. Key concerns include privacy protection, transparency, non-discrimination, and integrity in automated decision-making, all of which demand a careful and well-structured regulatory approach. Various regulatory models have been proposed to address these challenges, ranging from market self-regulation to ethical frameworks and pre-deployment impact assessments, each seeking to balance innovation with safety and public well-being. Internationally, the European Union has taken the lead with a comprehensive regulatory framework for AI, the AI Act, which mandates audits, continuous oversight, and risk management throughout the AI system life cycle, with an emphasis on algorithmic transparency and respect for human rights. This targeted, risk-based approach serves as an important reference because it regulates the specific risks associated with AI use rather than imposing broad restrictions on the technology itself. Finally, cooperation among the public sector, civil society, and industry is identified as a critical pillar of an effective governance model, one that promotes responsible innovation and fosters public trust.
YouTube – Quantil Matemáticas Aplicadas