Topic: Artificial Intelligence

You are looking at all articles with the topic "Artificial Intelligence". We found 2 matches.

Hint: To view all topics, click here. To see the most popular topics, click here instead.

🔗 Stochastic Parrot

🔗 Computer science 🔗 Philosophy 🔗 Philosophy/Contemporary philosophy 🔗 Philosophy/Philosophy of mind 🔗 Artificial Intelligence

In machine learning, "stochastic parrot" is a term coined by Emily M. Bender in the 2021 research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", co-authored with Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell. The term refers to "large language models that are impressive in their ability to generate realistic-sounding language but ultimately do not truly understand the meaning of the language they are processing."


🔗 Artificial Intelligence Act (EU Law)

🔗 International relations 🔗 Technology 🔗 Internet 🔗 Computing 🔗 Computer science 🔗 Law 🔗 Business 🔗 Politics 🔗 Robotics 🔗 International relations/International law 🔗 Futures studies 🔗 European Union 🔗 Science Policy 🔗 Artificial Intelligence

The Artificial Intelligence Act (AI Act) is a European Union regulation concerning artificial intelligence (AI).

It establishes a common regulatory and legal framework for AI in the European Union (EU). Proposed by the European Commission on 21 April 2021 and passed by the European Parliament on 13 March 2024, it was unanimously approved by the Council of the European Union on 21 May 2024. The Act creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation. Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU if they have users within the EU.

It covers all types of AI in a broad range of sectors; exceptions include AI systems used solely for military, national security, research, and non-professional purposes. As product regulation, it does not confer rights on individuals but regulates the providers of AI systems and entities using AI in a professional context. The draft Act was revised following the rise in popularity of generative AI systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework. More restrictive regulations are planned for powerful generative AI systems with systemic impact.

The Act classifies AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited, minimal – plus an additional category for general-purpose AI. Applications with unacceptable risks are banned. High-risk applications must comply with security, transparency, and quality obligations and undergo conformity assessments. Limited-risk applications have only transparency obligations, while those representing minimal risk are not regulated. For general-purpose AI, transparency requirements are imposed, with additional evaluations when there are high risks.

La Quadrature du Net (LQDN) stated that the adopted version of the AI Act would be ineffective, arguing that the role of self-regulation and exemptions in the act rendered it "largely incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI".
