the United States seeks to reject the EU AI Act's code of practice, allowing companies to develop their own risk standards


The European Union is about to overhaul how artificial intelligence is developed. Facing growing challenges, the EU is introducing strict regulations to govern AI models. However, these initiatives have sparked international controversy.
General-purpose AI (GPAI) models are at the heart of this debate. Critics, particularly in the United States, argue that the regulation stifles innovation. They claim the new rules are too restrictive and exceed the original scope of the EU AI Act. This divergence highlights the tension between regulation and technological progress.


the united states contests the eu ai act's code of practice

As the European Union finalizes its Artificial Intelligence Act (AI Act), significant opposition is emerging from the United States. American officials believe the EU's proposed code of practice could stifle technological innovation by imposing overly strict rules. This tension marks a turning point in global AI regulation, leaving companies in the American market to develop their own risk standards.

U.S. President Donald Trump has reportedly pressured European regulators to abandon the code, according to Bloomberg. The U.S. mission to the EU has contacted the European Commission and several European governments to oppose adoption of the code as currently drafted. Critics argue that the new obligations, such as third-party testing and full disclosure of training data, are not only burdensome but also exceed the scope of the AI Act itself, creating additional rules they deem unnecessary.

Thomas Randall, director of AI market research at Info-Tech Research Group, explains that “large tech companies and government officials argue that the AI code draft adds extra obligations that complicate large-scale implementation.” This perspective highlights a major concern: the potential slowdown of AI development in Europe due to legislation perceived as constraining.

For American companies, this opposition creates an uncertain environment in which they must navigate two potentially divergent regulatory regimes. The absence of a harmonized framework could force them to invest more in compliance or risk penalties: Randall notes that fines for non-compliance can reach 7% of global revenue.

This situation underscores the need for organizations to develop robust strategies for managing AI-related risks. “Any organization operating in Europe must have its own AI risk playbooks, including privacy impact assessments, provenance logs, or red team testing, to avoid contractual, regulatory, and reputational damage,” advises Randall.
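To make the idea of a provenance log concrete, here is a minimal sketch of what one might look like: an append-only record tying each model output to the exact model version and training-data fingerprint behind it. All names, fields, and the file format are hypothetical illustrations, not part of any mandated EU format.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceRecord:
    """One append-only entry tying a model output to its inputs."""
    model_id: str        # internal identifier of the deployed model
    model_version: str   # exact version tag or checkpoint hash
    dataset_digest: str  # hash of the training-data manifest
    prompt_digest: str   # hash of the user input (not the raw text)
    timestamp: float

def log_inference(model_id: str, model_version: str,
                  dataset_digest: str, prompt: str,
                  logfile: str = "provenance.jsonl") -> None:
    """Append one provenance record as a JSON line."""
    record = ProvenanceRecord(
        model_id=model_id,
        model_version=model_version,
        dataset_digest=dataset_digest,
        prompt_digest=hashlib.sha256(prompt.encode()).hexdigest(),
        timestamp=time.time(),
    )
    with open(logfile, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Hashing the prompt rather than storing it keeps the log auditable without the log itself becoming a privacy liability.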


what are the potential impacts on technological innovation?

The U.S. opposition to the EU's code of practice could have significant repercussions for global technological innovation. American critics contend that the European rules would diminish companies' capacity to innovate by adding legal constraints, and the dispute could shift investment toward less regulated regions, affecting the EU's competitiveness in the global market.

The new EU rules aim to set high standards for AI transparency, accountability, and security. However, American critics argue that these additional measures could slow the development and adoption of new AI technologies: obligations such as third-party testing and training-data disclosure impose extra costs and operational constraints on companies.

Moreover, this divergence between American and European approaches could fragment the global AI market. Companies could face different requirements depending on regions, complicating their deployment strategies and increasing compliance costs. This uncertain context could discourage investments in innovative projects, thereby hindering the growth of the AI sector.

In response to these challenges, some American companies are considering prioritizing investments in regions where regulation is perceived as more favorable. This dynamic could exacerbate the technological gap between the United States and the EU, with implications for global competitiveness and innovation.


how can companies adapt to this new reality?

In the face of regulatory uncertainty, companies must adopt proactive strategies to adapt to this new reality where AI risk standards may vary considerably between regions. The key lies in establishing flexible and adaptable systems that allow for an effective response to local requirements while maintaining overall coherence.

First, it is essential for organizations to develop specific AI risk management playbooks. These documents should include detailed procedures for assessing privacy impact, managing data, and ensuring the security of AI models. Internally, companies can also establish dedicated committees for compliance and responsible innovation to ensure ongoing oversight and rapid adaptation to regulatory changes.
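As a concrete illustration, playbook items can be encoded as data rather than prose, so that compliance status is queryable and auditable at any time. The sketch below is purely hypothetical; the check names and fields are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PlaybookCheck:
    """A single compliance check with its current status."""
    name: str
    required: bool
    passed: bool = False
    evidence: str = ""  # link or path to supporting documentation

@dataclass
class AIRiskPlaybook:
    """A named AI system and the checks it must satisfy."""
    system_name: str
    checks: list[PlaybookCheck] = field(default_factory=list)

    def outstanding(self) -> list[str]:
        """Names of required checks that have not yet passed."""
        return [c.name for c in self.checks if c.required and not c.passed]

# Example: the practices named in this article, encoded as playbook items.
playbook = AIRiskPlaybook(
    system_name="customer-support-llm",
    checks=[
        PlaybookCheck("privacy impact assessment", required=True),
        PlaybookCheck("training-data provenance log", required=True),
        PlaybookCheck("red-team test report", required=True),
    ],
)
print(playbook.outstanding())  # everything is outstanding until evidenced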

Second, companies must invest in training and raising awareness among their teams. By familiarizing employees with legal requirements and best practices related to AI, they can ensure that all members of the organization understand the stakes and expectations regarding compliance. This also involves establishing close collaborations with legal experts and regulatory consultants to anticipate and manage potential risks.

Finally, forming strategic partnerships with other industry players can offer significant advantages. By sharing knowledge and resources, companies can pool their efforts to meet regulatory requirements while continuing to innovate. Alliances with organizations specialized in AI regulation or industry consortiums can also strengthen companies’ positions in response to the challenges posed by regulatory divergences between the United States and Europe.


what future for AI regulation between the EU and the United States?

The future of AI regulation between the European Union and the United States remains uncertain, marked by fundamental divergences in approach. The EU favors strict regulation focused on accountability, transparency, and ethics, while the United States leans toward a more flexible approach that emphasizes innovation and economic competitiveness.

If the United States manages to persuade European regulators to relax the code of practice, it could lead to broader harmonization of AI standards. However, this outcome is far from certain, and the two regions may continue to evolve independently, creating a patchwork of global regulations that complicates multinational companies' operations.

Moreover, this tension between the EU and the United States could prompt other countries to adopt similar positions, thereby stimulating a global debate on AI regulation. Countries like China could also play a decisive role by proposing their own standards, thus influencing the global AI dynamic.

In this context, companies will need to continuously monitor regulatory developments and adjust their strategies accordingly. International collaboration and sharing best practices will be essential to navigate this complex and constantly evolving landscape.


what are the stakes for companies facing this regulation?

Companies find themselves at a strategic crossroads in the face of increasing AI regulations. On one hand, they must ensure compliance with existing legislation to avoid financial penalties and reputational damage. On the other hand, they must continue to innovate and remain competitive in a rapidly changing market.

One of the main challenges is data management. With strict regulations on transparency and training data disclosure, companies must adopt robust practices to ensure the security and confidentiality of information used by their AI systems. This involves not only investing in advanced security infrastructures but also revising internal procedures to ensure ongoing compliance.
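One practical way to prepare for training-data disclosure obligations is to maintain a manifest that fingerprints every dataset file used in training, so a company can demonstrate exactly what a model was trained on without republishing the data itself. A minimal sketch, assuming training data lives as files on disk; the directory layout and output format are illustrative assumptions:

```python
import hashlib
import json
from pathlib import Path

def dataset_manifest(data_dir: str, out_file: str = "manifest.json") -> dict:
    """Fingerprint every file under a training-data directory."""
    manifest = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path)] = {
                "sha256": digest,
                "bytes": path.stat().st_size,
            }
    Path(out_file).write_text(json.dumps(manifest, indent=2))
    return manifest
```

The resulting digest of the manifest itself is what a provenance log (see the earlier sketch) would reference for each deployed model version.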

Furthermore, constant oversight and evaluation of AI models are becoming crucial. Companies must integrate real-time monitoring mechanisms to detect and correct biases or errors in their systems. This requires in-depth technical expertise and dedicated resources for the maintenance and improvement of models.
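A real-time monitor does not have to be elaborate to be useful. The sketch below illustrates one simple approach: a rolling comparison of positive-outcome rates across groups that flags divergence for human review. The window size, threshold, and the notion of "group" are all illustrative assumptions, not a prescribed method.

```python
from collections import defaultdict, deque

class OutcomeMonitor:
    """Rolling check that positive-outcome rates stay comparable across groups."""

    def __init__(self, window: int = 1000, max_gap: float = 0.10):
        self.max_gap = max_gap  # tolerated rate difference between groups
        self.outcomes = defaultdict(lambda: deque(maxlen=window))

    def record(self, group: str, positive: bool) -> None:
        """Record one decision outcome for a group."""
        self.outcomes[group].append(1 if positive else 0)

    def alert(self) -> bool:
        """True if any two groups' positive rates diverge beyond max_gap."""
        rates = [sum(d) / len(d) for d in self.outcomes.values() if d]
        return len(rates) > 1 and (max(rates) - min(rates)) > self.max_gap
```

An alert here would trigger human review rather than an automatic correction, which keeps accountability with the organization rather than the monitor.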

Additionally, reputation plays a key role in this context. Consumers and partners expect companies to act ethically and responsibly in the development and use of AI. Poor management of regulations can not only result in sanctions but also erode stakeholder trust, impacting customer loyalty and partnership opportunities.

Finally, organizational adaptability is essential. Companies must be prepared to quickly adjust their strategies and operations in response to regulatory changes. This involves a flexible corporate culture and an ability to innovate while adhering to legal standards.


how could AI regulation influence the global market?

The EU’s regulation of AI and the U.S. response have the potential to redefine the dynamics of the global artificial intelligence market. Strict regulation like the EU’s could establish high ethical and security standards, naturally influencing the practices of global companies and pushing other regions to adopt similar approaches.

European companies, by complying with these regulations, could become leaders in responsible AI development, thereby strengthening their competitive position in the international market. This advancement could also attract partners and investors who value transparency and ethics in technology.

Conversely, a lighter-touch approach of the kind the United States advocates could accelerate innovation and attract talent and investment to less regulated regions. However, this strategy carries risks, particularly to reputation and consumer trust, as consumers increasingly demand responsibility and ethics from technology companies.

Moreover, the fragmentation of regulations could pose challenges for multinational companies that will need to navigate diverse and often contradictory requirements. The necessity to develop differentiated compliance strategies could increase operational costs and slow the deployment of new technologies globally.

In the long term, this dynamic could contribute to a fragmentation of the global technological ecosystem, where different regions specialize in specific niches of AI based on their local regulations. This could also stimulate international competition to define global AI standards, with each region trying to impose its model as a global reference.

