
1. The Rise of the Digital Economy and the EU’s Digital Single Market Strategy
The expansion of the digital economy is a global trend. The spread of COVID-19 in particular accelerated digital transformation across all sectors of society, making e-commerce and online activities an essential part of daily life. In this context, digital technologies have moved beyond mere convenience to become core industrial infrastructure, comparable to labor or electricity as a factor of production. Many countries are formulating policies to strengthen national competitiveness and secure a leading position in the global digital market in response to the opportunities and challenges of the digital economy. Notable efforts include establishing institutional frameworks for digital trade and integrating digital technologies into legal and regulatory systems.
The European Union (EU) has likewise recognized the importance of the digital economy and is actively pursuing a strategic response to the digital transformation of society. In particular, the EU has been developing various digital norms and policies to protect European values and technological sovereignty. One of the EU’s flagship digital policies is the Digital Single Market (DSM) strategy, announced by the European Commission in 2015. The primary goal of this strategy is to ensure the free movement of goods, services, people, and capital in the digital environment while eliminating barriers within the internal online market. To achieve this, the EU has prioritized improving access to online markets, building digital infrastructure, and promoting the growth of the digital economy.
The Digital Single Market encompasses various areas including e-commerce, digital content, cloud services, data utilization, and e-government. The EU believes the DSM will strengthen the existing “European Single Market” framework, enhance Europe’s ability to respond to external platform companies, and ultimately boost the competitiveness of European businesses. Additionally, the EU has responded to concerns about rights and fair order in the digital market by introducing the General Data Protection Regulation (GDPR), and is working to accelerate digital economic innovation and the adoption of AI technologies based on its digital strategy. Policy directions presented in 2024 further emphasized completing the DSM, supporting the growth of small and medium enterprises (SMEs), and encouraging technological development.
The transition to a digital society is impacting not only the economy but also the political and cultural spheres. Through the DSM, the EU aims to promote the use of emerging technologies such as AI and big data while safeguarding fundamental European values such as human rights and the public good. This policy direction can serve as a significant precedent in global discussions on digital governance. As the digital economy continues to expand rapidly, institutional preparation and international cooperation—such as the DSM strategy—are becoming increasingly vital.
2. The Advancement and Application of AI Technology as a Case of Digital Transformation
The advancement of Artificial Intelligence (AI) technology is shaping a new global economic and social order and is acting as a catalyst for digital transformation. Among the various social changes brought about by AI, its impact on the economy is particularly noteworthy. Experts expect that AI technologies, especially machine learning and deep learning, will reduce labor costs while enhancing productivity. Additionally, the growth of the digital market is anticipated to create new economic sectors. Some experts predict that AI could increase global output by about USD 13 trillion—or 16%—by 2030.
As a result, governments and companies around the world are recognizing AI as a core tool for securing future competitiveness, aiming to upgrade industrial structures and reinforce national security systems. Some countries are also proactively introducing institutional measures to address the potential social challenges posed by AI, while actively pursuing international cooperation. AI is no longer a mere technology—it has become a central asset in national strategies encompassing both the economy and security.
Competition over AI technologies is intensifying not only in the economic sphere but also in national security. The United States and China consider AI a key future warfare technology and are significantly increasing their investments, while Europe is also pursuing strategies for the military use of AI. This underscores the fact that AI is influencing not only civilian industries but also security and foreign policy, and is likely to become a major factor in shaping the future international order.
AI technology is having a substantial impact on the European economy as well. Experts expect AI to boost productivity, expand digital markets, and create new industrial ecosystems. In fact, private investment in AI within Europe exceeded €9 billion in 2023. In addition, the EU is implementing the Digital Europe Programme for 2021–2027, which earmarks €2.1 billion in public investment for AI, to support the transition of the European market to a digital economy. This is more than mere industrial support; it reflects a strategic effort to restructure Europe’s future economic landscape.
However, the EU still lags behind the U.S. and China in digital competitiveness. The EU attributes this to divergent digital laws and policies across member states, which hinder internal market integration. To address this issue, the EU is focusing on building the Digital Single Market and creating an environment where digital technologies—including AI—can be freely utilized throughout Europe. This strategy is aimed at restoring the competitiveness of European companies and strengthening the EU’s position in the global digital economy.
AI is also bringing various changes to European society at large. In the labor market, automation is expected to increase productivity and transform employment patterns, while in daily life, AI-based services are improving convenience. At the same time, new social risks—such as privacy violations, algorithmic bias, and misuse of technology—are emerging. In response, the EU has introduced the concept of “trustworthy AI” to ensure that AI is used ethically and in the public interest, and is reinforcing its institutional foundation for the responsible use of technology.
This policy direction illustrates Europe’s emphasis on balancing social responsibility and values over mere technological speed. To address the wide-ranging societal challenges arising from AI proliferation, the EU is pursuing both technological regulation and economic strategy. This normative approach is gaining attention as a model for global AI governance. As AI adoption continues to spread, the EU’s regulatory framework is expected to play an increasingly important role amid global cooperation and competition.
3. EU’s Legal Framework for the Digital Market
In recent years, regulation of online platforms in the digital market has emerged as a key issue in the international community. While large platform companies such as Google and Amazon have enhanced consumer convenience, they have also created winner-takes-all structures that hinder innovation and foster unfair trade practices. Moreover, growing concerns over data privacy, illegal content distribution, and advertising transparency have led to a consensus that regulatory measures targeting online platforms are necessary. In response, the EU has taken a proactive approach to address the risks posed by digital technologies, aiming to balance public interest with fair competition. Its strategy is increasingly seen as setting a new global regulatory standard.
Moving beyond traditional competition law, the EU has sought to establish laws specifically tailored to the digital market. In 2019, the EU introduced the Platform-to-Business (P2B) Regulation to enhance fairness and transparency for business users of online platforms, marking the start of more systematic regulation. In December 2020, the EU unveiled the Digital Services Act (DSA) and the Digital Markets Act (DMA). Together, these laws form the Digital Services Act Package (DSA Package), a regulatory framework for the EU’s Digital Single Market. The DSA and DMA propose concrete standards to tackle issues such as lack of transparency, illegal content, and unfair conduct by dominant platforms. Through these measures, the EU clearly signals its intention to strengthen digital sovereignty.
The DSA reinforces the accountability of online intermediaries, requiring platforms to remove illegal content, disclose content moderation policies, and provide transparency regarding advertising systems. Large platforms, in particular, are required to take proactive measures to prevent misuse, and may face legal liability for non-compliance. These obligations aim to improve predictability and trust for users, and to ensure safe circulation of information in society.
The DMA, in contrast, focuses more directly on limiting the market dominance of big tech platforms. Companies that serve as key gateways in the digital market are designated as “gatekeepers” and must comply with obligations such as refraining from self-preferencing, ensuring interoperability, and allowing fair competition with third-party services. While traditional EU competition law functioned as ex-post regulation, the DMA introduces a legally binding, ex-ante regulatory framework.
The overarching aim of these EU regulations is to curb the dominance of U.S.-based big tech firms in the European digital market, while protecting small and medium-sized digital enterprises (SMEs) and restoring competitive balance within the EU. In particular, the DMA institutionalizes anti-monopoly principles, serving as a tool to induce structural change in platform markets. The DSA and DMA reflect the EU’s broader attempt to establish rules that ensure accountability and fairness in the digital marketplace. They demonstrate active public intervention to uphold public norms in the digital space. These regulations are expected to not only shape the EU’s internal digital ecosystem but also influence the operational practices of global digital platform companies, potentially becoming international norms.
4. The Concept and Spread of the “Brussels Effect”
The Brussels Effect, a term coined by U.S. legal scholar Anu Bradford, refers to the phenomenon in which the European Union’s regulatory standards spread globally and become de facto international norms. Bradford characterizes the EU as a “global regulatory hegemon” or a “benevolent hegemon”, highlighting the far-reaching influence of EU regulations on the global market. From environmental protection and consumer safety to data privacy, fair competition, and hate speech regulation, EU standards have had a substantial impact on the formation of global norms.
A key factor behind the Brussels Effect is the EU’s large internal market. The EU Single Market is considered essential for multinational corporations seeking profitability, prompting them to voluntarily comply with EU regulations. These corporate adjustments often extend beyond Europe, applying the same standards in other markets. This voluntary, non-legally binding adoption of regulation is a defining characteristic of the Brussels Effect, and it reflects the EU’s position as a normative power in the international system. Normative power refers to the ability of one actor to define behavioral standards and norms that influence how others act.
Bradford describes this global diffusion of EU rules as the de facto Brussels Effect, interpreting it as a concrete example of normative power in action. More recently, the Brussels Effect has evolved into what is known as the de jure Brussels Effect, where foreign governments and institutions formally incorporate EU standards into their legal systems. In other words, the EU’s normative power is no longer limited to informal influence—it is now also being institutionalized in international and domestic laws, impacting legal systems in third countries.
The EU, with its tradition of state interventionism favoring the public interest over free-market principles, has historically set stringent regulatory standards to curb market misconduct and protect consumers. These regulations have expanded beyond market discipline to function as social regulations, particularly in areas like digital technology and AI. In fields such as artificial intelligence, digital platforms, and data protection, EU regulations apply not only to domestic firms but also to any company seeking to access the European market. The increasing application of EU standards in these areas underscores the expanding reach of the Brussels Effect.
5. The General Data Protection Regulation (GDPR) and the Brussels Effect
The EU regards personal data protection as a fundamental right and institutionalized this principle through the General Data Protection Regulation (GDPR), adopted in 2016 and applicable from May 2018. The regulation replaced the earlier 1995 Data Protection Directive and sought to establish a unified standard for data protection across the EU’s Digital Single Market. In response to technological advancements and growing concerns over data misuse, the GDPR greatly enhanced individual rights and clarified corporate responsibilities.
Compared to its predecessor, the GDPR introduced detailed rules on data collection, processing, storage, and deletion, and added new rights such as the “right to be forgotten” and data portability. These measures reflect the EU’s commitment to upholding informational self-determination and strengthening corporate accountability. The GDPR also introduced strong enforcement mechanisms, including fines of up to €20 million or 4% of a company’s global annual turnover, whichever is higher, thereby ensuring effective compliance.
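The upper fine tier described above reduces to a simple maximum of two thresholds (a fixed €20 million floor or 4% of worldwide annual turnover, per Article 83(5) GDPR). The following Python sketch is purely illustrative; the function name and example figures are our own, and this is not a compliance tool:

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR Article 83(5) fine: the higher of
    EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# For a firm with EUR 50 million turnover, the EUR 20 million floor dominates.
print(gdpr_max_fine(50_000_000))       # 20000000.0

# For a large platform with EUR 100 billion turnover, the 4% tier dominates.
print(gdpr_max_fine(100_000_000_000))  # 4000000000.0
```

The "whichever is higher" structure is what gives the regulation teeth against the largest platforms: the cap scales with turnover rather than being a fixed ceiling.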
While the GDPR is primarily applicable within the EU, it also has extraterritorial reach. Foreign companies that handle the personal data of individuals in the EU must also comply. Major global IT firms such as Google, Facebook, and Amazon have modified their privacy policies worldwide to align with the GDPR. For businesses, the ability to operate across all EU member states under a single set of rules enhances efficiency, demonstrating that the GDPR is not merely a strict regulatory instrument but also an effective tool for market integration.
The international influence of the GDPR is considered a quintessential example of the Brussels Effect. As GDPR standards are voluntarily adopted outside of Europe, they effectively become global norms, showcasing the EU’s role as a normative power. Countries such as Japan and South Korea revised their data protection laws and subsequently obtained EU adequacy decisions (in 2019 and 2021, respectively), while China and the United States have also either implemented or are pursuing legal reforms influenced by the GDPR.
As new data privacy issues continue to emerge with technological progress, the GDPR offers normative direction not only within the EU but also for global legislative efforts. This demonstrates the EU’s pivotal role in shaping the global digital order, going beyond market regulation to influence various aspects of society, including environmental and everyday concerns. The GDPR stands as a prime example of how EU regulatory standards function as global norms, and it effectively illustrates how the Brussels Effect extends across the entire digital domain.
6. The Introduction and Significance of the EU AI Act
In 2021, the European Commission proposed the world’s first comprehensive legal framework for regulating artificial intelligence, aiming to address concerns such as human rights violations and privacy risks associated with the commercialization of AI. After negotiations among the European Parliament, Commission, and Council, the final version of the law was adopted in 2024. This legislation reflects the EU’s normative effort to ensure trustworthiness and safety in AI. Alongside regulation, the EU also established institutional measures to promote AI development, striving for a balance between technological innovation and rights protection.
The EU AI Act is an extensive legal framework, consisting of 13 chapters, 113 articles, and 13 annexes, designed to address the wide range of social risks that AI may pose. Its primary goal is to prevent negative impacts of AI systems on human health, safety, democracy, the rule of law, and environmental protection, while creating a human-centered and trustworthy AI ecosystem. The law applies to all AI services operating within the EU, excluding military and security systems, and also applies to non-EU companies that enter the European market.
The Act takes a risk-based approach that distinguishes four levels of risk: Chapter 2 prohibits AI practices posing unacceptable risk, while Chapter 3 imposes obligations on high-risk systems regarding data governance, risk assessment, and transparency. This reflects the EU’s intent to manage AI’s potential threats in proportion to their severity. Chapter 4 sets out transparency obligations for certain AI systems, while Chapter 5 regulates general-purpose AI models, addressing risk management and copyright issues. Since general-purpose AI systems such as chatbots are widely used, they have become key targets of regulation.
Chapter 6 introduces institutional measures to support innovation, including a Regulatory Sandbox, which provides a safe environment for startups and SMEs to test new technologies. This reflects the EU’s effort to promote the digital industry. Additionally, the AI Act is designed in alignment with the GDPR, ensuring that AI systems uphold data subjects’ rights when handling personal information.
The EU AI Act is a comprehensive normative framework that seeks to achieve both technological advancement and user protection. Notably, it takes the form of an EU regulation, meaning it is directly applicable across all member states without national legislation, offering uniform and binding effect. This marks a significant shift from previous non-binding AI guidelines. The EU thus institutionalizes the notion that voluntary compliance alone cannot adequately control AI-related risks.
Through the AI Act, the EU aims to establish a global standard for AI regulation. Like the GDPR, it applies to both EU and non-EU companies operating within the European market, reflecting the Brussels Effect in action. In this way, the EU seeks to expand its influence by promoting its regulatory model as the global norm. This underscores the EU’s role not only as an internal regulator but as a leader in shaping international digital governance.
Promoting a Digital Single Market has been a consistent policy goal of the AI Act since its initial proposal in 2021. While this aims to foster the EU’s digital industry, critics argue it could burden foreign companies with heavier regulatory costs. By introducing the world’s first binding AI regulatory framework, the EU is further reinforcing its global leadership in digital norms, as it previously did with the GDPR.
However, concerns have been raised about the Act’s ability to keep pace with the rapid evolution of AI technology, casting doubt on its regulatory effectiveness. In response, the law includes provisions for promoting technological development while managing risks. Although the European AI Office is tasked with enforcement and oversight, some question whether it can effectively regulate, given that most technical knowledge is concentrated in large tech firms.
7. Korea’s AI Basic Act and the Brussels Effect
Regulating and supporting AI has become a common priority among major nations, with many actively pursuing national AI policies. There is growing international consensus that common standards are needed to address the wide-ranging challenges of AI. South Korea has joined this trend by seeking to nurture the AI industry while also establishing ethical and legal standards to govern its development and use.
In December 2020, Korea’s Ministry of Science and ICT, together with the Korea Information Society Development Institute (KISDI), introduced AI Ethics Guidelines, marking the start of formal AI policy discussions. These guidelines, aligned with the OECD AI Principles, proposed the vision of “AI for Humanity,” advocating ethical AI development and utilization.
During domestic debates on AI legislation, there were strong calls to establish clear regulatory frameworks, especially for high-risk AI. As a result, multiple AI-related bills were proposed in the National Assembly, most of which emphasized industry development and trust-building. Korea’s AI Basic Act, officially titled the “Framework Act on AI Development and Trust,” was passed by the National Assembly and promulgated on January 21, 2025. Set to take effect in January 2026, the law contains 6 chapters and 43 articles. It aims to create a national support system while imposing obligations for high-risk AI, striving to balance regulation and innovation. Korea thus became the second major jurisdiction, after the EU, to enact a comprehensive AI law.
The Act mandates that the Ministry of Science and ICT formulate and implement a national master plan every three years, with legal grounds for state support in areas such as R&D and training data infrastructure. Chapter 3 specifies strategies for promoting technological development and nurturing AI businesses, which will guide upcoming government policies. The law clearly aims to enhance AI competitiveness while supporting innovation.
Importantly, Chapter 4 sets standards for ensuring ethical, safe, and trustworthy AI, with particular focus on “high-impact AI systems.” These are defined as AI systems that may significantly affect an individual’s life, safety, or fundamental rights. The Act imposes obligations on these systems regarding transparency and safety. It also includes generative AI in its scope, signaling a policy commitment to controlling social risks posed by AI.
Korea’s AI Basic Act adopts a risk-based approach similar to the EU AI Act, by imposing stricter obligations on high-risk AI technologies. This suggests that EU regulations have influenced Korea’s legal framework, making the Korean law a case of the Brussels Effect. However, there are notable differences in the scope and degree of regulation. Korea’s law places relatively more emphasis on industry growth and technology support, whereas the EU’s law emphasizes regulatory control. Still, the Korean law is expected to evolve in alignment with the EU’s direction, promoting the ethical and responsible development of AI technologies.