Redefining AI Regulation: The Need for Inclusivity Beyond Elite Circles

by Sanjay Puri

Since OpenAI introduced ChatGPT in late 2022, generative AI has sparked debate about its potential implications for the future that is more controversial and intense than ever. AI regulation should be approached as a collective pursuit, requiring the active engagement of all relevant stakeholders.

Today, this is not the case: large corporations and elite academic institutions dominate these discussions. A review of the congressional hearings shows that several critical stakeholders are excluded; small businesses, manufacturing companies, the healthcare community, and community colleges are noticeably absent from these vital conversations.

So, what are the implications of this exclusion, and how might it shape the future of AI regulation? Most importantly, how can we ensure that these frameworks are both technologically sound and socially equitable? We cannot afford a repeat of the digital divide, in which the great academic centers and companies of the two coasts drove the conversation. Instead, we must include all of these voices in the discussions that will eventually produce a framework for AI.


Examining the Status Quo of AI Regulation


President Biden's Executive Order (EO) on Safe, Secure, and Trustworthy AI, issued on 30 October 2023, underscores that Congress has yet to regulate advanced digital technologies, especially AI, effectively. The EO introduces red-teaming to stress-test AI systems for safety and integrates economic and national security agendas. However, this heavy reliance on executive power highlights a significant challenge: without congressional action, the EO's reach is limited, particularly in areas like privacy. The EO also touches on social issues such as discrimination and workers' rights, but it offers principles and best practices rather than concrete solutions.

Introduced in June 2023 but not taking effect until 2025, the EU's AI Act states, 'The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence.' Regulations aligned with levels of AI risk should also account for the capabilities of different stakeholders. Stringent requirements for high-risk AI could disproportionately burden entities that lack those capabilities, hindering their ability to innovate and compete. Minimal-risk categories, though less restrictive, still require an awareness and understanding that some stakeholders may not readily possess.

Listening to excluded stakeholders such as small businesses, community colleges, the healthcare community, and manufacturing companies could lead to more nuanced regulations that support innovation while maintaining safety and ethical standards. Although AI is transforming diagnosis, treatment regimens, and patient care, medical practitioners are not represented in AI regulation talks despite the technology's impact on their field. Healthcare professionals' insights and experience are essential to defining safe, ethical, and effective AI regulations in medical contexts. Their absence from these discussions creates a mismatch between AI research and healthcare applications, producing policies that do not properly meet medical needs. We must close this gap and include medical experts in the debate over AI regulation in healthcare to maximize its advantages and, most importantly, protect patients' lives.

How would this work for small businesses, manufacturing companies, and community colleges?


Small Businesses – Key to the Economy, but Not to AI Regulation


Approximately 33.2 million small businesses operate in the United States, comprising 99.9% of all U.S. companies (SBA, 2022). Paradoxically, these businesses have no voice in the evolution of AI regulation. Stringent regulations might set the bar too high for small companies or startups to enter the market, limiting competition and reinforcing the dominance of established players.

In turn, this scenario can lead to a market where innovation is stifled, consumer choice is limited, and the economic benefits of AI are never fully realized across the broader business community. Because small businesses are excluded from the conversation, the resulting regulatory framework may be excessively complex or demanding. In other words, it may favor larger corporations, which have more resources and capabilities to navigate and comply with such regulations.

Small businesses are often the birthplace of innovation, bringing fresh perspectives and novel solutions to the market. Yet overburdening them with ill-suited regulations could dampen that innovative spirit, limiting the diversity and dynamism they drive.

Excluding small businesses from AI regulatory conversations also means failing to represent diverse business needs and models. Small businesses operate differently from large corporations, with different customer interactions, operational scales, and market niches. Regulations crafted without their input risk being one-dimensional, tailored to the working models of larger entities.

Manufacturing Companies – How Can We Build Machines Without Listening to Their Creators?


AI regulation will directly affect how manufacturing companies deploy most new technologies, with consequences for efficiency, worker safety, and job dynamics. Without this sector's voice, AI regulation risks overlooking critical industry-specific concerns.

Excluding representatives of the manufacturing sector may result in AI policies that inadequately address the unique challenges and opportunities the sector encounters. That exclusion could culminate in regulations that fail to protect workers from job displacement and skill mismatches, and that fail to fully integrate AI's potential into manufacturing processes.

Manufacturing companies are well positioned to provide practical insights into implementing AI technologies, ensuring that regulations are grounded in real-world industrial applications. Their involvement is essential to establishing a well-rounded, informed, and efficient regulatory structure that promotes sustainable development and innovation within the sector.

Community Colleges – The Oversight of Excluding a Major Segment of Educational Institutions


Community colleges pave pathways to learning and career growth for students of all backgrounds, most of whom cannot afford an expensive four-year college. According to a Community College Research Center (CCRC) study, community colleges enrolled 8.9 million students during the 2020–21 academic year, about 41% of the total undergraduate population. Considering their widespread and often undervalued impact, community colleges must be part of the conversation around AI regulation.

Community colleges excel at providing the hands-on, industry-relevant education a workforce needs to adapt and succeed in an AI-driven future. Excluding them from AI policy conversations could lead to an imbalanced world in which the advantages of AI development are available only to the wealthy, leaving most of the world's population out in the cold.

Including only elite universities like Stanford, MIT, and Carnegie Mellon in the discussion around AI legislation means losing the significant insight offered by the diverse voices of community colleges. Focusing solely on top-tier universities risks fostering academic elitism and overlooking the practical, hands-on experience that community colleges provide.

Including community colleges is about more than doing the right thing; it also meets a strategic need to create a workforce that can respond effectively to the opportunities and threats posed by AI. By broadening the conversation to include these institutions, AI regulations can become more representative, equitable, and effective, serving a broader range of societal needs and workforce realities.


Pursuing Co-Creation in AI Regulation Development

The solution? Actively choosing a genuine co-creation strategy ensures that diverse viewpoints shape the resulting AI regulation and that it corresponds with the collective objectives, values, and circumstances of all relevant parties, especially those excluded until now.

Moreover, including relevant stakeholders at every phase of regulation development, from initial drafting to testing and refinement, cultivates a sense of trust, ownership, and empowerment among all involved parties.
