Open Source Software, Public Policy, and the Stakes of Getting It Right

Open Source software plays a central role in global innovation, research, and economic growth. That statement is familiar to anyone working in technology, but the scale of its impact is still startling. A 2024 Harvard-backed study estimates that the demand-side value of the Open Source ecosystem is approximately $8.8 trillion, and that companies would need to spend 3.5 times more on software if Open Source did not exist.

Those numbers underscore a simple truth: Open Source is not a niche concern or a developer-only issue. It is economic infrastructure. And like any critical infrastructure, it depends not only on technical excellence, but on policy environments that understand how it works.

This reality sits at the center of the Open Source Initiative’s (OSI) expanding work in public policy, a move that reflects how deeply Open Source is now entangled with global regulation, security, and emerging technologies like AI.

From License Stewardship to Policy Engagement

For decades, OSI has been best known as the steward of the Open Source Definition and the organization responsible for approving Open Source licenses. That work continues today, supported by an active license review committee that evaluates submissions against the definition to ensure software remains genuinely open.

But as Katie Steen-James, Senior U.S. Policy Manager at OSI, explains, the organization’s role has necessarily expanded. Over the past few years, OSI has built out a dedicated public policy function in both the United States and Europe. 

The motivation is straightforward: lawmakers are increasingly writing legislation that touches software, security, and AI—often without a clear understanding of how Open Source development actually works. The risk isn’t usually malice. It’s misunderstanding.

As Steen-James puts it, policy can inadvertently restrict Open Source by imposing obligations that don’t align with Open Source licensing or development models. Once enacted, those policies can have a chilling effect on contributors and communities who lack the resources to navigate regulatory complexity.

Educating Policymakers Before Damage Is Done

A core part of OSI’s policy work is education. This includes meeting directly with lawmakers, responding to public consultations and requests for information, and producing short, targeted resources designed specifically for policy audiences.

The goal isn’t to lobby for a particular regulatory outcome, but to ensure that policymakers understand:

  • How Open Source licenses function
  • The difference between developers and deployers
  • Why imposing downstream restrictions on Open Source software often doesn’t make sense

These distinctions matter deeply in areas like software liability, security obligations, and AI governance. Without them, legislation can unintentionally place responsibility on upstream Open Source developers, many of whom are volunteers, rather than on the entities that deploy and profit from software systems.

Open Source and the AI Policy Moment

AI has accelerated the urgency of this work. OSI released its Open Source AI Definition in October 2024, following an extensive community process.

The AI ecosystem is moving quickly, and policymakers are under pressure to act. OSI’s role has been to ensure that Open Source AI is part of the conversation—not as an afterthought, but as a foundational concept.

In both the U.S. and Europe, OSI is monitoring AI-related legislation and implementation efforts, including:

  • The White House AI Action Plan and AI R&D Strategy in the U.S.
  • The EU AI Act and its associated codes of practice
  • U.S. state-level AI legislation, where many early regulatory experiments are happening

At the U.S. state level in particular, OSI has seen proposed language that could restrict downstream use of AI systems without accounting for Open Source licenses. OSI does not take a position on whether AI should or should not be regulated—but it does work to ensure that regulators understand the implications of their choices for Open Source ecosystems.

Building Coalitions, Not Just Positions

Recognizing that OSI cannot and should not speak alone, the organization also leads the Open Policy Alliance, a coalition of nonprofit organizations that want to engage in U.S. policy discussions around Open Source. Membership is free, intentionally lowering the barrier for under-resourced communities to participate.

The alliance model reflects a broader philosophy: policy engagement should not be limited to large corporations or well-funded organizations. Open Source thrives because of diversity—of contributors, institutions, and use cases. Policy conversations need that same diversity to be effective.

Looking Ahead: Stewardship in a Fast-Moving World

OSI’s work on Open Source AI is far from finished. Looking ahead, the organization plans to:

  • Track real-world AI systems that meet the Open Source AI Definition
  • Study how the term “open” is being used (and misused) across AI releases
  • Convene expert and community-led groups to evaluate evolving practices
  • Steward future updates to the Open Source AI Definition through transparent processes

All of this points to a larger shift in how we should think about Open Source governance. As software becomes more deeply embedded in economic, social, and governmental systems, stewardship extends beyond code. It includes education, policy fluency, and sustained engagement with institutions that shape the rules of the digital world.

Open Source has delivered extraordinary value—measured not just in trillions of dollars, but in innovation, resilience, and shared progress. Ensuring it can continue to do so will require attention not only from developers but from policymakers, foundations, and communities willing to engage before decisions are made for them.

Acknowledgement

This post was authored by Rachel Roumeliotis, Principal at Punch Tape Consulting, and is based on a video recording from the virtual event Open Source in 2026, which she organized and hosted. The content draws from a session presented by Katie Steen-James during the event. Watch the video recording on YouTube.

Video recording transcript

Katie: Okay, perfect. Well, thank you so much, Rachel. And good morning, good afternoon to everyone. Happy 2026.

As Rachel said, my name is Katie Steen-James. I’m the Senior U.S. Policy Manager for the Open Source Initiative, or OSI. We’re a nonprofit, and we’ve been the stewards of the Open Source Definition since the late 1990s.

What I’m going to talk about today is how OSI’s work has expanded over the last few years. We continue to focus on the benefits of open source and, of course, on approving licenses against the Open Source Definition, but we’ve also expanded to work much more on public policy in Europe and the United States. I’ll talk a little bit about that today.

These are our main projects at OSI: licensing and legal, which most people are probably aware of and which includes approving licenses against the Open Source Definition; policy and standards work, which I’ll mostly focus on today; and advocacy and outreach, which I’ll also touch on briefly. This is just a general overview of what we work on at OSI.

As I mentioned, OSI-approved licenses are probably what most of you know us for. That’s something we continue to do. We have a license committee that reviews licenses submitted to it and approves them against the Open Source Definition.

On the policy side, I want to briefly talk about who we are. As I said, my name is Katie, and I’m the Senior U.S. Policy Manager. I have my colleague Jordan, who is my counterpart in the European Union. Many of you also know Simon Phipps, who has been in the community for a very long time and, in a part-time capacity, continues to work on European standards for us.

Someone who isn’t on this slide but who I’d be remiss not to mention is Deborah Bryant. She led U.S. policy work in a part-time capacity before I was hired just under a year ago. Deb has also stepped up as Interim Executive Director for OSI and continues to support our public policy work.

OSI went from having two part-time policy contributors—Simon and Deb—to hiring Jordan and myself full-time to really expand the work and impact on the policy side.

So what are we actually doing, and what are our roles? I want to share a paraphrase from a blog post I wrote, which is on our website. It essentially encapsulates what we do: we educate policymakers about the benefits of open source software, track policy developments, and ultimately ensure that open source developers can continue doing their work.

This work is really for the community. We want to make sure you can keep doing what you do. Part of that is making sure people understand the benefits of open source and how it works. Often, policymakers don’t intend to restrict open source, but they accidentally do because they don’t understand how open source licenses function. Our role is to help prevent that by making sure people understand how open source software works.

We also continue to promote our Open Source AI Definition. I think most people here are familiar with it and the process we undertook, culminating in its release in October 2024. Now, our work focuses on making sure policymakers and civil society organizations know that the definition exists. Awareness is one of the biggest challenges, especially given how fast the AI ecosystem is moving. We also encourage AI policies to use the Open Source AI Definition as a foundation.

In terms of activities, this work includes meeting with lawmakers; responding to public consultations and requests for information (public consultations in Europe, RFIs in the U.S.); continuing leadership and engagement on standards, especially in Europe, which is part of Simon’s portfolio; coalition leadership, both formal and informal; and developing educational resources on the intersection of open source and policy. These resources are short and tailored to policymakers to help them quickly understand the impact of open source.

Our current focus in Europe includes promoting the Open Source AI Definition, working on implementation of the Cyber Resilience Act (CRA) to ensure it is open source–friendly, engaging on standards with open source in mind, informing lawmakers about open source, and strengthening OSI’s network in Brussels. Another important area I didn’t include on the slide is implementation of the AI Act, particularly the Code of Practice, where Jordan is working to ensure the open source voice is heard.

In the United States, we’re also promoting the Open Source AI Definition and working to strengthen the Open Policy Alliance. The Open Policy Alliance is a coalition of nonprofit organizations that want to participate in U.S. policy discussions. If you’re a nonprofit and want to get more involved in U.S. policy work, I strongly encourage you to join—it’s free. I lead this coalition, and we’re working to strengthen it through more regular meetings and engagement at both the federal and state levels.

We’re also monitoring implementation of the White House AI Action Plan and the forthcoming AI R&D Strategy, and engaging in public comment processes. The AI Action Plan, released in July, includes provisions that explicitly reference open source and open model AI systems. One of my roles is tracking how federal agencies implement those provisions. We also submitted a public comment, together with the Open Forum for AI at Carnegie Mellon University, on what an AI R&D strategy should include. We emphasized the benefits of open source AI for transparency and innovation.

Another focus is tracking security provisions in federal legislation and their potential impact on open source software. Security and openness often raise questions, especially in large bills like the annual defense policy bill. We monitor these closely to ensure they don’t unintentionally harm open source.

A newer area of work for us is tracking state-level AI legislation. Because there hasn’t been much movement at the federal level, many states are pursuing their own AI regulations. Sometimes we see language that restricts downstream use of AI systems without carve-outs for open source. As we know, restrictions on downstream use don’t align with how open source licenses work. We don’t take a position on whether AI should be regulated; we simply want policymakers to understand the potential impact on open source.

One example of this work is a two-page educational resource on our website aimed at U.S. policymakers, outlining considerations for AI regulation and its impact on open source software. It’s not lobbying—just education.

This connects to similar approaches we’ve taken in Europe, particularly around distinguishing between developers and deployers. If there is liability, it should rest with deployers, not upstream open source developers, to avoid a chilling effect on development.

Finally, I want to talk about what’s next for the Open Source AI Definition. Since its release in October 2024, our next steps include continuing to monitor the AI space; examining real-world use cases of systems that meet the definition; understanding who is releasing what, and under what conditions; and looking beyond large language models to other types of AI systems.

We’re also collaborating with a Duke master’s student, Gabriel Toscano, who was an intern with us over the summer and is researching the use of the word “open” in AI models and the licenses associated with them.

We plan to establish an expert working group to evaluate specific model releases, which would eventually evolve into a community-led entity that OSI participates in but does not lead. That group would steward updates to the Open Source AI Definition, address issues that have emerged since its release, and ultimately help us publish a new version of the definition.

That’s my last slide, and I think I’m right on time. Thank you so much for listening. I’ll check the chat now and am happy to answer any questions. With that, I’ll turn it back to Rachel.


Rachel: Thank you so much. What I kept thinking while you were talking was just how much really goes into open source. You covered it all in maybe 15 minutes, and yet the work is global and incredibly important. Do you need help?

Katie: Yes—especially in the U.S. The Open Policy Alliance is a great way to get involved. It’s free for nonprofits to join. Deb from the Python Software Foundation and Ruth from the Apache Software Foundation are already members. It’s not a heavy lift, and it allows under-resourced organizations to engage in policy work without having in-house policy staff.

Rachel: Do you interact with OSPOs a lot?

Katie: Yes. Through the Open Forum for AI at Carnegie Mellon, which brings an academic perspective to open source AI, and also with OSPOs in industry. I’m especially interested in OSPOs in government. The UN has been active here, and I’ve suggested this model to state lawmakers as a way to better serve citizens using open source.

Rachel: Maybe we’ll do an OSPO event later this year.

Katie: That would be great.

Rachel: Thank you so much for taking the time. If anyone has questions for Katie, please put them in the chat. These sessions will be recorded, so be sure to share them. Katie, farewell.

Katie: Thank you. Bye.