Prize Competitions

An invasion of armies can be resisted, but not an idea whose time has come.
Victor Hugo

The Challenge

The problem to solve is clear but the solution isn’t

For some problems (e.g., designing a COVID vaccine), there is a natural set of problem solvers (e.g., pharmaceutical companies). For others, it is less obvious, either because the field is nascent, because the means to solve the problem are widely available (e.g., a laptop), or because the problem is interdisciplinary, making it harder to pinpoint a single industry or sector as the natural home for solutions.

The Play

Competitions engage a diverse community of solvers to develop innovative solutions

Prize competitions invite a broad community of individuals and teams to attempt to make progress on a problem. Prize competitions are:

  • Inclusive. They cast a wide net and offer a low barrier to entry to attract diverse, sometimes unexpected talent. This avoids involving only the “usual suspects” and has the additional benefit of building a larger community of practice dedicated to solving the same problem.

  • Flexible. Prize competitions set forth a problem to be solved and the characteristics of a solution, but they are not prescriptive about how the problem should be solved.

  • Force-multiplying. Done well, the benefits of competitions can extend beyond prize money. Winners receive third-party validation of their work, enhancing their credibility. Often, competitions offer innovators feedback, even if they don’t win. They can use this feedback to further develop their innovation after the competition ends.

When to use a competition

Here are some criteria to consider:

Is there a clearly defined, achievable goal?

A prize must have a clearly defined goal that’s within grasp of potential competitors. It should be ambitious but reachable within a given timeframe. For example, the Ansari XPRIZE offered $10 million for the first privately funded team to launch a reusable, manned spacecraft to an altitude of 100 kilometers twice within two weeks.

Is there a need to attract more innovators to solve a particular problem?

Competitions are useful for expanding the range and kinds of talent working to solve a problem. If only a few innovators are capable of solving the problem, the overhead of a competition may not be worth it; in that case, a grant or contract is a more appropriate mechanism.

Are innovators willing to accept the risk of not winning the prize?

Competitions are most effective when they attract a broad and diverse pool of innovators, increasing the likelihood of viable solutions. Widespread participation will only occur if enough innovators determine that their participation is worthwhile, even if they don’t ultimately win. That’s why prize competitions are typically designed with low barriers to entry, like a short application.

The $10 million Ansari XPRIZE was designed to make space travel safer and more affordable.

How to design a competition

1. Define the problem. The need and problem must be clearly articulated. One useful tool for doing this is a target product profile (TPP). A TPP is a strategic document that summarizes the features of an innovation required to address an unmet need. It outlines the desired characteristics of a target product by defining the intended use, target population(s), and other desired attributes, including safety and efficacy-related characteristics. You can read more in our TPP playbook.

2. Determine the target maturity level of solutions. Before launching a competition, the desired readiness of winning solutions must be determined. Competitions can range from crowdsourcing new ideas to incentivizing the development of commercial solutions. The target level of maturity will help to determine factors like the prize amount that can be offered (i.e., more developed solutions merit a larger prize) and the type and breadth of supports the competition will offer.

3. Recruit innovators. A big, complex problem should attract a large competitor pool. Building and maintaining a large funnel of talent is a year-round effort. It involves advertising the competition through social media, targeting relevant industry groups and message boards, and relying on “connectors” who have large networks and are skilled in matching talent with opportunities. Since most competitors won’t submit their proposals until shortly before the competition closes, it’s best to gauge interest early through an eligibility quiz, email sign-up, or live events like webinars and office hours. This gives a sense of how many and what kinds of innovators to expect and allows time to adjust the recruitment strategy, if needed. Once there is a list of interested innovators, competition organizers should communicate regularly with them, providing reminders of deadlines and opportunities to ask questions. With a wide funnel and supportive touchpoints, participants will be guided from hearing about the competition to submitting a strong proposal.

4. Specify evaluation criteria. Clear rubrics should be established to evaluate innovations fairly.

5. Select judges. A competition needs a set of evaluators and judges with expertise in the problem area.

6. Set the prize amount. This should depend on the size and complexity of the problem to solve. Lower amounts are typically offered for ideas while higher amounts make sense for more refined prototypes and products.

7. Consider your policy on intellectual property. While some competitions allow innovators to retain their intellectual property, others do not. Like a competition’s prize amount or submission requirements, its stance on intellectual property will influence innovators’ decision to compete.

8. Design a feedback process. There should be a clear plan and process for the level of feedback and support each participant will receive, as well as how it will be delivered. This allows for a fair process in which every competitor takes away something valuable, even if they don’t ultimately win.

9. Define the post-competition roadmap. A well-designed competition should anticipate what happens after it ends. It should consider what stage winners will be at (e.g., prototype, minimum viable product) and what steps they should take – and what support they will need – to ensure the full development and scale-up of their innovation.

10. Measure impact. The impact of competitions extends beyond the naming of winners. When deciding how to measure the impact of a competition, consider both the quality and quantity of the concepts, prototypes, or products it generates and the broader impact on the field, years beyond the competition’s formal conclusion.

Case Studies

Competitions can vary in size, scope, structure, and sponsor type (e.g., federal government, philanthropy, nonprofit).

DARPA Self-Driving Car Challenge

Launched by the Defense Advanced Research Projects Agency (DARPA), an agency of the United States Department of Defense, the DARPA Self-Driving Car Challenge started with a clear objective: creating a vehicle that could navigate off-road terrain without human intervention.

At the time, fully autonomous cars were far from reality, and it wasn’t clear who would lead this technological leap. By defining the problem and opening it up to anyone with the skill and vision to tackle it, DARPA was able to attract a diverse range of participants, including university teams, startups, and amateur engineers.

The Self-Driving Car Challenge, which offered a $2 million prize to the winner, was divided into two sequential phases:

  • Qualifying Trials: Teams needed to prove their vehicles could meet minimum standards of autonomous function.

  • Main Event: Vehicles navigated a 142-mile desert route within a 10-hour timeframe, without human intervention.

Each phase allowed teams to refine their designs, and DARPA provided constructive feedback to help teams improve, even if they did not advance. This emphasis on learning and iteration meant that even teams who didn’t win took valuable knowledge and experience back to their fields.

"That first competition created a community of innovators, engineers, students, programmers, off-road racers, backyard mechanics, inventors and dreamers who came together to make history by trying to solve a tough technical problem,” said Lt. Col. Scott Wadle, DARPA’s liaison to the U.S. Marine Corps. “The fresh thinking they brought was the spark that has triggered major advances in the development of autonomous robotic ground vehicle technology in the years since.”

Though no team completed the course in the 2004 challenge, DARPA continued the competition in 2005, with five teams successfully completing the course. Sebastian Thrun, leader of the winning 2005 Stanford team, went on to work at Google, where he helped launch the self-driving car project that would become Waymo. Thrun and his team’s work on sensor fusion and mapping technology provided foundational insights that influenced Google’s early efforts in autonomy. Chris Urmson, who later served as CTO of Google’s self-driving car project, the predecessor of Waymo, was part of the 2004 and 2005 Grand Challenges with Carnegie Mellon University’s Red Team. The experience gave him and other engineers critical expertise that translated directly into Waymo’s future developments.

Sebastian Thrun was the leader of the winning Stanford team in the DARPA Self-Driving Car Challenge, seen here with the team’s driverless vehicle | NBC News

Ansari XPRIZE

In 1996, the XPRIZE Foundation launched the Ansari XPRIZE, offering $10 million to the first team that could build a reusable spacecraft capable of carrying three people to an altitude of 100 kilometers (the Kármán line, the boundary of space).

The prize was awarded in 2004 to Mojave Aerospace Ventures for their flight of SpaceShipOne, a project led by Burt Rutan and financed by Paul Allen. SpaceShipOne completed the required two suborbital flights within a two-week period, achieving the challenge of making space accessible with a reusable vehicle. The technology developed by Mojave Aerospace Ventures was later licensed by Virgin Galactic, with Richard Branson's company planning to offer suborbital space tourism.

SpaceShipOne, created by Mojave Aerospace Ventures, won the Ansari XPRIZE | Hiller Aviation Museum

Beyond the technical achievement, those involved highlighted the paradigm shift sparked by the prize. Peter Diamandis, founder of the XPRIZE, said:

"The idea was, if you set up the right financial incentives, you could get private individuals or companies to do what was historically only accomplished by governments."

Tools Competition

The Learning Engineering Tools Competition (Tools Competition) is a global initiative that supports the development of cutting-edge solutions for the most pressing challenges in education. Launched in July 2020, the competition has now run four cycles and named 130 winners from 44 countries who are projected to reach 131+ million learners, from early childhood to adulthood, by 2027. The Learning Agency manages the competition in partnership with academics and Renaissance Philanthropy.

With a mission to advance learning engineering principles, the Tools Competition unites edtech innovators, researchers, and educators to harness big data, enabling a deeper understanding of learning and fostering a culture of continuous improvement and rapid experimentation. By encouraging collaboration across sectors, the Tools Competition generates evidence-based solutions with the potential to reshape the future of education.

Commenting on the 2024 cohort of winners and their impact on the broader learning engineering community, Kumar Garg, President at Renaissance Philanthropy and a founding sponsor of the Tools Competition, said:

"Winning tools stand out in their potential to both transform educational outcomes and build the field of learning engineering. They are demonstrating the power of advanced technology to accelerate learning and are working with researchers to scale their impact for the benefit of the field at large."

This year-long competition follows many of the principles outlined above, including a multi-phase process, allowing competitors to refine their ideas over the course of the competition. Phases include:

  • Phase I: The Abstract. Competitors complete a short eligibility quiz and then submit a brief abstract, which describes the problem they are solving and their innovative solution.

  • Phase II: The Proposal. Competitors advancing to Phase II build on their ideas and develop a full proposal complete with a budget and plan for learning engineering. Expert reviewers provide detailed feedback on proposals, offering expertise spanning education, technology, and research.

  • Phase III: The Pitch. Finalists pitch their ideas live before a panel of expert judges.

To accelerate innovation beyond the scope of the competition, competitors are supported at every stage of the process. Those who don’t advance still receive feedback on their ideas, and all past competitors are eligible to compete in future cycles.

The Tools Competition hosts three prize levels based on the maturity of the innovation:

  • Catalyst Prizes ($50,000) are critical for introducing high-innovation, early-stage ideas to the field—creating a space for those without a functioning tool or previous venture to propose their idea.

  • Growth Prizes ($150,000) are designed for products that are ready to be refined and scaled.

  • Transform Prizes ($300,000) allow established tools with 10,000 or more users to reach new heights.

The Tools Competition is organized into tracks to spur innovation in areas of greatest need, and often also features cross-cutting priority areas or prizes. Focusing competitors’ attention on core areas builds the capacity of the field to solve seemingly intractable problems in education. Past cycles have included topics such as Accelerating & Assessing K-12 Learning; Coaching for Early Childhood Education; and Facilitating Faster, Better & Cheaper Learning Science Research.

After receiving their awards, winners engage in a year-long, post-prize impact evaluation, covering themes including adoption, scale and sustainability, and evidence and impact. These evaluations indicate that winners see notable expansions in the number of users and significant increases in both funding opportunities and partnership engagement. For example, 72% of 2023 winners reported forming new implementation or distribution partnerships in the first 6 months after being awarded.

Short Answer leverages peer-driven formative assessment to help students improve their writing skills | 2022 Tools Competition winner

VITAL Prize Challenge

Three philanthropic organizations and the National Science Foundation (NSF) came together to design and carry out a $6 million, one-year challenge to support the next generation of learning technologies.

The VITAL Prize Challenge (VITAL) was created in response to the COVID-19 pandemic, which disrupted K-12 student learning in profound ways. The pandemic exacerbated gaps in digital access and student achievement – and dampened student performance overall, especially in mathematics. The funding partners shared a vision for improving K-12 STEM education learning by harnessing the talents of researchers, technologists, and educators to design innovative and responsive edtech, particularly for students from marginalized communities.

The public-philanthropic partnership behind this challenge was made possible by NSF’s increasing interest in partnerships, as signaled by its new Technology, Innovation, and Partnerships (TIP) Directorate. At the same time, the participating philanthropies – the Bill & Melinda Gates Foundation, Schmidt Futures, and the Walton Family Foundation – saw the value in partnering with the federal government to amplify and sustain their impact.

This competition invited a wide range of proposals from across the education R&D ecosystem, including universities and startups. Digital Promise, selected by NSF to manage the competition, and the funding partners leveraged their networks to attract over 300 proposals across three focused tracks:

  • Rapid and Continuous Learning Assessment;

  • Mathematical Literacy to Promote a Future STEM Workforce; and

  • Other Innovations in Translational Learning Technologies.

VITAL was a multistage, or down-select, challenge. It first selected 100 Discovery Round teams, which received NSF I-Corps training to help them refine their prototype ideas and assess market fit. Then, 54 teams were chosen for the Semi-Final Round, in which they received $20,000 along with supports like “educator mentors,” opportunities to get feedback from students, and a series of equity-centered trainings. Eighteen teams were awarded an additional $50,000 as finalists and had the opportunity to hone their minimum viable product. The Final Round concluded with a Pitch Session, from which nine winning teams were selected, three from each track, awarded $250,000 (first prize), $150,000 (second prize), or $100,000 (third prize).

Michael Jay, one of the team leads, discussed VITAL’s emphasis on innovations that support historically underrepresented communities and how this informed his team’s product development:

"VITAL helped us look at populations that are both underserved and bring different traditions to learning and teaching. We have begun conversations with tribal schools in the U.S. to determine how we can best infuse our technology with the ability to represent their cultural perspectives. A unique feature of MatchMaker means this same strategy can be used for other underrepresented populations without judgment or impinging on any group’s beliefs. This also supports greater sharing between educators and institutions and generates a greater audience for organizations that create resources to address what are considered niche needs."

This competition not only resulted in the development of new, equity-focused edtech prototypes, but also engaged educators and students in the process of developing learning technologies.

Samuel Reed, a VITAL educator mentor, shared how his participation as a co-designer in this competition ended up informing his teaching practice:

"Through my participation in the VITAL Prize Challenge, I had an important learning experience that shed light on the issue of learner variability. As I delved into the challenge, I discovered a significant flaw in my own school district’s dual enrollment programs. While on the surface, these programs seemed to offer valuable opportunities, they lacked the necessary support to accommodate the diverse learning needs of students. This realization is driving me to advocate for more inclusive and adaptable approaches. I want to reinforce the importance of addressing learner variability and the impact it can have on students’ success in dual enrollment programs."

Kasi Math is a multisensory, inclusive learning system for students who are blind or have low vision | VITAL Prize winner Alchemie

How we can help

Renaissance Philanthropy has experience supporting the design and implementation of private and public competitions.

We can:

  • Help you decide if the problem you seek to solve is ripe for a competition. A variety of instruments can be deployed to solve big, complex problems. We can help you determine if a competition is the right mechanism to achieve the impact you want to make.

  • Connect you with existing high-impact competitions. We have supported a variety of competitions and can provide you with a menu of options for your areas of interest. Renaissance Philanthropy can make connections to help you get involved with existing competitions that align with your organization’s mission.

  • Provide guidance to help you design a new competition. If you have identified a problem that no existing opportunity is designed to solve, the Renaissance Philanthropy team can advise you on how to develop a competition from the ground up. We can offer guidance on:

    • Setting an ambitious yet achievable goal for the competition;

    • Designing the scope of the competition, including specific tracks to encourage certain types of entries;

    • Selecting appropriate amounts for prize money;

    • Developing clear rubrics to evaluate competitors’ proposals and progress;

    • Recruiting a diverse and expert panel of evaluators or judges;

    • Creating a promotional plan to attract a wide and diverse pool of competitors;

    • Determining the type and level of feedback and support to provide competitors; and

    • Planning next steps for competition winners.
