New to iGEM? Start here.
If you've never heard of iGEM before this week, read this page and you'll have the mental model you need to understand every other page. The rest of the guide assumes you know the vocabulary introduced here and the one big idea.
iGEM is an engineering competition, not a science fair. Judges do not score us on whether our biology worked. They score us on whether we designed, built, tested, and iterated like engineers — and whether we documented it in the right place.
A well-documented failure with clear iteration scores higher than an undocumented success. Internalize this sentence before you do anything else.
The one big idea: document it on the right page
Judges are not obligated to look beyond the Standard URL pages. If you do brilliant work but document it on the wrong page, or only in your lab notebook, it may never be scored. Every experiment, every interview, every model must be mapped to a Standard URL.
The competition
iGEM has been running since 2004 out of MIT. Over 400 teams from 40+ countries compete each year. Teams design, build, and test biological systems using standard biological parts, then present at the Grand Jamboree. Evaluation spans engineering rigor, human-practices integration, wiki documentation, presentation quality, and community contribution.
1.1 Divisions
Three competitive divisions, each judged separately:
| Division | Who | Notes |
|---|---|---|
| High School | Pre-university students | Smaller scale projects expected |
| Undergraduate | Bachelor's-level students | Largest and most competitive division |
| Overgraduate | Master's / PhD / postdoc teams | Highest technical bar, deeper analysis expected |
UH iGEM 2026 competes as Overgraduate. This raises expectations for technical depth, modeling sophistication, and proof-of-concept rigor. Calibrate your work accordingly.
1.2 Villages (thematic categories)
Each team selects one Village. It determines the peer group we compete against for Village Awards, and frames how judges evaluate real-world impact.
| Village | Focus area |
|---|---|
| Diagnostics | Disease detection, biosensors, point-of-care tools |
| Therapeutics | Drug delivery, engineered therapies, probiotic interventions |
| Infectious Diseases | Antimicrobials, phage therapy, pathogen detection |
| Oncology | Cancer detection / treatment, tumor-targeting systems |
| Agriculture | Crop improvement, soil health, pest management |
| Food & Nutrition | Food safety, nutritional enhancement, fermentation |
| Climate Crisis | Carbon capture, bioremediation, sustainable materials |
| Environment | Pollution cleanup, ecosystem monitoring, biosensors |
| Foundational Advance | New tools, methods, or fundamental knowledge for synbio |
| Biomanufacturing | Industrial bioprocesses, metabolic engineering |
| Software & AI | Computational tools, modeling platforms, AI applications |
1.3 Medal criteria — Bronze, Silver, Gold
Medals are cumulative. Gold requires all Bronze + all Silver + all Gold criteria. Every criterion must be documented on the correct Standard URL page.
Bronze
- Deliverables. Complete all required items (wiki, video, judging form, safety forms, roster, attribution).
- Wiki. Functional team wiki at the correct Standard URL with all required pages.
- Attribution. Clear attribution of all work — who did what, external help, commercially obtained materials.
- Project description. Clear statement of what we're trying to achieve and why.
- Contribution. Something future iGEM teams can build on.
Silver
- Engineering success. Demonstrate the design → build → test → learn cycle with evidence of iteration based on data.
- Human Practices. Show how external input changed our design. A feedback loop — not just outreach.
Gold
- Proof of concept. Experimental validation that the project works as intended — at minimum proof-of-principle.
- Specialization excellence. Excel in up to 3 Special Prize areas. High quality bar.
- Integration. Deep integration between engineering, human practices, and narrative.
Our proof-of-concept is the C. elegans lifespan assay. Even partial results satisfy this if we frame them as iterative engineering with clear next steps. Our FBA analysis is a massive asset for the engineering-documentation criterion.
Where we're competing
2.1 Village choice — decision pending
Our village has not been selected yet. Village choice depends on technical decisions still being finalized — final project scope, chosen constructs, and modeling direction. The Village Selection Freeze lands in June 2026, so this must be resolved in Phase 1. Final call rests with the Project Lead and PI once the technical plan lands.
Village selection is a strategic decision with direct impact on our competitive position. Rather than recommend a village prematurely, the framework below is what we'll use to make the call once the technical plan is locked.
How we'll decide
| Factor | What to evaluate |
|---|---|
| Narrative fit | Which village does our story sit in most naturally — therapeutic framing? toolkit framing? environmental? |
| Judge expertise | Do the village judges reward what our project does best (circuit work? clinical translation? modeling?) |
| Competition density | How crowded is the village? Smaller villages mean fewer competitors for the village award |
| Award alignment | Which Special Prizes are common in that village? Do they align with our strongest work? |
| Precedent | What has recently won Grand Prize from this village? Does our shape resemble those winners? |
Once the technical plan is finalized, this section will be replaced with a concrete recommendation and the reasoning behind it.
2.2 Special prize targets — decision pending
Special Prize targets depend on our final technical scope and village choice. We can target up to 3. The table below is the full menu — we'll narrow to three once we know which areas our actual project can realistically win.
Each prize is awarded once per division (High School, Undergraduate, Overgraduate), so three winners per prize overall. The full menu:
| Prize | What judges want |
|---|---|
| Best Model | Mathematical / computational model that informs system design |
| Best New Part | One outstanding new BioBrick with excellent characterization |
| Best Human Practices | Exceptional integration of stakeholder feedback into design |
| Best Presentation | Outstanding Jamboree talk with a clear narrative |
| Best Wiki | Excellence in documentation, design, navigation, completeness |
| Best Part Collection | Outstanding collection of related, well-documented parts |
| Best Software Tool | Computational tool useful to other teams |
| Measurement | Improved measurement approaches for parts characterization |
| Education | Innovative educational tools or outreach activities |
| Inclusivity | Exceptional efforts to include diverse identities |
When the technical plan lands, we'll pick three targets, mark them here with our rationale, and align all four phases of work against them.
What we must deliver
3.1 Mandatory deliverables
Missing any single item can disqualify us from medal consideration. The required deliverables are the ones named under the Bronze criteria above: team wiki, videos, judging form, safety forms, team roster, and attribution. Print this list. Pin it up.
3.2 Standard wiki URLs
Each wiki page has a fixed URL that judges will check. Document work on the correct page or it may not be evaluated.
| URL suffix | Page content | Scoring level |
|---|---|---|
| /description | Project description | Bronze |
| /contribution | Contribution to iGEM community | Bronze |
| /attributions | Attribution | Bronze |
| /engineering | Engineering Success | Silver |
| /human-practices | Human Practices | Silver |
| /model | Modeling | Special Prize |
| /parts | Parts overview / collection | Special Prize |
| /safety | Safety | Recommended |
| /experiments | Experiments / protocols | Recommended |
| /results | Results | Recommended |
| /notebook | Lab notebook | Recommended |
| /implementation | Proposed implementation | Recommended |
Documentation is everything. The best experimental result in the world is worthless if it isn't on the right wiki page at the right Standard URL. Every piece of work must be mapped to its corresponding Standard URL page, ideally when it happens — not in October.
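The table above lends itself to a simple completeness check. A minimal sketch, assuming the wiki repo tracks which suffixes have drafted pages — the `REQUIRED` mapping mirrors the table, and the `drafted` set is an illustrative example, not our real state:

```python
# Completeness check for the Standard URL pages in the table above.
# Suffix lists mirror the table; `drafted` is a hypothetical example.

REQUIRED = {
    "bronze": ["/description", "/contribution", "/attributions"],
    "silver": ["/engineering", "/human-practices"],
    "special": ["/model", "/parts"],
    "recommended": ["/safety", "/experiments", "/results",
                    "/notebook", "/implementation"],
}

def missing_pages(documented, levels=("bronze", "silver")):
    """Return required suffixes (for the given levels) not yet documented."""
    required = [s for level in levels for s in REQUIRED[level]]
    return [s for s in required if s not in set(documented)]

# Example: a wiki with only the Bronze pages drafted
drafted = {"/description", "/contribution", "/attributions"}
print(missing_pages(drafted))  # → ['/engineering', '/human-practices']
```

Run against the real page list before each internal deadline; anything it prints is unscored work waiting to happen.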
How teams win
4.1 Grand Prize patterns — 2023–2025 analysis
Grand Prizes (the BioBrick Trophies) go to the highest-ranked team in each division. Three recent case studies:
2023 — McGill (Proteus)
Village: Therapeutics. Modular chimeric fusion proteins to selectively kill cancer cells. Also won Best Therapeutics, Best Part Collection, Best Presentation. First Canadian team to win Grand Prize, in only their second year competing.
2024 — Heidelberg (PICasSO)
Village: Foundational Advance. Built a pioneering toolbox for rearranging genome 3D architecture using CRISPR/Cas-mediated spatial engineering. Also won Best Foundational Advance, Best Parts Collection, Best Model, Best Wiki. Key success factor: extraordinary depth across every dimension.
2025 — McGill (UG) & Brno (OG)
McGill repeated — this time from Foundational Advance. Brno (Czech Republic) took Overgraduate in Agriculture with deep computational + wet-lab integration.
The patterns that show up every year
| Pattern | What it means for us |
|---|---|
| Deep modeling | Every Grand Prize winner had competition-leading computational work |
| Parts collection | Winners submit large, well-documented collections |
| Presentation excellence | McGill won Best Presentation in 2023. Rehearse relentlessly. |
| Wiki as masterpiece | Heidelberg 2024's wiki is considered the best in competition history |
| Multi-prize stacking | Grand Prize winners typically also win 3–4 Special Prizes |
| Iterative engineering | Clear design → build → test → learn cycles with documented failures |
| Real stakeholder integration | External feedback that changed the design, not perfunctory outreach |
4.2 UH's position — assessment framework
A specific strength / gap assessment depends on our final technical scope. The framework below is what we'll use to do an honest self-assessment once the project plan is locked. Fill it in with PI + leads in Phase 1.
Where we're strong — dimensions to assess
| Dimension | What to evaluate |
|---|---|
| Computational depth | Is our modeling competition-leading? Does it drive design decisions? |
| Novel circuit / parts | Do we have a genuinely new, characterizable part that other teams could reuse? |
| Biosafety architecture | Is our containment story layered and defensible? |
| HP framework | Is there a visible feedback loop — stakeholder input that changed the design? |
| Scope discipline | Is our framing tight enough to prevent impossible translational questions from judges? |
| Modularity | Do our parts and methods form a natural collection other teams can adapt? |
Where we need attention — categories to watch
| Category | What to watch |
|---|---|
| Open technical decisions | Every unresolved "which approach" call blocks downstream planning. Resolve early in Phase 1. |
| Wet-lab timeline realism | Multi-step construction plans need slack built in. What's the critical path? |
| Assay execution windows | Long-running assays limit iteration count. Start as early as possible, plan for 2–3 attempts max. |
| Wiki polish | Professional design and interactive elements take longer than expected. Start before content is final. |
| Presentation rehearsal | Start scripting in September, not October. Rehearsal frequency separates medals from prizes. |
| Parts documentation | Every part needs sequence + characterization data. Backlog grows fast if deferred. |
When the technical plan lands, replace these frameworks with a concrete strength / gap list, priorities attached.
The UH playbook
Win Gold + Best Model + Best New Part + Best Human Practices. Compete for Grand Prize in our chosen village.
5.1 The four phases
Phase 1 — Foundation (April – May)
- Finalize experimental design with PI + leads. Lock scope, approach, and measurable outputs before any wet-lab work begins.
- Complete IRB / IBC approvals once experimental design is locked.
- Finalize and send HP outreach emails. Schedule stakeholder interviews for May–June.
- Begin wiki infrastructure — choose framework, set up repo, create Standard URL stubs.
- Decide and submit Village Selection with PI. Submit Safety Forms.
- Order DNA synthesis and reagents once design is locked.
- Set up modeling environment as a reproducible package.
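"Reproducible package" in the last bullet can start as simply as pinning the exact versions the model was run with. A minimal sketch, assuming a Python modeling stack — the package names and output file are illustrative, not a prescribed toolchain:

```python
# Sketch: record the exact modeling environment so results are reproducible.
# Assumes the modeling stack is Python; package names are illustrative.
from importlib import metadata

def pin_environment(packages):
    """Return 'name==version' lines for the packages the model depends on."""
    lines = []
    for name in packages:
        try:
            lines.append(f"{name}=={metadata.version(name)}")
        except metadata.PackageNotFoundError:
            lines.append(f"# {name}: not installed")
    return lines

# Write the pins next to the model code and commit them, e.g.:
# Path("requirements.txt").write_text("\n".join(pin_environment(["numpy", "cobra"])))
```

Committing the pinned file alongside the model means a judge (or a future teammate) can rebuild the exact environment that produced any figure on the wiki.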
Phase 2 — Build (June – July)
- Wet-lab sprint: begin construction per finalized design. Prioritize highest-impact pieces first.
- Build characterization constructs independently so each part can stand on its own as a contribution.
- Conduct HP interviews. Document every conversation. Update the Design Change Log.
- Develop modeling wiki page content in parallel with wet-lab work.
- Begin Parts Registry documentation for each completed construct.
- Education outreach: run at least one synthetic-biology workshop.
Phase 3 — Test & iterate (August – September)
- Analytical validation of key outputs. Compare measurements to model predictions.
- Characterize each part under varying conditions to produce Registry-ready data.
- Long-running assays — start as early as possible. Plan for 2–3 attempts max.
- Document every iteration on the Engineering Success page in real time.
- Complete HP stakeholder engagement. Synthesize the Design Change Log into a narrative.
- Target 60 % of wiki content drafted by end of September.
Phase 4 — Polish & present (October – November)
- Wiki sprint: finalize all Standard URL pages. Figures, interactive elements, references.
- Complete Parts Registry pages with full characterization data.
- Record the Presentation Video. Script it. Rehearse 20+ times.
- Record the Project Promotion Video (2–3 min, public audience).
- Submit the Judging Form with clear links to evidence for every criterion.
- Wiki Freeze: nothing can change after this date.
- Jamboree (Nov 13–16, Paris): present, engage judges, network.
5.2 Timeline & deadlines
All freeze deadlines are hard — they will not be extended. Freeze time is 15:00 UTC. Check competition.igem.org/calendar for exact 2026 dates.
| When | What UH should be doing | Competition deadline |
|---|---|---|
| April 2026 | Finalize experimental design, begin wiki setup | Team Roster due |
| May 2026 | IRB / IBC approval, HP outreach begins, reagent ordering | Safety Form deadline |
| June 2026 | Village Selection, wet-lab construction, HP interviews | Village Selection Freeze |
| July 2026 | Peak wet-lab sprint, modeling refinement, parts docs | — |
| August 2026 | Validation assays, iteration, long-running experiments | Midpoint check-in |
| September 2026 | Wiki writing sprint, poster design, presentation scripting | Begin video recording |
| October 2026 | Wiki finalization, video submission, judging form | Wiki / Video / Registry Freeze |
| Nov 13–16 | Grand Jamboree in Paris | Competition |
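Since freezes land at 15:00 UTC, it's worth computing buffers in UTC rather than local time. A minimal sketch — the dates below are placeholders, so confirm the real 2026 dates on competition.igem.org/calendar before relying on them:

```python
# Sketch: time remaining until a freeze, which lands at 15:00 UTC.
# Dates are placeholders; check competition.igem.org/calendar for real ones.
from datetime import datetime, timezone

def time_until_freeze(freeze, now=None):
    """Timedelta from `now` (default: current UTC time) to the freeze."""
    now = now or datetime.now(timezone.utc)
    return freeze - now

wiki_freeze = datetime(2026, 10, 1, 15, 0, tzinfo=timezone.utc)  # placeholder
internal = datetime(2026, 9, 17, 15, 0, tzinfo=timezone.utc)     # placeholder
print(time_until_freeze(wiki_freeze, now=internal).days)  # → 14 days of buffer
```

Keeping the internal deadline two weeks ahead of the real freeze (as the risk register below recommends) makes that buffer explicit and checkable.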
Risk register
Every risk below has been explicitly acknowledged with a contingency plan. When one of these fires, reach for the plan — don't improvise.
| Risk | Likelihood | Contingency |
|---|---|---|
| Construction / build fails | HIGH | Prioritize work by modeled impact. Have a minimum viable subset the team can still deliver. |
| Primary outcome assay shows no effect | MEDIUM | Reframe around the engineering goal. Show intermediate data (production, expression, activity). |
| A circuit or part doesn't function as designed | MEDIUM | Characterize each input / component independently. The part itself is still a contribution. |
| Wiki not done by freeze | HIGH | Start content early (June). Internal deadline 2 weeks before actual freeze. |
| Team member drops out | MEDIUM | Document all work. Cross-train. No single points of failure. |
| Funding runs short | MEDIUM | Apply for iGEM grants (Zymo, IDT, NEB). Corporate sponsorships. |
| Safety concern raised by judges or reviewers | LOW | Address proactively in the HP framework. Be transparent about containment and rationale. |
| Judges question project framing | MEDIUM | Stay disciplined with language. A clear non-claims list is your armor. |
Your first week
New to the team? Find your role below. In your first week, finish everything in your card and read this guide cover-to-cover. Then schedule a 1:1 with the Project Coordinator (Dr. Windham).
All new members
Day 1–7 basics
- Read this guide cover-to-cover
- Sign in to Team Dashboards and introduce yourself in your role dashboard
- Access the shared Google Drive / GitLab
- Open your role's dashboard and bookmark it
- Complete iGEM safety training
- Read one recent Grand Prize wiki end-to-end (Heidelberg 2024 recommended)
Wet lab
Know this by Friday
- Who your wet-lab lead is, and how to reach them
- Lab-access, training, and safety requirements you still need
- Where experiments and protocols are documented
- How to request reagents and consumables
- Which medal criteria your work contributes to
Modeling / dry lab
Know this by Friday
- Who your modeling lead is, and how to reach them
- How modeling work gets shared with wet lab
- Where code and notebooks live, and how to contribute
- Which wiki pages modeling work will land on
- Which medal criteria / Special Prizes your work supports
Human Practices
Know this by Friday
- Who your HP lead is (May Shin Thant), and how to reach them
- Your mentor for HP work is Dr. Windham — introduce yourself in your first week
- What a feedback loop looks like (it's not outreach)
- Where stakeholder conversations get logged
- How to propose a new stakeholder to reach out to
- Which medal criteria HP work is graded against
Wiki / documentation
Know this by Friday
- Who your wiki lead is, and how to reach them
- Your mentor for wiki work is Dr. Windham — introduce yourself in your first week
- How to edit the wiki and preview your changes
- The Standard URL pages every iGEM wiki must have
- When the Wiki Freeze lands and our internal deadline
- What "competition-grade polish" looks like (study a recent Grand Prize wiki)
Outreach / education
Know this by Friday
- Who your outreach lead is, and how to reach them
- What events are already on the calendar
- How to document an event (photos, attendance, impact)
- Where outreach / education write-ups live
- Which medal criteria and Special Prizes this work supports
The bottom line
UH iGEM 2026 has the raw materials to compete for Gold and Special Prizes. Our computational depth rivals that of recent Grand Prize winners. Our circuit design is genuinely novel. Our HP framework is more thoughtful than most teams attempt.
What separates Gold from Grand Prize is execution and polish. The teams that win don't have fundamentally better science — they have better documentation, better presentations, better wikis, and more complete engineering stories. Every hour spent on the wiki, on presentation rehearsal, on parts documentation, on the engineering narrative is an hour that directly translates to medal criteria and prize scoring.
Start now. Document everything. Iterate relentlessly. Win in Paris.