DOC · STRATEGY-001
UH iGEM 2026 · TEAM STRATEGY GUIDE
REV · v1.0 / APR 2026
Competition Intelligence · Winning Strategy

UH iGEM 2026
Team Strategy Guide

Engineering E. coli Nissle 1917 for controlled α-ketoglutarate delivery in a C. elegans aging model — and how we turn that into a Gold medal and a Grand Prize run in Paris.

Division: Overgraduate · Village: TBD (decision pending) · Jamboree: Nov 13–16, 2026, Paris · Prepared: April 2026

How to use this guide

This is the shared mental model for the whole team. Read it once cover-to-cover in your first week, then keep it open in a tab as a reference.

  1. New to iGEM? Start at Part 0 — ten terms and the mental model will save you hours.
  2. Returning member? Skim Parts 1–4 to confirm our positioning, then dig into the Playbook (Part 5) and Risk Register (Part 6).
  3. Lead looking for scope? Your phase-by-phase deliverables live in §9; your role's first week is in the Appendix.
Part 00

New to iGEM? Start here.

If you've never heard of iGEM before this week, read this page and you'll have the mental model you need to understand every other page. The rest of the guide assumes you know these ten terms and the one big idea.

★ The one big idea

iGEM is an engineering competition, not a science fair. Judges do not score us on whether our biology worked. They score us on whether we designed, built, tested, and iterated like engineers — and whether we documented it in the right place.

A well-documented failure with clear iteration scores higher than an undocumented success. Internalize this sentence before you do anything else.

Ten terms, one paragraph each

iGEM
The International Genetically Engineered Machine competition. 400+ teams from 40+ countries design synthetic biology projects and present at an annual Grand Jamboree.
Grand Jamboree
The finale. Four days in Paris (Nov 13–16, 2026) where every team presents. Posters, talks, judging sessions, village awards, Grand Prize.
Division
Your bracket: High School, Undergraduate, or Overgraduate. We are Overgraduate — higher technical bar, deeper analysis expected.
Village
Your thematic category, chosen once. It determines the peer group judges compare us against. UH's village is TBD — decision pending (see Part 2.1).
Medal
Bronze, Silver, or Gold. Not ranked — every team that meets the criteria gets the medal. You must hit ALL Bronze criteria to qualify for Silver, all Silver for Gold.
Special Prize
Awards for specific areas (Best Model, Best Wiki, Best HP, etc). A team can target up to three. Grand Prize winners typically stack 3–4 of them.
Wiki
Our public project website at 2026.igem.wiki/uh/. It IS the submission. Judges read the wiki; if it isn't on the wiki, it doesn't count.
Standard URL
Fixed page paths every team must fill (/description, /engineering, /human-practices…). Work on the wrong URL may not be evaluated.
Parts Registry
The iGEM biological-parts database (parts.igem.org) where every construct we build gets its own documented entry.
Freeze
The hard deadlines in October when the wiki, videos, and registry lock. Nothing can change after a freeze. They do not get extended.
⚠ Read this twice

Judges are not obligated to look beyond the Standard URL pages. If you do brilliant work but document it on the wrong page, or only in your lab notebook, it may never be scored. Every experiment, every interview, every model must be mapped to a Standard URL.

Part 01

The competition

iGEM has been running since 2004 out of MIT. Over 400 teams from 40+ countries compete each year. Teams design, build, and test biological systems using standard biological parts, then present at the Grand Jamboree. Evaluation spans engineering rigor, human-practices integration, wiki documentation, presentation quality, and community contribution.

1.1 Divisions

Three competitive divisions, each judged separately:

Division | Who | Notes
High School | Pre-university students | Smaller-scale projects expected
Undergraduate | Bachelor's-level students | Largest and most competitive division
Overgraduate | Master's / PhD / postdoc teams | Highest technical bar, deeper analysis expected
★ Confirmed

UH iGEM 2026 competes as Overgraduate. This raises expectations for technical depth, modeling sophistication, and proof-of-concept rigor. Calibrate your work accordingly.

1.2 Villages (thematic categories)

Each team selects one Village. It determines the peer group we compete against for Village Awards, and frames how judges evaluate real-world impact.

Village | Focus area
Diagnostics | Disease detection, biosensors, point-of-care tools
Therapeutics | Drug delivery, engineered therapies, probiotic interventions
Infectious Diseases | Antimicrobials, phage therapy, pathogen detection
Oncology | Cancer detection / treatment, tumor-targeting systems
Agriculture | Crop improvement, soil health, pest management
Food & Nutrition | Food safety, nutritional enhancement, fermentation
Climate Crisis | Carbon capture, bioremediation, sustainable materials
Environment | Pollution cleanup, ecosystem monitoring, biosensors
Foundational Advance | New tools, methods, or fundamental knowledge for synbio
Biomanufacturing | Industrial bioprocesses, metabolic engineering
Software & AI | Computational tools, modeling platforms, AI applications

1.3 Medal criteria — Bronze, Silver, Gold

Medals are cumulative. Gold requires all Bronze + all Silver + all Gold criteria. Every criterion must be documented on the correct Standard URL page.

Tier 01

Bronze

  • Deliverables. Complete all required items (wiki, video, judging form, safety forms, roster, attribution).
  • Wiki. Functional team wiki at the correct Standard URL with all required pages.
  • Attribution. Clear attribution of all work — who did what, external help, commercially obtained materials.
  • Project description. Clear statement of what we're trying to achieve and why.
  • Contribution. Something future iGEM teams can build on.
Tier 02

Silver

  • Engineering success. Demonstrate the design → build → test → learn cycle with evidence of iteration based on data.
  • Human Practices. Show how external input changed our design. A feedback loop — not just outreach.
Tier 03

Gold

  • Proof of concept. Experimental validation that the project works as intended — at minimum proof-of-principle.
  • Specialization excellence. Excel in up to 3 Special Prize areas. High quality bar.
  • Integration. Deep integration between engineering, human practices, and narrative.
★ Gold strategy

Our proof-of-concept is the C. elegans lifespan assay. Even partial results satisfy this if we frame them as iterative engineering with clear next steps. Our FBA analysis is a massive asset for the engineering-documentation criterion.

Part 02

Where we're competing

2.1 Village choice — decision pending

⚠ Open decision

Our village has not been selected yet. Village choice depends on technical decisions still being finalized — final project scope, chosen constructs, and modeling direction. The Village Selection Freeze lands in June 2026, so this must be resolved in Phase 1. Final call rests with the Project Lead and PI once the technical plan lands.

Village selection is a strategic decision with direct impact on our competitive position. Rather than recommend a village prematurely, we'll use the framework below to make the call once the technical plan is locked.

How we'll decide

Factor | What to evaluate
Narrative fit | Which village does our story sit in most naturally — therapeutic framing? toolkit framing? environmental?
Judge expertise | Do the village judges reward what our project does best (circuit work? clinical translation? modeling?)
Competition density | How crowded is the village? Smaller villages mean fewer competitors for the village award.
Award alignment | Which Special Prizes are common in that village? Do they align with our strongest work?
Precedent | What has recently won Grand Prize from this village? Does our shape resemble those winners?

Once the technical plan is finalized, this section will be replaced with a concrete recommendation and the reasoning behind it.
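When we do make the call, the factor table above can be operationalized as a simple weighted scorecard. The sketch below shows the mechanics only — the weights, candidate villages, and scores are invented placeholders for illustration, not a recommendation:

```python
# Weighted scorecard for the village decision.
# Factors mirror the table above; ALL weights and scores are placeholders.
WEIGHTS = {
    "narrative_fit": 0.30,
    "judge_expertise": 0.25,
    "competition_density": 0.15,  # higher score = less crowded village
    "award_alignment": 0.20,
    "precedent": 0.10,
}

def village_score(scores: dict[str, float]) -> float:
    """Weighted sum of 0-10 factor scores for one candidate village."""
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)

# Hypothetical example scores (0-10) — replace with the team's real numbers.
candidates = {
    "Therapeutics": {"narrative_fit": 9, "judge_expertise": 7,
                     "competition_density": 4, "award_alignment": 8,
                     "precedent": 8},
    "Foundational Advance": {"narrative_fit": 6, "judge_expertise": 8,
                             "competition_density": 6, "award_alignment": 7,
                             "precedent": 9},
}
ranked = sorted(candidates, key=lambda v: village_score(candidates[v]),
                reverse=True)
for village in ranked:
    print(village, round(village_score(candidates[village]), 2))
```

The point of the exercise is less the final number than forcing the leads to write down a score and a justification for every factor before the Village Selection Freeze.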

2.2 Special prize targets — decision pending

⚠ Open decision

Special Prize targets depend on our final technical scope and village choice. We can target up to 3. The table below is the full menu — we'll narrow to three once we know which areas our actual project can realistically win.

Each prize has one winner per division (High School, Undergraduate, Overgraduate) — three winners per prize overall. The full menu:

Prize | What judges want
Best Model | Mathematical / computational model that informs system design
Best New Part | One outstanding new BioBrick with excellent characterization
Best Human Practices | Exceptional integration of stakeholder feedback into design
Best Presentation | Outstanding Jamboree talk with a clear narrative
Best Wiki | Excellence in documentation, design, navigation, completeness
Best Part Collection | Outstanding collection of related, well-documented parts
Best Software Tool | Computational tool useful to other teams
Measurement | Improved measurement approaches for parts characterization
Education | Innovative educational tools or outreach activities
Inclusivity | Exceptional efforts to include diverse identities

When the technical plan lands, we'll pick three targets, mark them here with our rationale, and align all four phases of work against them.

Part 03

What we must deliver

3.1 Mandatory deliverables

Missing any single item can disqualify us from medal consideration. Print this list. Pin it up.

Team Wiki
2026.igem.wiki/uh/ · Wiki Freeze (Oct)
Presentation Video
iGEM portal upload · Oct deadline
Project Promotion Video
iGEM portal upload · Oct deadline
Poster
Physical at Jamboree · 4 ft × 4 ft · Nov 13–16
Judging Form
iGEM submission portal · Oct deadline
Safety Forms
iGEM submission portal · multiple deadlines
Team Roster
iGEM portal · Spring deadline
Attribution
/attributions wiki page · Wiki Freeze
Parts Registry
parts.igem.org · Registry Freeze

3.2 Standard wiki URLs

Each wiki page has a fixed URL that judges will check. Document work on the correct page or it may not be evaluated.

URL suffix | Page content | Scoring level
/description | Project description | Bronze
/contribution | Contribution to iGEM community | Bronze
/attributions | Attribution | Bronze
/engineering | Engineering Success | Silver
/human-practices | Human Practices | Silver
/model | Modeling | Special Prize
/parts | Parts overview / collection | Special Prize
/safety | Safety | Recommended
/experiments | Experiments / protocols | Recommended
/results | Results | Recommended
/notebook | Lab notebook | Recommended
/implementation | Proposed implementation | Recommended
⚠ Critical

Documentation is everything. The best experimental result in the world is worthless if it isn't on the right Wiki page with the right Standard URL. Every piece of work must be mapped to its corresponding Standard URL page, ideally when it happens — not in October.

Part 04

How teams win

4.1 Grand Prize patterns — 2023–2025 analysis

Grand Prizes (the BioBrick Trophies) go to the highest-ranked team in each division. Three recent case studies:

2024 — Heidelberg (PICasSO)

Village: Foundational Advance. Built a pioneering toolbox for rearranging genome 3D architecture using CRISPR/Cas-mediated spatial engineering. Also won Best FA, Best Parts Collection, Best Model, Best Wiki. Key success factor: extraordinary depth across every dimension.

2023 — McGill (Proteus)

Village: Therapeutics. Modular chimeric fusion proteins to selectively kill cancer cells. Also won Best Therapeutics, Best Part Collection, Best Presentation. First Canadian team to win Grand Prize, in only their second year competing.

2025 — McGill (UG) & Brno (OG)

McGill repeated — this time from Foundational Advance. Brno (Czech Republic) took Overgraduate in Agriculture with deep computational + wet-lab integration.

The patterns that show up every year

Pattern | What it means for us
Deep modeling | Every Grand Prize winner had competition-leading computational work
Parts collection | Winners submit large, well-documented collections
Presentation excellence | McGill won Best Presentation in 2023. Rehearse relentlessly.
Wiki as masterpiece | Heidelberg 2024's wiki is considered the best in competition history
Multi-prize stacking | Grand Prize winners typically also win 3–4 Special Prizes
Iterative engineering | Clear design → build → test → learn cycles with documented failures
Real stakeholder integration | External feedback that changed the design, not perfunctory outreach

4.2 UH's position — assessment framework

⚠ Pending technical finalization

A specific strength / gap assessment depends on our final technical scope. The framework below is what we'll use to do an honest self-assessment once the project plan is locked. Fill it in with PI + leads in Phase 1.

Where we're strong — dimensions to assess

Dimension | What to evaluate
Computational depth | Is our modeling competition-leading? Does it drive design decisions?
Novel circuit / parts | Do we have a genuinely new, characterizable part that other teams could reuse?
Biosafety architecture | Is our containment story layered and defensible?
HP framework | Is there a visible feedback loop — stakeholder input that changed the design?
Scope discipline | Is our framing tight enough to prevent impossible translational questions from judges?
Modularity | Do our parts and methods form a natural collection other teams can adapt?

Where we need attention — categories to watch

Category | What to watch
Open technical decisions | Every unresolved "which approach" call blocks downstream planning. Resolve early in Phase 1.
Wet-lab timeline realism | Multi-step construction plans need slack built in. What's the critical path?
Assay execution windows | Long-running assays limit iteration count. Start as early as possible; plan for 2–3 attempts max.
Wiki polish | Professional design and interactive elements take longer than expected. Start before content is final.
Presentation rehearsal | Start scripting in September, not October. Rehearsal frequency separates medals from prizes.
Parts documentation | Every part needs sequence + characterization data. Backlog grows fast if deferred.

When the technical plan lands, replace these frameworks with a concrete strength / gap list, priorities attached.

Part 05

The UH playbook

🎯 Mission

Win Gold + Best Model + Best New Part + Best Human Practices. Compete for Grand Prize in our chosen village.

5.1 The four phases

Phase 1 — Foundation (Apr–May)
Phase 2 — Build (Jun–Jul)
Phase 3 — Test & iterate (Aug–Sep)
Phase 4 — Polish & present (Oct–Nov)

Phase 1 — Foundation (April – May)

  • Finalize experimental design with PI + leads. Lock scope, approach, and measurable outputs before any wet-lab work begins.
  • Complete IRB / IBC approvals once experimental design is locked.
  • Finalize and send HP outreach emails. Schedule stakeholder interviews for May–June.
  • Begin wiki infrastructure — choose framework, set up repo, create Standard URL stubs.
  • Decide and submit Village Selection with PI. Submit Safety Forms.
  • Order DNA synthesis and reagents once design is locked.
  • Set up modeling environment as a reproducible package.
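
One way to make "reproducible package" concrete is a pinned environment file committed alongside the notebooks. The sketch below is illustrative only — the package names and version pins are assumptions, not our final stack:

```yaml
# environment.yml — illustrative pin set, NOT the final UH stack
name: uh-igem-2026
channels:
  - conda-forge
dependencies:
  - python=3.11
  - numpy=1.26.*
  - pandas=2.2.*
  - matplotlib=3.8.*
  - cobra=0.29.*      # COBRApy, if we keep FBA in the modeling plan
  - jupyterlab=4.*
```

Commit this file (ideally with a lockfile) so any team member can rebuild the identical environment with `conda env create -f environment.yml`, and so judges reading /model can reproduce our figures.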

Phase 2 — Build (June – July)

  • Wet-lab sprint: begin construction per finalized design. Prioritize highest-impact pieces first.
  • Build characterization constructs independently so each part can stand on its own as a contribution.
  • Conduct HP interviews. Document every conversation. Update the Design Change Log.
  • Develop modeling wiki page content in parallel with wet-lab work.
  • Begin Parts Registry documentation for each completed construct.
  • Education outreach: run at least one synthetic-biology workshop.

Phase 3 — Test & iterate (August – September)

  • Analytical validation of key outputs. Compare measurements to model predictions.
  • Characterize each part under varying conditions to produce Registry-ready data.
  • Long-running assays — start as early as possible. Plan for 2–3 attempts max.
  • Document every iteration on the Engineering Success page in real time.
  • Complete HP stakeholder engagement. Synthesize the Design Change Log into a narrative.
  • Target 60% of wiki content drafted by end of September.
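
For the lifespan assay, the standard readout is a Kaplan–Meier survival curve. A minimal stdlib-only sketch of the estimator is below so everyone shares the same mental model — the cohort data are made up, and for the real analysis we'd reach for a vetted package (e.g. lifelines) plus a log-rank test rather than hand-rolled code:

```python
from collections import Counter

def kaplan_meier(durations, events):
    """Kaplan-Meier survival estimate.

    durations: day each worm died or was censored (lost, bagged, etc.).
    events:    1 if death was observed, 0 if censored.
    Returns [(day, survival_probability)] at each observed death day.
    """
    deaths = Counter(d for d, e in zip(durations, events) if e)
    removed = Counter(durations)      # each worm leaves the risk set at its day
    at_risk = len(durations)          # worms still alive and under observation
    surv, curve = 1.0, []
    for day in sorted(removed):
        if day in deaths:             # censored worms at `day` still count as at risk
            surv *= 1.0 - deaths[day] / at_risk
            curve.append((day, surv))
        at_risk -= removed[day]
    return curve

# Toy cohort: 6 worms, one censored at day 12 (illustrative numbers only)
days   = [10, 12, 12, 15, 18, 20]
status = [1,   1,  0,  1,  1,  1]
for day, s in kaplan_meier(days, status):
    print(day, round(s, 3))
```

With 2–3 assay attempts max, agreeing on the analysis and censoring conventions before the first run is what keeps iterations comparable.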

Phase 4 — Polish & present (October – November)

  • Wiki sprint: finalize all Standard URL pages. Figures, interactive elements, references.
  • Complete Parts Registry pages with full characterization data.
  • Record the Presentation Video. Script it. Rehearse 20+ times.
  • Record the Project Promotion Video (2–3 min, public audience).
  • Submit the Judging Form with clear links to evidence for every criterion.
  • Wiki Freeze: nothing can change after this date.
  • Jamboree (Nov 13–16, Paris): present, engage judges, network.

5.2 Timeline & deadlines

⚠ Freeze deadlines

All freeze deadlines are hard — they will not be extended. Freeze time is 15:00 UTC. Check competition.igem.org/calendar for exact 2026 dates.

When | What UH should be doing | Competition deadline
April 2026 | Finalize experimental design, begin wiki setup | Team Roster due
May 2026 | IRB / IBC approval, HP outreach begins, reagent ordering | Safety Form deadline
June 2026 | Village Selection, wet-lab construction, HP interviews | Village Selection Freeze
July 2026 | Peak wet-lab sprint, modeling refinement, parts docs | —
August 2026 | Validation assays, iteration, long-running experiments | Midpoint check-in
September 2026 | Wiki writing sprint, poster design, presentation scripting | Begin video recording
October 2026 | Wiki finalization, video submission, judging form | Wiki / Video / Registry Freeze
Nov 13–16 | Grand Jamboree in Paris | Competition
Part 06

Risk register

Every risk below has been explicitly acknowledged with a contingency plan. When one of these fires, reach for the plan — don't improvise.

Risk | Likelihood | Contingency
Construction / build fails | HIGH | Prioritize work by modeled impact. Have a minimum viable subset the team can still deliver.
Primary outcome assay shows no effect | MEDIUM | Reframe around the engineering goal. Show intermediate data (production, expression, activity).
A circuit or part doesn't function as designed | MEDIUM | Characterize each input / component independently. The part itself is still a contribution.
Wiki not done by freeze | HIGH | Start content early (June). Internal deadline 2 weeks before actual freeze.
Team member drops out | MEDIUM | Document all work. Cross-train. No single points of failure.
Funding runs short | MEDIUM | Apply for iGEM grants (Zymo, IDT, NEB). Corporate sponsorships.
Safety concern raised by judges or reviewers | LOW | Address proactively in the HP framework. Be transparent about containment and rationale.
Judges question project framing | MEDIUM | Stay disciplined with language. A clear non-claims list is your armor.
Appendix

Your first week

New to the team? Find your role below. In your first week, finish everything in your card and read this guide cover-to-cover. Then schedule a 1:1 with the Project Coordinator (Dr. Windham).

All new members

Day 1–7 basics
  • Read this guide cover-to-cover
  • Sign in to Team Dashboards and introduce yourself in your role dashboard
  • Access the shared Google Drive / GitLab
  • Open your role's dashboard and bookmark it
  • Complete iGEM safety training
  • Read one recent Grand Prize wiki end-to-end (Heidelberg 2024 recommended)

Wet lab

Know this by Friday
  • Who your wet-lab lead is, and how to reach them
  • Lab-access, training, and safety requirements you still need
  • Where experiments and protocols are documented
  • How to request reagents and consumables
  • Which medal criteria your work contributes to

Modeling / dry lab

Know this by Friday
  • Who your modeling lead is, and how to reach them
  • How modeling work gets shared with wet lab
  • Where code and notebooks live, and how to contribute
  • Which wiki pages modeling work will land on
  • Which medal criteria / Special Prizes your work supports

Human Practices

Know this by Friday
  • Who your HP lead is (May Shin Thant), and how to reach them
  • Your mentor for HP work is Dr. Windham — introduce yourself in your first week
  • What a feedback loop looks like (it's not outreach)
  • Where stakeholder conversations get logged
  • How to propose a new stakeholder to reach out to
  • Which medal criteria HP work is graded against

Wiki / documentation

Know this by Friday
  • Who your wiki lead is, and how to reach them
  • Your mentor for wiki work is Dr. Windham — introduce yourself in your first week
  • How to edit the wiki and preview your changes
  • The Standard URL pages every iGEM wiki must have
  • When the Wiki Freeze lands and our internal deadline
  • What "competition-grade polish" looks like (study a recent Grand Prize wiki)

Outreach / education

Know this by Friday
  • Who your outreach lead is, and how to reach them
  • What events are already on the calendar
  • How to document an event (photos, attendance, impact)
  • Where outreach / education write-ups live
  • Which medal criteria and Special Prizes this work supports

The bottom line

UH iGEM 2026 has the raw materials to compete for Gold and Special Prizes. Our computational depth rivals recent Grand Prize winners. Our circuit design is genuinely novel. Our HP framework is more thoughtful than most teams attempt.

What separates Gold from Grand Prize is execution and polish. The teams that win don't have fundamentally better science — they have better documentation, better presentations, better wikis, and more complete engineering stories. Every hour spent on the wiki, on presentation rehearsal, on parts documentation, on the engineering narrative is an hour that directly translates to medal criteria and prize scoring.

Start now. Document everything. Iterate relentlessly. Win in Paris.