Multi-tenant platform for organizers, judges, and participants. Submit projects, score with sliders, blend human and AI evaluation, and settle the leaderboard — all in one place.
AI evaluation falls back in order: OpenAI GPT-4o → Claude on Amazon Bedrock → simulated scoring. A configurable hybrid weight blends human and AI scores on the leaderboard.
One deployment, many hackathons. Each event has its own projects, criteria, members, and leaderboard — fully isolated.
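Isolation comes down to one invariant: every row carries its event's id, and every read filters on it. A minimal in-memory sketch (class and field names hypothetical):

```python
class ProjectStore:
    """Tenant-scoped store: every row carries its hackathon id."""

    def __init__(self) -> None:
        self._rows: list[dict] = []

    def add(self, hackathon_id: str, project: dict) -> None:
        # Stamp the tenant key onto every stored row.
        self._rows.append({**project, "hackathon_id": hackathon_id})

    def list_for(self, hackathon_id: str) -> list[dict]:
        # No cross-tenant reads: the tenant key is part of every query.
        return [r for r in self._rows if r["hackathon_id"] == hackathon_id]
```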
Organizers create, system admins approve. Status badges keep pending events out of the participant view until they're live.
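The approval gate is a two-state transition: events start pending and only an admin approval flips them to active. A sketch of that state machine (enum values hypothetical):

```python
from enum import Enum

class EventStatus(Enum):
    PENDING = "pending"   # created by an organizer, hidden from participants
    ACTIVE = "active"     # approved by a system admin, joinable

def approve(status: EventStatus) -> EventStatus:
    """Admin approval: the only legal transition is PENDING -> ACTIVE."""
    if status is not EventStatus.PENDING:
        raise ValueError("only pending events can be approved")
    return EventStatus.ACTIVE
```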
Drag and drop up to 50 files per project, including zips with safe extraction. Files are stored in S3 or GCS and served via signed URLs.
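"Safe extraction" means rejecting archive entries whose resolved paths would escape the destination directory (the classic zip-slip attack) and enforcing the file cap. A minimal sketch using the standard `zipfile` module (function name and limit handling are assumptions):

```python
import os
import zipfile

def safe_extract(zip_path: str, dest_dir: str, max_files: int = 50) -> list[str]:
    """Extract a zip into dest_dir, rejecting path-traversal entries."""
    dest_root = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        names = zf.namelist()
        if len(names) > max_files:
            raise ValueError(f"archive has {len(names)} entries; limit is {max_files}")
        for name in names:
            target = os.path.realpath(os.path.join(dest_root, name))
            # Every resolved path must stay inside the destination root.
            if target != dest_root and not target.startswith(dest_root + os.sep):
                raise ValueError(f"unsafe path in archive: {name}")
        zf.extractall(dest_root)
        return names
```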
Weighted scoring across configurable criteria. Real-time rank updates. Drill into per-criterion averages and judge notes.
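One way the per-criterion weights could roll up into a rank score is a weighted average, with weights normalized so organizers need not make them sum to 1. A sketch (criterion names illustrative):

```python
def weighted_total(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-criterion scores; weights need not sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total_weight
```

For example, with "impact" weighted 3x "design", a project scoring 8 on design and 6 on impact totals (8·1 + 6·3) / 4 = 6.5.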
Every mutation recorded — who, what, when. Append-only audit log per hackathon, queryable with filters and pagination.
Sets up the hackathon — name, criteria, weights — and invites judges. Awaits admin approval.
System admin reviews and approves. Event status flips to active and becomes joinable.
Users complete a profile (bio, skills, links), join the event, and submit a project with files.
Judges review profiles and score with sliders. AI evaluation runs on demand. The leaderboard updates live.
Sign up, create your event, and have judges scoring within minutes.