by Katia Ernst | Apr 7, 2026 | Articles
What is “good art”? Why do some paintings, dances or pieces of music appeal to us more than others? And isn’t it all just a matter of taste in the end?
Anyone who has ever been responsible for a creative competition knows these questions: How do you evaluate something that is inherently subjective? And how do you explain afterwards why one participant won and another didn’t?
Fair, transparent evaluation criteria are the backbone of any credible creative competition. It’s important for program managers to make the shift from intuition to structure, strengthening both the integrity of the competition and the trust of everyone involved.
Early in my career, I was a project manager for an international conducting competition. To be honest, I had very little exposure to classical music at the time and even less idea of what makes a great conductor.
What I learned during the competition surprised me: Much of the real work doesn’t happen on stage at all, but beforehand, in collaboration with the orchestra. How quickly does someone build trust? How clearly do they communicate their musical vision? How does the ensemble respond? All of that is decided in rehearsals, long before the concert begins.
For me as a non-expert, it was a revelation. An art form that had previously seemed abstract and barely tangible suddenly gained depth. And at the same time, it became clear to me: specialists can look at the same performance and see entirely different things, weighing each element according to their own expertise. That diversity of perspective is what makes a strong jury—and why a shared framework matters so much.
That is exactly where the real challenge of creative competitions begins.
Fair judging systems don’t aim to replace artistic judgement with algorithms. The goal is to design the evaluation framework so that different judges approach the same performance with the same core questions in mind.
A well-established approach is to break the overall score down into clearly defined sub-areas. Instead of assessing a vague “overall impression”, the jury works with categories such as technical execution, artistic interpretation, originality and stage presence. Each category receives its own weighting and, crucially, a short description of what a strong, average or weak performance looks like in that area.
For a conducting competition, this might look something like the following, based on the evaluation criteria of the London International Choral Conducting Competition:
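As a purely illustrative sketch, the category names, weights and scores below are assumptions for this example, not the competition's published rubric; the point is simply to show how weighted category scores roll up into a single total.

```python
# Illustrative weighted rubric -- categories, weights and scores are
# assumptions for this sketch, not any competition's published criteria.

RUBRIC_WEIGHTS = {
    "Technical execution": 0.30,       # precision, tempo control, clean transitions
    "Artistic interpretation": 0.30,   # musical vision, phrasing, dynamics
    "Rehearsal communication": 0.25,   # clarity and rapport with the ensemble
    "Stage presence": 0.15,            # poise and authority in performance
}

def weighted_total(scores: dict) -> float:
    """Combine per-category scores (0-10) into one weighted total."""
    assert abs(sum(RUBRIC_WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(RUBRIC_WEIGHTS[c] * scores[c] for c in RUBRIC_WEIGHTS)

# One judge's scores for a single participant:
scores = {
    "Technical execution": 8.0,
    "Artistic interpretation": 9.0,
    "Rehearsal communication": 7.5,
    "Stage presence": 8.5,
}

print(f"Weighted total: {weighted_total(scores):.2f} / 10")
# 0.30*8.0 + 0.30*9.0 + 0.25*7.5 + 0.15*8.5 = 8.25
```

Keeping the weights in one place makes it easy to adjust the emphasis between, say, rehearsal work and the final performance without changing anything else about the process.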
Frameworks like these don’t make the decision mechanical, but they give the jury a shared starting point.
(For more frameworks, see our blog post on how to assess portfolios.)
1. Develop criteria together with subject-matter experts
Designing evaluation sheets alone at your desk risks producing criteria that don’t hold up in practice. Bring experienced judges into the process early. Their perspective sharpens the criteria and builds buy-in across the judging panel.
2. Use plain language, not jargon
Terms like “technical brilliance” sound precise but aren’t. What does brilliance actually mean? It’s better to write: “Technique is executed without errors; transitions are smooth and controlled.” That leaves less room for interpretation.
3. Build in calibration sessions
Before the competition begins, all judges should evaluate one or two sample entries together and discuss the results. This calibration reduces the so-called halo effect, where a standout element skews the overall impression, and helps align different scoring tendencies across the panel.
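If you want to go one step further with the calibration results, a simple check is to compare each judge's average score on the sample entries with the panel average; larger gaps are worth discussing before live judging starts. The judge names, sample scores and threshold below are made-up assumptions for this sketch.

```python
# Sketch: spotting scoring tendencies after a calibration round.
# Judge names, sample scores and the 1.0-point threshold are assumptions.

calibration_scores = {
    "Judge A": [7.0, 8.0],   # scores for two sample entries (0-10 scale)
    "Judge B": [5.5, 6.0],   # noticeably stricter than the rest of the panel
    "Judge C": [7.5, 8.5],
}

all_scores = [s for scores in calibration_scores.values() for s in scores]
panel_mean = sum(all_scores) / len(all_scores)

for judge, scores in calibration_scores.items():
    judge_mean = sum(scores) / len(scores)
    drift = judge_mean - panel_mean
    note = "  <- worth discussing before live judging" if abs(drift) > 1.0 else ""
    print(f"{judge}: mean {judge_mean:.2f} (panel {panel_mean:.2f}, drift {drift:+.2f}){note}")
```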
4. Include comment fields
Numbers alone don’t explain a decision. When judges write short justifications alongside their scores, participants gain constructive feedback and program managers have a clear record to fall back on if questions or disputes arise.
5. Use digital tools
Paper forms and spreadsheets quickly reach their limits, especially when many entries are being reviewed in parallel. Digital judging systems make it possible to configure evaluation forms flexibly, consolidate scores in real time and reliably meet data protection requirements. Award Force provides a configurable environment where managers can set up their own criteria, weightings and comment fields without being locked into a fixed template. All data is stored securely and accessible only to authorised users.
A fair evaluation system doesn’t end when the scores are in. How results are communicated matters just as much. Participants who can understand how their entry was assessed are more likely to accept the outcome, even if they didn’t win.
That doesn’t mean every score needs to be made public. It means the process is clearly documented: what criteria were applied, how they were weighted and whether all entries were assessed under the same conditions. Anyone who can answer those questions is on solid ground.
Creative competitions thrive on the passion of their participants. Anyone who has invested weeks or months into a choreography, an artwork or a composition deserves judges who evaluate with clear standards and an organisation that puts the right conditions in place.
A well-designed evaluation system is not a bureaucratic straitjacket — it is a sign of respect. Award Force helps program managers build exactly this kind of structure. Flexible, secure and focused on what really matters: outstanding competitions that leave a lasting impression.