by Lindsay Nash | Jan 12, 2026 | Feature focus
AI is becoming an increasingly valuable support tool for awards, prizes and recognition programs. For busy program managers and judges, it can help surface insights, increase efficiency and strengthen fairness at scale.
And with the Award Force AI field type, you can now bring this capability directly into your entry forms, giving you a fast, structured way to generate consistent summaries, extract information and reveal insights.
In this article, you’ll find practical, ready-to-use prompt examples you can add to your entry forms today. The power of the AI field lies with you, the user: it’s all under your control.
First things first: using AI in Award Force is entirely optional. So why should you consider it?
The AI field type allows you to create a dedicated form field that can automatically analyse the content of an entry, comments from judges or program managers and scores from across a submission. It does not replace human judgment; instead, it gives you a reliable, scalable way to support decision-making, based on the content that the entrant or judges have provided.
You can set the field to be hidden from entrants; this makes it an effective tool for program managers to see and analyse entry information at a glance, dramatically reducing the time spent compiling summaries or reviewing feedback.
Alternatively, you can set the AI field to be viewable by entrants, which can be helpful in a variety of use cases.
Keep in mind that only program managers have the ability to generate an AI response. Entrants and judges can only view the outputs, and only if you permit them.
Below, you’ll find examples of prompts you can use for summarising entry content, analysing judges’ scores and comments, and generating insights.
You can add these to any AI field within your Award Force entry form configuration and customise them to suit your program’s needs.
These prompts extract or summarise information from the entrant’s own text. They’re ideal for programs handling high entry volumes or where consistency of summaries is important.
“Summarise this entry in two sentences.”
Useful for creating uniform shortlists or internal reviews where you want administrators to see the essence of each entry at a glance.

“List the entrant’s key achievements in this entry.”
This helps standardise information by pulling out achievements regardless of how each entrant frames their submission.

“Please count all the words used by the entrant in this entry.”
Particularly helpful when your program imposes word limits or you want transparency in how closely entrants follow guidelines.

“Summarise the entrant’s claims of excellence or innovation.”
Ideal for innovation, impact or research-based awards where you need to compare claims consistently across entries.

“Extract all metrics or evidence provided to support the entry.”
For data-driven programs, this ensures no key statistics are overlooked.

“Create a concise overview that captures the spirit of this entry, what makes it stand out and why it was submitted for recognition.”
This produces a more narrative summary useful for judging panels, ceremony scripts or communications teams preparing finalist profiles.
These prompts use judges’ scores or comments as inputs. They are especially helpful for programs with multiple rounds, large panels or qualitative feedback.
“List all judges who assessed this entry and show their average scores.”
A fast way to confirm participation, look for outliers or verify scoring patterns across rounds.

“Summarise the overall sentiment of the judges for this entry.”
Sentiment summaries can highlight tone and consensus, which is particularly useful when qualitative comments are lengthy.

“List the strengths and weaknesses highlighted by the judges.”
This helps program managers collate feedback and share it back with entrants, supporting transparency and learning in the process.

“Summarise how the judges’ comments reflect their scoring.”
This reveals whether feedback aligns with scores, a key fairness consideration for many program administrators.

“Highlight the main reasons judges felt this entry should or should not win.”
A good tool for program managers preparing for moderation sessions or producing post-round reports.

“Identify recurring themes in the judging feedback.”
Recurring themes help administrators spot common evaluation patterns, bias or areas where rubric instructions need refinement.

“Pretend you are an auditor. In less than 50 words, evaluate the average human judges’ scores of this entry from [XYZ] score set and identify any significant deviations. Summarise your findings in bullet points followed by a one-sentence takeaway.”
This provides a clear, concise diagnostic you can use in moderation meetings or appeals reviews.
Tip: Here, you are comparing an AI field output with human judges’ scores and comments. Use the word “human” when referencing judges in your prompt, so the AI clearly excludes its own output from the analysis.
Insight prompts help program managers take a step back from the awards entry itself to understand how it performed, how it could be improved or what the judging feedback implies.
“Summarise how effectively this entry demonstrates excellence in its category.”
A useful barometer before entries move to final assessment.

“Identify sentences in this entry that appear subjective.”
Particularly relevant in academic, evidence-based or compliance-driven awards.

“If you were preparing this entry for resubmission next year, what would you improve?”
Entrants appreciate this kind of actionable guidance, especially in recurring annual programs.

“Based on the judges’ comments, how might this entry perform in a related category?”
Helpful for cross-category reviews or when your program evolves its categories year to year.

“Generate a short three-section overview titled: About the entry, Judging feedback and Key highlights. Use the entire entry as the basis, but don’t include any identifiable information in your reply.”
This is ideal for producing award ceremony scripts, finalist summaries or internal evaluation reports at scale.
These prompt examples are triggered in two ways:
Award Force supports the following prompt contexts:
Here are a few considerations for writing AI prompts in Award Force.
Adjectives are your friend. AI is adjective-sensitive.
“Show scores of this entry” ❌
“Show the average human judges’ scores of this entry from XYZ score set.” ✅

Be clear about purpose or outcome. Tell the AI field why you are asking.
“Evaluate the average human judges’ scores of this entry from XYZ score set and identify any significant deviations.” ✅

Ask for your preferred format: bullet points, paragraphs, numbered lists, etc.
“Evaluate the average human judges’ scores of this entry from XYZ score set and identify any significant deviations. Summarise your findings in bullet points, followed by a one-sentence takeaway.” ✅

Add constraints: word limits, tone or expertise.
“In less than 50 words, evaluate the average human judges’ scores of this entry from XYZ score set and identify any significant deviations. Summarise your findings in bullet points, followed by a one-sentence takeaway.” ✅

Ask the AI to play a role; it will frame its answer accordingly.
“Pretend you are an auditor. In less than 50 words, evaluate the average human judges’ scores of this entry from XYZ score set and identify any significant deviations. Summarise your findings in bullet points, followed by a one-sentence takeaway.” ✅
Award Force was built to help you run the most efficient, fair and high-performing awards possible.
The Award Force AI field is one more way to help you recognise excellence more efficiently, giving you smarter tools, clearer insights and more time to focus on what matters: celebrating outstanding achievement.
Learn more about Award Force AI fields in our Help Centre: Using AI fields