A holistic approach to avoid and detect AI-generated content in your submissions program

Mar 9, 2023 | Articles

The release of ChatGPT in November 2022 sent ripples around the globe. Humans had done it: created an artificial intelligence-powered chatbot that uses a machine learning model to scan billions of data points and spit back relevant content in a conversational tone that sounds human.

Excitement ensued. The powerful new chatbot saw more than one million users in its first five days.

By January, just two months after its launch, ChatGPT had reached 100 million monthly active users, becoming the fastest-growing consumer application in history. Data from Similarweb showed that the service had 13 million unique users per day in January, more than double its December levels.

To put that in context, it took TikTok nine months to reach 100 million users. It took Instagram more than two years.

But, as excitement has grown, so have new fears. Universities and academics across the globe are crying foul: how can they ever accept essay submissions again? Will there be a way to protect original content? Will there be a way to check for AI-generated content?

AI is the new plagiarism. But it’s better, faster, and smarter for those looking to create content quickly.

It’s undoubtedly a scary situation for organisations that manage submission programs. But, let’s pause for a moment. Take a deep breath. Yes, there are tools out there to catch AI-generated content. And yes, there is so much you can do to promote and demand original content.

In this article, we’ll take a look at AI-generated content and how to avoid and detect it in your submission program.

What is AI-generated content?

First, let’s take a look at what AI-generated content actually is, and how it works. The more you know, the better you can understand and detect AI content in your own submission review.

ChatGPT is an artificial intelligence chatbot developed by OpenAI and financed in large part by Microsoft. It is built on top of OpenAI’s GPT-3 family of large language models and has been fine-tuned using both supervised and reinforcement learning techniques.


Screenshot from ChatGPT that provides examples, capabilities and limitations

Source: ChatGPT

The chatbot uses complex learning models to predict the next word based on previous word sequences. That might sound like what your phone does when you start typing a message to a friend, but it’s very different. 

Harry Guinness, in his Zapier blog post about ChatGPT, explains it best:

“[A] humongous dataset was used to form a deep learning neural network—a complex, many-layered, weighted algorithm modelled after the human brain—which allowed ChatGPT to learn patterns and relationships in the text data and tap into the ability to create human-like responses by predicting what text should come next in any given sentence.”
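As a toy illustration of that idea (a tiny bigram model built from hand-counted word pairs, nothing like the deep neural network described above), next-word prediction can be sketched in a few lines of Python:

```python
from collections import Counter, defaultdict

# A miniature "training corpus" (purely illustrative)
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word: a crude stand-in for the
# "predict the next token" objective that large language models learn
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

GPT models do this with billions of learned weights over tokens rather than simple counts, but the objective, predicting what should come next, is the same.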

Generative AI can produce text and images, write blog posts, create program code, write poetry and even generate artwork.

I decided to try a prompt in ChatGPT: something random and probably very easy.

“Can you write a 1,000 word essay on the French involvement in the American Revolution?”

Screenshot of ChatGPT prompt, requesting 1,000 word essay on French involvement in American revolution

It took less than 30 seconds for the essay to arrive. It was smart, made sense, was grammatically correct and flowed perfectly.

Then, I tested AI-generated images. I went to OpenAI’s DALL·E 2, where you can describe an image and the software creates it—in mere seconds.

I provided the following prompt: “An oil painting by Matisse of an awards ceremony.”

This was even faster. In about 10 seconds, DALL·E 2 produced the following image.


An AI-generated image from DALL·E of "an oil painting by Matisse of an awards ceremony."

AI-generated image from DALL·E 

The downsides of AI-generated content 

While users have quickly seen the benefits of implementing such powerful AI tools, many have begun raising alarms. 

First, there is the increased threat of misinformation. AI tools can easily generate articles that contain factual errors. NewsGuard, a company that tracks online misinformation, called AI-powered chatbots “the most powerful tool for spreading misinformation that has ever been on the internet.”

Then, there is the opportunity for students or submitters to leverage it for written work. You can see how easy it was above to create an essay in seconds without having to do any actual research. AI tools can greatly diminish the quality of education, which could have far-reaching consequences. 

If you run a submission program, AI-generated content can seriously undercut the integrity of your program and your review process, which could damage your organisation’s reputation and impact.

Let’s go over how you can guard against it.


How to avoid and detect AI-generated content

Use an AI content detection tool

First of all, there is a quick way to detect AI-generated content: AI content detection tools. There is a slew of these apps already on the market (see a listing of AI content detection software on G2), and as AI-powered tools advance, so too will the detectors.

These AI detection solutions, such as Copyleaks, Originality.ai and GPTZero (to name only a few), allow you to input your content and scan it for originality.

I took the essay ChatGPT generated above on the French involvement in the American Revolution and plugged it into the free AI content detector at Copyleaks. Here was the response.

Sample of Copyleaks AI content detection

I wanted to double-check these results. So, I also ran it through GPTZero, and received the same response.

GPTZero sample of AI content detection

Communicate your content originality standards up front

Employing an AI content detection tool is important, but it’s best used as a single quality check within your broader submission review process.

It’s critical to create a submission process that promotes original content from the outset. 

Be upfront and transparent about your expectations for any submitted content. Let your participants know if you don’t allow AI-generated content. Set the expectation from the very beginning. This will likely go further than any AI content detection tool to discourage any lazy submission behaviours. 

Create an eligibility screener for submissions

Consider a qualification round where you can check submissions for eligibility. In Award Force, for example, you can create an eligibility round to collect specific details that can be reviewed through auto-scoring, based on criteria you have created. This helps weed out weak or ineligible submissions from the start.

(See: Save time + automatically check entrant eligibility).

This can save a lot of time by only moving quality submissions forward to your review team.
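Award Force handles this in-app, but the logic behind auto-scoring an eligibility round can be sketched in plain Python. The field names and rules below are illustrative assumptions, not Award Force’s actual data model:

```python
# Hypothetical eligibility screener: each rule mirrors a criterion
# you might collect in a qualification round. All names are examples.

def is_eligible(submission: dict) -> bool:
    """Return True only when every eligibility criterion is met."""
    checks = [
        submission.get("word_count", 0) >= 500,    # minimum essay length
        submission.get("has_video", False),        # required video component
        len(submission.get("sources", [])) >= 3,   # minimum cited sources
    ]
    return all(checks)

entries = [
    {"word_count": 900, "has_video": True, "sources": ["a", "b", "c"]},
    {"word_count": 200, "has_video": False, "sources": []},
]

# Only qualifying entries move forward to the review team
qualified = [e for e in entries if is_eligible(e)]
```

The point of the sketch: eligibility rules are cheap, objective checks, so automating them frees your reviewers to spend time on the submissions that actually qualify.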


Demand diversity in submission content format

It’s time to think outside the box. Submissions should contain more than text. Multimedia is a wonderful way to diversify your submissions. It also helps submitters provide a more well-rounded submission. 

If, for example, you are collecting essays on the French involvement in the American revolution, you could ask for: 

  • A video overview, where the submitter speaks on camera about the topic for two minutes or less
  • Visuals or artwork from the time period, with sources attached
  • Photos of the research process, such as notes taken or a list of credible sources

Submission management software can make all of this possible, with ease. For example, in Award Force, you can accept unlimited files and file sizes, from video and audio to images and more.

There’s more to a high-quality submission than text. It’s more important now than ever to request varying formats as part of your submission process. 


Build a multi-stage review process

If your submission program is complex or requires considerable review time, it’s a good idea to create a multi-step review process.

If you want more eyes on each submission, you could, for example, create a process where submissions go through an approval process before landing in front of your final review team. 

If, for example, you’re working with students on American Revolution essays, you could require sign-off from the student’s advisor on the submission.

If you are managing an employee advancement or recognition program, you could require approval on a submission from a manager.

A multi-stage review flow can help provide integrity checks along the early stages of review, which can help flag any questionable submissions.
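One way to picture such a flow is as a simple pipeline of stages, where a failed integrity check diverts an entry for manual review instead of advancing it. The stage names below are illustrative, not a real Award Force workflow:

```python
# Hypothetical multi-stage review pipeline; stage names are examples.
STAGES = ["submitted", "advisor_approval", "originality_check", "final_review"]

def advance(entry: dict) -> dict:
    """Move an entry to the next stage, or divert it if a check fails."""
    if entry["stage"] == "originality_check" and entry.get("ai_flagged"):
        # Questionable submissions get flagged instead of progressing
        entry["status"] = "needs_manual_review"
        return entry
    idx = STAGES.index(entry["stage"])
    if idx < len(STAGES) - 1:
        entry["stage"] = STAGES[idx + 1]
    return entry

entry = {"stage": "submitted", "ai_flagged": False}
for _ in range(3):
    entry = advance(entry)
# A clean entry ends up in "final_review"; a flagged one never gets there
```

The design choice here is that each stage acts as a gate: by the time an entry reaches your final review team, it has already passed every earlier integrity check.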

Reinforce original content requirements throughout the submission

It’s a good idea to create a policy around AI-generated content and any consequences of submitting non-original content. Then, make sure the policy is transparent and clearly communicated to your submitters.

In Award Force, for example, you can use content blocks in the first tab of the application or entry form to reinforce your policy on AI-generated content and remind submitters that all content will be checked for originality.


Knowledge is power, just ask ChatGPT

Creating a holistic approach to avoid and detect AI-generated content is critical for the integrity of your submissions program. And learning about AI-generated content is the first step to acknowledging its power. It can be used for good and bad. But it’s important to understand the implications it has created for our world.

If you haven’t already, explore ChatGPT and other tools for yourself to see how they work. Try different prompts, or even the prompts you use in your own submission program. Then, stay up to date on new developments; there will be many. The door to artificial intelligence is only just opening.

But by implementing a holistic, diverse submission process, you can start protecting your submissions, program and organisation, today.
