As artificial intelligence (AI) language models become increasingly sophisticated, the ability to distinguish between human-written text and text generated by AI is becoming more important than ever. How can ChatGPT be detected? How can we tell whether the article we just read or the email we received was written by a human or an AI language model? From detecting fake news and propaganda to preventing phishing attacks and identifying market manipulation, being able to detect whether a text was written by a human or an AI language model, such as ChatGPT, has a wide range of practical applications.

But how exactly do we detect AI-generated text? How can ChatGPT be detected? What techniques distinguish human-written from AI-generated text?

In this article, we'll explore the answers to these questions. By the end of this article, you'll better understand how to detect whether a text was written by a human or an AI language model and why it's becoming increasingly important. So, let's dive in!

Table of contents

Understanding language generation with AI
Risks of AI-generated text
Techniques for detecting AI-generated text
    ➤  Content at Scale AI Detector
    ➤  Originality.ai
    ➤  Giant Language Model Test Room
    ➤  AI Content Detector at Writer.com
    ➤  OpenAI Classifier
    ➤  DetectGPT
    ➤  GPTZero
    ➤  Technical indicators
    ➤  Check your sources and author's credibility
Limitations of detecting AI-generated text
Conclusion
FAQs

Understanding language generation with AI

Natural Language Generation (NLG), a subfield of Natural Language Processing (NLP), focuses on using AI software to generate text or speech in a natural language. NLG draws on computational linguistics and Natural Language Understanding (NLU).

Natural language generation powers applications ranging from chatbots and virtual assistants to customer service and content generation. You can also use it to produce written content like reports, summaries, and descriptions.

NLG systems use machine learning algorithms trained on large datasets to generate human-sounding text. Recurrent Neural Networks (RNNs) and Transformers are two examples of deep learning methods that power some of the most advanced NLG systems.

The most common type of AI language model is a neural network-based model, which consists of multiple layers of interconnected nodes. These nodes are trained on large datasets, such as Wikipedia or news articles, to learn patterns and relationships between words and phrases in human language. Once trained, the AI language model can generate new text by predicting the most likely next word or phrase based on the context of the previous words.
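To make this concrete, here is a minimal sketch of next-word prediction, assuming the Hugging Face transformers library and the small GPT-2 model (chosen purely for illustration; production chat models are far larger):

    # pip install transformers torch
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The quick brown fox jumps over the"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

    # Probability distribution over the *next* token, given everything so far.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)

    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id)!r}: p = {prob:.3f}")

Running this prints the model's five most likely continuations with their probabilities; generation is simply this prediction step repeated, one token at a time.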

ChatGPT, OpenAI's large language model (currently based on GPT-4), is one of the most popular AI tools. The system has been trained on vast amounts of data so that it can understand and generate language that sounds like what people write. In other words, ChatGPT is a computer program made to talk to people, answer their questions, give them information, and power chatbots and virtual assistants.

ChatGPT is also capable of passing prestigious graduate-level exams, albeit without particularly high marks: the chatbot recently passed both the bar exam and the medical licensing exam.

Because of their ability to generate human-like text, ChatGPT and other AI language models have raised concerns about potential misuse. Elon Musk has been vocal about his dissatisfaction with OpenAI since stepping down from its board in February 2018, culminating in an open letter calling for a pause in work on more powerful AI systems. Still, despite these concerns, Musk has been an advocate for the research and development of AI technologies such as ChatGPT, recognising their enormous potential.

So, determining whether a human or a machine wrote a text is a growing challenge, but doing so can help prevent the spread of misinformation and malicious content, especially in journalism, cybersecurity, and finance.



Risks of AI-generated text

Researchers have experimented with several methods to identify text produced by AI. This matters because recent NLG models have dramatically improved the diversity, control, and quality of machine-generated text. But the ability to create unique, human-like text with unprecedented speed and efficiency also makes NLG abuses such as phishing, disinformation, fraudulent product reviews, academic dishonesty, and toxic spam harder to detect. To maximise the benefits of NLG technology while minimising harm, trustworthy AI must address the risk of abuse.

Real-world abuse of generative language models is already emerging. One controversy involved an AI researcher who trained a language model on posts from the message board 4chan. Because the board's posts made up its training data, the model produced large numbers of posts in the same vein, including mean-spirited and highly objectionable ones. The researcher made the model available for download, but many websites banned it because of the harmful text it could produce. Many AI leaders, including scientific directors, CEOs, and professors, condemned the model's deployment.

One of the potential dangers associated with these models is how accessible they are to threat actors, as ChatGPT's user-friendly web interface demonstrates. A prime example is GPT-3, which powers Jasper, an AI writing assistant that generates content through human collaboration. Thanks to Jasper's capabilities, users without technical expertise can give the model prompts, keywords, and a voice tone to create vast amounts of blog and website content. The same process could easily be replicated with open-source models to produce limitless amounts of targeted misinformation designed for popular social media sites and load it onto grey-market account automation tools.

Ultimately, future NLG research will bring new wonders, but bad actors will also use it. To maximise the benefits of this technology while minimising its risks, humans must predict and defend against abuses.

Techniques for detecting AI-generated text

Here are some tools and manual methods to determine if an AI wrote a text:

Content at Scale AI Detector

Content at Scale AI Detector has been trained using billions of data pages. It can test up to 25,000 characters (nearly 4000 words).

To use the tool, copy and paste your writing into the detection field before submitting it for detection. In seconds, you'll see a human content score (indicating how likely it is that a human wrote a sample of text) and a line-by-line breakdown of suspicious or obvious AI.

[Image: Content at Scale AI Detector]

Artificial intelligence predicts by recreating patterns. AI generators are taught to recognise patterns and generate results that "fit" them. Text that corresponds to pre-existing formats is more likely to be AI-generated.

The differences between AI output and human writing are evaluated through predictability, probability, and pattern scores. Human writing is less predictable because it does not always follow patterns; human output varies more and is more inventive. AI writing, on the other hand, largely reproduces the patterns it has learned.

Originality.ai

Originality bills itself as the only unofficial AI content detection tool that works with ChatGPT and GPT-3.5, among the most advanced generative language models. It is a top content checker that detects both artificial intelligence and plagiarism. The tool determines content predictability using GPT-3 and other natural language models trained on massive amounts of data.

You get a professional, industry-level detection checker that effectively checks copy at the production level.

The tool uses a modified version of the BERT classification model to work out whether a piece of text was written by a human or generated by AI. At its core is a pre-trained language model with a new architecture, built on 160GB of text data and fine-tuned with millions of samples from a training dataset. The model copes even with short, hard-to-classify texts and is reliable for texts longer than 50 tokens.
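Originality's fine-tuned model is proprietary, so the sketch below only illustrates the general recipe: a pre-trained Transformer with a binary classification head. It uses OpenAI's public RoBERTa-based GPT-2 output detector as a stand-in checkpoint, not Originality's actual model:

    # pip install transformers torch
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Public stand-in checkpoint (OpenAI's GPT-2 output detector), NOT Originality's model.
    checkpoint = "roberta-base-openai-detector"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
    model.eval()

    text = "Paste the passage you want to check here."
    inputs = tokenizer(text, return_tensors="pt", truncation=True)

    with torch.no_grad():
        logits = model(**inputs).logits

    # Softmax turns the two logits into human-vs-AI probabilities.
    probs = torch.softmax(logits, dim=-1)[0]
    for label_id, label in model.config.id2label.items():
        print(f"{label}: {probs[label_id]:.3f}")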

To use Originality, paste the content into the checker and scan it.

Unlike Content at Scale, Originality saves scans in your account dashboard. This is excellent for frequently returning to multiple pieces of content.

The AI detection score indicates the likelihood that the selected writing is AI-generated; it is not the percentage of the text that was written by AI.

Detection scores
According to the CEO of Originality, content that consistently scores below 10% is safe. Only when content scores 40-50% AI should you be suspicious of its origins.

Larger sample sizes improve detection accuracy, but accuracy does not imply reliability! The more content you read by a writer, the better you can tell if it is genuine.

Keep an eye out for false positives and negatives. Evaluating a writer/service based on a series of articles rather than a single one is preferable.

Complete sites
If detection scores are consistently high across a site, AI-written content is likely. A single article cannot prove that a website or a set of documents was written with the assistance of AI, so these detection tools should be used with caution. More articles from a single source increase your statistical sample, but detection still involves many factors beyond what any single tool can capture; the following sections cover syntax, repetition, and complexity. Originality has also implemented a site-wide checker.

Giant Language Model Test Room

The Giant Language Model Test Room (GLTR), developed by three researchers from the MIT-IBM Watson AI Lab and Harvard NLP, is an excellent free tool for detecting machine-generated text. GLTR is currently one of the simplest ways to predict whether casual portions of text were written with AI: copy and paste the text into the GLTR input box, then click "analyse". Because it is built on GPT-2, this tool may be less powerful against text from GPT-3-class models.

The tool estimates how likely it is that an AI produced the text: for each word, the context to its left determines how highly the model ranks that word among its predictions. Words in the top 10 predictions are coloured green, the top 100 yellow, the top 1,000 red, and the rest violet. AI-generated content shows up as predominantly green and yellow.

[Image: Giant Language Model Test Room (GLTR)]

Again, GLTR is not perfect, but it is a very good predictor. It is a useful visual tool for evaluating AI content but does not provide a score: you will not get a percentage or a number that says, "Yes, this is probably AI." By pasting in text you can estimate how likely it is that an AI wrote it, but the final decision is yours.
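GLTR's colour coding can be approximated in a few lines: score each token by the rank the model assigns it given the left context, then bucket the ranks. A minimal sketch using GPT-2, the same model GLTR is built on:

    # pip install transformers torch
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def token_ranks(text):
        """For each token, the rank GPT-2 gives it among all candidates, given the left context."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        ranks = []
        for pos in range(1, ids.shape[1]):  # the first token has no left context
            scores = logits[0, pos - 1]     # distribution over the token at `pos`
            actual = ids[0, pos].item()
            rank = int((scores > scores[actual]).sum().item()) + 1
            ranks.append((tokenizer.decode(actual), rank))
        return ranks

    def bucket(rank):
        """GLTR's colour scheme."""
        if rank <= 10:
            return "green"
        if rank <= 100:
            return "yellow"
        if rank <= 1000:
            return "red"
        return "violet"

    for token, rank in token_ranks("The quick brown fox jumps over the lazy dog."):
        print(f"{token!r}: rank {rank} -> {bucket(rank)}")

Text where almost every token lands in the green bucket is a hint that a model like GPT-2 would have chosen the same words, which is exactly the pattern GLTR visualises.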

AI Content Detector at Writer.com

Although the parameters it uses to detect AI content are not made explicit, Writer.com provides a free and straightforward AI writing detection tool. You can check text by URL or paste writing directly into the tool to run scans.

The detector lets you check up to 1,500 characters for free at any time, and it detects ChatGPT-generated writing reasonably well.

OpenAI Classifier

OpenAI released its own classifier to determine whether text was written with AI (especially ChatGPT). The tool is admittedly unreliable: even though it was developed by the same company as ChatGPT, OpenAI reports that only 26% of AI-written samples it tested were identified as "likely AI-written".

The classifier requires at least 1,000 characters and performs much better on larger chunks of text. Highly predictable text cannot be identified reliably; this includes song lyrics and maths equations, because the expected continuation is always the same. With the release of the classifier, OpenAI also published guidelines for educators trying to deal with and digest the recent ChatGPT hype.

Paste a text article into the input and press "submit" to use the classifier. When you click the example buttons, the samples will be auto-filled into the text field.

[Image: OpenAI Classifier]

DetectGPT

The DetectGPT method is based on computing the text's (log-)probability. When an LLM generates text, each token is assigned a probability conditioned on the tokens that came before it; multiplying all of these conditional probabilities together gives the probability of the whole text.

DetectGPT then perturbs the text, rewriting small parts of it. If the perturbed versions have a noticeably lower probability than the original, the original text was likely generated by AI; if the probability stays roughly the same, a human probably wrote it.

[Image: DetectGPT]
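A minimal sketch of this idea is shown below. The DetectGPT paper generates perturbations with T5 mask-filling; as a simplification, this sketch just drops random words, which is only a rough stand-in for the real perturbation function:

    # pip install transformers torch
    import random
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative scoring model
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def avg_log_prob(text):
        """Average log-probability per token: (1/N) * sum_i log p(x_i | x_<i)."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels=ids the model returns the mean cross-entropy,
            # i.e. the negative average log-probability.
            loss = model(ids, labels=ids).loss
        return -loss.item()

    def perturb(text, drop=0.15):
        """Crude perturbation: randomly drop words (DetectGPT proper uses T5 mask-filling)."""
        words = [w for w in text.split() if random.random() > drop]
        return " ".join(words) if words else text

    def detect_gpt_score(text, n_perturbations=10):
        original = avg_log_prob(text)
        perturbed = [avg_log_prob(perturb(text)) for _ in range(n_perturbations)]
        # A large positive gap means the original sits on a probability peak -> likely AI.
        return original - sum(perturbed) / n_perturbations

    print(detect_gpt_score("Paste a passage here to score it for AI-likeness."))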

GPTZero

GPTZero is a simple linear regression model that estimates the text's perplexity, a measure of how predictable the text is.

Perplexity is related to the log-probability described above for DetectGPT: it is the exponential of the negative average log-probability per token. Large language models learn to maximise text probability, which minimises the negative log-probability and therefore the perplexity. So the lower a text's perplexity, the less random, and the more model-like, it is.

GPTZero then uses the idea that low-perplexity sentences are more likely to have been generated by an AI. It also reports the text's so-called "burstiness": how much the perplexity varies from sentence to sentence, shown as a graph of each sentence's perplexity.

[Image: GPTZero]
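GPTZero itself is closed-source, but both quantities it reports are straightforward to reproduce. A minimal sketch, again scoring with GPT-2 as an illustrative stand-in:

    # pip install transformers torch
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        """exp of the average negative log-probability per token."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean negative log-probability
        return torch.exp(loss).item()

    text = (
        "Artificial intelligence is changing how we write. "
        "Some sentences are plain. Others wander, double back, and surprise you."
    )

    # Overall perplexity: lower values mean more predictable, more AI-like text.
    print("overall perplexity:", perplexity(text))

    # "Burstiness": how much perplexity varies from sentence to sentence.
    # Human writing mixes easy and hard sentences; AI text tends to be uniform.
    for sentence in (s.strip() + "." for s in text.split(".") if s.strip()):
        print(f"{perplexity(sentence):8.1f}  {sentence}")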

Technical indicators

Another way to tell whether content is AI-generated is through technical aspects of the writing. Look closely at the content if the tools above are inconclusive or if you want to break down a piece of writing further. Take a look at these:

1. Short sentences are common in AI-generated content. The AI attempts to write like humans but has yet to master long, complex sentences; AI has yet to pass the Turing test. This is especially obvious when reading a technical blog with code or instructions. If GLTR or Originality show creative, one-of-a-kind content, you're in good shape; be more sceptical of short, generic technical content.

2. Another method for identifying AI-generated content is repetition. Because it doesn't know what it's talking about, the AI fills in the blanks with relevant keywords. As a result, an article written by an AI is more likely to repeat the same words, much like keyword-stuffed articles produced by spammy AI SEO tools. Keyword stuffing is the unnatural repetition of words or phrases; some articles include their keyword in nearly every sentence, which distracts from the article and turns off readers. A simple word-frequency count can surface this kind of repetition (see the sketch after this list).

3. Lack of analysis. AI-written articles are deficient in complex analysis. Machines are excellent at gathering data but poor at interpreting it. If an article reads like a list of facts without analysis, it was most likely written by artificial intelligence. AI-generated writing excels at static writing (history, facts, etc.) but struggles with creative or analytical writing; the more information it has, the better it writes.

4. Incorrect data. This is more common in AI-generated product descriptions but can also be found in blog posts and articles. When collecting data from multiple sources, machines often mix facts up. If a model does not know the answer but must produce one, it will predict figures from patterns, and those predictions can be wrong. So if you read an article and notice several inconsistencies between facts and numbers, there is a good chance AI wrote it.
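A quick way to surface the kind of repetition described in point 2 is a plain word-frequency count; no machine learning required. A minimal sketch using only the Python standard library (the file name and the 3% threshold are arbitrary choices for illustration):

    import re
    from collections import Counter

    def top_words(text, n=10, min_len=4):
        """Count how often each word of min_len+ letters appears, case-insensitively."""
        words = re.findall(r"[a-zA-Z']+", text.lower())
        counts = Counter(w for w in words if len(w) >= min_len)
        total = sum(counts.values()) or 1
        return [(word, count, count / total) for word, count in counts.most_common(n)]

    # "article.txt" is a hypothetical input file containing the text to check.
    with open("article.txt", encoding="utf-8") as f:
        article = f.read()

    for word, count, share in top_words(article):
        flag = "  <-- suspiciously repetitive" if share > 0.03 else ""
        print(f"{word:<15} {count:>4}  {share:.1%}{flag}")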

Check your sources and author's credibility

This one may seem redundant, but it's still worth mentioning. If you're reading an article and the domain appears unrelated to the content, that's your first red flag. More importantly, double-check the sources cited in the article (if any). If an author relies on dubious websites or makes claims without any source, they are either not doing their research or simply churning out a slew of AI-generated content.

Limitations of detecting AI-generated text

While there are techniques for detecting AI-generated text, they have limitations, such as:

  • With short paragraphs, AI text detectors can be unreliable. As a result, ensure that the text contains at least 1000 characters.
  • AI text detectors are not always trustworthy: they sometimes claim that a text was generated by AI even though humans wrote it.
  • While some language models can generate text in multiple languages, these AI text detectors are currently only available in English.
  • Text detectors can detect text generated by other language models, but they work best with ChatGPT text.
  • They may fail to detect AI-generated text if humans later edit it.
  • An advanced enough AI language model may be indistinguishable from human-written text if the language model has access to large amounts of data to learn from.
  • Additionally, some AI language models are specifically designed to mimic human behaviour and intentionally generate text that is difficult to distinguish from human-written text. These are known as "adversarial" models and can be incredibly challenging to detect.

Conclusion

In conclusion, being able to tell if a text is written by a person or an AI language model is an essential tool for encouraging people to use technology and information in a responsible and ethical way. As AI language models improve, spotting AI-generated text will become more critical in many fields and industries, such as journalism, finance, and cybersecurity.

Even though the methods used to find AI-generated text have their limits, ongoing research and development in this field will lead to new and better ways to find AI-generated text. By staying informed and alert, we can help stop the spread of false information, propaganda, and harmful content, which promotes the right way to use information and technology.

If you're interested in learning more about our data science services, including AI and NLP, we invite you to explore the Imaginary Cloud AI website. Our expert team is committed to providing cutting-edge solutions to help you harness the power of data and AI in your business.

You can also watch Imaginary Cloud's workshop on "A Watermark for Large Language Models" here:

FAQs

How can I tell if an AI language model generated a text message?
You can look for repetitive patterns, analyse the text's complexity, and check word frequency. Alternatively, you can use machine learning tools to classify text as human- or AI-generated.

Can Chat GPT be detected?
Yes, ChatGPT can be detected. Numerous methodologies and tools have been developed for recognising text generated by ChatGPT. Online tools such as OpenAI's own classifier and AI text detectors like GPTZero can identify ChatGPT-written text. However, as these tools are not perfect, detection accuracy can vary.

What are some applications for detecting AI-generated text?
Applications include identifying fake news and propaganda in journalism, detecting and preventing phishing attacks in cybersecurity, identifying market manipulation and fraud in finance, and ensuring that customers interact with a human in customer service.

Are there any limitations to techniques for detecting AI-generated text?
Some limitations include the lack of labelled data for training machine learning models, the difficulty of detecting AI-generated text that has been slightly edited, and the constant evolution and improvement of AI language models.

How can I protect myself from AI-generated text?
Be cautious of suspicious emails or messages, and verify the source of the text if possible. Use tools that detect AI-generated text to help identify and prevent potential threats.

Will detecting AI-generated text become more important in the future?
Yes, as AI language models become more advanced, the ability to detect AI-generated text will become increasingly important in various industries and fields.

