Machine Learning News Hubb
Apparently I am a robot

by admin
January 18, 2023
in Neural Network


As AI-generated text is getting better, it’s getting easier to pass it off as human-written.

That’s not to say it’s as good as human-written. Its goal is to sound correct rather than be correct, so it has a well-known tendency to confidently make stuff up. But there’s a market for mediocre writing, whether you’re trying to lure traffic to a blog or pass an assignment.

Wouldn’t it be nice if there were a way to detect AI-generated text? In November 2019, along with the release of GPT-2, OpenAI also released a GPT-2 output detector, and there’s an online demo hosted at Hugging Face (as well as a browser plugin called GPTrueOrFalse). Although it’s trained specifically to detect GPT-2’s generated text, I’ve noticed that it’s also fairly good at detecting text from completely different text generators, including ones that are much more or less advanced.

So, I tried it on an excerpt from my book on AI:

92.99% fake? Apparently I am a robot.

But wait! The preamble to the GPT-2 output detector demo mentions that the results start to become accurate after 50 tokens, and this was only 36 tokens. I added a couple more sentences and tried again.
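The 50-token minimum is easy to sanity-check before trusting a score. Here's a minimal sketch, assuming whitespace word count as a crude lower-bound proxy for GPT-2's BPE token count (for typical English prose, the tokenizer produces at least about one token per word); the real detector counts actual BPE tokens, so this is only a rough screen:

```python
def rough_token_count(text: str) -> int:
    # Crude proxy: whitespace word count. GPT-2's BPE tokenizer
    # usually yields at least one token per English word, so this is
    # a lower-bound estimate, not the detector's real token count.
    return len(text.split())

def clears_detector_minimum(text: str, minimum: int = 50) -> bool:
    """True if the sample likely meets the detector's 50-token minimum."""
    return rough_token_count(text) >= minimum

snippet = ("As AI-generated text is getting better, it's getting "
           "easier to pass it off as human-written.")
print(rough_token_count(snippet), clears_detector_minimum(snippet))
```

A sample that fails this check (like the 36-token excerpt above) is too short for the detector's score to mean much either way.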


So does this mean I’m really only 50% a robot? Or that robots do 50% of my work?

I added lots more text and it’s back to nearly 100% robot.

Rated as 98.72% fake based on 268 tokens. Input:

Let’s say, hypothetically, that we have discovered a magic hole in the ground that produces a random sandwich every few seconds. (Okay, this is very hypothetical.) The problem is that the sandwiches are very, very random. Ingredients include jam, ice cubes, and old socks. If we want to find the good ones, we’ll have to sit in front of the hole all day and sort them.

But that’s going to get tedious. Good sandwiches are only one in a thousand. However, they are very, very good sandwiches. Let’s try to automate the job.

To save ourselves time and effort, we want to build a neural network that can look at each sandwich and decide whether it’s good. For now, let’s ignore the problem of how to get the neural network to recognize the ingredients the sandwiches are made of—that’s a really hard problem. And let’s ignore the problem of how the neural network is going to pick up each sandwich. That’s also really, really hard—not just recognizing the motion of the sandwich as it flies from the hole but also instructing a robot arm to grab a slim paper-and-motor-oil sandwich or a thick bowling-ball-and-mustard sandwich.

In fact, even when I maxed out the detector’s length at 510 tokens (just under 500 words), it was still rating me as about 8% human.

Up to this point, I'd thought the GPT-2 detector worked pretty well: as long as the text sample was long enough, it matched what I knew or guessed about the text's origin. And it doesn't think every excerpt from my book is AI-generated. But the fact that it insisted that even one excerpt wasn't written by a human means that it's useless for detecting AI-generated text.

The GPT-2 detector is especially useless in cases like detecting cheating, where being deemed a robot could carry huge penalties. I’ve even suggested it for this purpose in the past, and have completely changed my mind. The bloggers responsible have been sacked.

“What’s worse than a tool that doesn’t work? One that does work, nearly perfectly, except when it fails in unpredictable and subtle ways.” – Cory Doctorow

Bonus post: I get ChatGPT to rate another neural network’s recipes. It is … generous.



