Was This Written By ChatGPT?
Can you tell who wrote the article you're reading?
Can you tell what’s been written by GPT and what hasn’t? Probably not without an algorithm to help, and even with one, the fraudsters always seem to be a step ahead of the good guys (FTX, Madoff, Olympic doping, etc.).
Well, this is kind of a big deal, because if we can’t tell, what makes us think GPT can? And if it can’t, how do we know GPT-X isn’t just learning from some crappier previous version of GPT? It’s bad enough that humans post, share, and tweet wrong, misleading, or just plain dumb content. But now we’re training the AIs on this dreck, then potentially training the next generation of AIs or AGIs on the previous dreck-fed dreck.
How confident are you that the Wikipedia article you just read wasn’t written by some unscrupulous or lazy editor who simply typed a prompt into GPT? And if your superior human critical thinking can’t tell, what hope does the next AI have of maintaining its detection abilities?
Someone will build a BS-detector (someone already has), the scammers will do one better (give it a minute or two), and on we go. At some point, truth vanishes into the ether (if it hasn’t already). AI just generates more content that might have sprouted from arable corpora tilled with manure (figuratively speaking).
Enjoy reading your next online article.
No one works with an agency just because they have a clever blog. To work with my colleagues, who spend their days developing software that turns your MVP into an IPO, rather than writing blog posts, click here (Then you can spend your time reading our content from your yacht / pied-a-terre). If you can’t afford to build an app, you can always learn how to succeed in tech by reading other essays.