Count me in the camp of AI doubters. I’ve seen some impressive things from generative AI, including things that might one day help humanity as a whole, but too often it’s used as a crutch, especially by people who think it’s a shortcut to getting rich. Startups and billion-dollar corporations alike are promising the moon at a time when ChatGPT and Google Gemini still have trouble getting basic facts straight. When I’m researching an article, I can only ever rely on these tools for the simplest of questions. Everything else needs to be double-checked.
Two of the groups most bent on abusing AI are unethical product makers and marketers. Since people still depend on reviews to figure out whether something is worth buying, they’ll bombard online stores with fake reviews, looking to sway your opinion or at least get a product higher in rankings or search results. Thankfully, if you keep a few key tips in mind, it’s usually not that hard to identify AI scam campaigns — whether you’re shopping for a $2 spoon or a $2,000 smart TV. Pretty soon you’ll start noticing them like you’re staring at code in The Matrix.
Vague and universally positive language
If it sounds too good to be true…
To be clear, you will sometimes see human reviews that are overwhelmingly positive, usually when they’re short comments attached to star or point ratings. A happy buyer who only wants to share their feelings probably isn’t going to spend much time weighing positives and negatives. The whole point of an AI review, however, is to skew perceptions in a product’s favor, so the more glowing a review is without acknowledging the downsides, the more skeptical you should be. One countermeasure I use is to intentionally search for one-, two-, or three-star reviews; those may be skewed in their own right, but if nothing else, they’re liable to be human.
The more glowing a review is without acknowledging the downsides, the more skeptical you should be.
Always be on the lookout for praise that doesn’t reference specific product details. An AI-generated camera review might say something like “the quality of this product is perfect for the price,” rather than “the bundled 18-70 kit lens is tack-sharp, and I love its fold-out touchscreen.” That’s because an AI isn’t going to be familiar with the features a serious buyer cares about (more on this later), at least for anything more complex than that spoon I mentioned. In fact, there’s a chance that AI reviews will overcompensate, quoting parts of the official product description back at you to sound detailed. Those efforts should be patently obvious.
Formal or otherwise unnatural-sounding sentences
Sound it out
While it’s not from a review, the example above highlights how generative AI tends to produce formal, even clinical-sounding text unless you pull tricks to make it sound otherwise.
Generative AI isn’t actually intelligent; all it’s doing is synthesizing vast amounts of human-created content to spit out the results most likely to satisfy your request. It doesn’t have a personality, and it doesn’t understand the subtleties of social interaction in your culture. It’s just putting words together in a logical order that will read properly to as many people as possible.
An easy way to catch this is to verbalize what you’re reading. It doesn’t have to be out loud — the voice in your head will do. If you can’t imagine yourself, a friend, or a family member talking that way, it might be a red flag. If grammar is fundamentally broken — with adjectives, nouns, or verbs in the wrong order — that flag could turn into a 50-foot monster, signaling that someone originally generated the review in a different language, then used an online translation tool.
The irony here is that the best human-written reviews may sound quite formal if a person puts serious effort into them. Hopefully, they’ll know their audience well enough not to come across as clinical.
Checking with reputable third-party publications
Authority still matters
If you’re still suspicious of user reviews despite following my previous tips, it might be worth checking what well-known publications are saying. You can, of course, choose to rely solely on those publications, since their writers will probably be experienced enough to catch important details and have a vested interest in credibility. Some people seem to assume that every professional who delivers a positive review has been paid off, but in the long term, no writer wants to destroy their own reputation or that of the site(s) they work for.
Pro reviews can paint a picture of the commonly accepted pros and cons people are discussing.
The main reason I’m suggesting this tactic, however, is that pro reviews can paint a picture of the commonly accepted pros and cons people are discussing. If a user review doesn’t touch on critical points, that’s an obvious warning sign. If it does touch on a few points that are all positive, they should at least be ones that make sense to prioritize. Smartphone buyers typically care most about battery life, performance, and reliability, and less about things like elegant design or how many colors are available.
Fighting fire with fire
Because platforms like ChatGPT and Gemini rely on algorithms to generate content, common patterns will emerge. You may be able to spot these yourself if you’re paying close attention, but the easier approach is to rely on the various AI detection tools out there. One I like is the Chrome extension for Copyleaks, a tool most often used by schools and corporations. If I’m ever concerned about something, all I have to do is open the extension and paste in the relevant text. It does need a minimum sample size to work, but it’s not very hard to meet that threshold.
Detection tools aren’t foolproof.
Bear in mind that detection tools aren’t foolproof. The creators of AI platforms are always refining their models, whether to add features, improve accuracy, or simply sound more natural. It’s possible that an AI could change too much to be detected, much in the same way that an animal can evolve into a new species. AI detection firms are certainly aware of this possibility, though, so you should still be able to catch the vast majority of fake reviews. Copyleaks claims a 99% success rate at spotting AI, which is pretty remarkable if true.
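If you’d rather not paste suspect reviews into an extension one at a time, the same idea can be automated. The short Python sketch below sends review text to a generic AI-detection service over HTTP; the endpoint URL, API key, and response field are hypothetical placeholders rather than Copyleaks’ actual API, so treat it as an illustration of the workflow, not a drop-in tool.

```python
# Minimal sketch of automating the "paste text into a detector" step.
# The endpoint, API key, and "ai_probability" field are hypothetical
# placeholders, not any real service's API; swap in your detector of choice.
import requests

DETECTOR_URL = "https://example-detector.invalid/v1/score"  # hypothetical endpoint
API_KEY = "your-api-key-here"                               # hypothetical credential
MIN_WORDS = 50  # detectors generally need a minimum sample size to be reliable


def looks_ai_generated(review_text: str, threshold: float = 0.8) -> bool:
    """Return True if the (hypothetical) detector scores the text above the threshold."""
    if len(review_text.split()) < MIN_WORDS:
        raise ValueError("Sample is too short for a meaningful verdict")
    response = requests.post(
        DETECTOR_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": review_text},
        timeout=10,
    )
    response.raise_for_status()
    score = response.json().get("ai_probability", 0.0)  # hypothetical field name
    return score >= threshold


if __name__ == "__main__":
    sample = "The quality of this product is perfect for the price. " * 12
    verdict = "Likely AI-generated" if looks_ai_generated(sample) else "Probably human"
    print(verdict)
```

Even with a script like this, the same caveat applies: treat the score as one more data point, not a final verdict.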