Artificial Intelligence detects fake reviews that pose a threat to consumers, according to a new research study.
The researchers found that algorithmically generated fake product and service reviews make it hard for consumers to tell which reviews are real and which are fake, but Artificial Intelligence detects the fakes more reliably than humans can. The study was a collaboration between researchers at Aalto University’s Secure Systems Research Group and Waseda University in Japan.
What are fake reviews and why are they generated?
User reviews of products and services across popular consumer websites are not always legitimate. Researchers have found that consumers are at risk of seeing these reviews and believing they are genuine. They found that nine out of ten people read peer reviews on sites like TripAdvisor, Yelp, and Amazon, and trust what they say. Up to 40% of users make a purchase based on just a couple of user reviews, and they often spend up to 30% more when a product or service receives great reviews.
Mika Juuti, a doctoral student at Aalto University, said: “Misbehaving companies can either try to boost their sales by creating a positive brand image artificially or by generating fake negative reviews about a competitor. The motivation is, of course, money: online reviews are a big business for travel destinations, hotels, service providers and consumer products.”
Researchers have also identified that fake reviews generated by machines are likely to increase even further.
How Artificial Intelligence detects fake reviews
The study, which was recently presented at the 2018 European Symposium on Research in Computer Security, focused on developing machine learning to improve the way Artificial Intelligence detects fake reviews. In 2017, researchers at the University of Chicago trained a machine learning model on data from three million real restaurant reviews on Yelp. After training, the model was able to generate fake restaurant reviews, but its errors were easy for readers to spot: it struggled to stay on topic, often straying to talk about restaurants in the wrong city. Juuti and his team instead used neural machine translation to provide the model with context, using a text sequence of ‘review rating, restaurant name, city, state, and food tags’.
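The paper does not reproduce its code here, but the idea of the context sequence can be sketched in a few lines: the review metadata is serialized into a single conditioning string that a sequence-to-sequence (NMT-style) generator takes as input. The field order and the sample values below are illustrative assumptions, not the authors' exact format.

```python
# Illustrative sketch (not the authors' code): serialize the review
# metadata -- rating, restaurant name, city, state, food tags -- into
# one conditioning sequence for an NMT-style review generator.

def build_context(rating, name, city, state, food_tags):
    """Join the review metadata into a single conditioning string."""
    return " ".join([str(rating), name, city, state] + list(food_tags))

ctx = build_context(5, "Luigi's", "Chicago", "IL", ["italian", "pizza"])
print(ctx)  # -> 5 Luigi's Chicago IL italian pizza
```

Conditioning on the city and food tags is what lets the generator stay on topic, avoiding the wrong-city mistakes that gave away the earlier Yelp-trained model.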
After they did this, they conducted a user study to see whether the results were believable. Juuti added: “We showed participants real reviews written by humans and fake machine-generated reviews and asked them to identify the fakes. Up to 60% of the fake reviews were mistakenly thought to be real.” Following this, they devised a classifier that was able to spot the fake reviews, even in the most difficult cases where humans could not distinguish between fake and genuine.
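The article does not describe the classifier's internals. As a toy illustration of the general approach, the sketch below trains a bag-of-words Naive Bayes classifier on a tiny invented sample of “fake” and “real” reviews using only the Python standard library; the sample texts and labels are hypothetical, and a real detector would use far richer features and training data.

```python
# Toy fake-review classifier: Laplace-smoothed bag-of-words Naive
# Bayes, standard library only. Purely illustrative; not the model
# from the study.
from collections import Counter
import math

def train(samples):
    """samples: list of (text, label). Returns per-label word counts."""
    counts = {}
    for text, label in samples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose word distribution best explains the text."""
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        total, vocab = sum(c.values()), len(c)
        # Sum of Laplace-smoothed log-probabilities of each word
        score = sum(math.log((c[w] + 1) / (total + vocab))
                    for w in text.lower().split())
        if score > best_score:
            best, best_score = label, score
    return best

data = [  # invented training sample
    ("great food amazing service best place", "fake"),
    ("best amazing great great deal", "fake"),
    ("the pasta was fine but the room was noisy", "real"),
    ("decent lunch though service was slow", "real"),
]
model = train(data)
print(classify(model, "amazing great best food"))  # -> fake
```

Even this crude model picks up the over-enthusiastic vocabulary typical of generated praise; the study's classifier succeeded on much harder cases where human readers could not tell the difference.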