TY - GEN
T1 - Constructing and evaluating a novel crowdsourcing-based paraphrased opinion spam dataset
AU - Kim, Seongsoon
AU - Lee, Seongwoon
AU - Park, Donghyeon
AU - Kang, Jaewoo
N1 - Funding Information:
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2014R1A2A1A10051238, 2012M3C4A7033341).
Publisher Copyright:
© 2017 International World Wide Web Conference Committee (IW3C2)
PY - 2017
Y1 - 2017
N2 - Opinion spam, intentionally written by spammers who have no actual experience with the services or products they review, has recently become a factor that undermines the credibility of online information. In recent years, studies have attempted to detect opinion spam using machine learning algorithms. However, the limitations of gold-standard spam datasets remain a major obstacle in opinion spam research. In this paper, we introduce a novel dataset called Paraphrased OPinion Spam (POPS), which contains a new type of review spam that imitates real human opinions, collected through crowdsourcing. To create such a seemingly truthful review spam dataset, we asked task participants to paraphrase truthful reviews and to include factual information and domain knowledge in their reviews. Classification experiments and semantic analysis show that our POPS dataset most closely resembles truthful reviews, both linguistically and semantically. We believe that our new deceptive opinion spam dataset will help advance opinion spam research.
AB - Opinion spam, intentionally written by spammers who have no actual experience with the services or products they review, has recently become a factor that undermines the credibility of online information. In recent years, studies have attempted to detect opinion spam using machine learning algorithms. However, the limitations of gold-standard spam datasets remain a major obstacle in opinion spam research. In this paper, we introduce a novel dataset called Paraphrased OPinion Spam (POPS), which contains a new type of review spam that imitates real human opinions, collected through crowdsourcing. To create such a seemingly truthful review spam dataset, we asked task participants to paraphrase truthful reviews and to include factual information and domain knowledge in their reviews. Classification experiments and semantic analysis show that our POPS dataset most closely resembles truthful reviews, both linguistically and semantically. We believe that our new deceptive opinion spam dataset will help advance opinion spam research.
KW - Crowdsourcing
KW - Deceptive opinion spam
KW - Paraphrased opinion spam
UR - http://www.scopus.com/inward/record.url?scp=85051552928&partnerID=8YFLogxK
U2 - 10.1145/3038912.3052607
DO - 10.1145/3038912.3052607
M3 - Conference contribution
AN - SCOPUS:85051552928
SN - 9781450349130
T3 - 26th International World Wide Web Conference, WWW 2017
SP - 827
EP - 836
BT - 26th International World Wide Web Conference, WWW 2017
PB - International World Wide Web Conferences Steering Committee
T2 - 26th International World Wide Web Conference, WWW 2017
Y2 - 3 April 2017 through 7 April 2017
ER -