Social Media Malware: Determinants of Users' Intention to Share Potentially Infected Posts

Emergent Research Forum Paper

Sonia Camacho
Universidad de los Andes
so-camac@uniandes.edu.co

Abstract

Malware has become problematic in social networking sites (SNS), where users can click on links that come from legitimate users but that may be infected with non-authorized downloads. This may generate security risks for companies (e.g. unauthorized access to sensitive information) if employees use those social networks from their work devices. Considering the relevance of this phenomenon for organizations, this ERF paper explores the potential of malware infection from the point of view of the weakest link in security efforts: the user. This study proposes a theoretical model based on the Theory of Reasoned Action that will aid in understanding the factors that determine a user's intention to share potentially infected content in a SNS. The model will be empirically validated using a survey-based study involving Facebook users and structural equation modeling techniques.

Keywords

Social networking sites, malware, online sharing behavior.

Introduction

Malware is short for malicious software and refers to software that is designed to cause damage or perform unwanted actions in a computer system (e.g. run a computer virus) (TechTerms.com 2015). Malware has become problematic in social networking sites, where users may click on links that are sent by legitimate friends or acquaintances but that may come with infected non-authorized downloads (Everett 2010). Sharing personal information on social networking sites increases users' vulnerability and their likelihood of falling victim to phishing (Jagatic, Johnson, Jakobsson, and Menczer 2007).
In a study conducted in 2011, more than 50% of the Information Technology (IT) practitioners surveyed indicated an increase of malware in their systems due to the use of social media, and more than 60% specified that social media represented a security risk to their organizations (Ponemon 2011, as cited in He 2012). Considering the relevance of this phenomenon to organizations, researchers have explored the psychological tactics (e.g. manipulating users' greed or curiosity) employed by malware authors to lure users into opening malicious links, as well as the activities performed by malware once it is activated (e.g. fighting existing protection mechanisms) (Abraham and Chengalur-Smith 2010). Moreover, they have studied the mechanisms employed by organizations to address malware and other security risks derived from social networking sites (e.g. monitoring social networking site usage, banning its use, having up-to-date security software) (Everett 2010; He 2012). However, considering that the user is the weakest link in security efforts, strategies to address malware and other risks need to focus on the usage behavior of individuals (ISACA 2010, as cited in He 2012). This is a pressing issue considering that users may believe that the behaviors they are performing are not risky (Davinson and Sillence 2010). Therefore, the research objective of this study is to explore the motivations users have to click on or share dubious links in social networking sites.

Twenty-second Americas Conference on Information Systems, San Diego, 2016
Theoretical Background and Research Model

In order to achieve the proposed research objective, this study proposes a theoretical model based on the Theory of Reasoned Action, briefly described below.

Theory of Reasoned Action

The Theory of Reasoned Action (TRA) has been widely used in Information Systems (IS) studies that seek to understand the adoption and use of IS (Hong, Kim, and Lee 2008). According to TRA, individuals' performance of certain behaviors is determined by their intentions to carry them out, which in turn are influenced by subjective norms and individuals' attitudes. The attitude toward a particular behavior is determined by a set of beliefs the individual holds about that behavior (Ajzen and Fishbein 1980). In applying TRA to IS adoption studies, researchers have found inconsistent results in terms of the power of attitude to predict individuals' behavior, and as such, this construct has generally been excluded from adoption models (Kroenung and Eckhardt 2015). The theory is deemed suitable for this study, as it explores a user's intention to perform a particular behavior (i.e. clicking on a link or sharing a post) and its antecedents.

Research Model and Hypotheses

The proposed research model is based on the theoretical framework described above and is shown in Figure 1. The constructs and hypotheses included in the model are described in detail below.

Figure 1. Proposed research model (social norms → intention, H1+; perceived usefulness of content → intention, H2+; perceived vulnerability → intention, H3-; trust in SNS contacts → perceived vulnerability, H4-)

Social norms

Social norms refer to individuals' perceptions of what referent individuals or groups believe about a particular behavior. They are determined by individuals' perceived expectations of specific others and their intention to comply with these expectations (Davis, Bagozzi, and Warshaw 1989).
In the context of social networking sites (SNS), authors have found that individuals' sharing behavior can be influenced by the behavioral patterns of their friends (Bakshy, Rosenn, Marlow, and Adamic 2012; Chakraborty, Vishik, and Rao 2013). Moreover, social norms have been found to influence individuals' intentions to use Facebook (Błachnio, Przepiórka, and Rudnicka 2013) and individuals' disclosure in SNS (Cheung, Lee, and Chan 2015). In light of these findings, it is expected that users will be more likely to click on a link or share a post if their friends have done so, or if they believe their friends expect them to do it (i.e. as a result of social influence). Thus, it is hypothesized that:

H1: Social norms are positively related to the intention to click on a link or share a post

Perceived usefulness of content

Perceived usefulness refers to an individual's belief that using a particular IS will generate improvements in her or his job performance (Davis, Bagozzi, and Warshaw 1989). This construct has been shown to be a strong determinant of intention to use an IS (Venkatesh, Morris, Davis, and Davis 2003). In the
knowledge sharing literature, Sussman and Siegal (2003) found that the usefulness of information provided by others (e.g. advice, recommendations) increases individuals' likelihood of adopting that piece of information (e.g. following the advice). In the same vein, finding useful knowledge in online communities increases users' intentions to seek and share knowledge in those communities (Park, Gu, Leung, and Konana 2014; Yu, Lu, and Liu 2010). Considering these findings, it is expected that individuals' intention to click on a link or share a post will increase if they find the information contained in that link or post useful (e.g. it gives more knowledge to the user, or it offers discounts or prizes). In formal terms:

H2: Perceived usefulness of content is positively related to the intention to click on a link or share a post

Perceived vulnerability

Perceived vulnerability refers to the degree to which an individual believes a threat will occur to him/her (Lee, Larose, and Rifon 2008; Mohamed and Ahmad 2012, p. 2368). In Protection Motivation Theory studies, perceptions of vulnerability to a threat (e.g. a computer virus) are positively related to intentions to adopt protective behaviors (e.g. installing a firewall) (see, for example, Lee, Larose, and Rifon 2008). In the context of SNS, perceptions of vulnerability to different risks (e.g. blackmailing, identity theft) have a negative impact on users' disclosure of their information (Cheung, Lee, and Chan 2015). In the same vein, it is expected that users' perception of vulnerability to the potential risks (e.g. personal information loss, virus infections) involved in a link or post will lead to a protective behavior (i.e. avoiding clicking on the link or sharing the post).
Thus, it is hypothesized that:

H3: Perceived vulnerability is negatively related to the intention to click on a link or share a post

Trust

Trust can be defined as a person's willingness to be vulnerable to another, based on the expectation that her/his confidence will not be exploited (Mayer, Davis, and Schoorman 1995). In the context of SNS, users develop trusting beliefs toward other users based on perceptions of similarity (Krasnova, Spiekermann, Koroleva, and Hildebrand 2010). Those trusting beliefs help users mitigate the privacy concerns derived from disclosing their information in SNS (Cheung, Lee, and Chan 2015). In light of these findings, it is expected that trust in the source of information in a SNS (i.e. a SNS contact) will reduce a user's perceived vulnerability to a potential threat in the link or post that s/he is considering sharing. In formal terms:

H4: Trust in social networking sites contacts is negatively related to perceived vulnerability

Methodology

This research will employ a survey-based study to collect data and test the hypothesized relationships. Participants will be adults who are active Facebook users, recruited through a market research firm. Facebook is chosen as the SNS focus of this study because it is the leading social network worldwide, with 1.5 billion monthly active users (Statista 2016). After obtaining consent, participants will be randomly assigned to one of three groups. Each group will be presented with a different scenario (a screenshot), in which a hypothetical Facebook News Feed will show one of the following: (1) a link containing a video of a celebrity, (2) a link offering extra security features (e.g. antispam software), or (3) a post offering a free product. These scenarios were chosen based on some of the emotions manipulated by malware authors to persuade users: curiosity (e.g. offering information about celebrities), fear (e.g. issuing a fraud alert), and greed (e.g.
offering free things) (Abraham and Chengalur-Smith 2010). The scenarios will be pretested to make sure they are realistic and that participants have no difficulty placing themselves in their assigned hypothetical scenario (following D'Arcy, Hovav, and Galletta 2009). In each group, after presenting the screenshot, participants will respond to the items measuring the five constructs included in the research model. In addition, other variables (e.g. age, gender, occupation, time with a Facebook account, and whether participants deem the content exhibited in the screenshot to be dangerous) will be collected to control for their potential influence on the endogenous constructs of the model. Ethics approval will be sought before data collection.
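Before any data are collected, the hypothesized paths (H1–H4) can be illustrated as a toy data-generating process. The Python sketch below simulates standardized construct scores under assumed path coefficients; all coefficients, signs excepted, and variable names are illustrative assumptions, not empirical estimates or measured scales:

```python
import random

def simulate_model(n=500, seed=7):
    """Toy data-generating process for the hypothesized model.

    Path signs follow H1+ (norms), H2+ (usefulness), H3- (vulnerability)
    on intention, and H4- (trust) on vulnerability. Coefficient magnitudes
    (0.3, 0.4) are arbitrary illustrative choices.
    """
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        norms = rng.gauss(0, 1)    # social norms
        useful = rng.gauss(0, 1)   # perceived usefulness of content
        trust = rng.gauss(0, 1)    # trust in SNS contacts
        vuln = -0.4 * trust + rng.gauss(0, 1)                    # H4-
        intent = (0.3 * norms + 0.3 * useful                     # H1+, H2+
                  - 0.3 * vuln + rng.gauss(0, 1))                # H3-
        rows.append((norms, useful, trust, vuln, intent))
    return rows

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

On simulated data like this, corr(norms, intent) and corr(useful, intent) come out positive while corr(trust, vuln) and corr(vuln, intent) come out negative, which is the pattern of signs the hypotheses predict for the real survey responses.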
Measurement Instrument

In order to measure the constructs in the proposed research model, previously validated scales will be adapted to the context of this study. Intention will be measured with a scale from Cheung and Lee (2012). Social norms and trust will be measured with scales from Cheung, Lee, and Chan (2015). Perceived usefulness of content will be measured with a scale from Sussman and Siegal (2003). Finally, perceived vulnerability will be measured with a scale from Mohamed and Ahmad (2012).

Model Validation and Sample Size

Structural Equation Modeling (SEM) will be used to validate the proposed model. In particular, Partial Least Squares (PLS) will be used, as it is suitable for exploratory studies such as the one proposed (Gefen, Straub, and Boudreau 2000). In order to compare the differences among the three groups, the guidelines offered by Faul, Erdfelder, Lang, and Buchner (2007) will be followed to detect a medium-size effect with a power of 0.80, an alpha of 0.05, and 3 groups. According to these guidelines, the minimum number of participants that would need to be recruited is 53 per group (159 in total). In order to account for potentially spoiled responses, a total of 180 participants will be recruited.

Potential Contributions and Limitations

From an academic standpoint, this is the first known study to address the issue of malware infection from the point of view of users' motivations to share potentially malicious content in social networking sites. This study will also help advance the Dark Side of Information Technology literature by identifying the factors that make individuals vulnerable to malware infection while they interact with their SNS applications. From a practical perspective, an understanding of users' motivations to share potentially malicious content will help organizations design more effective user training programs aimed at reducing the threat of malware disseminated through social media.
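The sample-size guideline above (53 participants per group for a medium effect, power 0.80, alpha 0.05, 3 groups) can be sanity-checked with a small Monte Carlo sketch of a one-way ANOVA across the three scenario groups. The Cohen's f parameterization is standard; the hardcoded critical value is an approximation for df = (2, 156), and the whole script is only an illustration, not part of the planned PLS analysis:

```python
import random
import statistics

def anova_f(groups):
    """One-way ANOVA F statistic for a list of samples."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    means = [statistics.mean(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

def simulated_power(n_per_group=53, f_effect=0.25, reps=2000, seed=1):
    """Monte Carlo power for a 3-group one-way ANOVA at alpha = 0.05."""
    rng = random.Random(seed)
    # Place group means at -d, 0, +d (sd = 1) so that Cohen's f
    # = sqrt(sum((m_i - mbar)^2 / k)) / sd equals f_effect.
    d = (3 * f_effect ** 2 / 2) ** 0.5
    group_means = (-d, 0.0, d)
    F_CRIT = 3.05  # approximate 95th percentile of F(2, 156)
    hits = 0
    for _ in range(reps):
        groups = [[rng.gauss(m, 1.0) for _ in range(n_per_group)]
                  for m in group_means]
        if anova_f(groups) > F_CRIT:
            hits += 1
    return hits / reps
```

With f_effect = 0.25 (Cohen's conventional medium effect) and 53 observations per group, the simulated rejection rate lands near the 0.80 power target that motivated the 159-participant minimum.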
This study has some limitations. First, due to its focus on only one SNS, its results may not be generalizable to other SNS with different characteristics (e.g. Twitter and its shortened links). Second, the proposed research model does not contemplate motivations tied to individual differences or cultural factors. These elements may be included in future research.

References

Abraham, S., & Chengalur-Smith, I. 2010. An overview of social engineering malware: Trends, tactics, and implications, Technology in Society (32), pp. 183-196.
Ajzen, I., & Fishbein, M. 1980. Understanding Attitudes and Predicting Social Behavior, Englewood Cliffs, NJ: Prentice-Hall, Inc.
Bakshy, E., Rosenn, I., Marlow, C., & Adamic, L. 2012. The role of social networks in information diffusion, in Proceedings of the 21st International Conference on World Wide Web, pp. 519-528. ACM.
Błachnio, A., Przepiórka, A., & Rudnicka, P. 2013. Psychological Determinants of Using Facebook: A Research Review, International Journal of Human-Computer Interaction (29), pp. 775-787.
Chakraborty, R., Vishik, C., & Rao, R. 2013. Privacy preserving actions of older adults on social media: Exploring the behavior of opting out of information sharing, Decision Support Systems, pp. 948-956.
Cheung, C. M., & Lee, M. K. 2012. What drives consumers to spread electronic word of mouth in online consumer-opinion platforms, Decision Support Systems (53), pp. 218-225.
Cheung, C., Lee, Z. W., & Chan, T. K. 2015. Self-disclosure in social networking sites: The role of perceived cost, perceived benefits and social influence, Internet Research (25:2), pp. 279-299.
D'Arcy, J., Hovav, A., & Galletta, D. 2009. User Awareness of Security Countermeasures and Its Impact on Information Systems Misuse: A Deterrence Approach, Information Systems Research (20:1), pp. 79-98.
Davinson, N., & Sillence, E. 2010. It won't happen to me: promoting secure behaviour among internet users, Computers in Human Behavior (26:6), pp. 1739-1747.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. 1989. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models, Management Science (35:8), pp. 982-1003.
Everett, C. 2010. Social media: opportunity or risk?, Computer Fraud & Security (June), pp. 8-10.
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. 2007. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences, Behavior Research Methods (39:2), pp. 172-191.
Gefen, D., Straub, D. W., & Boudreau, M. C. 2000. Structural Equation Modeling and Regression: Guidelines for Research and Practice, Communications of the Association for Information Systems.
He, W. 2012. A review of social media security risks and mitigation techniques, Journal of Systems and Information Technology (14:2), pp. 171-180.
Hong, S., Kim, J., & Lee, H. 2008. Antecedents of use-continuance in information systems: toward an integrative view, Journal of Computer Information Systems (Spring), pp. 61-73.
ISACA. 2010. Top five social media risks for business: new ISACA white paper. Retrieved from www.isaca.org.
Jagatic, T., Johnson, N., Jakobsson, M., & Menczer, F. 2007. Social phishing, Communications of the ACM (50), pp. 94-100.
Krasnova, H., Spiekermann, S., Koroleva, K., & Hildebrand, T. 2010. Online social networks: why we disclose, Journal of Information Technology (25:2), pp. 109-125.
Kroenung, J., & Eckhardt, A. 2015. The attitude cube - A three-dimensional model of situational factors in IS adoption and their impact on the attitude-behavior relationship, Information & Management (52), pp. 611-627.
Lee, D., Larose, R., & Rifon, N. 2008. Keeping our network safe: A model of online protection behavior, Behaviour and Information Technology (27:5), pp. 445-454.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. 1995.
An Integrative Model of Organizational Trust, Academy of Management Review (20:3), pp. 709-734.
Mohamed, N., & Ahmad, I. H. 2012. Information privacy concerns, antecedents and privacy measure use in social networking sites: Evidence from Malaysia, Computers in Human Behavior (28), pp. 2366-2375.
Park, J. H., Gu, B., Leung, A. C., & Konana, P. 2014. An investigation of information sharing and seeking behaviors in online investment communities, Computers in Human Behavior (31), pp. 1-12.
Ponemon. 2011. Ponemon Institute Research Report: Global Survey on Social Media Risks: Survey of IT & IT Security Practitioners.
Statista. 2016. Leading social networks worldwide as of January 2016, ranked by number of active users (in millions). Retrieved from http://www.statista.com/statistics/272014/global-socialnetworks-ranked-by-number-of-users/
Sussman, S. W., & Siegal, W. S. 2003. Informational Influence in Organizations: An Integrated Approach to Knowledge Adoption, Information Systems Research (14:1), pp. 47-65.
TechTerms.com. 2015. Malware. Retrieved from http://techterms.com/definition/malware
Venkatesh, V., Morris, M., Davis, G., & Davis, F. 2003. User Acceptance of Information Technology: Toward a Unified View, MIS Quarterly (27:3), pp. 425-478.
Yu, T.-K., Lu, L.-C., & Liu, T.-F. 2010. Exploring factors that influence knowledge sharing behavior via weblogs, Computers in Human Behavior (26:1), pp. 32-41.