Social engineering, in the context of Information Security, refers to psychological manipulation of people into performing actions or divulging confidential information. A type of confidence trick for the purpose of information gathering, fraud, or system access, it differs from a traditional “con” in that it is often one of many steps in a more complex fraud scheme.
It has also been defined as “any act that influences a person to take an action that may or may not be in their best interests.”
Information security culture
Employee behavior can have a big impact on information security in organizations. Cultural concepts can help different segments of the organization work effectively toward information security, or can work against it. “Exploring the Relationship between Organizational Culture and Information Security Culture” provides the following definition of information security culture: “ISC is the totality of patterns of behavior in an organization that contribute to the protection of information of all kinds.”
Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization’s information security “effort” and often take actions that ignore its information security best interests. Research shows that information security culture needs to be improved continuously. In “Information Security Culture from Analysis to Change”, the authors commented, “It’s a never ending process, a cycle of evaluation and change or maintenance.” To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.
- Pre-Evaluation: to identify employees’ awareness of information security and to analyze the current security policy.
- Strategic Planning: to come up with a better awareness program, clear targets need to be set. Clustering people into groups is helpful to achieve it.
- Operative Planning: a good security culture can be established based on internal communication, management buy-in, and security awareness and training programs.
- Implementation: four stages should be used to implement the information security culture: commitment of the management, communication with organizational members, courses for all organizational members, and commitment of the employees.
- Post-Evaluation: to assess the effectiveness of the previous steps and to build on continuous improvement.
Techniques and terms
All social engineering techniques are based on specific attributes of human decision-making known as cognitive biases. These biases, sometimes called “bugs in the human hardware”, are exploited in various combinations to create attack techniques, some of which are listed below. The attacks used in social engineering can be used to steal employees’ confidential information. The most common type of social engineering happens over the phone. Other examples of social engineering attacks are criminals posing as exterminators, fire marshals and technicians to go unnoticed as they steal company secrets.
One example of social engineering is an individual who walks into a building and posts an official-looking announcement on the company bulletin board that says the number for the help desk has changed. So, when employees call for help, the individual asks them for their passwords and IDs, thereby gaining the ability to access the company’s private information. Another example of social engineering would be a hacker who contacts the target on a social networking site and starts a conversation with the target. Gradually the hacker gains the trust of the target and then uses that trust to get access to sensitive information such as passwords or bank account details.
Social engineering relies heavily on the six principles of influence established by Robert Cialdini: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity.
Six key principles
- Reciprocity – People tend to return a favor, thus the pervasiveness of free samples in marketing. In his conferences, Cialdini often uses the example of Ethiopia providing thousands of dollars in humanitarian aid to Mexico just after the 1985 earthquake, despite Ethiopia suffering from a crippling famine and civil war at the time: Ethiopia was reciprocating the diplomatic support Mexico provided when Italy invaded Ethiopia in 1935. The good cop/bad cop strategy is also based on this principle.
- Commitment and consistency – If people commit, orally or in writing, to an idea or goal, they are more likely to honor that commitment because they have established that idea or goal as congruent with their self-image. Even if the original incentive or motivation is removed after they have already agreed, they will continue to honor the agreement. Cialdini notes Chinese brainwashing of American prisoners of war to rewrite their self-image and gain automatic unenforced compliance. Another example is marketers who make users dismiss popups by clicking buttons labeled “I’ll sign up later” or “No thanks, I prefer not making money”.
- Social proof – People will do things that they see other people doing. For example, in one experiment, one or more confederates would look up into the sky; bystanders would then look up into the sky to see what they were seeing. At one point this experiment was aborted, as so many people were looking up that they stopped traffic. See conformity, and the Asch conformity experiments.
- Authority – People will tend to obey authority figures, even if they are asked to perform objectionable acts. Cialdini cites incidents such as the Milgram experiments in the early 1960s and the My Lai massacre.
- Liking – People are easily persuaded by other people that they like. Cialdini cites the marketing of Tupperware in what might now be called viral marketing. People were more likely to buy if they liked the person selling it to them.
- Scarcity – Perceived scarcity will generate demand. For example, saying offers are available for a “limited time only” encourages sales.
Cialdini’s 1984 book, Influence: The Psychology of Persuasion, was based on three “undercover” years applying for jobs and training at used car dealerships, fund-raising organizations, and telemarketing firms to observe real-life situations of persuasion. It has been mentioned in 50 Psychology Classics.
Four Social Engineering Vectors
Vishing, otherwise known as “voice phishing”, is the criminal practice of using social engineering over the telephone system to gain access to private personal and financial information from the public for the purpose of financial reward. It is also employed by attackers for reconnaissance purposes to gather more detailed intelligence on a target organisation.
Phishing is a technique of fraudulently obtaining private information. Typically, the phisher sends an e-mail that appears to come from a legitimate business, such as a bank or credit card company, requesting “verification” of information and warning of some dire consequence if it is not provided. The e-mail usually contains a link to a fraudulent web page that seems legitimate, with company logos and content, and has a form requesting everything from a home address to an ATM card’s PIN or a credit card number. For example, in 2003, there was a phishing scam in which users received e-mails supposedly from eBay claiming that the user’s account was about to be suspended unless a link provided was clicked to update a credit card (information that the genuine eBay already had). Because it is relatively simple to make a Web site resemble a legitimate organization’s site by mimicking the HTML code and logos, the scam counted on people being tricked into thinking they were being contacted by eBay and, subsequently, going to eBay’s site to update their account information. By spamming large groups of people, the “phisher” counted on the e-mail being read by the percentage of recipients who had legitimately listed credit card numbers with eBay and might respond.
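The core trick in such e-mails is a link whose visible text names a trusted site while its href points somewhere else. A simple heuristic can flag that mismatch; the sketch below is purely illustrative and not from the original text (the `suspicious_links` helper and the sample e-mail body are assumptions for demonstration):

```python
import re
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html):
    """Flag anchors whose visible text names one domain but whose
    href actually resolves to a different host."""
    auditor = LinkAuditor()
    auditor.feed(html)
    flagged = []
    for href, text in auditor.links:
        # Only compare when the visible text itself looks like a domain name.
        if not re.fullmatch(r"[A-Za-z0-9.-]+\.[A-Za-z]{2,}", text):
            continue
        shown = text.lower()
        real = (urlparse(href).hostname or "").lower()
        if real and real != shown and not real.endswith("." + shown):
            flagged.append((text, real))
    return flagged

email_body = '<p>Please verify: <a href="http://198.51.100.7/login">www.ebay.com</a></p>'
print(suspicious_links(email_body))  # [('www.ebay.com', '198.51.100.7')]
```

Real mail filters go much further (reputation lists, homoglyph detection), but the display-text/href mismatch remains one of the cheapest signals to check.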
Smishing is the act of using SMS text messaging to lure victims into a specific course of action. Like phishing, it can involve clicking a malicious link or divulging information.
Impersonation is pretending to be another person with the goal of gaining physical access to a system or building.
Pretexting is the act of creating and using an invented scenario (the pretext) to engage a targeted victim in a manner that increases the chance the victim will divulge information or perform actions that would be unlikely in ordinary circumstances. An elaborate lie, it most often involves some prior research or setup and the use of this information for impersonation (e.g., date of birth, Social Security number, last bill amount) to establish legitimacy in the mind of the target.
This technique can be used to fool a business into disclosing customer information as well as by private investigators to obtain telephone records, utility records, banking records and other information directly from company service representatives. The information can then be used to establish even greater legitimacy under tougher questioning with a manager, e.g., to make account changes, get specific balances, etc.
Pretexting can also be used to impersonate co-workers, police, bank, tax authorities, clergy, insurance investigators—or any other individual who could have perceived authority or right-to-know in the mind of the targeted victim. The pretexter must simply prepare answers to questions that might be asked by the victim. In some cases, all that is needed is a voice that sounds authoritative, an earnest tone, and an ability to think on one’s feet to create a pretextual scenario.
Phone phishing (or “vishing”) uses a rogue interactive voice response (IVR) system to recreate a legitimate-sounding copy of a bank or other institution’s IVR system. The victim is prompted (typically via a phishing e-mail) to call in to the “bank” via a (ideally toll free) number provided in order to “verify” information. A typical “vishing” system will reject log-ins continually, ensuring the victim enters PINs or passwords multiple times, often disclosing several different passwords. More advanced systems transfer the victim to the attacker/defrauder, who poses as a customer service agent or security expert for further questioning of the victim.
Although similar to phishing, spear phishing is a technique that fraudulently obtains private information by sending highly customized emails to a few end users. This is the main difference from phishing: phishing campaigns focus on sending out high volumes of generalized emails in the expectation that only a few people will respond, whereas spear-phishing emails require the attacker to perform additional research on their targets in order to “trick” end users into performing the requested activities. The success rate of spear-phishing attacks is considerably higher: recipients open roughly 3% of generic phishing emails, compared to roughly 70% of spear-phishing attempts. Furthermore, when users actually open the emails, phishing emails have a relatively modest 5% rate of getting the link or attachment clicked, compared to a spear-phishing attack’s 50% success rate.
Spear Phishing success is heavily dependent on the amount and quality of OSINT (Open Source Intelligence) that the attacker can obtain. Social media account activity is one example of a source of OSINT.
Water holing is a targeted social engineering strategy that capitalizes on the trust users have in websites they regularly visit. The victim feels safe to do things they would not do in a different situation. A wary person might, for example, purposefully avoid clicking a link in an unsolicited email, but the same person would not hesitate to follow a link on a website he or she often visits. So, the attacker prepares a trap for the unwary prey at a favored watering hole. This strategy has been successfully used to gain access to some (supposedly) very secure systems.
The attacker may set out by identifying a group or individuals to target. The preparation involves gathering information about websites the targets often visit from the secure system. The information gathering confirms that the targets visit the websites and that the system allows such visits. The attacker then tests these websites for vulnerabilities to inject code that may infect a visitor’s system with malware. The injected code trap and malware may be tailored to the specific target group and the specific systems they use. In time, one or more members of the target group will get infected and the attacker can gain access to the secure system.
Baiting is like the real-world Trojan horse: it uses physical media and relies on the curiosity or greed of the victim. In this attack, attackers leave malware-infected floppy disks, CD-ROMs, or USB flash drives in locations where people will find them (bathrooms, elevators, sidewalks, parking lots, etc.), give them legitimate and curiosity-piquing labels, and wait for victims.
For example, an attacker may create a disk featuring a corporate logo, available from the target’s website, and label it “Executive Salary Summary Q2 2012”. The attacker then leaves the disk on the floor of an elevator or somewhere in the lobby of the target company. An unknowing employee may find it and insert the disk into a computer to satisfy his or her curiosity, or a good Samaritan may find it and return it to the company. In any case, just inserting the disk into a computer installs malware, giving attackers access to the victim’s PC and, perhaps, the target company’s internal computer network.
Unless computer controls block the infection, inserting the media compromises PCs that “auto-run” it. Hostile devices can also be used. For instance, a “lucky winner” is sent a free digital audio player that compromises any computer it is plugged into. A “road apple” (the colloquial term for horse manure, suggesting the device’s undesirable nature) is any removable media with malicious software left in an opportunistic or conspicuous place. It may be a CD, DVD, or USB flash drive, among other media. Curious people pick it up and plug it into a computer, infecting the host and any attached networks. Hackers may give such media enticing labels, such as “Employee Salaries” or “Confidential”.
One study done in 2016 had researchers drop 297 USB drives around the campus of the University of Illinois. The drives contained files on them that linked to webpages owned by the researchers. The researchers were able to see how many of the drives had files on them opened, but not how many were inserted into a computer without having a file opened. Of the 297 drives that were dropped, 290 (98%) of them were picked up and 135 (45%) of them “called home”.
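The reported rates follow directly from the study’s raw counts, as a quick check shows:

```python
dropped = 297      # drives left around campus
picked_up = 290    # drives that were taken from where they were dropped
called_home = 135  # drives on which a file was opened, phoning back

print(round(100 * picked_up / dropped))    # 98
print(round(100 * called_home / dropped))  # 45
```

Note that 45% is a lower bound on actual insertions, since drives plugged in without a file being opened never “called home”.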
Quid pro quo
Quid pro quo means “something for something”:
- An attacker calls random numbers at a company, claiming to be calling back from technical support. Eventually this person will hit someone with a legitimate problem, grateful that someone is calling back to help them. The attacker will “help” solve the problem and, in the process, have the user type commands that give the attacker access or launch malware.
- In a 2003 information security survey, 90% of office workers gave researchers what they claimed was their password in answer to a survey question in exchange for a cheap pen. Similar surveys in later years obtained similar results using chocolates and other cheap lures, although they made no attempt to validate the passwords.
Tailgating
An attacker, seeking entry to a restricted area secured by unattended, electronic access control, e.g. by RFID card, simply walks in behind a person who has legitimate access. Following common courtesy, the legitimate person will usually hold the door open for the attacker, or the attacker may ask the employee to hold it open. The legitimate person may fail to ask for identification for any of several reasons, or may accept an assertion that the attacker has forgotten or lost the appropriate identity token. The attacker may also fake the action of presenting an identity token.
Common confidence tricksters or fraudsters also could be considered “social engineers” in the wider sense, in that they deliberately deceive and manipulate people, exploiting human weaknesses to obtain personal benefit. They may, for example, use social engineering techniques as part of an IT fraud.
A more recent type of social engineering technique involves spoofing or hacking the IDs of people who have popular e-mail accounts such as Yahoo!, Gmail, or Hotmail. Among the many motivations for deception are:
- Phishing credit-card account numbers and their passwords.
- Cracking private e-mails and chat histories, and manipulating them using common editing techniques before using them to extort money and create distrust among individuals.
- Cracking websites of companies or organizations and destroying their reputation.
- Computer virus hoaxes
- Convincing users to run malicious code within the web browser via self-XSS attack to allow access to their web account
Organizations reduce their security risks by:
Training Employees. Training employees in security protocols relevant to their position (e.g., in situations such as tailgating, if a person’s identity cannot be verified, employees must be trained to politely refuse).
Standard Framework. Establishing frameworks of trust on an employee/personnel level (i.e., specify and train personnel when/where/why/how sensitive information should be handled)
Scrutinizing Information. Identifying which information is sensitive and evaluating its exposure to social engineering and breakdowns in security systems (building, computer system, etc.)
Security Protocols. Establishing security protocols, policies, and procedures for handling sensitive information.
Event Test. Performing unannounced, periodic tests of the security framework.
Inoculation. Preventing social engineering and other fraudulent tricks or traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts.
Review. Reviewing the above steps regularly: no solutions to information integrity are perfect.
Waste Management. Using a waste management service that has dumpsters with locks on them, with keys to them limited only to the waste management company and the cleaning staff. Locating the dumpster either in view of employees so that trying to access it carries a risk of being seen or caught, or behind a locked gate or fence where the person must trespass before they can attempt to access the dumpster.
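The “Scrutinizing Information” step above can be partially automated. As a minimal illustrative sketch (the `find_sensitive` helper and its patterns are assumptions, not part of the original text), a scanner can look for Social Security number formats and Luhn-valid payment card numbers in outbound text:

```python
import re

def luhn_valid(digits):
    """Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_sensitive(text):
    """Return likely SSNs and Luhn-valid card numbers found in text."""
    hits = []
    for m in re.finditer(r"\b\d{3}-\d{2}-\d{4}\b", text):
        hits.append(("ssn", m.group()))
    for m in re.finditer(r"\b(?:\d[ -]?){13,16}\b", text):
        digits = re.sub(r"\D", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(("card", digits))
    return hits

print(find_sensitive("SSN 123-45-6789 and card 4111 1111 1111 1111"))
# [('ssn', '123-45-6789'), ('card', '4111111111111111')]
```

Commercial data-loss-prevention tools use far richer rules, but even a sketch like this makes the point that “identifying which information is sensitive” can be enforced in software rather than left entirely to employee judgment.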
Social engineering is the practice of obtaining confidential information through the manipulation of legitimate users. It is a technique that certain people can use to obtain information, access, or privileges in information systems, allowing them to carry out acts that harm or expose the compromised person or organization to risk or abuse.
The principle underlying social engineering is that in any system “users are the weak link”.
Techniques and terms
In practice, a social engineer will commonly use the telephone or the Internet to deceive people, pretending to be, for example, an employee of a bank or some other company, a co-worker, a technician, or a customer. Via the Internet, attackers additionally send fake requests to renew access permissions to web pages, fake e-mails soliciting replies, and even the famous chain letters, thereby leading victims to reveal their access credentials or sensitive, confidential, or critical information.
With this method, social engineers exploit people’s natural tendency to react predictably in certain situations (for example, providing financial details to an apparent bank official) rather than having to find security holes in computer systems.
- Social engineering is defined as an attack based on deceiving a user or administrator of an Internet site in order to view the information the attackers want.
- It is done to gain access to systems or useful information.
- The objectives of social engineering are fraud and network intrusion.
Pretexting is the creation of an invented scenario to lead the victim to reveal personal information or to act in a way that would be unusual under normal circumstances. An elaborate lie, it often involves prior research on the victim to obtain the information needed for the impersonation (for example, date of birth, Social Security number, banking details) and to make it appear legitimate.
Pretexting can also be used to impersonate co-workers, the police, a bank, tax authorities, or anyone else the victim might perceive as having a right to the information. The pretexter simply has to prepare answers to questions the victim might ask. In some cases, all that is needed is a voice that conveys authority, an earnest tone, and the ability to improvise in order to create a pretextual scenario.
One of the most dangerous factors is the growing tendency of users, especially younger ones, to constantly post personal and sensitive information: pictures of their whole family, the places they frequent, personal tastes, and romantic relationships. Social networks provide a criminal with plenty of information to carry out an attack, whether to steal an identity or, at the very least, to be convincing enough to build empathy.
Perhaps the simplest but most effective attack is tricking a user into thinking that a system administrator is requesting a password for some legitimate purpose. Users of Internet systems frequently receive messages requesting passwords or credit card information in order to “create an account”, “reactivate a configuration”, or some other benign operation; this type of attack is called phishing (pronounced like “fishing”). Users of these systems should be warned early and often not to divulge passwords or other sensitive information to people claiming to be administrators.
Another contemporary example of a social engineering attack is the use of e-mail attachments offering, for example, “intimate” photos of some celebrity or a “free” program (often apparently coming from someone the victim knows) that actually executes malicious code (for example, using the victim’s machine to send massive amounts of spam). After the first malicious e-mails led software vendors to disable the automatic execution of attachments, users now have to activate attachments explicitly for a malicious action to occur. Many users, however, open almost any attachment they receive nearly blindly, allowing the attack to succeed.
Social engineering also applies to face-to-face manipulation to gain access to computer systems. Another example is exploiting knowledge about the victim by trying commonly used passwords, typical patterns, or details from the victim’s past and present, answering the question: what password would I choose if I were the victim?
The main defense against social engineering is to educate and train users in security policies and to ensure that those policies are followed.
One of the most famous social engineers of recent times is Kevin Mitnick. In his view, social engineering is based on these four principles:
- We all want to help.
- Our first instinct is always to trust the other person.
- We do not like to say no.
- We all like to be praised.
Vishing consists of making phone calls disguised as surveys, through which personal information can also be extracted without the victim suspecting anything.
For this reason, we must be careful and not provide personal information even when the call appears to come from our mobile, electricity, or water company (among others), since it could be a hacker who happens to have chosen ours.
In baiting, a removable storage device (CD, DVD, USB drive) infected with malicious software is left in a place where it is easy to find (for example, public bathrooms, elevators, sidewalks, etc.). When the victim finds the device and inserts it into their computer, the software installs itself and allows the attacker to obtain all of the user’s personal data.
Quid pro quo
Quid pro quo means “something for something”. The attacker calls random numbers at a company, claiming to be calling back from technical support. The attacker informs someone of a legitimate problem and offers to help; during the process they obtain the victim’s access credentials and launch malware.
In a 2003 information security survey, 90% of office workers gave attackers what they claimed was their password in answer to a survey question in exchange for a cheap pen. Similar studies in later years obtained similar results using chocolates and other cheap lures, although they made no attempt to validate the passwords.