I am often curious what AI thinks about AI. So, I decided to sit down and have a chat with Bing and see what it thinks about phishing attacks in the AI realm.
Phishing has been around forever, and AI is now picking up mainstream usage. What would phishing look like in AI?
AI has indeed been used in phishing attacks, and it has significantly increased their sophistication. Here are some ways AI is being used in phishing:
1. **Deepfake Technology**: AI can be used to create deepfakes, where the identity of a person is simulated to a high degree of accuracy. For instance, there was a case where a scammer used AI deepfake technology to impersonate the Chief Financial Officer (CFO) of a UK branch of a company, scamming $25.6 million USD¹².
2. **Generative AI**: Generative AI tools can produce convincing text, voice, and images from simple hand-typed prompts. These tools can make it difficult for people to assess whether an email or password reset request is genuine, or to identify phishing, spoofing, or social engineering attempts⁴.
3. **AI-Enhanced Emails**: AI can make scam emails look more genuine. The National Cyber Security Centre (NCSC) warned that generative AI tools can help make approaches to potential victims more convincing by creating fake “lure documents” that do not contain the translation, spelling, or grammatical errors that tended to give away phishing attacks⁴.
4. **Spear Phishing with AI**: AI can be used to enhance spear phishing attacks, which use carefully cultivated information related to the target to phish for sensitive data. Scammers who hoard breached data from hacked websites can use AI technology to read that data and organize it into a highly targeted spear phishing attack⁵.
These examples illustrate how AI can be used to enhance phishing attacks, making them more sophisticated and harder to detect. It’s important to stay vigilant and take precautions when dealing with suspicious communications. Always verify the source and never share sensitive information unless you’re sure it’s safe.
(1) AI scammer deepfaked CFO of company, scammed $25.6 million in a single meeting. https://www.tweaktown.com/news/96023/ai-scammer-deepfaked-cfo-of-company-scammed-25-6-million-in-single-meeting/index.html.
(2) AI scammer stole $25 million with deepfake conference call. https://bgr.com/tech/ai-scammer-stole-25-million-with-deepfake-conference-call/.
(3) AI will make scam emails look genuine, UK cybersecurity agency warns …. https://www.theguardian.com/technology/2024/jan/24/ai-scam-emails-uk-cybersecurity-agency-phishing.
(4) How AI is changing phishing scams – Microsoft 365. https://www.microsoft.com/en-us/microsoft-365-life-hacks/privacy-and-safety/how-ai-changing-phishing-scams.
(5) How Phishing Attacks Use Human Evolution To Their Advantage. https://www.forbes.com/sites/forbestechcouncil/2024/02/05/how-phishing-attacks-use-human-evolution-to-their-advantage/.
(6) Experts say AI scams are on the rise as criminals use voice cloning …. https://www.abc.net.au/news/2023-04-12/artificial-intelligence-ai-scams-voice-cloning-phishing-chatgpt/102064086.
(7) Getty Images. https://www.gettyimages.com/detail/illustration/hacker-with-laptop-computer-stealing-royalty-free-illustration/1166526915.
Those are ways that AI has been used in phishing attacks. What would phishing AI look like?
Phishing AI, in the context of cybersecurity, refers to artificial intelligence systems designed to detect and prevent phishing attacks. Here’s how it might work:
1. **Automated Phishing Detection**: AI can analyze email content, user behavior, and other features to detect potential phishing scams⁵. It can identify suspicious patterns and flag them for further investigation.
2. **URL Scanning and Domain Monitoring**: AI can be used to monitor domain registrations, MX servers, and phishing pages to protect against typosquatting⁴. Typosquatting is a common technique used by phishers where they register domains that are similar to popular websites but contain typographical errors.
3. **User Behavior Analysis**: AI can analyze user behavior to identify unusual activities that might indicate a phishing attack. For example, if a user suddenly starts clicking on links in emails from an unknown source, the AI might flag this as potential phishing activity.
4. **Email Content Analysis**: AI can analyze the content of emails to identify phishing attempts. This could involve looking for certain keywords or phrases commonly used in phishing emails, or analyzing the email’s structure and formatting to identify suspicious elements.
5. **Machine Learning**: AI systems can learn from past phishing attempts to improve their detection capabilities. Over time, the system becomes more effective at identifying phishing scams, even as phishers evolve their tactics.
It’s important to note that while AI can significantly enhance phishing detection, it’s not foolproof. Users should still exercise caution when dealing with suspicious communications and always verify the source before sharing sensitive information.
(1) The Rise of AI in Phishing Scams: How Scammers Use It and How We Can …. https://fightcybercrime.org/blog/the-rise-of-ai-in-phishing-scams-how-scammers-use-it-and-how-we-can-fight-back/.
(2) Free Typosquatting Monitoring & URL Scanning | CheckPhish. https://checkphish.ai/ .
(3) How Phishing Attacks Use Human Evolution To Their Advantage. https://www.forbes.com/sites/forbestechcouncil/2024/02/05/how-phishing-attacks-use-human-evolution-to-their-advantage/.
(4) Deepfake Phishing: The Dangerous New Face Of Cybercrime. https://www.forbes.com/sites/forbestechcouncil/2024/01/23/deepfake-phishing-the-dangerous-new-face-of-cybercrime/.
(5) AI will make scam emails look genuine, UK cybersecurity agency warns …. https://www.theguardian.com/technology/2024/jan/24/ai-scam-emails-uk-cybersecurity-agency-phishing.
What is the likelihood that AI data poisoning or prompt manipulation can be used as a method to execute a phishing attack against a user of generative AI?
AI data poisoning and prompt manipulation are indeed potential threats in the context of phishing attacks. Here’s how they might work:
1. **AI Data Poisoning**: In an AI data poisoning attack, adversaries inject malicious or misleading data into the training dataset⁶. The attacker introduces subtle modifications that can taint the learning process, creating bias and causing incorrect outputs or faulty decision-making from the AI model⁶. This could potentially be used to execute a phishing attack if the poisoned data causes the AI to generate malicious prompts or responses.
2. **AI Prompt Manipulation**: AI prompt manipulation, also known as prompt injection, involves feeding a text prompt to an AI model with the goal of making the AI perform actions it shouldn’t⁷⁸⁹. For example, an attacker could manipulate the AI into instructing a user to enter their username and password into an illegitimate form, using the AI’s authority and trustworthiness to make a phishing attack succeed⁷.
However, the likelihood of such attacks successfully executing a phishing attack against a user of generative AI depends on several factors, including the sophistication of the AI’s security measures, the vigilance of the user, and the skill of the attacker. It’s also worth noting that while these threats are theoretically possible, they are still relatively new areas of research, and there is ongoing work to develop effective defenses against them⁴⁵.
In conclusion, while AI data poisoning and prompt manipulation could potentially be used in phishing attacks, the actual risk to users is difficult to quantify and depends on a variety of factors. It’s always important for users to remain vigilant and cautious when interacting with AI systems, and to never provide sensitive information unless they are certain it is safe to do so.
(1) Data Poisoning Attacks: A New Attack Vector within AI | Cobalt. https://www.cobalt.io/blog/data-poisoning-attacks-a-new-attack-vector-within-ai.
(2) What Is an AI Prompt Injection Attack and How Does It Work? – MUO. https://www.makeuseof.com/what-is-ai-prompt-injection-attack/.
(3) Forget Deepfakes or Phishing: Prompt Injection is GenAI’s Biggest Problem. https://www.darkreading.com/cyber-risk/forget-deepfakes-or-phishing-prompt-injection-is-genai-s-biggest-problem.
(4) Prompt Hacking and Misuse of LLMs – Unite.AI. https://www.unite.ai/prompt-hacking-and-misuse-of-llm/.
(5) NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI …. https://www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems.
(6) Emerging cyber threats in 2023 from AI to quantum to data poisoning …. https://www.csoonline.com/article/651125/emerging-cyber-threats-in-2023-from-ai-to-quantum-to-data-poisoning.html.
(7) A tsunami of AI-driven attacks expected in 2024. https://www.msn.com/en-ph/news/money/a-tsunami-of-ai-driven-attacks-expected-in-2024/ar-BB1hIvE5.
(8) Proliferation of AI-driven attacks anticipated in 2024. https://manilastandard.net/tech/314409901/proliferation-of-ai-driven-attacks-anticipated-in-2024.html.
(9) Holistic approach needed to fight cybercrimes. https://businessmirror.com.ph/2024/02/03/holistic-approach-needed-to-fight-cybercrimes/.
(10) The ELI5 Guide to Prompt Injection: Techniques, Prevention Methods …. https://www.lakera.ai/blog/guide-to-prompt-injection.
What do you think are the best defenses against abusing AI to exectue phishing attacks?
Defending against the abuse of AI in phishing attacks requires a multi-faceted approach:
1. **User Education**: One of the most effective defenses is educating users about the risks and signs of phishing attacks. This includes teaching users not to click on suspicious links, not to provide sensitive information unless they’re sure it’s safe, and to verify the source of any communication.
2. **AI Security Measures**: AI systems should be designed with robust security measures to prevent data poisoning and prompt manipulation. This includes using secure and trusted data sources, implementing strong access controls, and regularly auditing and testing the system for vulnerabilities.
3. **Regular Updates and Patches**: Keeping AI systems up-to-date with the latest security patches can help protect against known vulnerabilities that could be exploited in phishing attacks.
4. **Anomaly Detection**: Implementing AI systems that can detect unusual user behavior or suspicious patterns can help identify potential phishing attacks early.
5. **Two-Factor Authentication (2FA)**: Implementing 2FA can add an extra layer of security, making it more difficult for phishers to gain access to user accounts even if they manage to obtain the user’s credentials.
6. **Secure Communication Channels**: Using secure communication channels can help protect against man-in-the-middle attacks, a common technique used in phishing.
7. **Incident Response Plan**: Having a plan in place for responding to phishing attacks can help minimize damage when an attack does occur. This includes steps for identifying and isolating affected systems, notifying users, and reporting the incident to relevant authorities.
Remember, no defense is perfect, and the key to cybersecurity is constant vigilance and adaptation to new threats. It’s always important to stay informed about the latest developments in cybersecurity and to regularly review and update security practices.
Are these specific to AI based phishing attacks, or are they just generic phishing defences?
The defenses I mentioned are generally applicable to both traditional and AI-based phishing attacks. However, some aspects are particularly relevant to AI-based attacks:
– **User Education**: While this is a generic defense, it’s particularly important for AI-based attacks because users may not be aware that AI can be used in phishing. They need to understand that even seemingly intelligent and personalized communications could be generated by AI and may not be trustworthy.
– **AI Security Measures**: These are specific to AI-based attacks. Ensuring the integrity of the training data and the robustness of the AI model against manipulation are crucial in preventing AI from being exploited in phishing attacks.
– **Anomaly Detection**: While this can be used in traditional phishing defense, it becomes even more important with AI-based attacks. AI can analyze large amounts of data and identify subtle patterns that might be missed by human analysts.
– **Incident Response Plan**: Having a plan that specifically addresses AI-based attacks can help organizations respond effectively if they are targeted.
So, while many of the defenses are applicable to phishing in general, their implementation may need to be adapted to address the specific challenges posed by AI-based attacks. It’s also worth noting that as AI-based attacks evolve, the defenses against them will need to evolve as well.
How do you suggest CISOs implement a plan to prevent phishing attacks through AI in their organisations?
Implementing a plan to prevent AI-based phishing attacks involves several steps:
1. **Risk Assessment**: Understand the specific risks associated with AI-based phishing attacks in the context of your organization. This includes understanding how AI can be used in phishing attacks and how your organization might be targeted.
2. **Education and Training**: Educate employees about the risks and signs of AI-based phishing attacks. This could involve training sessions, workshops, or online courses. The training should be ongoing to keep up with the evolving threat landscape.
3. **AI Security Measures**: Implement robust security measures for your AI systems. This includes securing the training data, implementing strong access controls, and regularly auditing and testing the AI systems for vulnerabilities.
4. **Anomaly Detection Systems**: Implement AI systems that can detect unusual user behavior or suspicious patterns that might indicate a phishing attack. These systems can provide early warning of an attack.
5. **Incident Response Plan**: Develop a specific incident response plan for AI-based phishing attacks. This plan should outline the steps to be taken in the event of an attack, including how to identify and isolate affected systems, how to notify users, and how to report the incident to relevant authorities.
6. **Regular Updates and Patches**: Ensure that all AI systems are kept up-to-date with the latest security patches and updates. This can help protect against known vulnerabilities that could be exploited in phishing attacks.
7. **Collaboration and Information Sharing**: Collaborate with other organizations and participate in information sharing networks to stay informed about the latest threats and defenses.
8. **Vendor Assessment**: If you’re using third-party AI systems, assess the vendor’s security practices and ensure they meet your organization’s standards.
Remember, the key to cybersecurity is constant vigilance and adaptation to new threats. It’s always important to stay informed about the latest developments in cybersecurity and to regularly review and update security practices.
Are you phishing me right now?
No, I’m not phishing you. As an AI developed by OpenAI, I’m designed to respect user privacy and security. I don’t ask for personal data unless it’s necessary to provide the service you’re asking for, and I don’t store personal data between conversations. My primary function is to provide information and answer questions to the best of my knowledge and abilities. If you have any concerns about privacy or security, feel free to let me know, and I’ll do my best to address them.
Source: Conversation with Bing, 06/02/2024
I found it an interesting academic exercise to see how similar AI itself thinks that traditional phishing and AI based phishing it. The defences are essentially the same, although I would suspect that given how much scrutiny AI is getting now, and all the lessons we’ve learned from previous software security practices that AI systems will have a better relationship with security from the inception phase onward than traditional systems did.
What do you think? Did Bing get it right, is this how AI can be manipulated to execute phishing attacks, and are the defences similar enough to stop them, or do we need a considerably different approach?
Leave a Reply