A significant technological transformation is on the horizon, one that could fundamentally change how cyberattacks are carried out and defended against.
Yes, we’ve been here before.
When ChatGPT was introduced to the public in late 2022, security experts reacted with a mix of enthusiasm and concern: they were intrigued by the new technology, but feared its misuse in cyberattacks. Two years on, it's clear that ChatGPT and other generative AI platforms have primarily enhanced existing attack methods rather than created entirely new ones.
However, if artificial intelligence evolves into what is being called an “agentic” model by 2025, the landscape of cyber threats could change dramatically. In this scenario, AI tools would act as autonomous “agents,” capable of independently identifying vulnerabilities, stealing login credentials, and infiltrating accounts.
These AI agents could even extort individuals by cross-referencing stolen data with publicly available email addresses or social media profiles. They could craft convincing messages and engage in full conversations with victims, who might believe they are dealing with a human hacker possessing sensitive information like Social Security numbers, home addresses, or credit card details. If this model proves effective against individuals, it’s likely to be just as effective against small businesses.
This warning stems from the 2025 State of Malware report, which analyzed a year’s worth of data to identify emerging cyber threats. While the report is aimed at IT professionals, its findings highlight risks that could impact individuals and small businesses alike. Consider last year’s global IT outage, which grounded flights worldwide and underscored the interconnectedness of companies, cybersecurity, and everyday life.
By 2025, agentic AI could further expose how deeply intertwined we all are in the fight for cybersecurity. Here’s what we might anticipate.
You can access the full 2025 State of Malware report [here].
The Limited Impact of Generative AI
The release of ChatGPT in November 2022 marked a turning point in how we interact with technology. No longer were computers just tools for recording or aiding creative work—they could now generate creative content on their own.
AI image generators like Midjourney and DALL-E can produce visuals from simple text prompts, even mimicking the styles of renowned artists such as Van Gogh, Rembrandt, and Picasso. Similarly, AI chatbots like ChatGPT, Google Gemini, and Claude (developed by OpenAI competitor Anthropic) can brainstorm marketing ideas, write essays, compose poetry, and even proofread human-written text. These tools can also answer a wide range of factual questions, much like Perplexity, which markets itself as the world’s first “answer engine” rather than a traditional search engine.
This is the promise of “generative AI,” a term describing AI systems capable of producing text, images, videos, summaries, and more, limited only by human imagination.
But how has this potential been utilized?
For some, generative AI has made it easier to cheat in academic settings or manipulate social media algorithms for fleeting fame—hardly groundbreaking. For malicious actors, however, it has streamlined proven attack methods, making them more efficient.
Generative AI can craft phishing emails that lack the usual red flags, such as spelling errors or awkward phrasing. It can also generate persuasive messages for romance scams or urgent-sounding texts that trick people into clicking malicious links. While these tactics aren’t new, AI has made them easier to execute on a larger scale.
That said, tools like ChatGPT have built-in guardrails designed to stop users from generating harmful content. Those guardrails are imperfect: in 2023, Malwarebytes Labs bypassed them and successfully prompted ChatGPT to create ransomware, twice.
Due to these restrictions, a range of rogue AI tools has emerged online, capable of producing illegal content. One example is the creation of “deepfake nudes,” which use AI to superimpose one person’s face onto another’s body, generating fake explicit images. This technology has sparked numerous scandals in American high schools, serving as a modern tool for blackmail.
The ability to fabricate text, images, and audio has also enabled cybercriminals to impersonate CEOs or executives more convincingly, tricking employees into signing fraudulent contracts or handing over sensitive credentials.
While these threats are real, they are not entirely new. As noted in the 2025 State of Malware report:
“The current impact of AI on malware is limited. While there are exceptions, generative AI primarily enhances efficiency rather than introducing entirely new capabilities. Cybercrime is a well-established field that relies on proven tools like phishing, information stealers, and ransomware, which are already highly effective.”
This could change in 2025.
Agentic AI and the Future of Cyber Threats
Agentic AI represents the next evolution in artificial intelligence, even if it’s not yet widely known.
Tech giants like Google, Amazon, Meta, Microsoft, and others are already exploring this technology. Agentic AI aims to move beyond the confines of chatbots, creating individualized AI “agents” capable of performing specific tasks. For instance, these agents could handle customer service inquiries, help patients find in-network healthcare providers, or offer strategic advice based on a company’s performance. Microsoft has already previewed an AI agent that answers employee questions about HR policies and holiday schedules, while Salesforce is heavily investing in the technology, positioning it as a personal assistant for everyone.
As outlined in the 2025 State of Malware report:
“If agentic AI becomes a reality in 2025, it won’t just answer questions—it will think and act. This transforms AI from a tool that responds to prompts into a peer or expert capable of planning tasks, interacting with the world, and solving problems.”
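What "think and act" means in practice is a loop: the model plans a step, executes it through a tool, observes the result, and repeats until it decides the goal is met. Below is a minimal, hypothetical sketch of that loop in Python. Every name in it is illustrative rather than any vendor's actual API, and the planner is a canned stand-in for what would really be a call to a large language model.

```python
# Minimal, hypothetical sketch of an agentic loop -- not any vendor's real API.
# A real agent replaces the canned planner below with an LLM call that decides
# the next action from the goal and the history of results so far.

def plan_next_step(goal: str, history: list) -> dict:
    """Stand-in for an LLM call that returns the next action to take."""
    if not history:
        return {"tool": "search", "args": {"query": goal}}
    return {"tool": "done", "args": {}}

# Tool registry: the "hands" the agent uses to interact with the world.
TOOLS = {
    "search": lambda query: f"(pretend search results for {query!r})",
}

def run_agent(goal: str, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)            # 1. think
        if step["tool"] == "done":
            break
        result = TOOLS[step["tool"]](**step["args"])    # 2. act
        history.append((step, result))                  # 3. observe, repeat
    return history

print(run_agent("find in-network dermatologists near 90210"))
```

The same loop works whether the goal is finding a doctor or finding a victim, which is exactly why agentic AI cuts both ways.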
The implications for cyberattacks are profound. Malicious actors could use AI agents to:
- Cross-reference stolen data to link Social Security numbers with email addresses, sending phishing emails that threaten further data exposure unless a ransom is paid.
- Scrape social media for baby photos, then use fake profiles to weaponize those images as threats against a child’s safety.
- Analyze LinkedIn to deduce a company’s email address format (e.g., firstname.lastname@company.com) and use it to send fraudulent requests from “executives” to their teams (see the sketch after this list).
- Mine public divorce records to identify targets for romance scams, with AI agents composing and managing entire conversations.
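The email-format trick in the third item shows how little intelligence some of these steps actually require. Here is a toy sketch, using made-up names and a reserved example domain, of the inference an agent could automate across thousands of scraped profiles:

```python
# Toy illustration of guessing a corporate email scheme from one public
# example -- the kind of step an AI agent could chain together automatically.
# The names and domain here are made up for demonstration.

KNOWN_PAIR = ("Jane Doe", "jane.doe@example.com")  # e.g., found in a public bio

def guess_format(name: str, email: str) -> str:
    """Infer the addressing scheme from one known name/email pair."""
    first, last = name.lower().split()
    local = email.split("@")[0]
    if local == f"{first}.{last}":
        return "first.last"
    if local == f"{first[0]}{last}":
        return "flast"
    return "unknown"

def apply_format(fmt: str, name: str, domain: str) -> str:
    """Turn any scraped employee name into a probable email address."""
    first, last = name.lower().split()
    if fmt == "first.last":
        return f"{first}.{last}@{domain}"
    if fmt == "flast":
        return f"{first[0]}{last}@{domain}"
    raise ValueError(f"unrecognized format: {fmt}")

fmt = guess_format(*KNOWN_PAIR)                          # -> "first.last"
print(apply_format(fmt, "John Smith", "example.com"))    # john.smith@example.com
```

With one known address and a list of employee names, an agent has a ready-made target list for spoofed executive requests.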
These threats extend beyond individuals to small businesses, as vulnerabilities in personal devices can lead to network-wide malware attacks. Conversely, as attacks on companies become more sophisticated, the data individuals share with these companies becomes increasingly vulnerable.
Fortunately, agentic AI isn’t just a threat—it also offers solutions. AI agents could be deployed to identify vulnerabilities, monitor network activity for suspicious behavior, and guide users in safe online practices, such as posting content, browsing the web, or making purchases from unfamiliar retailers.
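The same agent loop can run on the defensive side. The sketch below is a minimal, hypothetical illustration of an agent reviewing login events for signs of credential stuffing; the threshold, event fields, and escalation step are assumptions for demonstration, not a real product's behavior.

```python
# Minimal, hypothetical sketch of a defensive agent: watch an event stream,
# flag anomalies, escalate. The threshold, event schema, and alert channel
# are illustrative assumptions, not a real product's API.

from collections import Counter

FAILED_LOGIN_THRESHOLD = 5  # assumed cutoff for demonstration

def review_events(events: list) -> list:
    """Flag source IPs with repeated failed logins in one batch of events."""
    failures = Counter(
        e["src_ip"] for e in events if e.get("action") == "login_failed"
    )
    return [ip for ip, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD]

def escalate(ip: str) -> None:
    # A real agent might block the IP, open a ticket, or draft a summary
    # for a human analyst. Here we just print an alert.
    print(f"ALERT: possible credential stuffing from {ip}")

events = [{"src_ip": "203.0.113.7", "action": "login_failed"}] * 6
for ip in review_events(events):
    escalate(ip)
```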
The reality is that AI is here to stay. With significant investments from major developers and corporations, its role in our lives will only grow. If attackers are poised to exploit this technology, then defenders and everyday users must also harness its potential. The future of cybersecurity will likely involve a race to leverage AI for both offense and defense.