Researchers on the Forward-Looking Threat Research team at the AI security company TrendAI have developed a tool that automatically collects public posts, photos and information from LinkedIn profiles, analyzes them with artificial intelligence and turns them into highly personalized phishing emails, complete with a fake website tailor-made for the victim. All of it was created by a single person, without violating any privacy regulation, using only publicly accessible data. And how long did it take them to orchestrate this scam simulation? Less than 30 minutes!

What makes this scenario particularly relevant is not the technical complexity of the operation but the speed of execution and the simplicity with which it was set up: the necessary tools are already available, cheap and within reach of anyone motivated to use them. This radically changes the cyber threat landscape for businesses and professionals, because it means that anyone who publishes their activities on LinkedIn, such as conferences attended, projects followed or professional opinions, is unknowingly providing raw material for potential targeted attacks.
The new frontier of spear phishing with AI
Let’s start with a fact. Open-source intelligence (OSINT), the analysis of publicly available information, has changed radically with the “democratization” of AI. It once required specialist skills and a great deal of time; today it can be carried out automatically thanks to artificial intelligence. This drastically lowers the barrier to entry: you no longer need structured teams or advanced resources to analyze large quantities of data, just the right tools and the knowledge to use them.
To demonstrate how AI has rewritten the rules of the game, TrendAI researchers developed an experimental system, what in jargon is called a PoC (Proof of Concept), a practical demonstration of feasibility, capable of collecting public data from LinkedIn and transforming it into detailed profiles to be used for malicious purposes. The process starts from very common elements: posts, images and metadata, that is, all the “hidden” information associated with content, such as dates, context or relationships between elements, which helps interpret what is seen.
This information was first collected automatically (the researchers succeeded despite LinkedIn’s anti-scraping defenses) and then organized into a structure that reconstructed an entire company hierarchy: who works where, in what role and with what level of responsibility. This step was fundamental because it allowed the researchers to understand not only who a person is, but also how much they can influence decisions within the company.
A particularly interesting aspect concerns image analysis. It’s not just about “seeing” what’s in a photo, but about interpreting it in the context of the post in which it was published. Artificial intelligence is able to deduce the professional message the author wanted to convey, their interests, the events they participate in and even their contact network. In other words, images and text become a single information source that is much richer than the sum of its parts.
Once these profiles have been built, the system can take a further step: identify the topics that most interest each individual. This process is based on linguistic analysis, i.e. the automatic study of the language used in posts, to identify recurring and relevant topics. These themes become the basis for creating highly personalized messages.
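The article describes this “linguistic analysis” only in general terms; as a purely illustrative sketch, the simplest version of the idea is plain term-frequency counting over a person’s posts, which already surfaces recurring themes (the posts and stopword list below are invented for the example):

```python
from collections import Counter
import re

# Minimal illustration of topic surfacing via term frequency.
# A real system would use far richer NLP; this only shows the principle.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "on", "for",
             "is", "are", "with", "at", "our", "we", "this", "that", "after"}

def recurring_topics(posts, top_n=3):
    """Return the most frequent non-stopword terms across a list of posts."""
    words = re.findall(r"[a-z]+", " ".join(posts).lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [word for word, _ in counts.most_common(top_n)]

posts = [
    "Great panel on cloud security at the summit this week.",
    "Our team shipped a new cloud migration project.",
    "Thoughts on zero-trust security after today's workshop.",
]
print(recurring_topics(posts, top_n=2))  # 'cloud' and 'security' rank highest
```

Even this naive version hints at why the approach scales: the themes it extracts map directly onto message hooks a victim is predisposed to engage with.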
And this is where so-called spear phishing comes into play. Unlike “traditional” phishing, the kind based on mass-mailing generic messages to random users, spear phishing is targeted: each communication is tailor-made for a specific victim, using real information to make it as credible and unsuspicious as possible. The system analyzed is able to generate emails that refer directly to the person’s role, the content they have shared and the events they have attended.
Not only that: it can also infer business email addresses. By analyzing public examples, it identifies the most likely format (such as name.surname@company.com) and generates several plausible variants. This further increases the chances of a successful attack.
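The article doesn’t detail how the inference works, but the general technique is well known: match a few publicly visible addresses against common local-part patterns, then apply the winning pattern to the target’s name. A minimal sketch, with invented names and an example.com domain:

```python
# Common corporate address conventions; lambdas build the local part.
PATTERNS = {
    "first.last": lambda f, l: f"{f}.{l}",
    "firstlast":  lambda f, l: f"{f}{l}",
    "f.last":     lambda f, l: f"{f[0]}.{l}",
    "flast":      lambda f, l: f"{f[0]}{l}",
}

def detect_pattern(first, last, known_address):
    """Return the pattern name that reproduces a known address's local part."""
    local = known_address.split("@")[0].lower()
    for name, build in PATTERNS.items():
        if build(first.lower(), last.lower()) == local:
            return name
    return None

def candidates(first, last, domain, preferred=None):
    """Generate plausible addresses, putting the detected pattern first."""
    order = sorted(PATTERNS, key=lambda p: p != preferred)
    return [f"{PATTERNS[p](first.lower(), last.lower())}@{domain}" for p in order]

# A colleague's public address reveals the company convention...
fmt = detect_pattern("Jane", "Doe", "jane.doe@example.com")
# ...which is then applied to the target, with the other formats as fallbacks.
print(candidates("John", "Smith", "example.com", preferred=fmt))
```

This is the same logic legitimate sales-prospecting tools use, which is precisely the article’s point: nothing here requires specialist capability.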
The final step is the creation of a fake website, designed to appear real and consistent with the victim’s interests. In just a few minutes, AI can build a complete page, with images, statistics and references to real industry frameworks. In this way, a credible environment is built, designed to convince the user to interact, for example by entering credentials or downloading content.
From LinkedIn Scraping to Attack: a Scam in 30 Minutes
The most disturbing aspect of the experiment concerns its timing. The entire process, from data collection to phishing content creation, was completed in less than 30 minutes. This completely changes the scale of the problem: what used to require hours or days of manual work can now be replicated quickly and at scale. And by a single person.
Commenting on the experiment, the TrendAI researchers explained:
Creating our AI-powered analysis tool for LinkedIn took a single researcher just over 24 hours, using Claude Code. (…) Profiling a company’s entire management team, from extracting public posts from LinkedIn to generating custom phishing pages, takes less than 30 minutes, and all this with a Proof of Concept (PoC) tool.