The growing popularity of ChatGPT's AI caricature trend could be putting users at unexpected risk, cybersecurity experts warn. People who upload selfies and personal details to generate caricatures may be inadvertently providing material for cybercriminals to exploit. Images that seem harmless could be used to create fake social media accounts or AI-generated deepfakes, potentially exposing users to targeted scams.
Bob Long, vice-president at age-authentication company Daon, said: 'You are doing fraudsters' work for them by giving them a visual representation of who you are.' Charlotte Wilson, head of enterprise at Check Point, added that selfies help criminals move from generic scams to highly convincing impersonation, increasing the effectiveness of fraud attempts.
The AI caricature trend asks users to upload a photo, sometimes alongside details such as job titles or employer names, so that ChatGPT can generate a personalised caricature. Experts have raised concerns about how these images are processed and stored.
Such data may be retained for an unknown period and could be incorporated into AI training datasets.
OpenAI, the company behind ChatGPT, states that uploaded images are used to improve the system. The company clarified that this does not mean every image is placed in a public database. However, a breach of such systems could still allow bad actors access to sensitive data.
Experts warn that even seemingly minor details can put users at risk. High-resolution images can be exploited to create realistic fake accounts or deepfake content, which can then be used in targeted scams. Background clues, badges, uniforms, or logos increase the likelihood of images being misused.
Wilson highlighted that selfies are no longer just entertainment, explaining that personalised attacks are far more convincing than generic fraud attempts. Long also noted that the way the trend is framed makes it easier for criminals to gather material, describing the wording as potentially designed to simplify fraudsters' work.
For those still wishing to participate in AI caricature trends, experts recommend taking precautions to limit exposure. Users should crop images tightly, keep backgrounds plain, and avoid including anything that can identify their employer or location. Personal details such as job titles, city, or company should not be shared in prompts.
OpenAI offers a privacy portal where users can opt out of AI training by selecting the 'do not train on my content' option. Text conversations with ChatGPT can also be excluded from model training via the settings menu. Under EU law, users can request deletion of personal data, although some information may be retained for security and fraud prevention purposes.
AI caricature trends have become increasingly popular on social media, combining entertainment with new artificial intelligence tools. While they appear harmless, cybersecurity professionals caution that these trends carry hidden risks.
Source: International Business Times UK