The ethics of sharing photos online has shifted over the years, especially when it comes to children. A few years ago, many of us may have posted back-to-school photos or holiday snaps without thinking twice. But recently I have noticed that even some of the most enthusiastic oversharers in my social media feeds are quietly backing away. This has happened as the politics and the business of “sharenting” have evolved. And with AI, it is all about to become even more fraught.
“Sharenting” was once a jokey portmanteau for the over-enthusiastic social media parent. But it has developed a sharper edge as stories have emerged of children, now teenagers, asking their parents to delete childhood images posted without consent. Some of these young people have gone public, embarrassed and sometimes distressed by the permanent record of them in nappies or having meltdowns.
Former child influencers, who had their big milestones and sometimes daily struggles turned into a stream of monetised content, have been among the most vocal critics. One of them, TikTok influencer Cam Barrett, recently testified at a hearing in Washington state that she was terrified to use her real name “because a digital footprint I had no control over exists”. Her mother, she said, had shared details of her first period, of illnesses she had and of a car crash she was involved in.
Ensuing debates about child exploitation and consent have prompted a political response. France introduced legislation first requiring profits from under-16 influencers to be set aside in protected accounts, and later giving children explicit rights to their own image. In the US, several states are considering similar laws, or updating laws for child actors to cover influencers. This has all prompted many regular platform users to post fewer pictures, use private messaging apps or opt for back-of-head portraits that don’t show children’s faces.
Now, a further big shift is under way. If a photo of you, or of your child, exists on the internet, it has likely been used to train the AI models powering text- and image-generating tools. We now have to think not only about who may see our images, but also about who processes them.
Text- and image-generating tools, such as OpenAI’s ChatGPT or Google’s Gemini, are underpinned by AI models, and these models need to be trained on vast, almost unimaginable quantities of data: billions if not trillions of snippets of text, photos and videos. The first wave of this came from crawling the web, copying text and images from across millions of websites, forums, blogs and news sites. But eventually tech companies exhausted even the seeming vastness of the internet. They are now in a race to find content to feed a voracious appetite, and the winner stands to dominate the next phase of the internet.
The need for data is so acute that Meta considered buying the storied publishing house Simon & Schuster just to have access to its catalogue of human language. In the end, according to court filings, it deemed buying content from publishing houses too slow and too expensive; it is accused of having instead downloaded 7.5 million pirated books and 81 million pirated research papers from the file-sharing site LibGen.
Owners of copyrighted content have fought back. Creators from journalists to illustrators are suing AI companies for using their work without permission. The core issue is whether scraping the internet for data including copyrighted content qualifies as “fair use”. So far, the answers are murky.
Perhaps it is inevitable, then, that the likes of Google and Meta would look closer to home for content to feed the machines. Google transcribed millions of videos posted to its YouTube platform and fed the text to its models (so did OpenAI, which is being sued by YouTube content creators). Meta is trying to work out how it can use the billions, if not trillions, of pieces of content people have uploaded to Facebook and Instagram going back decades. Mark Zuckerberg told shareholders in 2024 that the company’s data set is “greater than the Common Crawl”, one of the largest open web data sets used to train language models. By this he means publicly available Facebook and Instagram content – or, in other words, your photos and mine.
And it might go even further. Reporting by TechCrunch and The Verge suggests that recent changes to Meta’s terms of service may make it possible for the company to use unpublished photos on our phones’ camera rolls to train AI models. (Meta has responded that it is not currently using unpublished photos in this way, and that these features are “opt-in” as part of an AI photo tool, but the reporting suggests it has not ruled out using them in future).
All of this makes the politics of photos of ourselves and others online – whether shared or simply taken and stored on our phones – even more fraught. Where once we worried about bad actors stealing photos of kids for unthinkable purposes, we now need to consider the ethics of their voices, gestures and birthday parties feeding energy-guzzling image generators that will be used for ends we can’t even imagine yet.
Ultimately, we need to consider whether we’re happy for our private photos to be used to bolster the market value of some of the most valuable companies that have ever existed. The monetisation of childhood is no longer limited to a lucky (or unlucky) few who sign brand deals; now every photo you’ve ever taken is a commodity.
Liz Carolan works on democracy and technology issues, and writes at thebriefing.ie