Child safety and privacy are again in the news with revelations that generative AI tools use a wide variety of online sources, including links to photos of Australian infants and toddlers, some of whom could be identified by name or school.
Parents may well be outraged. But a majority are themselves compromising the privacy of their children.
Seventy-five per cent of parents share their kids' data on social media. Eight in 10 parents have followers they have never met.
Children are "growing up shared". The Wall Street Journal coined the term "sharenting" in 2012 to describe this phenomenon.
Think for a moment about the implications.
Eight-year-old Ella (not her real name, here as a case study) lives in an Australian city. Her parents, who often post about their lives on social media, are oblivious to how their sharing may shape their child's future.
For them, as for other families, the posts are about memories. But for strangers and profit-making corporations, the posts are simply data. Ella's digital footprint will follow her for the rest of her life.
With just one photo of Ella, artificial intelligence can be used to create a grown-up Ella. She could be framed for things she would never do. Her voice could be cloned to scam her parents. She could become a meme, humiliated at school.
A Europol report on law enforcement and the challenges of deep fakes estimates that as much as 90 per cent of online content may be synthetically generated by 2026 - just 18 months away. Generative AI technology is rapidly changing and will profoundly impact society in many ways.
From the moment they are expecting a child, Australian parents should be helped to understand the risks of posting images and details of their babies and children. They should be supported throughout their parenting journey with clear advice on good online habits.
The Heads Up Alliance, a parent group of which I am a member, reports that parents are desperate for advice. The Alliance urges that smartphones be delayed for children as long as possible, though only brave parents with a "village" behind them can defer a mobile device for long.
What a family models matters.
Governments and families cannot eliminate risk; indeed, children need some exposure to risk to learn how to manage it for themselves. But they can be equipped to prevent some harms and to support distressed children in learning from what may harm them.
We are at an inflection point with AI. Some harms are intentional, like deepfake videos; others are perhaps unintended, like algorithms that reinforce racial and other biases. Legislation and regulation will not stop all the harm. Reliable age verification must be a priority.
The Australian government knows its current regulatory frameworks do not address the risks of AI. It says it is working on adequate guardrails to help make the design, development and deployment of AI safe.
But AI product and model owners (namely technology companies like OpenAI, Google, Meta and Amazon) have offered too little transparency about their datasets for anyone to be confident that governments can hold them in check.
There is a significant arms race, led by those same platforms and the three companies that control the AI value chain (the Netherlands' ASML, NVIDIA and TSMC), to turn basic or narrow AI - the face detection, text editors, search algorithms, chatbots and digital assistants like Siri that we use today - into artificial "general" intelligence (a more sophisticated form, most likely by 2030) and, beyond that, artificial superintelligence. It is not just business but nation states shaping global policy. Back in 2017, China's President Xi Jinping declared AI one of China's "strategic industries"; today China is a significant rival of the United States in the field.
AI has become a new frontier, like nuclear physics or space travel, in which global superpowers jockey for supremacy.
The global public cannot have confidence that their privacy will be protected or that copyright holders - including journalists, authors and artists - will be protected or compensated.
Recent versions of GPT-4 show they can "dialogue" with a person in ways that lead humans to believe they are sentient. The aim of many AI researchers is to make a computer what we might call "conscious". Should smart devices, social media and now AI come with disclosure warning labels, as part of a wider response that frames the real and philosophical concerns of the digital age as a public health challenge?
Maybe, but even addressing it as such does not go to the source of the problem: software and platforms designed to be used habitually. Behind the digital environment lie commercial determinants of health. Children have less capacity than adults to modulate their use. Their families, who have been marginalised by this revolution, need to be more informed and intentional in protecting their young online.
We could start by labelling so-called "smart" technology, which uses us as we use it, as dehumanising.
- Toni Hassan is an artist, author of Families in the Digital Age: Every Parent's Guide (Hybrid Press, 2019) and an adjunct research scholar with the Australian Centre for Christianity and Culture at Charles Sturt University.