
Google AI chatbot, Gemini, to be available to Aussie kids under 13 within months


Google will launch its Gemini AI chatbot for Australian children under 13 within months, the ABC can reveal.

The tech giant is rolling out the program in the US this week, with a worldwide launch to follow in the coming months, although no date has yet been specified.

The announcement has prompted calls for the government to consider banning AI chatbots for children, in the same way it banned social media for children under 16.

“It would’ve been better if we’d erred on the side of caution with social media, and we didn’t,” said Professor Toby Walsh, a leading expert in artificial intelligence at the University of New South Wales.

He’s urging leaders to “seriously consider putting age limits on this technology.”

The ABC understands the chatbot will be automatically available to children after the launch, although parents will have the option to switch it off in Google’s Family Link app.

“It’s unusual to me that this would be turned on by default,” said Professor Lisa Given, an expert in the social impact of technology at RMIT University.

“It relies on parents … or the child themselves, having the skill to navigate the controls and turn things off.

“And it may only be turned off at the point that it raises problems … but in a way it’s too late at that point.”

Google isn’t the only company whose AI chatbot is available to younger children.


OpenAI’s ChatGPT is free on the open web. (Reuters: Dado Ruvic)

For example, OpenAI’s website states that ChatGPT is “not meant for” people younger than 13, even though it’s free on the open web.

But Google’s Gemini tool is one of the few mainstream tools explicitly targeted at users that age.

“The problem is that [the tech companies are] tone deaf to the concerns, I think, that many parents have,” Professor Walsh said.

“And the reason that they’re tone deaf is … the financial incentives that they’re looking at; how to onboard the next generation of users.”

The update will also make Gemini, currently a paid service, available to all Google users, not just children younger than 13.

AI chatbots for under-13s: What could go wrong?

Multiple experts expressed alarm at the plan, saying AI chatbots pose more acute risks for children.

They warned Google’s Gemini chatbot has the potential to confuse, misinform and manipulate children.

“Systems that are enabled by AI can certainly hallucinate or make up information,” Professor Given said.

“You have to have some fairly sophisticated skills in terms of discerning truthfulness.”


Every expert the ABC spoke to had concerns younger people may have difficulty understanding that the chatbots are not human.

“These systems really attempt to replicate or mirror how people engage with each other,” said Professor Given, adding that even adults weren’t immune to the illusion.

“I’ve done some research looking at Replika, where adults were actually very much taken in … and really came to believe that they had a relationship with the system itself, very much like a friend or even a romantic partner,” she said.

Google echoed some of those concerns in an email to parents in the US ahead of the rollout there, warning “Gemini can make mistakes” and suggesting they help their children think critically about their interactions.

“I think this is Google saying that we’re rolling this out, but it isn’t entirely safe,” said John Livingstone, director of digital policy for UNICEF Australia.

“If a tech platform is acknowledging that there may be risks in their own products … then yeah, we should sit up and take notice,” he said.

Google is planning to include default protections for younger users, to filter out inappropriate content from Gemini’s responses, but experts remain wary.

“It’s very hard to deliver on that promise,” Professor Walsh said.

“Whenever we put protection safeguards in place, people quickly find ways to circumvent … those safeguards.”

“It may not be widespread, but there may be cases where children are still getting access to inappropriate content,” Professor Given said.

Calls for a ban and new laws to protect children from AI

Australia doesn’t yet have AI safeguards in place, although the government has been developing them for more than two years.

The government has also announced a “digital duty of care,” which would force tech companies to build their products safely from the ground up, but is yet to bring a bill before parliament.

“This is actually an excellent example of why Australia needs a digital duty of care legislation to come in,” Professor Given said.

“What we actually need is for the technology companies to manage content appropriately for all of us so that using these tools is as safe as it can be — no matter your age.”

Mr Livingstone said children stand to gain immensely from AI, if it’s offered safely.

“When you think about education, for example, how transformative it might be … but there’s also serious risks.

“AI is rapidly changing childhood, and Australia needs to get serious about it.”

But Professor Walsh said it wasn’t just a case of “putting in better filters and safeguards”.

“It’s asking fundamental questions about should this be age limited. We should be having a serious conversation as a society about that,” he said.
