This week, Google reportedly began notifying parents by email that the Gemini AI chatbot will soon be available to children under 13. According to The New York Times, the chatbot will become accessible next week for certain users; the email went to parents who use Google’s Family Link service, which provides parental controls for Google products such as YouTube and Gmail. Initially, only children with Family Link accounts will have access to Gemini. The email said children could use Gemini to ask questions or get help with tasks such as homework.
The announcement comes shortly after Common Sense Media labeled AI companions an “unacceptable risk” for minors. The nonprofit partnered with Stanford School of Medicine’s Brainstorm Lab on a report advising parents not to let underage users access tools like Character.ai, which hold conversations and engage in role-play that can include inappropriate content.
Such companion-style interactions differ from those with AI chatbots like ChatGPT and Gemini, though there is some overlap. Mashable recently reported a bug that let children generate inappropriate content with ChatGPT, and The Wall Street Journal reported a similar issue with Meta AI. Even when these chatbots have safeguards, users often find ways around them, much as online restrictions on adult content are routinely bypassed.
As a result, parents who want to keep their children away from AI face significant challenges. The picture is further complicated by an executive order from President Donald Trump aimed at integrating AI education into U.S. schools to improve AI literacy among K-12 students; after all, understanding what AI can and cannot do has genuine educational value.
In its email to parents, Google acknowledged these concerns and encouraged parents to help their children think critically when using Gemini, according to The New York Times.